When I joined my organisation, user research was conducted pretty infrequently. We did run research studies when we needed to, but because they were organised ad hoc they were relatively expensive in time, effort and cost. As the design team scaled and we hired more designers, our design practice began to mature: we introduced tools such as personas, a design system and greater use of analytics data, but our user research initiatives still happened the same way they always had. And when the pressure to deliver an initiative or feature ratcheted up, user research was often the first thing to be cut. The amount of research we conducted began to shrink even as we had more designers doing more and more experience design work.
This article is the story of how we matured our research process from ad hoc studies conducted with basic, free tools to an organised, optimised, always-on research cadence using industry best-practice tools.
Everyone, not just the designers, saw the value of the user research conducted in those early days; it was just difficult and slow to organise. Each time we ran a study we had to repeat the process of finding the tradies or consumers we wanted to talk to, deciding the form of research, producing the assets required, organising time slots and finding budget to pay the participants.
Our research ops function was at best very immature.
Note: I should probably explain that I'm a passionate believer in the power of user research. I have seen the benefits, time and again, of talking to users to understand their perspectives and stories, then designing solutions that fit. I wanted to improve our research ops function and make conducting primary research quick and easy.
The first thing I did was look for commonalities: what are the things we always need? Can these be templated? I began aggregating our various research assets to produce reusable documents such as interview script templates, NDAs, permission-to-record forms, research-findings one-pagers and presentations. This at least brought a degree of uniformity to how we worked and started us on the journey of iterating those common templates and processes.
Early on we trialled third-party recruitment firms to see if this saved us time and made conducting research easier. To do this we had to find some budget.
We initially framed this as a one-off experiment, so we didn't ask for a monthly budget, just a one-off one. We outlined the time saved by outsourcing and also worked to align multiple product teams' research schedules, meaning we could conduct user interviews for several teams at once and apply a multiplier to the potential savings. After some discussions we were given a one-off budget to see if we could make it work. I then coordinated with a recruiter to organise when the interviews would happen and the type of participant we wanted.
I believe we succeeded in finding budget because our stakeholders already saw the value in the research we were conducting. We didn't have to convince them of the need for research; it was just about saving time and money.
This prototype round of user interviews was a success: two teams interviewed a collection of users without bearing the cost of organising the interviews or managing the participants. On the back of this we demonstrated that running these research activities effectively was worth spending money on, and we secured a small ongoing user research budget, primarily to pay for recruitment.
Ultimately, we moved away from the third-party recruiter: the people we most often wanted to speak to were our own users, and we could reach them far better than an external organisation could. But the exercise helped foster the belief that research was valuable enough to spend money on, and by spending that money we made life much easier for the product teams.
A minor but important detail: when we used a third-party recruiter, it was me managing that relationship, not the teams. This maximised the benefit to the product teams by minimising their cost of conducting research.
As the budget we secured was pretty tiny, we initially couldn't afford any specialist research tooling, so we had to make use of the tools we already had:
Using these existing tools gave us a few ways to find users for research and then reach out to them. There were still parts of the research ops process we didn't have answers for just yet, but utilising these tools and processes really helped accelerate our research maturity.
I found that to empower our product teams to increase the quality and frequency of their research, I needed to train and help them to self-serve while also offering a centralised solution for some aspects of the research ops process. To help teams get better at running research we:
Fortunately my previous employer was a consultancy offering UX services, and I'd received training in the above topics, so I was able to pass these skills on to my team members.
But, I centralised recruitment of research participants for a few reasons:
One of the main benefits of managing recruitment was that I could influence the cadence of our research. I wanted to ensure we talked to our customers regularly, not just when a team felt it needed to, so if more than a week or two passed without customer contact, I'd organise a round of interviews. I'd then advertise these rounds to our product teams, who could opt in to taking part. If no team needed to speak to customers in a given round, I had an evergreen list of topics we always wanted feedback on.
After more than a year of practising regular research we were getting into the swing of it; we were generating impactful insights and primary user research was becoming part of our normal way of producing product. But, there were still issues:
We had been running user interviews for more than a year, so we could demonstrate the value they delivered to the business. We could also cite the times we hadn't been able to conduct the research we wanted due to lack of budget, and the issues this had caused. Over the course of a few meetings we showed how many more user research initiatives we'd like to run and what the additional cost would be, focusing our narrative on empowering all the customer-facing product teams to conduct user research at a reasonable frequency. After these meetings we grew our user testing budget significantly for the next financial year.
My desire to manage our research cadence and oversee recruitment had slightly backfired: more and more of my time was being consumed recruiting for user interviews. I needed to make this much more efficient.
Researching how to do this became my spare-time hobby; whenever I had 30 minutes between meetings or some free time at the end of a day, I spent it thinking about how to reduce the burden of recruitment. One of the main issues was coordinating the interviews themselves. We could advertise interviews to our customers efficiently, and they could volunteer without any effort from me. But it was then a manual process of phoning customers to book them into specific interview slots and sending reminders.
During one of my many googling sessions on this topic I found someone trying to embed a calendaring tool called Calendly into a survey tool; he had the exact same problem I did. Taking this idea, and with a bit of trial and error, I was able to integrate a Calendly embed into our research screener surveys, meaning customers could now book themselves into interview slots. And as Calendly is a really cheap tool, for an extra few dollars a month I could also have it send out email and SMS interview reminders, which saved me a huge amount of time.
Note: There was a downside though: we lost the ability to control exactly who we interviewed. Previously we could interview a range of user types within a single round of interviews; when customers self-serve, you have to be happy with what you get. I thought this was an acceptable compromise, as we could always revert to the old manual process for a specific round if we needed to.
We had a similar efficiency problem as we conducted more and more interviews. At the time, participants were paid with physical gift cards posted out to them. Moving to digital gift cards that could be emailed saved a lot of time, as I no longer had to buy and post physical cards, but sending out interview incentives was still a slow process.
Serendipitously, the universe gave me an answer. I was chatting with a colleague about the difficulties of paying research participants and he mentioned a friend of his had started an online gift card platform called Microgifts. After an introduction I trialled the platform and discovered it let me add credit in bulk and then upload a spreadsheet containing the details of whom to send the incentives to. With this I could top up our new and improved research budget quarterly and then draw it down round by round of interviews.
This massively reduced the time needed to pay our participants, as the teams were already using a templated Google Sheet to track which participants turned up to the interviews. All I needed to do was upload a version of this into the gift card platform and all the incentives were organised within a minute or two.
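To make that hand-off concrete, here is a minimal sketch of the transformation from tracking sheet to upload file. The column names on both sides ("name", "email", "attended" in the tracking sheet; "recipient_name", "recipient_email", "amount" in the upload format) are hypothetical illustrations, not Microgifts' actual schema.

```python
import csv

def build_incentive_rows(tracking_rows, amount="50"):
    """Keep only participants who attended and reshape the columns
    into the (assumed) bulk-upload format."""
    return [
        {
            "recipient_name": row["name"],
            "recipient_email": row["email"],
            "amount": amount,
        }
        for row in tracking_rows
        if row.get("attended", "").strip().lower() == "yes"
    ]

def write_upload_csv(rows, path):
    """Write the reshaped rows as the CSV the platform would ingest."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["recipient_name", "recipient_email", "amount"]
        )
        writer.writeheader()
        writer.writerows(rows)
```

The attendance filter matters: you only pay the incentive to people who actually showed up, which is exactly what the templated tracking sheet recorded.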
Pre-Covid we conducted fortnightly rounds of face-to-face interviews. When Covid struck we moved to remote video interviews over Zoom, but interviews remained our main form of research. We also ran usability tests, card sorts and value tests within these interviews, but only as moderated research. As the number of research cycles we wanted to run grew, we realised that only doing these tests within interviews was limiting the reach of our research.
To enable unmoderated testing, we used some of the budget allocated to interview incentives to purchase an unmoderated user testing tool called Useberry. With it we could run research activities such as concept tests, card sorts and usability tests without having to sit with the user. Adopting this tool and evolving our research practice not only gave us a more varied mix of research but also increased its frequency, as the incentives paid for quick unmoderated tests were relatively cheap. And because we connected the tool to our recruitment process, its cost to the product teams was minimal. After I provided some training and guidance, the designers adopted it enthusiastically.
Like many software production organisations we use a wiki to store our various forms of documentation, specifically Atlassian's Confluence. And, like many software production organisations, our wiki is not the most organised of places, with old and new content rubbing shoulders and only the loosest of structures binding it all together. While the research the teams were doing was impactful for the team doing it, it was usually of little value to anyone else. This created knowledge silos and hindered our organisation from maturing what we collectively knew about our customers.
I realised we had two problems:
I started by looking for a user research repository tool: something that would let us create a structure and then add insights within it, helping build the bigger picture I was trying to create. After looking around I discovered a small UK start-up called Gleanly, who were building an atomic research tool that broke learnings down into facts, insights and recommendations. This fitted my philosophical goal nicely, and as they were still in alpha, their product was cheap enough for our tiny budget.
We started using Gleanly and set up our tag taxonomy. From then on, all research would be broken down into facts, insights and recommendations, and every one of these pieces would be tagged and classified. The icing on the cake was a nifty search feature that let anyone search our research insights.
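To make the atomic model concrete, here is a toy sketch of the idea: every learning is stored as a small, tagged "atom" of one of the three kinds, so it can be found again later. The field names and example data are my own illustration, not Gleanly's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    kind: str                  # "fact", "insight" or "recommendation"
    text: str
    tags: frozenset = frozenset()
    source: str = ""           # e.g. which interview round produced it

def find_by_tag(repository, tag):
    """The search feature in miniature: every atom carrying a tag."""
    return [atom for atom in repository if tag in atom.tags]

# Illustrative repository entries.
repo = [
    Atom("fact", "8 of 10 participants quoted jobs on their phone",
         frozenset({"quoting", "mobile"}), "Interview round 12"),
    Atom("insight", "Quoting is a mobile-first task for our users",
         frozenset({"quoting"})),
    Atom("recommendation", "Prioritise the mobile quoting flow",
         frozenset({"quoting", "roadmap"})),
]
```

With a shared tag taxonomy, `find_by_tag(repo, "quoting")` pulls together the fact, the insight it produced and the recommendation that followed, which is what lets later teams build on earlier research rather than repeating it.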
Adopting a tool to capture and organise our insights did not, on its own, achieve my bigger goal of sharing them with the organisation and maximising their usefulness. To do this I founded the research guild: essentially a Slack channel and a monthly meeting. Each month we would discuss the outcomes of the research conducted over the prior month, watch highlight reels together and hear whether anyone had research requests to support their work. We also invited people to observe our interviews and help with analysis and synthesis. In the Slack channel, teams shared interesting findings as they were made, marketing shared its research outputs such as NPS analysis, and we would often recruit participants for internal testing. These meetings really helped the organisation adopt our research and drove awareness that research was something we routinely conducted.
Over the years we continued to mature our research operations in a variety of ways, each time incrementally improving our processes:
This journey has taken about four to five years to date and our quest to get better and more mature regarding our research ops never ends. For example right now we're looking to change our unmoderated testing tool as the one we currently use has some usability issues on mobile. Plus we're regrowing our relationship with a recruiter as we now better understand the value we can achieve by testing with non-users for things like value proposition testing and positioning research.
This has been a meandering story about my personal journey to improve my organisation's ability to conduct research, but let's distil it down into the basic steps:
There's a golden rule in research: "Any research is better than no research". Start small; run guerrilla tests in corridors and lunch rooms to gather feedback on designs, or do some quick usability tests with a basic prototype.
Rather than doing everything ad hoc, start to standardise and templatise documents and processes. This will really reduce the cost and burden of conducting research, and you'll begin refining and iterating your templates, which is the first step in refining your research ops.
It might be town hall meetings, all-hands or a specially convened meeting about research, but share your highlight reels, audio clips and verbatim quotes to raise the profile of your work and start bringing the voice of the customer into your organisation.
Be sure to learn what tools your organisation already has. I started off borrowing Survey Monkey from marketing before I had budget for a survey tool. Ask your marketing, design, sales and HR / P&C teams what tools they use and see if you can 'borrow' a license.
When you start talking to your users you don't need to be overly targeted in who you talk to. You could ask your sales or customer service teams to recruit for you, or simply put a call to action in your product and ask users to volunteer for interviews.
Sooner or later you'll want to test with a specific cohort, so being able to target users based on properties and behaviours is a capability worth finding a solution for. This was an area I was especially lucky in: we already had the ability to interrogate our users' usage and analytics data. If you intend to recruit research participants from your own users, you'll need to interrogate customer data so you can target who you reach out to. That may be as simple as asking an engineer to pull a list of customers you can then manipulate in a spreadsheet, or it may mean getting access to whatever customer relationship management (CRM) tool your organisation uses.
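The spreadsheet route above can be as simple as a short filter over the raw export. This is a minimal sketch under assumed column names ("plan", "logins_last_30d"); swap in whatever properties and behaviours your own export actually records.

```python
import csv

def target_cohort(export_path, plan="pro", min_logins=5):
    """Return customers on a given plan who are active enough to be
    worth inviting to an interview round. Column names are assumed."""
    with open(export_path, newline="") as f:
        return [
            row
            for row in csv.DictReader(f)
            if row["plan"] == plan
            and int(row["logins_last_30d"]) >= min_logins
        ]
```

Even a filter this crude gets you past "email everyone and hope": you can run one round with heavy users of a feature and the next with people on a different plan.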
Once you can identify who you want to test with, you need to engage them and see if they're able to take part in the research study. You could do this manually by emailing or calling them, but engaging users while they're using your platform is more likely to catch their attention. We use a martech product called Braze for this, but many products are available. Whatever tool you choose, you'll need to work with your engineers to get it running, as it will involve integrating some form of SDK. If your product is web-based you could use a simple product like Hotjar, which has a free tier (though you'll usually be limited to a set number of responses on free tiers). I'd recommend working with your marketing team on this, as they probably also want to reach users in-product and may already have an idea which tool they'd like.
Once you have created a list of research participants, start with simple remote video interviews. If you don't have a corporate account for a tool like Zoom, Google Meet or Microsoft Teams, you can always sign up for their free tiers, and likewise for survey products like Survey Monkey. Well-crafted interviews combined with good surveys will get you a long way down the research maturity journey with very little expense for tooling.
You'll be surprised how far you can get with no budget. Use guerrilla testing techniques for short, sharp user tests. Cold call customers and ask if they have 2 minutes to help improve your product. Go to where your users are and ask them questions directly. Then demonstrate the value of the research you're conducting on little to no budget and outline that with even a modest budget you can do more and better research. I find asking for budget to conduct more research is an easier conversation than asking for budget to start conducting research.
I found that asking teams to wholly manage their own research cycles resulted in not much research getting done, so I chose to centralise recruitment. To be transparent, I had a role that allowed me to do this; you may not be as fortunate. If you cannot centralise any aspects of your research ops, then at least ensure things like recruitment are done in a consistent and efficient manner.
Spread the love. The more people conducting research the quicker you'll get good at it plus there may be opportunities to share the cost of research tooling that may help you afford more and better tooling.
Once you're on the road to maturity things begin to get easier, for example asking for $30 a month to automate interview invitations and reminders is a pretty small cost and will save you a heap of time. Once you have budget for simple tools it becomes easier to ask for budget for better tools to replace your simple tools.
Just try and be 1% better every day...
The images in this article were made by me, so please excuse their rawness. They show an old family house of ours being built between May 2015 and November 2015. The builder was Meridian Homes.
Thumbnail: https://unsplash.com/photos/person-in-blue-long-sleeve-shirt-using-black-laptop-computer-5QiGvmyJTsc
Contact
Jules Munford
Phone: 0431 414322
Email: julian.munford@googlemail.com
Twitter: @julesmunford
© Julian Munford 2020