Continuing on from Nile’s Guide to Remote Research Tools, here’s a lightweight guide to how we run remote moderated testing. I talk about UX testing throughout, but the tips below apply to any kind of remote interview-based research.
Lots of teams are slowing down their customer engagement at the moment — and that’s a good response to the crisis, for both research teams and potential participants.
While customer engagement is arguably more important than ever, we’re advising our clients to consider a few key questions before they embark on a period of customer testing — even something as seemingly uncontroversial as UX testing:
- Is it necessary?
- Is it necessary now?
- Should we or can we do it differently?
- How can we minimise the impact on our participants?
- How can we care for ourselves? (We may hear traumatic stories.)
Assuming that your research is necessary and you’ve considered alternatives, mitigations and caring strategies, here are our recommendations for how to do UX testing well.
A reminder of the basics of UX testing:
- Why do it? UX testing identifies and validates new or unexpected ways to make a product or service better for both users and the business.
- What does it involve? We observe what target users say and do while using a product or service, and record and synthesise their responses and reactions.
- When should you do it? “Early enough”. It’s a balancing act; we need to have something concrete to test, but can’t wait until things are too late to change.
Now everyone’s remote, what are your options?
There are several ways to conduct UX testing: in-person or remote, and moderated or unmoderated. However, social distancing has eliminated half of our options here.
Now, looking at solely remote UX testing, which should you pick: moderated, or unmoderated?
Pick unmoderated testing when you want:
- Fast results
- Low cost
- Basic levels of insight
Unmoderated testing is a great, low-resource, and relatively fast way to identify show stoppers, bugs and issues. Participants interact with your product through an online portal or platform, and successes and failures are logged, tracked, and fed back via the system and/or the participant.
It’s fast, cost-efficient, and does the basics well. The main downside is that you only get surface-level insight. You can’t ask follow-up questions, explore observed behaviours, or probe a participant’s context in any detail. Plus, there’s a higher risk of bumping into ‘professional’, unengaged or robotic participants, who’ll click through your tests as quickly as possible.
Pick moderated testing when you want to:
- Explore beyond basic experience or usability — dig into needs, contexts and user behaviours, in relation to a product or service
- Build greater empathy for the user within design teams and stakeholder groups (through live observation)
Remote moderated testing offers full depth of insight. Using a video-conferencing solution with screen sharing and shared control, a researcher moderates the testing remotely, building a relationship with the participant while understanding their context and the reasons behind their opinions and decisions.
The real benefit of moderated testing is the ability to identify usability improvements, not just show stoppers, bugs and issues. Done well, such testing can identify unmet user needs, behaviours and attitudes, and dissect the reasoning behind decisions.
And if conducted out in the open, with project observers and stakeholders watching, it’s far better for building engagement and empathy across the project team.
BUT: time to insight is longer, usually a 1–3 week turnaround, mainly due to participant recruitment. It’s also more expensive than unmoderated UX testing.
No: remote testing isn’t a ‘worse version’ of in-person testing
Even before the current crisis, we’ve often run a blend of moderated remote and in-person testing on many of our projects. Remote testing has several advantages, and over the years we’ve developed a set of techniques and processes to dial up the advantages and mitigate the disadvantages of remote testing.
Remote UX testing has many advantages:
- Greater diversity of participants from a wider geographical spread, and a lower barrier for hard-to-recruit audiences
- Lower cost than in-person: lower incentives, no venue hire, and no travel required
- Often faster: doesn’t depend on research venue availability or require complex logistics
- Enables easier observation for project teams and stakeholders (when it’s moderated)
But you’ve got to mitigate a few common downsides:
- Remote testing can exclude those who don’t have access to digital technology or are less IT literate
- There’s a slightly higher risk of drop-outs without pre-engagement of participants
- If it’s moderated, the moderator has to work harder to build trust and rapport with participants
- The observation experience can be less engaging when observing alone
How to make moderated remote UX testing a success: our top three rules
Remembering these three basic lessons has kept us right whenever we’re doing remote moderated UX testing.
- Depend on technology as the enabler, not the answer. Tried and tested technology that enables running and observing remote UX testing is crucial. But getting it right doesn’t guarantee success — techniques need to change too.
- Focus on building strong relationships with participants. To get the most from recruited research participants, we take deliberate steps to build additional rapport to guarantee participant engagement with the testing.
- Nurture and grow the value of live observation. Insight does not just come directly from users. Live remote observers are a valuable source of insight. We design for observer experience throughout.
First, depend on technology as the enabler, not the answer
Getting the tech right and avoiding problems requires familiarity, preparation and back-up plans. Nile’s tech toolkit contains some of the best software on the market, but the best kit in the world isn’t going to help you out if you’re unprepared.
Our checklist for getting the tech right and avoiding problems:
- Always, always, always run at least one pilot session. Having a plan is great, but you need to test it before trying it with actual research participants.
- Brief the research participant well ahead of the session. This sets the right expectations and gives participants time to install, and ideally test, any required applications.
- Have backup options ready. Murphy’s law applies to remote testing. Technology can let you down. We always have backup plans if audio is acting up, redundancies if the recording fails, and strategies to support participants who are struggling technically. Get these in place well in advance.
- Create a tech workflow and connect systems. Streamline the remote UX testing workflow by linking systems. For example, we facilitate and record in Zoom, which can automatically feed audio into Otter, giving us a near-instant transcript of a conversation from which we can cut and paste relevant sections directly into Dovetail. [UPDATE: Dovetail has just added some impressive-looking video and audio transcription functionality. We haven’t played with it yet, but it might eliminate the need for Otter as transcription middleware.]
- Have someone ready to help observers. It’s not just the moderator or research participant that can experience problems with technology.
Secondly, focus on building strong relationships with participants
Separating a moderator and a participant with a screen changes things.
Rapport is typically harder to build, and it’s much harder to communicate using body language.
That’s why we intentionally invest time focusing on participant engagement while moderating remote UX testing. Because better engagement means better insight.
But how do you build the right kind of engagement for an effective UX test over a video link? We’ve learnt to do this in four ways:
- Establish trustworthiness. Participants aren’t joining you in a research venue, so you need to signal and establish how you want to be perceived early: with what you say, how you act, and how your surroundings look.
- Focus on learning about them. Without the benefit of face to face interaction, we explicitly invest more time up front getting to know the participants. Sometimes, when it’s appropriate, we’ll call up participants prior to the session to warm them up, familiarise them with the process and talk them through things.
- Make silence acceptable. Video calls are strange. Long silences feel more awkward than they would in person, yet some aspects of UX testing require focus, thinking time, and reflection — all compromised if people feel compelled to fill silence. We make a point of giving people permission to be silent, and sometimes explicitly build it into the session: allocating 60-second periods for thought and reflection throughout a test.
- Lean in and stay engaged. Demonstrating engagement in person is easy: eye contact, body language, mirroring and so forth. Demonstrating engagement over video is far harder. We make an extra effort to reflect and play back the things participants share, demonstrating interest in their opinions. It’s a balancing act, however — we’re careful not to overdo it or signal approval, to avoid inadvertently leading them.
Finally, nurture and grow the value of live observation
Having teams and stakeholders observing sessions and discussing afterwards leads to new ways to look at a problem.
Right now, everyone is remote. That’s not just the research participant, but also the project teams and stakeholders observing the research. That means we need to go even further to make remote observation engaging, productive and valuable.
[Image: Gary Birnie and Calum Macleod pose next to a wall of data gathered from UX testing observers. Make the effort to do this digitally.]
Live observation increases the value of the research for the wider organisation, improves the quality and quantity of data, and is generally more interesting for everyone.
Obviously we don’t want to have observers chiming in to the live conversation — we aim to make the moderated test feel as close to a one-to-one conversation as possible for the participant. However, that doesn’t mean observation is a passive exercise.
Take steps to position observation as an active — not passive — exercise.
We take three simple steps to safeguard engagement for observers, transforming them into a second source of insight and observation:
- Before the session: we gather observers and brief them on the ways we need them to contribute, walking them through the observation documentation board and process (often on Miro, and prepared as part of the discussion guide). If it makes sense, we’ll build and share collaborative note-taking documents; that way multiple observers can contribute observations simultaneously, in a way that is seen by others and actively demonstrates engagement to teammates.
- During the session: we work to recreate an observation room vibe as far as possible: that means running a parallel live-chat for observers (as a way to share notes live, and raise questions with the moderator), sharing collaborative notetaking tools, and building up post-it notes in real-time on a shared virtual whiteboard (again, Miro is great for this).
- After the session: we move straight into a remote debrief, firing up a new Zoom call to bring the observers back together. These debrief sessions are deliberately structured, using templates and reflection frameworks built in Miro. With everyone looking at the same thing, it’s like huddling around a whiteboard after an interview and running rapid synthesis on what we’ve seen.
Here’s what to remember
As with venue-based, in-person user testing (and most things in general), it’s not enough to get the tech right. Successful remote user testing, while everyone is remote, depends on intentionally building strong human relationships with research participants, as well as nurturing observer collaboration.