What is Rapid Usability Testing?


  • Usability refers to how easy it is to use a technology or service.
  • User experience research (UXR) is the investigation of how people interact with, make sense of, and respond to technology and services.
  • UXR improves scientific software products by revealing differences between developers’ and users’ perspectives and assumptions. It can also surface insights that make tools easier to use and adopt, and it can help determine user needs amid a rapidly changing technological landscape.
  • UXR can measure a product’s learnability, efficiency, memorability, error rate, and utility, as well as users’ satisfaction.
  • Rapid usability tests are tools for observing how users interact with a tool or service. Tests should involve narrowly scoped tasks and measurable outcomes. Results from these tests are used to guide development work.
  • Rapid usability tests are most effective for teams that already have an interactive prototype, a minimum viable product, or a more mature tool or service.

Preparing a rapid usability test


  • Rapid usability testing should involve observing participants for no more than an hour, preferably less than 30 minutes. Choose the number of tasks you ask participants to complete based on your priorities and how much time you have available.
  • Task prompts should describe goals users might have, and they should not echo language visible in your user interface, so participants can’t complete tasks by simply matching words on screen.
  • Evaluation criteria should be determined in advance. Multiple criteria can be used to evaluate a single concept like ease of use.
  • Without specialized software, capturing some data, like clicks or idle time, may be difficult. However, many other common metrics are relatively simple to evaluate if you can record a session and/or present survey questions. If you are evaluating a command-line tool, you may ask participants to copy their terminal contents and email them to you at the end of the session.
  • Present survey questions in writing rather than asking them verbally, so participants feel less pressure to give favorable answers. Make sure you have a way of associating each anonymous response with the participant’s recorded session; anonymous participant IDs are a good choice.
  • Preparing a script and the test environment helps you run the same test with each participant and gather all the data you intended to collect.
  • Your test sessions should begin with some orientation and rapport building, then move on to the tasks before wrapping up.
    • During orientation, introduce yourself and outline what will happen during the study. Reassure participants that they are not being tested—only the tool is being evaluated.
    • When building rapport, ask the participant a question about themselves that they can confidently answer.
    • When presenting the tasks, order them so that your highest-priority tasks come first, ensuring you get to them even if time runs short. If there is a logical sequence to the tasks, you might apply that structure instead.
    • In your script, include links to any appropriate webpages or survey questions so you can easily share this information with participants; put them next to the appropriate task, not at the top of the page.
  • You will need to link each anonymous survey response to its study session. A simple way is to assign each participant an ID number and tell them this number before sending them the survey; they can then enter it into the survey (see the ID-generation sketch after this list).
  • Piloting your study helps ensure you have accurate estimates of how long a session will take, helps refine your script and environment set-up, and can inspire additional questions or tasks to include. However, if you anticipate difficulty recruiting, limit your piloting so you don’t use up too many potential participants on practice runs.
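
If you use anonymous participant IDs, generating and storing them can be as simple as the sketch below. This is a minimal illustration assuming a Python workflow; the ID format, file name, and participant names are hypothetical.

    # Minimal sketch: generate anonymous participant IDs and a private
    # name-to-ID mapping. The ID format and file name are illustrative.
    import csv
    import secrets

    def make_participant_id() -> str:
        # Short random token, e.g. "P-4f9a2c"; at rapid-test sample sizes,
        # collisions are unlikely, but you could check for them.
        return f"P-{secrets.token_hex(3)}"

    participants = ["Ada", "Grace", "Katherine"]  # hypothetical recruits

    # Keep this mapping private and separate from the study data itself.
    with open("id_mapping_PRIVATE.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["participant_id", "name"])
        for name in participants:
            writer.writerow([make_participant_id(), name])

Tell each participant their ID before sending the survey; because the survey stores only the ID, responses stay anonymous while remaining linkable to the session recording.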

Recruiting and tracking participants


  • Identify your target population by considering the constraints your research question implies and your tool’s value proposition. Some constraints might be true prerequisites for participation, while others might be nice-to-haves that can be forgone if recruitment proves difficult.
  • Recruitment targets can be reached through successive rounds of outreach; about 10 participants should be sufficient to gain insights.
  • Recruit conscientiously, so that participants understand what they are being asked to do, what their data will be used for, and how it will be stored. Conducting human subjects research ethically also means ensuring appropriate benefits for participation, treating participants with respect, and never coercing them.
  • Snowball sampling, in which a potential participant refers the researcher to additional potential participants, is great for recruitment but can introduce bias and must be done with care to protect participants’ privacy. Posting to community forums, leveraging your code repository, and identifying and contacting users based on software citations are other tactics for finding participants.
  • Recruitment should be tracked in a private location, such as a spreadsheet accessible only to the research team, since it will contain personally identifying information.
  • Recruitment efforts for one study can support future studies—ask participants if they are willing to be contacted about future user research opportunities.

Conducting a rapid usability test


  • To protect participants’ privacy, set up screen sharing so they can share only the window or application needed for the study.
  • Having participants think aloud is a good way to learn more about their reactions and opinions. It can slow them down, however, so reconsider this approach if you are using time as an evaluation metric.
  • Ensure errors from one task don’t propagate to the next by sending participants new links at the start of each task.
  • As you collect data, anonymize it and link to it from your tracking spreadsheet; a minimal file-renaming sketch follows this list.
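
One way to anonymize session recordings is to strip identifying information from their file names using the private ID mapping. The sketch below is illustrative only, assuming the mapping file from the preparation step and a "<name>_session.mp4" naming convention; adapt the paths and patterns to your own setup.

    # Minimal sketch: rename raw recordings to anonymous participant IDs
    # so file names carry no identifying information. The paths, naming
    # convention, and mapping file are assumptions for illustration.
    import csv
    from pathlib import Path

    RAW_DIR = Path("raw_recordings")         # holds e.g. "ada_session.mp4"
    ANON_DIR = Path("anonymized_recordings")
    ANON_DIR.mkdir(exist_ok=True)

    # Load the private name -> ID mapping created during preparation.
    with open("id_mapping_PRIVATE.csv", newline="") as f:
        id_by_name = {row["name"].lower(): row["participant_id"]
                      for row in csv.DictReader(f)}

    for recording in RAW_DIR.glob("*.mp4"):
        name = recording.stem.split("_")[0]  # assumes "<name>_session.mp4"
        if name in id_by_name:
            target = ANON_DIR / f"{id_by_name[name]}{recording.suffix}"
            recording.rename(target)         # or copy, to retain originals

Once renamed, record the anonymized file path in your tracking spreadsheet next to the participant’s ID.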

Analyzing data and reporting results


  • Apply a coding scheme to label qualitative data like transcripts. Iteratively review the data and the labels you have applied so that the labels represent the data as faithfully as possible; you can merge labels or create new ones that better fit your data.
  • Specialized tools are helpful for labeling and exploring qualitative data, but spreadsheets, or printed transcripts and sticky notes, can do the job too. Whatever your system, you want to be able to explore the labels assigned to a piece of data and, conversely, the data assigned to a label (see the coding sketch after this list).
  • Errors can be labeled by severity or type, allowing you to more easily recognize which issues are the highest priority.
  • Evaluate and report on tasks individually so that you have finer-grained insight into users’ experiences. Multiple metrics can help inform your interpretation of how usable the tool is.
  • When reporting results, tell your audience what the goal of your research was, what you did, and who you did it with.
  • When presenting data, ensure the audience understands exactly what each metric means, and provide information like ranges, medians, and modes to assist with interpretation (see the statistics sketch after this list).
  • Leverage the labeled data to help you report trends in what participants were doing, thinking, and feeling throughout their tasks.
  • An actionable insight describes what the insight is (e.g., a user expectation or a common problem) along with an achievable, concrete next step. Include these insights in your reporting.
  • Rapid usability testing can be integrated into your development process so that you continuously improve your understanding of your tool’s UX.
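
For teams coding qualitative data in spreadsheets rather than specialized software, the sketch below illustrates one lightweight way to explore both the labels assigned to a piece of data and the data assigned to a label. The labels and excerpts are hypothetical; real codes emerge from iterative review of your transcripts.

    # Minimal sketch: a spreadsheet-style structure for qualitative coding.
    from collections import defaultdict

    # Each row: (participant_id, task number, excerpt, labels applied)
    coded_data = [
        ("P-4f9a2c", 1, "I don't know what this flag does", ["confusion", "docs-gap"]),
        ("P-4f9a2c", 2, "oh, that was fast", ["positive"]),
        ("P-9b31d0", 1, "is it frozen?", ["confusion", "no-feedback"]),
    ]

    # Group excerpts by label so you can review the data behind each code...
    excerpts_by_label = defaultdict(list)
    for pid, task, excerpt, labels in coded_data:
        for label in labels:
            excerpts_by_label[label].append((pid, task, excerpt))

    # ...and count label frequency to spot trends and priorities.
    for label, rows in sorted(excerpts_by_label.items(),
                              key=lambda kv: len(kv[1]), reverse=True):
        print(f"{label}: {len(rows)} excerpt(s)")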
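
For the quantitative side of reporting, standard-library statistics are enough to produce the ranges and medians mentioned above; modes are most useful for categorical metrics like survey ratings. The task names and completion times below are hypothetical.

    # Minimal sketch: per-task descriptive statistics for a timing metric.
    import statistics

    completion_times = {  # hypothetical data, in seconds
        "Task 1: install the tool": [210, 340, 195, 260],
        "Task 2: run the example": [95, 120, 88, 410],
    }

    for task, times in completion_times.items():
        print(task)
        print(f"  n={len(times)}  median={statistics.median(times):.0f}s  "
              f"range={min(times)}-{max(times)}s")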