
Four things you need to know about human factors validation for your mobile app

Whether it’s breathing new life into aging patents or capitalizing on the quantified-self craze, pharmaceutical companies are finding ways to expand the reach and utility of their drug brands by developing digital companion applications that track, monitor, log, and calculate therapeutic data. If you are a product manager considering developing an app for that, you should know that the app may be subject to some of the same human factors regulatory requirements that drug delivery systems must meet.

Given the simplicity of the tasks and the supporting visual design in many of these apps, it can be shocking to realize just how much effort and coordination go into planning and preparing for a human factors validation test, especially where the perceived risk of harm is slim to none. After all, it’s software, not a device, right? Wrong. If the software provides information or data used to make decisions about the administration of care, there is a good chance human factors and risk will be assessed much as they are for a medical device. It’s true that rigorous attention to detail is required to create and adhere to a robust, effective human factors validation protocol. But it’s not impossible! Here are four common stumbling blocks, and how to avoid making mountains out of molehills.

Before you start:

  1. Know how it’s done IRL (in real life): Consider instances where the official prescribing information may differ from the rules of thumb real people actually use. We’ve seen cases where the app design was bound by the upper and lower dose limits and injection rotation requirements in the prescribing information. In testing, however, we discovered that real doctors, nurses, and patients tended to bend these rules according to their own circumstances and clinical judgment. If the app is rigid and doesn’t accommodate or reflect real use scenarios, it won’t just be confusing and frustrating; it may be entirely unusable. (A minimal sketch of one way to build in that flexibility follows this list.)


  2. Don’t just automate; provide a service. Make sure the utility of the app clearly outweighs the effort required to seek it out, download it, and learn to use it. If a dosing app designed for nurses just multiplies a number by two, a calculation they can almost always do in their heads, why would they bother with an app? If the interface visualizes data in irrelevant ways, how will it support decision making? No one wants to watch participants in a validation study asking, “Why should I care about this?”
  3. Understand the risk of harm: The FDA is primarily concerned with patient safety. Think through and analyze the potential risks to the user associated with unintentional misuse of the app. The potential harm to someone who miscalculates or misinterprets a recommended insulin dose is far more obvious than the potential harm to someone who misreads an injection rotation diagram, but it is still the manufacturer’s responsibility to conduct due diligence and determine the criticality of foreseeable user errors. With criticality defined and mapped onto a task analysis, the next step is to carefully define the essential and critical tasks in your study protocol and spell out in detail the conditions of success and failure. You’d be surprised how many different circumstances can lead to a participant doing, or not doing, something that is part of the expected task workflow. Know in advance which deviations are acceptable, which are test artifacts, and which represent a true use error that needs to be analyzed for root cause and residual risk. This is a challenging proposition for device validation, and it gets even trickier when testing perception and interpretation of screens or data in an app. Decide ahead of time what success needs to look like: Does each participant need to understand the concept behind the app’s inputs and outputs? Do they need to interpret trends? If so, decide what needs to be interpreted and how, and know how the researcher will recognize when it has been interpreted correctly. (A sketch of how these definitions might be captured follows this list.)


  4. Engineer your data: When designing your test protocol, decide whether you will test with pre-defined (simulated) data or let participants use their own personal reference points when performing tasks with the app. This isn’t limited to name, email address, and date of birth; it can include other key assumptions about the user’s identity and training, such as multipliers and dosing protocols, as well as familiar volume increments and conversion methods. If you are building an app that calculates something a certain way, make sure you recruit participants who do it that way too, or at least establish the participants’ frame of reference before administering tasks. If you are asking participants to draw meaning from trend data, make sure the trends displayed would make sense for a real person and haven’t been randomly generated (see the data sketch after this list). In other words, think about the variability that could be introduced if you allow participants to use their own points of reference, but balance it against the test artifacts that could result if you don’t.
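Picking up the first point about rigid limits, here is a minimal sketch of one way an app could treat labeled limits as advisory rather than absolute. The limit values, function name, and message wording are illustrative assumptions, not taken from any actual prescribing information.

```python
# Hypothetical sketch: treat labeled dose limits as soft warnings rather than
# hard blocks, so the app can still record what clinicians actually do.
# LABELED_MIN_UNITS / LABELED_MAX_UNITS are made-up values for illustration.

LABELED_MIN_UNITS = 2.0   # assumed lower limit from the prescribing information
LABELED_MAX_UNITS = 40.0  # assumed upper limit from the prescribing information

def check_dose(units: float) -> dict:
    """Return the entered dose plus advisory flags, without rejecting the entry."""
    flags = []
    if units < LABELED_MIN_UNITS:
        flags.append("below labeled minimum; confirm with prescriber")
    if units > LABELED_MAX_UNITS:
        flags.append("above labeled maximum; confirm with prescriber")
    # The entry is accepted either way; flags drive a confirmation prompt
    # instead of an error that blocks a legitimate real-world workflow.
    return {"units": units, "flags": flags}

print(check_dose(44.0))  # flags the entry but still records it
```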
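For the third point, the sketch below shows one way a study protocol’s critical tasks, success criteria, and deviation categories might be written down so moderators score observations consistently. The schema, task, and categories are hypothetical examples, not a regulatory standard.

```python
# Hypothetical sketch of how a study protocol might encode critical tasks,
# success criteria, and how observed deviations get classified. Field names,
# tasks, and categories are illustrative assumptions.

from dataclasses import dataclass, field
from enum import Enum

class Deviation(Enum):
    ACCEPTABLE = "acceptable variation"   # known, harmless shortcut
    ARTIFACT = "test artifact"            # caused by the test setup itself
    USE_ERROR = "use error"               # needs root-cause and residual-risk analysis

@dataclass
class CriticalTask:
    name: str
    criticality: str                 # e.g. "critical" or "essential", mapped from the risk analysis
    success_criteria: list[str]      # what the moderator must observe to score a pass
    known_deviations: dict[str, Deviation] = field(default_factory=dict)

interpret_trend = CriticalTask(
    name="Interpret 7-day dose trend screen",
    criticality="critical",
    success_criteria=[
        "States whether doses are trending up, down, or stable",
        "Identifies the day with the highest logged dose",
    ],
    known_deviations={
        "Asks moderator to repeat the question": Deviation.ARTIFACT,
        "Reads the trend as stable when it is rising": Deviation.USE_ERROR,
    },
)
```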
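And for the fourth point, this is a small sketch of what “engineered” test data could look like: a dose log that follows a believable titration pattern rather than random values, so the trend screen shown to participants has a meaning they can reasonably be expected to find. The function name, dates, and values are illustrative.

```python
# Hypothetical sketch of engineered test data: a dose log that follows a
# plausible clinical pattern instead of randomly generated noise.

from datetime import date, timedelta

def build_titration_log(start: date, start_units: float, step: float, days: int) -> list[dict]:
    """Build a steadily increasing dose log, the kind of pattern a real
    titration schedule might produce."""
    return [
        {"date": (start + timedelta(days=i)).isoformat(),
         "units": round(start_units + i * step, 1)}
        for i in range(days)
    ]

# Seven days of gradual up-titration the moderator expects participants
# to describe as "increasing".
scenario_data = build_titration_log(date(2015, 3, 2), start_units=10.0, step=0.5, days=7)
```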


For more information, contact Kirsten Bruckbauer at kirsten.bruckbauer@gfk.com.
