When applying human factors engineering in medical and drug delivery device development, the end goal for manufacturers is a successful validation study. Proper application of best practices in human factors engineering throughout the development process, not just at the end, is how that success is achieved. Having managed and executed hundreds of such studies, we’ve observed some common pitfalls that, if not navigated properly, will likely result in FDA requests for additional research – pitfalls that can lead to time wasted, money lost and effort exhausted.
The following four best practices show how to apply human factors engineering to save time and money and increase your rate of success:
1. Ensure participants in the human factors validation testing are representative of intended end users.
Do not assume. Base your definition of intended users on data gleaned from past research, and document the inputs to your definition. We often see incomplete or incorrect assumptions about the nature of the end user. For example, during one study a manufacturer assumed physicians would use a particular device to accomplish a task, but ethnographic observations revealed that physicians typically handed the device to a nurse to complete the task.
FDA guidance indicates that human factors validation testing must include participants who are representative of intended end users (adult patients, pediatric patients, various types of HCPs and caregivers). In some cases, support personnel (i.e., staff who perform equipment maintenance, repairs, cleaning, etc.) may need to be included as a separate user group, likely with separate tasks. The FDA requires a minimum of 15 participants per user group, and sometimes more.
2. Assess tasks and sub-tasks associated with product use with sufficient granularity to truly understand failure modes.
It’s crucial to perform a task analysis that is granular enough to identify every interaction a user has with an interface, breaking those interactions down into elements of perception, cognition and action. This helps to understand key failure modes. For example, we conducted formative research for a manufacturer with the goal of identifying opportunities for refinement in the packaging and labeling for a drug. Previously, a graphic designed to communicate the proper dose had been made larger in an attempt to reduce improper dosing. We helped the manufacturer redesign, rather than enlarge, the graphic and saw a reduction in improper dosing in later research.
For critical (and essential) tasks, it’s crucial to observe behavior through simulated-use scenarios because what users say they would do versus what they actually do can be vastly different. Craft each scenario allowing participants to demonstrate what they would do if they were at home/at work/in other intended use environments. Control environmental factors (light, sound, distractions, etc.) to be representative of the intended use environment.
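To make the idea of granular task decomposition concrete, here is a minimal sketch of how a single use step might be broken into perception, cognition and action elements, each paired with a potential failure mode. The task, elements and failure modes shown are hypothetical illustrations, not drawn from any study described above:

```python
from dataclasses import dataclass, field

@dataclass
class TaskElement:
    # Each sub-task element is classified as perception, cognition or action,
    # and paired with a potential use-related failure mode.
    kind: str            # "perception", "cognition" or "action"
    description: str
    failure_mode: str
    critical: bool = False  # could this failure lead to harm?

@dataclass
class Task:
    name: str
    elements: list = field(default_factory=list)

# Hypothetical decomposition of a dose-selection step on an injection pen.
select_dose = Task("Select dose", [
    TaskElement("perception", "Read dose value in dose window",
                "Misreads dose due to small font", critical=True),
    TaskElement("cognition", "Compare displayed dose to prescribed dose",
                "Confuses units (mg vs mL)", critical=True),
    TaskElement("action", "Rotate dose knob to prescribed dose",
                "Overshoots dose and does not correct", critical=True),
])

def critical_failure_modes(task):
    """Return the failure modes tied to critical task elements."""
    return [e.failure_mode for e in task.elements if e.critical]
```

A structure like this makes it straightforward to confirm that every critical task element has an identified failure mode before simulated-use scenarios are designed around them.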
3. Conduct preliminary analyses with an eye towards defining and documenting context of use in addition to designing the product and associated materials.
Product manufacturers often assume that because they have implemented a training program, all of their users will be trained as they prescribe. But when we’ve conducted contextual inquiries or ethnography in clinical settings, it’s not at all uncommon to hear that some clinicians have skipped training, or that a “train the trainer” model is only loosely followed. This results in scenarios where a user might interact with the device without any formal training, or long after they were initially trained.
Taking the opportunity during preliminary analyses to evaluate the context of use – who is using the product, and how – is just as important as formative usability testing for ensuring that safe and effective use can be validated at the conclusion of the human factors effort.
4. Prepare for complexity of validation by establishing robust team training on best practices in application of human factors engineering, and control for quality and consistency.
In validation studies, sample sizes are typically larger. Representative user populations are often difficult to identify and may require data collection across multiple markets. Representative contexts of use must be simulated carefully. Add to all this the variety of team members involved in executing such an effort: research leads, participant recruiters, site coordinators, moderators, note-takers and trainers. It is important to have a robust system in place that ensures the team is appropriately trained on research protocols, that good documentation practices are adhered to, that a robust root cause analysis has yielded sufficient understanding of all observed use errors and that any adverse events have been reported. Any misstep means, at best, significant time, effort and cost spent documenting and explaining deviations from protocol. At worst, the validity of your data falls into question, leaving you with a need to conduct more research.
Ultimately, implementing these best practices will not only support a successful validation study, but they are also critical to ensuring the product you are developing lives up to the promise of your innovation by delivering a superior user experience.
For more information on our best practices for safeguarding drug and delivery device innovations, contact email@example.com.