This article is re-posted from User Centric’s blog.
How do we ensure ‘realistic training conditions’?
Much of the recent discourse between the medical device industry and federal regulators focuses on meeting the criteria for a successful summative human factors validation study according to existing standards and guidance. Frequently discussed topics include proper preparation, sufficient reporting of results, and demonstrating that the device in question is safe for use by the intended user population. One common stumbling block during preparation is defining how participants in the study will be trained. This point matters because if participants in a validation study are trained improperly, the results of the study can be invalidated.
The simple solution is to train participants in such a way that realistic training conditions are simulated during the validation study. Defining “realistic training conditions,” however, is easier said than done. The research team must define these conditions prior to summative human factors validation studies. This discussion assumes that the team has already outlined a reasonable simulation of realistic training conditions during the formative research phase, including:
– Who administers the training
– Topics covered during training
– Training environment
– The amount of interactivity during the training process
– The amount of delay between the conclusion of training and when the user first interacts with the device independently (training decay)
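One way to keep these dimensions from being defined ambiguously is to document each training condition in a structured form before the study. A minimal sketch in Python follows; the class and field names are purely illustrative (they are not from any standard or guidance), and the example values are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical record of one training condition, capturing the dimensions
# listed above so they can be agreed on before the summative study.
@dataclass(frozen=True)
class TrainingCondition:
    trainer_role: str        # who administers the training
    topics: tuple            # topics covered during training
    environment: str         # training environment
    interactivity: str       # amount of interactivity during training
    decay_hours: float       # delay between training and first independent use

# Example (all values invented for illustration):
condition = TrainingCondition(
    trainer_role="field nurse educator",
    topics=("dose preparation", "injection", "disposal"),
    environment="simulated clinic room",
    interactivity="hands-on with return demonstration",
    decay_hours=24.0,
)
```

Freezing the record (`frozen=True`) is a small design choice: once the team signs off on a condition, it should not be modified mid-study.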
Even after these realistic conditions are defined, it is challenging to ensure the training is administered consistently throughout a human factors validation study, which is necessary to gather valid data. Will trainers be given strict instructions to follow a script exactly? Or will they be given a checklist and the freedom to cover the topics however they choose?
Scripts vs. Checklists
There are advantages and disadvantages to each training approach. If all trainers in a human factors validation study are given a script to follow closely while training participants, it is easier to ensure consistency, assuming the script is followed. But some measure of realism may be lost in this approach: depending on the device, it may be unlikely that real-world trainers would follow a script when training new users.
If trainers are instead given only a checklist of topics to cover and the freedom to describe those topics however they choose, this may better simulate the variability present in the real world. However, the amount of variability may actually be greater in the simulated use setting, since study trainers may not have as much experience in that role as real-world trainers would.
Identify Multiple Scenarios Early, Enforce Consistency Within Scenarios
In our experience, we have had the most success when identifying different training scenarios or conditions early on and enforcing consistency within those conditions. For example, if there is debate about whether trainers will provide all the detail included in one version of a script, perhaps there are actually two conditions: a high-detail script and a low-detail script. Within each condition, trainers should be consistent in how they administer instructions. This is preferable to providing only a loosely defined checklist of topics, because variability between trainers within a training scenario can threaten the validity of comparisons between scenarios.
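The "consistency within conditions" idea can be checked mechanically from session logs. The sketch below assumes a hypothetical log format (a list of dicts with invented keys `condition` and `script_version`) and simply verifies that every session within a condition used the same script version; it is an illustration, not a prescribed method.

```python
# Hypothetical session log: each entry records which training condition a
# participant was assigned to and which script version the trainer used.
sessions = [
    {"participant": "P01", "condition": "high_detail", "script_version": "HD-v2"},
    {"participant": "P02", "condition": "high_detail", "script_version": "HD-v2"},
    {"participant": "P03", "condition": "low_detail",  "script_version": "LD-v2"},
]

def consistent_within_conditions(sessions):
    """Return True if no training condition mixes script versions."""
    seen = {}
    for s in sessions:
        first = seen.setdefault(s["condition"], s["script_version"])
        if first != s["script_version"]:
            return False
    return True
```

A run with a mismatched entry (e.g. one high-detail session logged against the low-detail script) would return `False`, flagging a protocol deviation before the data are analyzed.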
If consistency within training conditions is not enforced in some systematic way, it will not be possible to determine whether observed variation in performance results from the design of the interface or is merely an artifact of inconsistent training. Variability in how users are trained in the field is unavoidable, but implementing consistent, though perhaps systematically manipulated, training protocols in a summative human factors validation study is necessary to understand the impact of that variability.