Speech recognition technology has been widely adopted in clinical practice as a time-saving, cost-effective aid to documentation. It clearly helps reduce documentation overload, but how safely and reliably can it be deployed in a healthcare setting? A study published in JAMA Network Open in 2018 found that clinical documents generated by automatic speech recognition (ASR) software are susceptible to errors, and that these errors can create patient safety issues, which is a cause for serious concern.

Assessing the accuracy of ASR in EMR documentation

The study published in JAMA Network Open set out to identify and analyze errors in clinical documents dictated using ASR technology and reviewed by professional transcriptionists. The researchers randomly selected 217 notes dictated by 144 physicians using speech recognition software at two healthcare facilities: Partners HealthCare System in Boston and the University of Colorado Health System in Aurora. These were their findings:

  1. Of the 217 notes, 96.3% of the unedited versions generated by the speech recognition software contained errors.
  2. Deletions were the most common error type, accounting for 34.7% of errors, followed by insertions at 27%.
  3. Errors were also frequent in converting numbers. For example, "17-year-old" was incorrectly rendered as "70-year-old". Missing or incorrect prefixes were another source of error: the word "inadequate" was erroneously transcribed as "adequate".
  4. Several other potentially serious medical errors were noted. In the notes generated by the speech recognition software, the medical condition DKA (diabetic ketoacidosis) was incorrectly rendered as "Dengue". In another instance, the software transcribed a patient's "groin mass" as "grown mass".
  5. The error rate was 7.4% in unedited documents and dropped to 0.4% after the documents were manually edited and reviewed by a transcriptionist. At this stage, however, 26.9% of the remaining errors involved clinical information, of which 8.9% were regarded as "clinically significant".
  6. After the physician signed off on the final version of the clinical notes, the error rate fell further, to 0.3%. Even so, roughly one out of every 250 words still contained a clinically significant error.
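The error categories in the findings above (deletions, insertions, and substitutions such as "groin" → "grown") are the same operations used to compute word error rate (WER), the standard accuracy metric for ASR output. A minimal sketch in Python, assuming a whitespace-word Levenshtein distance; the function name and example transcripts are illustrative, not drawn from the study:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER between a reference transcript and an ASR hypothesis:
    word-level edit distance (substitutions, insertions, deletions
    each cost 1) divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # i deletions to reach empty hypothesis
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # j insertions from empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

# Illustrative example mirroring two error types from the study:
print(word_error_rate("17-year-old male with groin mass",
                      "70-year-old male with grown mass"))  # → 0.4
```

Two substituted words out of a five-word reference give a WER of 40%, which shows why even a handful of single-word errors can dominate the score on short clinical notes.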

Although speech recognition software was introduced to make medical transcription more efficient, it remains prone to errors. The findings of the 2018 JAMA Network Open study demonstrate the need for human intervention: manual review and editing, software training, quality assurance, and auditing. These steps improve the quality of speech recognition-generated documents, which in turn protects patient safety.
