Sunday, July 18, 2010
Adopting voice-recognition software – Am I an innovator or am I reckless?
Our traditional dictation and transcription system in the office had been used for several decades. We dictated our correspondence onto cassette tapes, and this was transcribed by our staff in the office. In addition to dictating letters and consultations, we would also include any instructions, such as x-rays to schedule or follow-up office appointments. We generate a lot of letters every day in the office, as this is a consulting practice and we try to communicate promptly back to referring physicians. We were finding that our office transcriptionists were having difficulty keeping up with the volume of transcription generated every day.
Several years ago, we switched to using digital voice recorders and sending the files offshore to be transcribed. Our dictation would be transcribed into a Word document, which was then returned to us electronically. Our staff would still have to paste the document into our electronic medical record so that it was assigned to the correct patient. They would also add the referring doctor's address and the patient identifying information. A letter would then be printed out and given to the doctor to proofread. Staff would then fax it to the referring physician.
When we switched to a different EMR system last fall, we stopped printing out letters, but the letter would still be placed in an electronic queue that had to be reviewed by each urologist before being faxed to the referring doctor. If I were to be away from the office for more than a week, I would leave instructions for staff to send out letters "dictated but not read". This would speed the process of getting the consultation letter back to the referring physician, rather than waiting until I returned to work. However, even though our transcriptionists are very diligent in looking for errors in our letters, a misplaced decimal point in a drug dosage or laboratory result, or the word "not" inserted or omitted by accident, can completely change the meaning of a sentence. As such, I prefer to proofread all my letters. The downside is that the letter has to come back to me and I have to spend the time reading it, sometimes referring back to the patient's chart to see whether the information contained in the letter is correct.
At best, the time from dictation to receipt of the letter by the referring physician would be 48 hours. That's a pretty good turnaround time. However, reviewing dictation tends to be a low priority compared with reviewing lab reports or returning patients' phone calls. As such, letters would sometimes wait a week before being faxed to the referring physician.
With the Dragon voice recognition software, we hoped to be able to dictate consultation letters directly into our EMR. Because the EMR takes the text of our consultation and then generates all the "fixin's" for the letter (e.g. letterhead, date, referring physician name and address, salutation, patient identifying information), we wouldn't need our transcription staff to do that. It's a matter of only a few mouse clicks to get a consultation letter faxed directly to the referring physician.
That means that our consultation letters get to the referring physician almost immediately after we've seen the patient. But, this improvement in turnaround time isn't the main reason that we decided to try voice-recognition software.
Being able to see my dictation immediately lets me correct any errors right away rather than needing to see the letter again for proofreading. While proofreading usually only takes a few seconds, I sometimes need to return to the patient's chart to double check lab or x-ray results. When there are 20 or 30 letters to check at a time, this review can take 10 or 15 minutes. So, voice-recognition software may be a way to improve our workflow.
Also, our current dictation system carries the cost of offshore transcription, plus the time of our office transcriptionists who receive the transcribed text and generate letters in our EMR. The voice-recognition software is a one-time cost, and we should be able to save the fee from our offshore transcription service.
Theoretically...
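To put a rough shape on that "theoretically", here's the kind of back-of-the-envelope arithmetic involved. The dollar figures are placeholders I've made up for illustration, not our actual license price or transcription contract:

```python
# Back-of-the-envelope break-even estimate: a one-time software license
# versus an ongoing monthly transcription fee.
# Both dollar figures below are hypothetical placeholders.

license_cost = 1500.00              # one-time software cost per physician (assumed)
monthly_transcription_fee = 400.00  # offshore transcription per physician (assumed)

break_even_months = license_cost / monthly_transcription_fee
print(f"License pays for itself after {break_even_months:.1f} months")
# With these made-up numbers, under 4 months; after that, the saved
# transcription fee is (theoretically) pure savings.
```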
While the latest version of Dragon is quite impressive right out of the box, it does take some training to allow the software to recognize your voice and patterns of speech. The software comes with several prepared texts that the user reads to train it. We are using the medical version of Dragon, which has several medical scripts to read. It's a fairly lengthy process that takes 2 or 3 hours to go through. However, it was immediately obvious that training the software made a big difference in how well it could recognize my voice.
Also, as I do daily dictation, any errors that the software makes can be corrected, and the program can be "trained" to recognize how I pronounce certain words. This has been very important with some medical vocabulary. However, I have found that, even with repeated training on the same word, Dragon keeps making the same mistake. For urologists, having to repeatedly correct "nephrostomy" (often transcribed as "frosty me") and "bladder" (often spelled "blatter") can be quite annoying. However, in this, my third week of using Dragon, I've noticed marked improvements in how it recognizes my voice and gets the spelling correct. Or perhaps I have become more accustomed to speaking slowly and clearly, with better diction. Either way, I'm more satisfied this week than I was in the first 2 weeks.
Even so, it's obvious to me that using Dragon voice recognition takes a little bit longer than our traditional system of dictating into a recorder and then handing that recorder to our staff. Many of the corrections and all of the formatting of letters are then done by our transcription staff. The question is whether overall workflow improves (including initial dictation, proofreading and getting the letter out to the referring physician) with voice recognition software. After I had been using Dragon for 2 weeks, I did a little trial on this. I wanted to compare how long it took to dictate a consultation letter using Dragon versus how long our traditional dictation would take.
Initially, I thought I would measure the difference by timing how long it took to dictate a letter in Dragon, including any corrections. I would then do a "simulated dictation" by reading the Dragon letter that I had just dictated at about the same speed I was used to dictating into a digital recorder. I expected that the second reading would be quicker, but it seemed somewhat artificial, because the second reading would not require any references back to the chart to look up x-ray results or lab data.
With that in mind, I decided to do the simulated dictation first, including pauses to look back at chart results or think about what I wanted to say in the next sentence. I would then dictate the same consultation letter (from memory) in Dragon, trying to re-create the same content. I would pause to make corrections and also include the time for review/proofreading at the end of the Dragon dictation. This method probably wouldn't stand up to scientific scrutiny, but it seemed like a reasonable comparison for my needs.
I measured dictation for 4 patients (admittedly, a small sample size) on July 9. The average "simulated" dictation time (mm:ss) was 1:54, and the average Dragon time was 2:48.
I had estimated that 2 minutes would be my average time to dictate a full consultation letter, which fits with the simulated times. The Dragon dictations took about 50% longer, or nearly an extra minute per letter. While this doesn't sound like much, it adds up to roughly an extra 15 minutes of dictation for a half-day clinic of 16 patients. In one case, the Dragon dictation was especially lengthy, as there were many medical/urologic terms that I had to correct, train the program on, or type in by hand. This was quite frustrating.
Then, I realized that I had missed one part of the workflow, namely receiving the simulated dictation back for proofreading. I didn't want to do a simulated proofreading immediately after dictating these letters, as I felt it would not realistically represent the 2- to 3-day lag between dictation (when the patient's medical record is fresh in my mind) and review. I wanted to leave some time before reviewing the letters so that I would not remember details of lab results and x-ray reports. If it was necessary to refer back to the chart, I would include that time in the "review time".
The average review time for these 4 letters was 0:27.
This was somewhat artificial as well, because all the letters that I was reviewing were ones that I had already proofread as I dictated them in Dragon. I had already corrected all the mistakes, so it was just a case of reading straight through the letter; I did not need to stop and make corrections. Also, these particular letters didn't involve a lot of lab data or x-ray information to review. So, the review time I measured is probably the shortest possible time.
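Putting the numbers together: since my Dragon times already included corrections and proofreading, the fair comparison is Dragon's all-in time against the simulated dictation plus the later review. Here's a quick sketch using the averages above (times in seconds):

```python
# Per-letter workflow comparison, from my July 9 measurements (4 patients).
SIMULATED_DICTATION = 114  # 1:54 average, traditional-style dictation
REVIEW_TIME = 27           # 0:27 average proofreading, done days later
DRAGON_DICTATION = 168     # 2:48 average, corrections and proofreading included

traditional_total = SIMULATED_DICTATION + REVIEW_TIME    # 141 s, i.e. 2:21
extra_per_letter = DRAGON_DICTATION - traditional_total  # 27 s

LETTERS_PER_CLINIC = 16    # a typical half-day clinic
extra_minutes = extra_per_letter * LETTERS_PER_CLINIC / 60
print(f"Extra time per letter: {extra_per_letter} s")
print(f"Extra time per half-day clinic: {extra_minutes:.0f} min")
# About 27 s per letter, or roughly 7 minutes per clinic; and remember,
# that 27 s review time is probably the shortest possible.
```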
Even factoring in the review time, Dragon dictation is taking longer. As I mentioned before, I made these measurements when I had been using the voice recognition software for about 2 weeks. Over the last week, I have noticed a definite improvement in accuracy and my ability to dictate at a more rapid and natural pace of speech. In fact, I've been dictating all of this blog post in Dragon and have been quite pleased with the software's accuracy. Of course, I'm not using a lot of medical jargon and that does seem to make a difference.
During a trial period, 4 of us are testing the Dragon software. It's fairly expensive, and we didn't want to implement it for the whole office if it looked like it would not be useful. At this point, I think I will stick with the Dragon software, but I don't think it would suit all of our partners. It required a lot of extra work for the first 2 weeks, and there was a lot of frustration with having to make corrections and train the software properly. Unfortunately, all of that extra work has to be done while carrying on with our regular clinical work. If there were an obvious and pronounced workflow improvement, that would be a big selling point for my partners who are less "technologically keen". Perhaps I will get to the stage with Dragon where I can make that claim to them, but at present, I don't think it will be worth the frustration for them to try this software.
Obviously, we selected the 4 partners who were most keen on new technology to try out the Dragon voice-recognition software. Even so, there have been different levels of enthusiasm, and it's not clear that everyone is going to stick with it. We will only know in retrospect whether it was worth trying. Even if just a few of us keep using it, however, we should save a significant amount of money on the transcription that we were previously outsourcing.
The uncertainty as to whether our trial of voice recognition software will turn out to be a success or failure made me think about that classic representation of diffusion of innovation -- the Rogers curve. Even if you don't recognize the name, you've likely seen this bell-shaped curve before. At one end of the curve are the innovators who take a risk in adopting changes very quickly. Early adopters are next, followed by the early majority. The late majority and laggards accept change last. The subtext of this model is that the innovators are brilliant and the laggards are Luddites.
This interpretation depends on which innovation you choose. For something that has, in retrospect, changed lives for the better, such as electricity or handwashing, the Rogers curve makes sense. But what if we choose an innovation that turns out to be unsuccessful or harmful, such as thalidomide or drilling a deepwater oil well in the Gulf of Mexico? In that case, I propose a different version of the innovation uptake curve. (If you want to start calling it the Visvanathan curve, who am I to stop you?)
In this curve, the innovators would be "reckless", early adopters would be "foolhardy", and the early majority would be "conformists". The late majority would be "skeptics", and the laggards would be renamed "fine, sensible folk - brilliant, in fact!" It would all depend on whether or not time and society judged the particular innovation to be successful.
It remains to be seen whether trying the Dragon voice recognition software is going to rank me as an innovator or as reckless.
Sunday, July 4, 2010
Private CT clinics: Cornucopia or Juggernaut?
Get your reading glasses on. And get ready to rumble. It's time for health policy cagefighting! In this corner – the Advanced Access Aficionado. In the other corner – politicians, bureaucrats and political commentators. Guess who's wearing black?
Last month, the Saskatchewan government announced that it was looking for a 3rd party supplier to provide CT scan services (1). The intent is to reduce wait times. Of course, that got my attention.
(Note: Because some links to media sources seem to vanish unpredictably, I had included the text of all the stories referenced in this post in an appendix. P.S. July 12, 2010: Because of some concerns about copyright, I have removed the text that was initially pasted at the end of this post. So, sorry if the links to op-eds turn into dead ends. KV)
The article focused on the response from the opposition NDP party, namely that this was a step toward the piece-by-piece privatization of health care. Commentary by the Leader-Post’s Murray Mandryk (2) lambasted the NDP for being hypocritical and dogmatic in their opposition to privately-operated CT clinics.
Whether or not the NDP is hypocritical in opposing this CT clinic is beside the point. The clinic has been portrayed as necessary because Saskatchewan needs more CT scanning capacity. Fans of wait time reduction strategies should smell a rat. Healthcare wait times sometimes result from inadequate capacity, but more often from a mismatch between demand and capacity. Over time, a backlog builds, even when demand and capacity are balanced on average.
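That point is counterintuitive enough to deserve a demonstration. Here's a toy simulation (made-up numbers, not Saskatchewan's actual referral data) of a scanner whose capacity exactly matches demand on average, with ordinary day-to-day fluctuation in both:

```python
import random

random.seed(1)
backlog = 0
backlog_total = 0
DAYS = 1000
for day in range(DAYS):
    demand = random.randint(15, 25)    # referrals arriving (average 20/day)
    capacity = random.randint(15, 25)  # scan slots available (average 20/day)
    # Unmet demand carries over to tomorrow, but unused slots on a quiet
    # day can't be banked; so the waiting list tends to drift upward.
    backlog = max(0, backlog + demand - capacity)
    backlog_total += backlog
print(f"Average waiting list over {DAYS} days: {backlog_total / DAYS:.0f} patients")
```

A persistent waiting list emerges even though nothing is "wrong" with capacity, which is why clearing the backlog once and then matching demand to capacity (the Advanced Access approach) is so different from permanently adding scanners.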
Adding permanent capacity to manage a backlog will work but is, in the end, wasteful. Once the backlog is dealt with, you need to mothball that extra capacity. Expensive CT scanners, professional staff and clinic investors don't like mothballs. That's the point I tried to make in an op-ed response (3), giving our clinic's experience with Advanced Access as an example of how to cut wait times without permanently adding capacity.
Weighing in on the same issue was Steven Lewis who, in addition to providing some analysis around safety and appropriateness of CT scans (4), called for open discussion around the risks and benefits of a privately-operated clinic. Stan Rice expressed his skepticism (5) with a financial analysis of private vs public CT scanners.
Mandryk responded to the op-ed pieces with “Informed health debate overdue” (6). While his statement “Like me, many of you might be troubled by the underlying premise that we can somehow turn back the clock by performing fewer diagnostic tests” puts him firmly in the “more is better” camp, I agree with his call for debate around this issue. I don’t think it’s going to happen, though.
The government has already stated its intention to support the privately-operated CT clinic, and has called for proposals. Sask Health doesn't lack expertise around wait time reduction strategies, so I can't imagine that this decision was made without full (internal) discussion of alternatives. Putting myself in the decision-makers' shoes, I can see the appeal of the private option. It's simply easier than opting for the drawn-out process of increasing the efficiency and appropriateness of testing. To say nothing of having to change the culture of "more is better"!
I don’t doubt that this strategy is going to work. Wait times will drop. It will make for some very satisfying headlines. And, as long as that’s as deep as the analysis goes, certain skeptics will be invited to eat their words.
It's very tempting to wonder why "they just don't get it". Why can't "they" see this issue as clearly as I do? But, as soon as I start thinking that way, I play the Switch game in my head. What is it in this situation that I'm missing? If I'm truly convinced that Advanced Access methods can reduce wait times and provide appropriate, timely testing for Saskatchewan, and that building privately-operated capacity is not the answer, what's the appropriate forum for debate? What's the best way to illustrate the admittedly counterintuitive principles of Advanced Access so that policy-makers will embrace them over the more expeditious solution?
If politicians are driven by the belief that citizens need the quick fix afforded by an extra CT scanner, maybe the audience to be convinced is the entire (voting) population of Saskatchewan. I think I’m in over my head.
In answer to the question in the title of this post, it’s both. It’s a juggernaut because it seems unstoppable. It’s a cornucopia because many patients will benefit from the bounty of increased capacity.
But, can you have such a bountiful harvest without some of the fruit going to waste? How much goes to waste, and whether anyone bothers to keep track, remains to be seen.