Friday, July 11, 2008

Healthy Skepticism

Canadian Medicine/National Review of Medicine recently featured an Annals of Internal Medicine paper that reported an attempt to implement Advanced Access in several American primary care practices. The Canadian Medicine post summarizes the study's findings (you can read the abstract here): essentially, that it was difficult to maintain improved wait times in the study groups. The study also found no improvement in other measures, such as no-show rates and patient and staff satisfaction.

As noted by commentators on both the Canadian Medicine and Annals sites, the lack of change in these measures is not surprising: the practices didn't successfully implement Advanced Access, and therefore couldn't be expected to reap its benefits. Advanced Access expert Mark Murray pointedly diagnoses the problems with this study.

It may be that there was a lack of buy-in among the clinic staff in the practices studied. Even though the investigators who wrote the paper and supported the implementation efforts may have been highly committed, if the "troops on the ground" weren’t engaged, the initiative would fall apart.

This report raises the tension between evidence-based medicine’s rigid approach to assessment and the Quality Improvement movement’s "just do something" mantra. IHI’s Don Berwick commented on this in a March 2008 JAMA editorial. He advocates embracing methods of statistical proof other than randomized clinical trials (RCTs). RCTs are notoriously difficult to conduct, and are resistant to mid-course modification should unexpected findings arise. However, other commentators stand by RCTs’ proven value in eliminating unforeseen biases when new treatments, technologies, and techniques are studied.

A classically designed RCT puts me in mind of a "square wheel": it takes a huge amount of preparation and energy to achieve one lurching movement of such a wheel. A Quality Improvement "round wheel" (epitomized by the PDSA cycle), by contrast, can be kept in motion by many small pushes (tests of change).
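The round-wheel idea can be sketched as a loop: each PDSA cycle tests one small change, keeps it if it helps, and discards it if it doesn't. This is a toy illustration only; the function name, the wait-time measure, and the numbers are all made up for the sketch.

```python
# Toy sketch of iterated PDSA cycles (Plan-Do-Study-Act).
# "measure" stands for something like the third-next-available
# appointment wait in days; "change" proposes a small test of change.

def run_pdsa(measure, change, cycles=4):
    """Run several small tests of change; adopt each one only if it helps."""
    for _ in range(cycles):
        proposed = change(measure)   # Do: try a small change
        if proposed < measure:       # Study: did the wait time improve?
            measure = proposed       # Act: adopt the change; else abandon it
    return measure

# Example: suppose each successful test of change trims 20% off the wait.
final = run_pdsa(measure=10.0, change=lambda m: m * 0.8)
# final is 10 * 0.8**4 = 4.096 days
```

The point of the loop is the momentum: an unsuccessful cycle costs little (the change is simply abandoned), whereas a failed "square wheel" push wastes the entire effort.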

In the case of this study of Advanced Access, if you had your shoulder to the square wheel and the first big push didn’t seem to make progress, you’d be very tempted to quit. However, if you were rolling the round wheel, the momentum built up by successful pushes would keep you going despite the occasional failure.

I wonder if, in attempting to quantify Advanced Access, the study authors ignored the fact that it is really part of a larger initiative: Clinical Practice Redesign (CPR). That is, you can’t succeed with AA unless you see it as part of a broader attempt to eliminate inefficiencies in a practice. And I think CPR will not achieve its optimal effect unless it’s considered part of the Patient- and Family-Centred Care concept: all the changes we make should be focused on improving value for our patients/clients – as determined by our patients/clients!

Even though this study has been touted as a failure of Advanced Access to achieve its goals, I think it’s really a demonstration of the challenges of implementing change. It would be interesting to see a measure of the degree of commitment among the staff and team leaders at the start of the project (we’ve used the DICE tool in our office AA project).
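For readers unfamiliar with the DICE tool mentioned above: it is a change-readiness score from Sirkin, Keenan and Jackson's 2005 Harvard Business Review article "The Hard Side of Change Management." The sketch below is my reading of the published formula (each factor rated 1 = favourable to 4 = unfavourable); verify the exact scoring bands against the original article before relying on it.

```python
# Hedged sketch of the DICE score (Sirkin, Keenan & Jackson, HBR 2005).
# Factors, each rated 1 (best) to 4 (worst):
#   d  - Duration between project reviews
#   i  - Integrity (capability) of the project team
#   c1 - senior-management Commitment
#   c2 - local staff Commitment ("troops on the ground")
#   e  - extra Effort demanded of staff

def dice_score(d, i, c1, c2, e):
    """DICE = D + 2I + 2C1 + C2 + E; lower is better (range 7-28)."""
    return d + 2 * i + 2 * c1 + c2 + e

def dice_zone(score):
    # Published bands (as I understand them): 7-14 win, >14-17 worry, >17 woe.
    if score <= 14:
        return "win"
    if score <= 17:
        return "worry"
    return "woe"

# A project with committed leaders but disengaged front-line staff:
score = dice_score(d=2, i=2, c1=1, c2=4, e=3)   # 2 + 4 + 2 + 4 + 3 = 15
```

Note how heavily a poor local-commitment rating (c2 = 4) alone can push a project out of the "win" zone, which is exactly the buy-in problem suspected in the Advanced Access study.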

1 comment:

  1. Originally posted by Steven Lewis (Access Consulting) 07/11/08 12:45 PM

    Kishore, very insightful as usual, on two counts. The first is of course the consequences of a too-fundamentalist adherence to the RCT religion. It is ideal for some tests of hypotheses and interventions and either impractical or just plain inadequate for more multi-dimensional and multi-variate phenomena. The second is in my view even more intriguing: that technical and procedural success is dependent on larger and perhaps less tangible cultural changes. I've frequently mused that perhaps we may be going about improvement backwards. We develop tools and techniques, enlist participants to try them out, and hope for an eventual sea change. It strikes me that if we instilled in people a culture of evidence-seeking, measurement, self-awareness, and enlightened self-criticism - i.e. a quality-oriented, evidence-based culture - they would find their own tools and techniques, instinctively pursue PDSA, and "pull" what we now "push" at them. Teach a man to do an HbA1c and he'll test for HbA1c for a while. Teach him a culture of improvement and excellence and he'll not only do the HbA1c, he'll try different interventions, measure their outcomes, adjust for different patient characteristics, and apply his findings to arthritis as well as diabetes. How do we work on the cultural transformation front other than inductively (i.e., one labour-intensive collaborative after another)? How do we measure practitioners' and organizations' quality-oriented cultural attributes? Would love a blog or two on this one.