Friday, July 25, 2008

Bang for Your Buck

A new MRI machine was installed in St. Paul's Hospital this month. While this latest-generation scanner will add new diagnostic capabilities, its main task is to add diagnostic capacity. That is, it may have some new tricks, but mainly it's going to be doing much more of the same old tricks. Not that that's a bad thing.

Or is it?

A paper in the latest Canadian Association of Radiologists Journal reviewed CT/MRI scan use in Ontario. The authors looked at the stated indications for each examination and correlated them with the final report (normal/indeterminate/abnormal). Any conclusions are limited by the retrospective, chart-abstraction design of the study, and the authors are careful to point this out. However, some findings should prompt further study.

"Less than 2% of CT scans of the brain for headache found abnormalities that could explain the headache." That's a lot of normal CT scans (which, aside from utilization issues, are not completely risk-free). The authors point out that a negative CT scan may still be valuable to reassure the patient, but also wonder whether the same reassurance may come from a frank discussion between physician and patient about the (un)likelihood that the CT scan will show any significant abnormality.

Headache was the stated indication for 26.8% of outpatient brain CTs, so reducing this demand for service could have a significant impact on access to scans.

It sometimes seems more expedient to use the "brute force" approach of adding capacity (more MRI/CT scanners) to manage queues, rather than looking at managing demand (are the tests being ordered appropriately?). I've griped about this before in a different context, namely the CMA's "Help Wanted" campaign to expand the physician pool.

Which brings me to our latest attempts to manage demand in our office. We started thinking about the frequency of internal demand (urologists recalling a patient for review) last fall. I posted some initial data in April. When I circulated the early results on how frequently each urologist was asking for patients to be recalled, my partners found the data confusing and couldn't tell what it indicated. So, we've continued to collect the data and tried to show it in a more useful format.

Wow! That's a lot of variation. Some docs hardly recall any patients at all. Some recall a lot of patients on an annual basis (yellow bars) and some are recalling patients every 3 months (blue bars).

But this first chart we generated is somewhat misleading. It shows the raw number of patients recalled, without accounting for the total number of patients seen by each urologist. We need to look at the recall rate (number of recalls/total number of patients). This will also level the playing field between part-time (lower volume) and full-time practitioners.
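The recall-rate calculation is simple enough to sketch. Here's a minimal example in Python; the urologist labels and counts are entirely hypothetical, invented for illustration (our actual figures are in the charts):

```python
# Hypothetical per-urologist counts, for illustration only.
recalls = {"Dr. A": 120, "Dr. B": 15, "Dr. C": 60}
total_patients = {"Dr. A": 400, "Dr. B": 300, "Dr. C": 150}

# Recall rate = recalls / total patients seen, per urologist.
recall_rate = {doc: recalls[doc] / total_patients[doc] for doc in recalls}

for doc, rate in sorted(recall_rate.items()):
    print(f"{doc}: {rate:.1%}")
```

Dividing by each urologist's own patient volume is what lets a part-time practitioner with few total visits be compared fairly against a full-time colleague.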

Friday, July 11, 2008

Healthy Skepticism

Canadian Medicine/National Review of Medicine recently featured an Annals of Internal Medicine paper that reported an attempt to implement Advanced Access in several American primary care practices. The Canadian Medicine post summarizes the study's findings (you can read the abstract here): essentially, that it was difficult to sustain improved wait times in the study groups. The study also found no improvement in other parameters, such as no-show rates or patient and staff satisfaction.

As noted by commentators on both the Canadian Medicine and Annals sites, the lack of change in these measures is not surprising: the practices didn't successfully implement Advanced Access, and therefore couldn't be expected to reap its benefits. Advanced Access expert Mark Murray pointedly diagnoses the problems with this study.

It may be that there was a lack of buy-in among the clinic staff in the practices studied. Even though the investigators who wrote the paper and supported the implementation efforts may have been highly committed, if the "troops on the ground" weren’t engaged, the initiative would fall apart.

This report highlights the tension between evidence-based medicine's rigid approach to assessment and the Quality Improvement movement's "just do something" mantra. IHI's Don Berwick commented on this in a March 2008 JAMA editorial. He advocates embracing methods of statistical proof other than randomized clinical trials (RCTs). RCTs are notoriously difficult to conduct, and are resistant to mid-course modification should unexpected findings arise. However, other commentators stand by RCTs' proven value in eliminating unforeseen biases when new treatments, technologies, and techniques are studied.