Monday, July 17, 2006

Ding ding

We doctors have come a bit late to the accountability game. There are lots of reasons, I suppose, not the least of which is that it's very hard to come up with meaningful parameters. And comparing one case to another is fraught with difficulties: no two are exactly alike. Nevertheless, it's laudable that attempts are being made. And yet...

The WSMA (Washington State Medical Association) Newsletter arrived today. In it was an announcement of a new program for quality improvement. Primary care docs (another reason I'm glad I'm not one) will be receiving "registries" which will contain patient names, conditions, and treatments, and those treatments will be compared to "best practice" guidelines. Sounds good? Sort of. It's a good concept. Fact is, at the end of my book (hawk, hawk) I bloviate about things needed to fix healthcare, and one of them is figuring out why some docs get better results, and spreading the word. But there's a catch. It's the old "garbage in, garbage out" thing. I note from the Newsletter that the guy running the program is a former partner of mine. Think I'm going to email him with this story:

A few years ago, in preparation for a visit to the hospital by the accrediting folks, our medical director had records reviewed for "dingable" items: such things as untimely dictations of op notes, histories, discharge summaries, etc. Delayed discharges because of failure to order tests on time. Now I happen to be among the most compulsive people on Earth when it comes to such things. My desk is a mess, my shirts (what few I have) are wrinkled, but my charting is perfect. I go to the record room every Monday to sign my charts; I dictate everything exactly when it occurs. I make rounds so often the nurses get sick of seeing me; when things need doing, I get them done. So I figured the data accumulated in this little pre-inspection test run would show me in a most shining light.

Most of the staff had zero or one ding. A few had two. I was among only a couple with four fouls. Mortified, horrified, I tracked down the medical director and demanded to see the charts (there were no specifics in the report, only the number of transgressions). Calm down, he said. It's only a dry run. No one will see it. Oh yeah? I growled. Someone typed the damn thing. At least a few people in medical records think I'm a bad guy. I want to see the charts.

He hemmed and humphed. He said the data were good. Each chart had been looked at by three people. If I had dings, they were real, and they were mine. He said.

If you knew me well, you'd not be surprised at how upset I was. It took me days to harass the director into ordering the record room to cough up the charts. In doing so, he again said I was being ridiculous, and that I'd find it was accurate. Any guesses?

Every single one of the instances involved a trauma patient I'd admitted (the general surgeon is the captain of the trauma team: guy breaks his ankle and nothing else, he might go straight to ortho. Breaks his ankle and has a pimple, it's a trauma team thing, and the surgeon gets called.) In each case, as per protocol, I'd eventually signed off when the patient was stable and down to a pure ortho problem, and the chart issues were ortho's. But I was listed as the admitting doc so the strikes went to me, and none of the three reviewers had noticed.

Not a big deal; not even a great example, perhaps. But the point is this: hospitals get nailed because of mortality rates. Docs get nailed because of failure to do this or that. It's all based on data collected ex post facto from charts. And it's frequently wrong.

Some docs are better than others. It stands to reason, and it's true. But getting a real handle on it -- establishing meaningful, useful, helpful, reproducible parameters by which to judge -- that's not easy. So we've settled, so far, on pretty mundane things. For surgeons, it's whether pre-operative antibiotics were given on time; things like that. Not insignificant, but hardly as important as choosing the right operation, doing it at the right time, and preparing for it and carrying it out properly. So despite a crying need for real quality monitoring, docs will be increasingly subject to very specific data-gathering that will inevitably contain lots of fallacious data and will widely miss the mark, if the aim is to improve care rather than to harass.

It's a start. It's needed. But (another reference to my book) we need to keep our eye on the doughnut and not the hole. It'll take people smarter than me to find the right methods. As it is now, it's like judging your skills as a car owner and driver by whether you keep the tires properly inflated. Not irrelevant; hardly comprehensive.

But if they want to know how to take out a colon in 45 minutes and get the patient home in two days feeling fine, they only need to ask.

1 comment:

Tom Leith said...

> It's a start.

Yes -- we're starting with what we have. Which isn't great. But nothing will improve the data faster than having it used with actual consequences. And it'll take guys like you who are willing to spend n hours badgering the medical records people to make the improvement happen. For your sake I am sorry about that. But as you say, it's necessary.

t
