
Do Hospital Ratings Matter?


Another hospital report card showed up last week, adding to the pile of ratings already available. A few years ago there were more than one hundred, offered by various for-profit and not-for-profit businesses and government agencies. The newest is the Hospital Safety Score report card from the Leapfrog Group, an organization led by employers that buy health insurance for their workers, which aims to improve the safety of U.S. hospitals. Leapfrog hopes to get the public's attention by assigning a single letter grade, from A to F, based on twenty-six different measurements of hospital performance, such as whether physician orders are entered into a computer to avoid handwriting errors.

Hospital report cards have been around for a while now, but the public has never really used them to select a hospital, says Jordan Rau, a writer for Kaiser Health News who reports on hospital quality. A lot of things that put you in the hospital are immediate problems that don't lend themselves to comparison shopping, and consumers are steered to hospitals by their doctors' preferences, their insurance coverage, geographic convenience or their general sense of a hospital's reputation. It's the reputation part that bothers hospitals, many of which have spent gobs of money polishing their image through slick PR campaigns designed to make sure the public thinks well of them.

So, naturally, many hospitals have never liked rating schemes very much and have usually found fault with them, especially when their own facility didn't fare well. Some of the gold-plated hospitals in the country did not get good marks from Leapfrog. New York-Presbyterian in New York City and the Cleveland Clinic both got a C. Leapfrog gave UCLA Ronald Reagan in Los Angeles the mark of "Grade Pending," which Rau points out is Leapfrog's euphemism for below a C. Predictably, the hospitals squealed. The chief quality officer at the Cleveland Clinic, Dr. Michael Henderson, complained that the data Leapfrog used were old. He said the questions the public needs to ask are: Are you working on this? Are you getting better?

Just as predictably, health policy researchers argue that the new Leapfrog ratings are a step forward and will get better over time. As it happens, I found myself in England a few days ago at a workshop for health journalists along with one of that country's experts on hospital safety and quality, Sue Lister. For many years Lister taught health care improvement at Coventry University; she is an emeritus professor of quality and safety in health care at the University of Massachusetts and a teaching faculty member at the Institute for Innovation and Improvement, part of Britain's National Health Service.

Eager to bring a new voice to the American debate, I asked Lister about hospital ratings. "Just because we can rate something doesn't mean it improves quality," Lister told me. "My main fear is that, with this sort of measurement, the things being measured become an inappropriate target, which alters the focus of how we work. Monitoring doesn't change behavior - you don't increase milk yield by weighing the cow - and it is a retrospective measure." Furthermore, Lister questions reducing measurement to a single grade. "I do not see how A to F is logical for something you do or you don't do," she explained. Giving an antibiotic to a patient one hour before surgery, for example, is a good thing, but, she notes, "It's either administered, or it isn't. If grading is about how often something is done correctly, it doesn't actually help." If the antibiotic is given 70 percent of the time, that doesn't help the 30 percent of patients who did not get it and are exposed to possible post-surgical infections. "For them," she pointed out, "it was a 100 percent failure."

Then, too, ratings aren't very helpful if you don't live near the hospitals the rater deems good. Leapfrog found that Massachusetts, Maine and Vermont were the only states where half or more of the hospitals received an "A." But what if you live in South Dakota, Alabama or Arkansas, where at least two-thirds of hospitals got a "C" or lower? You probably don't have much opportunity to go to a hospital with a higher grade. This all circles back to some tough questions for which the answers are not clear: Just what should a consumer make of the Leapfrog ratings? When our local hospital gets an "F," what can we do about it?

Should we "talk to our doctor at that hospital and urge them to improve their safety protocols," as Leapfrog suggests we do? Is it helpful or realistic to hand people numerous checklists to help them "Prepare for your Hospital Stay"? The Kaiser Health News reporter, Jordan Rau, reminded us that "a lot of things that put you in the hospital are immediate problems." Don't the experts behind these hospital ratings understand that too?



Trudy Lieberman, a journalist for more than 40 years, is an adjunct associate professor of public health at Hunter College in New York City. She had a long career at Consumer Reports specializing in insurance, health care, health care financing and long-term care. She is a longtime contributor to the Columbia Journalism Review and blogs for its website, CJR.org, about media coverage of health care, Social Security and retirement. As a William Ziff Fellow at the Center for Advancing Health, she contributes regularly to the Prepared Patient Blog. Follow her on Twitter @Trudy_Lieberman.




Comments on this post


John Lynch says
June 19, 2012 at 11:52 AM

I strongly believe patients need to accept the inevitability of their families' eventual need for hospital care and commit the time necessary to plan for such events. This is unlikely unless it can be facilitated through employer, church or community groupings with incentives for doing so. Employers might offer enhanced insurance coverage to those who complete a patient education program that includes an assessment of local medical resources.

Consumers need to be reminded that planning is the key to preventing unwanted outcomes and financial burdens they'll increasingly be responsible for paying, at least in the U.S. And that's with or without healthcare reform.

As for the mechanics of which hospital (and doctor) rating systems to use, none of them are sufficient by themselves. Figuring out how to integrate their results and reconcile their differences could be one of the challenges to put before those most affected - patients and their families. Employers are probably the most obvious forums for such group activities - especially given their central role, and financial risk, in America's highly fragmented healthcare system.

One potential fringe benefit of this approach is that these groups could evolve into agents for improvement among local medical providers. Hospitals respond to pressure. They get plenty of pressure from all kinds of players in our healthcare system - but seldom from patients themselves. And if hospitals prove unresponsive, a committee of local citizen-patients is more likely to get a hearing from the local media than an isolated patient is.

Hospital ratings are just tools. What's missing are the willing minds and bodies to put them to their best use as tools for patient safety and avoidance of medical bills that threaten to bankrupt even more Americans in years to come.

Jim Jaffe says
June 20, 2012 at 6:05 PM

Don't think I'll get any argument by simply stipulating that these ratings -- and all their cousins -- are imperfect, and that it's somewhat useful to consider their flaws. But the more critical question is, "compared to what?" and whether some ratings, despite their crudeness, are better than none. As a general rule, I think they are, and that such efforts deserve both applause for their commitment and pressure to do better. In a previous inning on this topic, there was a general feeling that (a) patients generally ignore all types of ratings and (b) providers take them seriously and try to get better scores. That makes sense to me, and even if the ratings aren't very helpful to today's patients, they keep pressure on institutions to do better in treating tomorrow's. Folks in the health commentariat have problems with imperfections and are all too hasty in letting the perfect become the enemy of the good. Good is good enough.