[EMAIL PROTECTED] writes:

>I have a question regarding risk-adjustment.  I recently
>read an article where some researchers developed some
>risk-adjustment models for detecting outlier hospitals based 
>on the hospital's c-section rate.  A logistic regression model 
>was fit using data from all of the hospitals in the study. Each 
>hospital's average predicted c-section rate was obtained using
>the model. Then each hospital's "adjusted" rate was calculated
>by dividing the observed hospital c-section rate by the mean
>predicted  rate at that hospital and multiplying that by the
>overall c-section rate of all the hospitals.

>I am hoping to do a similar type thing with some PCP cost
>data.  I want to "adjust" a PCP's cost in hopes of identifying
>outliers this way.  My outcome is continuous (total cost per
>member per month - pmpm) and so I will obviously not use
>logistic regression.  I am somewhat new to risk-adjustment
>and my big question is, can I still do this: obtain each PCP's
>actual pmpm cost, divide it by the predicted cost, and multiply
>by the overall pmpm cost.
>
>Is this "legal"?

I'm glad you put the word "legal" in quotes. You can do just about anything
you want, but the question is: will your approach be well accepted by your
colleagues and peers? Better yet: will it withstand the scrutiny of peer
review?

Before you go too far into this project, you should clearly state your
objectives. For example, you want to identify PCPs (primary care
providers?) that have unfairly high adjusted costs so you can embarrass them
on national TV. Or, you want to find those PCPs with unusually low costs so
they can serve as models or benchmarks for others. Or you want to show that
PCPs in managed care have lower costs than PCPs in a fee-for-service plan.
Unless you have a clear objective in mind, it would be hard to decide what
approach is appropriate (or even if the whole enterprise should be
scrapped--see my last paragraph).

You should read up a bit on the controversy involving "league tables". A
good starting point is

Hospital league tables
Andrew Bamji and Jammi N Rao
BMJ 2001; 322: 992
http://www.bmj.com/cgi/content/full/322/7292/992/a

and be sure to search for the phrase "league tables" at the BMJ web site.

I think it would be safe to say that any results that you publish would be
subjected to criticism and accusations of faulty statistics. Some of these
might be politically motivated. But others will be legitimate; it is very
hard to identify outliers in a heterogeneous population. Risk adjustments are
at best very crude, and many analyses fail to adequately model all sources
of variation. Depending on your objective, you might want to familiarize
yourself with the concept of libel. A simplistic analysis that identifies
individual PCPs and that causes some of them to lose customers might open
you up to some legal problems.

A promising approach is the use of random effects models, empirical Bayes
approaches, and shrinkage estimates (these are all interrelated). These
models, unfortunately, are very complex, and require extensive consultation
with a professional statistician.
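To give a flavor of what shrinkage does, here is a minimal sketch with invented numbers. The grand mean, the two variance components, and the panel sizes are all assumptions for illustration; a real analysis would estimate them from the data with a mixed-model package.

```python
# Sketch of James-Stein-style shrinkage toward the overall mean.
# The point: estimates from PCPs with few members get pulled harder
# toward the grand mean, because their own means are less reliable.

def shrink(pcp_mean, n_members, grand_mean, between_var, within_var):
    """Weight the PCP's own mean by the reliability of its estimate;
    small panels get less weight and shrink more toward the grand mean."""
    weight = between_var / (between_var + within_var / n_members)
    return weight * pcp_mean + (1 - weight) * grand_mean

grand_mean = 250.0    # overall pmpm cost (assumed)
between_var = 400.0   # variance between PCP means (assumed)
within_var = 90000.0  # variance among members within a PCP (assumed)

# Two PCPs with the SAME raw mean but very different panel sizes:
small = shrink(400.0, n_members=20, grand_mean=grand_mean,
               between_var=between_var, within_var=within_var)
large = shrink(400.0, n_members=2000, grand_mean=grand_mean,
               between_var=between_var, within_var=within_var)

print(round(small, 1))  # pulled most of the way back toward 250
print(round(large, 1))  # stays close to its raw mean of 400
```

The small panel's apparently extreme cost is largely discounted, which is exactly the behavior a crude observed/predicted ratio lacks.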

The approach you have seen used strikes me as a bit simplistic. It makes no
adjustment, for example, for uncertainty. Some hospitals have a smaller
case load, and their estimates are more unstable. So an outlier among the
small hospitals may simply reflect normal variation.
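A quick simulation makes the point. The true rate, the caseloads, and the 5-percentage-point flagging threshold below are all invented, but the pattern holds for any fixed threshold: with the same underlying rate, small hospitals get "flagged" far more often.

```python
# Simulate hospitals that all share the same true c-section rate and
# count how often a naive fixed-threshold rule flags them as outliers.
import random

random.seed(0)
TRUE_RATE = 0.20

def flag_fraction(n_deliveries, trials=2000, threshold=0.05):
    """Fraction of simulated hospitals whose observed rate strays more
    than `threshold` from the true rate purely by chance."""
    flagged = 0
    for _ in range(trials):
        births = sum(random.random() < TRUE_RATE for _ in range(n_deliveries))
        if abs(births / n_deliveries - TRUE_RATE) > threshold:
            flagged += 1
    return flagged / trials

small_flag = flag_fraction(50)     # small hospital: 50 deliveries
large_flag = flag_fraction(2000)   # large hospital: 2000 deliveries
print(small_flag, large_flag)
```

Nothing differs between the two groups except caseload, yet the small hospitals are flagged far more often, purely from sampling variation.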

There appear to be other problems with that approach. Suppose there are two
hospitals. One has the lowest actual and the lowest predicted c-section
rate. But the predicted rate is slightly lower than the actual rate. The
other has the highest actual and the highest predicted c-section rate. But
the predicted rate is slightly higher than the actual rate. Do the math. The
first hospital has a worse rating than the second because even though it has
the lowest rate, it failed (perhaps just slightly) to meet expectations. It
seems like an illogical formula, unless I am misunderstanding your
explanation.
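To make that arithmetic concrete, here is the two-hospital scenario with some invented rates:

```python
# Adjusted rate = (observed / predicted) * overall rate,
# as described for the c-section study. All numbers are made up.

overall_rate = 0.20  # overall c-section rate across all hospitals (assumed)

# Hospital A: lowest observed and lowest predicted rate,
# but its predicted rate is slightly BELOW its observed rate.
obs_a, pred_a = 0.12, 0.11

# Hospital B: highest observed and highest predicted rate,
# but its predicted rate is slightly ABOVE its observed rate.
obs_b, pred_b = 0.30, 0.32

adj_a = obs_a / pred_a * overall_rate
adj_b = obs_b / pred_b * overall_rate

print(f"Hospital A adjusted rate: {adj_a:.4f}")  # 0.2182
print(f"Hospital B adjusted rate: {adj_b:.4f}")  # 0.1875
# Hospital A, despite the lowest raw rate, ends up rated WORSE than B.
```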

There's a more fundamental philosophical issue. We have a tendency,
especially in the United States, to want to rank and rate everything in
sight. We have the top 100 movies of the past century, and the Places Rated
Almanac of the best places to live. Many companies are returning to employee
evaluation systems that enforce a quota of at least x percent unsatisfactory
ratings. These efforts to rank and rate seem innocuous enough on the
outside, but do they really serve a useful purpose? What are the hidden
costs? It may be worthwhile to read some of the thoughts of W. Edwards
Deming, Alfie Kohn, and Peter Scholtes. After looking at their perspective,
you may decide that your efforts to identify good and bad PCPs are not
appropriate.

Steve Simon, [EMAIL PROTECTED], Standard Disclaimer.
STATS: STeve's Attempt to Teach Statistics. http://www.cmh.edu/stats
Watch for a change in servers. On or around June 2001, this page will
move to http://www.childrens-mercy.org/stats


