As I wrote here last year for Milliman, the “narrow network” concept, in which health plans offer coverage through smaller networks (at a lower premium), in exchange for increased traffic to the network providers and (hopefully) a better risk profile, has existed for decades. The idea was extensively tested during the integrated delivery system years of the 1980s and 1990s, and while it did produce cost savings, it eventually faded from use for a variety of reasons, notably the insistence of members to continue seeing their existing providers.
However, the idea is currently experiencing new life because of the Affordable Care Act (ACA) and its renewed emphasis on price competition (thanks to the Exchange/FFM environment), as well as alternative reimbursement models and the emergence of risk adjustment. In short: Narrow Networks are back, despite the setbacks of earlier iterations, and are likely to play an increasing role in the years to come.
Even more important will be the re-emergence of quality measures as a method of determining provider reimbursements. Payers and providers utilizing the Narrow Network concept have always experimented with a variety of methods for determining which providers would constitute the networks. Besides the obvious one (lower fees to providers in exchange for the promise of higher member volume from health plans), various outcome/quality measurements have been employed in an attempt to identify the most “efficient” providers. Payers used these measures to create “tiered networks,” in which visiting the “best” providers (as judged by the quality measures) would result in better benefits for members, such as lower copays. While this approach often drew methodological challenges from providers (resulting in the often-heard claim that “my patients are sicker”), the idea of quality measures (or “provider profiling,” as it was sometimes disparagingly called) remained an ideal for both payers and providers to consider, and it is more important now than ever: Reimbursements will increasingly be based on quality measures, and the concept of efficiency (or “value”) in healthcare – efficacy versus cost – will only grow in importance as scarce resources are further reduced. Quality measures are here to stay.
But what is “quality”? As philosopher Robert Pirsig famously notes in “Zen and the Art of Motorcycle Maintenance,” “quality” is an extremely difficult thing to define. In healthcare, entire industries have sprung up around the idea of measuring quality (usually against some predefined benchmark), from episodes of care to case-mix adjustment and much more. So what is the answer?
Many argue that using benchmarks to make resource allocation decisions is the solution (giving rise to the infamous “death panel” accusations), but the underlying issue remains: There is no agreement on the cultural, ethical and moral factors that would go into such a decision. For example: Treatment in the last 30 days of a patient’s life, which accounts for a large percentage of healthcare spending, is obviously “inefficient.” Who decides whether that makes it indefensible?
And what constitutes “efficient” care? Most approaches have traditionally used patient outcomes, but in addition to not automatically taking case-mix adjustment into account, such methods do not measure how efficiently the care was actually delivered. In other words, just because a patient recovers nicely does not mean that the care he or she received could not have been provided for less money. In the “new world” of the compliance-driven ACA, such issues will become increasingly critical.
What if the comparative cost efficiency of, say, two different providers with similar quality outcomes could be traced to differences in the approaches each provider used? In other words, given similar results at different costs, one provider’s approach must be more cost-effective and efficient than the other’s. Again, entire sub-industries have emerged to provide ideal guidelines for care, but even these do not take into account the proclivity of a particular provider to use a particular approach, even when it is more costly. The provider may have good reasons to use that approach, even though it is more expensive and produces the same results as another. But sometimes the result is simply inefficient care, and lower value.
Here’s a personal example: My wife Christy suffers from a lifelong condition in which chronic ear infections (the kind for which kids get “tubes in their ears”) produce scar tissue that blocks her ear canals, diminishing her hearing. As young marrieds in the 1980s, we visited many specialists who proposed (and performed) many different procedures, none of which were effective.
But one specialist stands out in our memories. He insisted that nasal septum repair was the answer – he argued that repairing her nasal passages would resolve the scar tissue in her ear canals. The result was an expensive, painful procedure that had no impact on my wife’s condition.
It was only after the surgery that we realized that this particular specialist performed scores of nasal septum repair procedures each year. His proclivity was to perform that procedure, regardless of whether it was the best one for my wife’s condition. To be fair, determining the “right” procedure was elusive (as is often the case). But the fact remains that this doctor was inclined to perform this procedure, and knowing that in advance would have been useful – for us as patients, for his provider organization, and for the payer. The knowledge would have been beneficial not just for controlling cost or improving quality (although it would have helped with both); it would have improved the value of the entire engagement for all parties.
How often does proclivity play a role in the cost of care? The first step in managing the phenomenon is to measure it. This information will help to truly improve quality while lowering cost. MDBushCo has developed innovative ways to measure proclivity, and powerful ways to take action on the results. Contact me for more information.
Mike Bush