Brunswick Review The Predictions Issue

Healthcare oracle

Richard Evans tells the Brunswick Review about the process of prediction that made him a top healthcare analyst

“Duh.” That was the response from a normally articulate senior editor at a respected publication.

He’d asked me to recommend a healthcare expert who could clarify a complex policy issue, and I’d asked if he’d already spoken to Richard Evans, co-founder of the investment research firm SSR Health.

“Duh.”

Translation: “Of course I’ve spoken to Richard Evans. What kind of journalist would I be if I hadn’t?”

Given Dr. Evans’ influence on the policy debate and media coverage surrounding the US healthcare industry, it was a reasonable response. For 20 years, Dr. Evans has consistently produced high-quality, insight-filled research and bet-the-house predictions on one of the most dynamic, byzantine and fiercely debated industrial sectors.

A graduate of Yale University School of Management, he also holds a degree in veterinary medicine and worked as Vice President for pharmaceutical and diagnostics giant Roche before becoming an analyst for Sanford C. Bernstein, where he was ranked No. 1 by both Bloomberg and Institutional Investor for his US pharmaceuticals coverage. His book, Health and Capital, was published in 2009, the same year he co-founded SSR.

What sets Dr. Evans apart from his peers is how often he has accurately foretold outcomes that few considered probable or even possible: the failure of the Affordable Care Act to reach its insurance enrollment targets; the downfall of Valeant Pharmaceuticals; the over-reliance of pharmaceutical companies on price increases; the precise way that Amazon would enter the drug supply chain. These are but a few of the high-stakes calls Dr. Evans got right and almost everyone else got wrong.

He is used to occupying lonely terrain, holding unconventional and unpopular opinions, to the benefit of his clients. He spoke to Brunswick about the rigor behind his predictions and the important role of humility.

People refer to you as a research analyst, but your track record for accurate prediction implies a wider purview. How would you describe what you do?

On my first day at Bernstein, my boss Marc Mayer said to me, “Where there is controversy, there you must be.” I’ve followed that rule ever since. We try to predict the outcome of controversies that are relevant to our clients, typically money managers, investors or leaders of pharmaceutical companies, wholesalers, pharmacy benefit managers or hospitals. The controversies we seek can take various forms – political, economic or pricing-related, just to name a few. We’re not trying to create controversies. In healthcare, the controversies tend to be easy to find. [Laughs]

How do you define controversy?

You could soften it by saying “uncertainty” – an uncertainty about which people care a lot and on which people hold competing views. Disagreement makes a market. It creates a buyer and a seller, a bull and a bear. That creates capital market interest and a receptive audience for our research.

At the risk of revealing trade secrets, what is your process for predicting the outcome of a controversy?

Our process begins with a simple question: “Why guess?” It seems a little silly, because predicting by its very nature is a guess, but it doesn’t have to be a random or unstructured guess.

So, the first thing we do is establish an analytical framework, and the first step in that is what we call context. We ask whether this or something like this has happened before. That requires an intensive process of researching data sources over the last couple of decades, to find a pattern. Every time this has happened, what was the outcome? That context gives you a default probability and allows you to evaluate your current prediction.

Aren’t your competitors doing the same thing?

In the corporate world and on Wall Street, you just don’t have the luxury of time. At SSR, we’re able to choose a controversy, clear the decks and think about nothing but that controversy for a week or two or four.

Of course, there’s anxiety that goes with that, because if you don’t produce something at the end, you don’t get paid.

What’s your next step?

The next steps are about avoiding bias. First, don’t fall in love with your own data. We know from behavioral research that people overweight information they gather relative to information gathered by others. To avoid that, we try to find peer-reviewed research that speaks to the controversy we’re considering before we gather our own information.

What other mistakes do you try to avoid?

Wishing for your preferred outcome. You may want a certain drug to be approved, or a certain election outcome. Be cognizant of that bias and make sure that you’re not tilting your analysis toward it. It’s a classic human behavioral trap. Stay aware of it and fight it.

As an experienced healthcare specialist, does knowing the system so well make it difficult to assess it objectively?

Experience in the industry you’re analyzing is a very good thing, particularly in healthcare because its terminology is incredibly dense and the economics are unique. It’s the quantum physics of the economic world; everything you’ve learned to that point goes out the window. In healthcare economics, everything is an exception, nothing’s a rule.

Thinking back to the implementation of the Affordable Care Act, health policy circles got pretty much everything wrong. You, however, saw correctly that the government enrollment predictions were overly optimistic. What led you to that belief?

Well, we didn’t exactly cover ourselves in glory with ACA predictions, because we said the act wouldn’t pass! After that, though, we focused on a framework we called “Camry or Coverage.” Specifically, we looked at the net cost of buying healthcare insurance for the average family. When you distill that into a kitchen-table comparison, buying healthcare costs these families as much as the payments on a Toyota Camry. We then looked at available elasticity data, and it convinced us that people would prefer the Camry more frequently than generally expected.

And you were right. What errors did the enrollment bulls make?

Look, I believe everyone should be insured, but if you’re going to give people a choice, then you have to analyze how people might make that choice. You have to remain objective and find out what people’s attitudes actually are. And when you do that, you realize there’s a lot of ambivalence at the margin about being insured. I think people ignored that because they’d fallen in love with their own data and because the data they aggregated showed their preferred outcome. It’s those same two classic emotional traps.

You don’t really advertise being right about these important events. Why is that?

I’ve been wrong plenty of times. Really, really wrong. The media will judge you on one prediction, but your clients judge you on the entire body of evidence.

There was another great comment that I heard early in my career: “If you survive your first prediction, your clients will stick with you. If you get the first one wrong, you’re going down the elevator shaft.” [Laughs]

But when you get one right, you might start thinking, “You know, there is just something about us that’s special.” You can’t do that. Remain humble. Remember the mistakes.

I honestly think that’s the hardest part of forecasting – avoiding the emotional traps.

 

Raul Damas is a Partner in Brunswick’s New York office. He specializes in corporate reputation and crisis management, with a focus on the healthcare industry.

 
