1. A somewhat more technical thread than usual.

I'm trying to understand the new @UW_IHME #COVID19 forecasting model that was released this morning.

Here is a web page with background, a link to a whitepaper, and estimates for US states: healthdata.org/research-artic…
2. The model is designed to forecast the trajectory of the epidemic and the demand for hospital beds, ICU beds, and ventilators in the US, *given that* we implement and maintain effective social distancing measures.

Here's Washington State:
3. The authors find that health care capacity will be overrun in the US, badly, and they call for immediate and aggressive efforts to increase capacity. This is undoubtedly the right thing to do and must be an urgent national priority.
4. Recall this is a model of successful suppression of the epidemic with no second wave. Still, my personal impression is that it's extremely optimistic. The total number of deaths is low and the epidemic passes very quickly.
5. So how does the model work? If I understand correctly, it's a curve-fitting exercise. Because testing is highly variable, the authors fit deaths, not cases.
6. In other words, there's no underlying mechanistic model, SEIR or otherwise. I'm not an expert in these approaches but I do worry about how, using data from early in an epidemic, you avoid the classic pitfalls of Farr's Law-type modeling.
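To make that worry concrete, here's a toy sketch in Python (my own illustration with made-up numbers, not IHME's actual model or code) of fitting a Gaussian-CDF sigmoid to cumulative deaths using only pre-peak data. The point is how weakly early data constrain the eventual plateau:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Toy cumulative-death curve: a Gaussian CDF ("error function" sigmoid),
# the general family of curve the whitepaper describes fitting.
#   p     - eventual total deaths (the plateau)
#   t0    - time of peak daily deaths
#   alpha - steepness of the rise
def cum_deaths(t, p, t0, alpha):
    return 0.5 * p * (1 + erf(alpha * (t - t0)))

# Synthetic "observed" data from early in an epidemic: days 0-24 of a
# curve that actually peaks at day 40, plus multiplicative noise. This
# is the Farr's-law-style pitfall -- pre-peak data say little about p.
rng = np.random.default_rng(0)
t_obs = np.arange(25)
true_curve = cum_deaths(t_obs, p=10000, t0=40, alpha=0.05)
y_obs = true_curve * (1 + 0.05 * rng.standard_normal(t_obs.size))

popt, pcov = curve_fit(cum_deaths, t_obs, y_obs,
                       p0=[5000, 30, 0.1], maxfev=20000)
p_hat = popt[0]
print(f"estimated plateau: {p_hat:.0f} deaths "
      f"(sd from fit covariance: {np.sqrt(pcov[0, 0]):.0f})")
```

Rerun this with different noise seeds or a few more/fewer days of data and the estimated plateau swings wildly, which is exactly the behavior I'd want to see addressed.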
7. (Farr's law, a non-mechanistic modeling approach not so dissimilar to this one if I understand correctly, was famously used in a 1990 JAMA paper to forecast a total of 200,000 HIV cases and an epidemic dying out around 1995.)

documents.aidswiki.net/PHDDC/BREG.PDF
8. NB: It's possible that I'm wrong about this, or anything else in this thread for that matter. It's late and I didn't even get a chance to start reading this paper until midnight tonight. I welcome corrections and will post them to the thread.
9. Another concern: It's not clear to me from the paper whether the authors have in any way corrected for the fact that with testing low and denial around community transmission, many #COVID19 deaths in the US have probably been attributed to other causes, esp. among elderly.
10. One of my biggest concerns about the paper is that if I understand it correctly, it assumes that once a state implements at least 3 of {school closures, non-essential business closures, stay-home orders, travel restrictions}, transmission declines as it did in Wuhan.
11. The model also assumes that states that have not yet implemented 3 of the 4 above will do so within a week.

I don't fully understand the details as described below, and welcome further explanation or corrections.
12. If the model is truly assuming Wuhan-levels of lockdown, I'm worried about whether we can reach that, let alone maintain it. In Seattle, we're ahead of the curve but I saw numerous groups of kids hanging out by the lake last time I went out of the house (Tuesday, I think?).
13. And I think we'll see large variation among states, both in the measures enacted and in factors like population density, transit use, and compliance.

Meanwhile we have an executive branch that wants us all back in church on Easter Sunday.
14. The authors acknowledge this issue in the discussion section of the paper, quoted below. That's good. But these caveats don't appear in the forecasts.

This brings me to my biggest issue with the presentation of all of this.

pic.twitter.com/T8t6wRP0wH
15. As @kakape (twitter.com/kakape/status/…) and I (twitter.com/CT_Bergstrom/s…) discussed here on twitter, and as the Imperial College and Oxford groups have discovered first hand, the very facts around this pandemic have been extremely politicized.
16. Scientists make conjectural models that serve valid scientific purposes by showing the consequences of different methods and assumptions.

Politicians and pundits seize upon these and shove them into policy pigeonholes, using them as bludgeons against their perceived "opposition".
17. The present white paper is just the background to a flashy website that shows hospital demand forecasts for every state in the union—an issue of immediate policy relevance. Few will read it; even for those who do, the caveats are hard to find.
18. What users of the website will see are curves that look like the death curve for NY below. We see big wide confidence intervals, and this seems for all the world to give us a range for what could possibly happen.

Surely reality will be somewhere in the shaded region, right?
19. No. These are nothing like Bayesian credible intervals in a full-fledged Bayesian modeling framework. They simply show the uncertainty from the fixed and random effects used in estimating the curve fits.
20. If I understand correctly, these confidence intervals are predicated on assumptions about the model structure, on the assumption that we can achieve and maintain lockdown, and even on the parameters estimated from Wuhan.

I welcome clarification on this point if I am mistaken.
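For intuition on why such intervals can mislead, here's a toy Python example (mine, with hypothetical numbers, not anything from the paper): the data-generating process is quadratic, but we fit a straight line. The fit's covariance yields a tidy 95% interval that completely misses the truth, because the interval is conditional on the assumed functional form being correct.

```python
import numpy as np
from scipy.optimize import curve_fit

# True process is quadratic; we fit a line anyway. The parameter
# covariance still produces narrow, official-looking intervals --
# all conditional on the (wrong) linear model structure.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 30)
y = 0.5 * t**2 + rng.normal(0, 1, t.size)   # truth: 0.5 * t^2

def line(t, a, b):
    return a + b * t

popt, pcov = curve_fit(line, t, y)

# Extrapolate to t = 20 and propagate parameter uncertainty.
t_new = 20.0
pred = line(t_new, *popt)
J = np.array([1.0, t_new])          # gradient of prediction wrt (a, b)
se = np.sqrt(J @ pcov @ J)
lo, hi = pred - 1.96 * se, pred + 1.96 * se
truth = 0.5 * t_new**2              # the actual value at t = 20
print(f"95% interval at t=20: [{lo:.1f}, {hi:.1f}]; truth: {truth:.0f}")
```

The shaded band is honest about sampling noise and silent about structural error, and only the latter matters if the Wuhan-style assumptions fail.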
21. I fear this will lead to trouble. Even a well-meaning and well-trained user of the forecast website below could easily be confused on this point.

A politically motivated operative can always lie, as we saw with the Imperial College report. But this makes their job even easier.
22. I've already seen claims that this study proves we need fewer than 40,000 ventilators.

True, I guess, IF the curve fitting approach works and IF the death count data are right and IF we attain Wuhan-scale lockdown and IF we maintain it and IF there's no second wave.
23. I'm not trying to blame the authors for the ways their work could be misrepresented by bad actors. Any model can be misrepresented.

I *do* strongly advocate extensive and aggressive steps by modelers to mitigate this kind of misuse. I don't see those here—and I wish I did.
24. On a positive note, the authors intend this as a framework to use in an ongoing fashion as the epidemic progresses, and will be providing updates.

I believe it will become more informative as time passes—or if it fails to do so, we will learn something by figuring out why.
25. In the meantime, I think it's VITAL that the authors stress that these are forecasts based on extremely optimistic assumptions, and that the upper confidence bounds shown in the data graphics should not be interpreted as worst-case scenarios for planning purposes.

/fin
Addendum: I appreciate all of the comments and questions that have already come in while I've been drafting this thread. It's after 3 AM here in Seattle and I need to sleep. I'll try to get back to them tomorrow, but I apologize now to anyone who I am unable to directly answer.
26. As anticipated, the @IHME_UW study is already being misinterpreted by major media. @Merz points to the front page of the @LATimes which frames the IHME's worst-that-can-happen-in-the-best-case-scenario as something very different: a worst-case scenario.

Messaging matters.
27. Good conversation with members of the IHME team tonight. They don't see the model as a model of a "best case scenario" in the way that I describe, because there's no obvious way to set it up to model a "worst case scenario" where the virus goes through to herd immunity.
28. That said, the model is doing something akin to using the Wuhan data to set a prior on what happens once lockdown occurs.

My view is that this prior makes the model optimistic, and puts all of the probability on the event that the epidemic is successfully suppressed.
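A toy numerical illustration of that point (my own made-up numbers, nothing to do with IHME's actual formulation): with a tight "Wuhan-like" prior on the post-lockdown growth rate, essentially all posterior mass lands on suppression, while a flat prior leaves half the mass on continued growth.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parameter: daily epidemic growth rate after lockdown
# (negative = declining = suppression). All numbers are invented.
rate = np.linspace(-0.2, 0.2, 401)

# Early US data are weakly informative: consistent with decline or growth.
like = norm.pdf(rate, loc=0.0, scale=0.1)

# Tight "Wuhan-like" prior: lockdown produces a fast decline.
prior_wuhan = norm.pdf(rate, loc=-0.1, scale=0.02)
prior_flat = np.ones_like(rate)

def p_suppressed(prior):
    """Posterior probability that the growth rate is negative."""
    post = like * prior
    return post[rate < 0].sum() / post.sum()

print(f"P(suppression), Wuhan-like prior: {p_suppressed(prior_wuhan):.2f}")
print(f"P(suppression), flat prior:       {p_suppressed(prior_flat):.2f}")
```

The data are the same in both cases; the prior alone decides whether the forecast ever shows the epidemic escaping.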
29. Hence the note in the FAQ they posted on Sunday/Monday, that only about 3% of the US population is infected under their model.

healthdata.org/covid/faqs
30. My understanding is that if they used a different prior, some or all probability would go to the event that the epidemic gets out of control and infects say 30%-70% of the US population before herd immunity brings it to a halt.
31. This is perhaps a more accurate way of explaining what I mean when I say that this is a "best case scenario" model. It's not because they explicitly model certain best case R0s or anything like that, it's because of the choice of something akin to a "prior" from Wuhan.
32. To summarize, I have a more nuanced picture of their model, but my concerns about it giving an optimistic forecast have not changed. Yes it will get better over time. But we don't have time to waste. We need to plan for the worst case scenario now—and this does not show that.
33. I hope to get a clearer picture when the team releases a full specification of their model. That hasn't happened yet, and without it, outside researchers can't fully interpret what's going on. We're left w/ the uncomfortable feeling of being told "trust us."
