
Dan Larremore @DanLarremore · Asst. Prof, University of Colorado Boulder CS and @BioFrontiers. Math, infectious diseases, networks, and computational social science. · Jun. 25, 2020 · 4 min read

Preprint: Viral surveillance testing is crucial, but not all surveillance strategies are equal. We modeled the impacts of test frequency, assay limit of detection, and test turnaround time, measuring the impact on both individuals & epidemics. Here's what we found. 1/

The first finding is that limit of detection matters less than we thought. There is only a short (~1/2-day) window during the exponential growth phase when qPCR is superior. We showed this in a simple viral load model, but any model with exponential growth between Ct 40 and Ct 33 would confirm it. 2/
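To see why that window is so short, here's a back-of-envelope sketch (all numbers are illustrative assumptions, not the preprint's fitted values): if log10 viral load rises roughly linearly during the growth phase, the time when only the more sensitive assay is positive is just the LOD gap divided by the growth rate.

```python
# Illustrative, assumed numbers -- not the preprint's fitted values.
GROWTH_LOGS_PER_DAY = 3.0   # log10 copies/ml gained per day during growth
LOD_PCR = 3.0               # log10 LOD of qPCR (~Ct 40 in this sketch)
LOD_POC = 5.0               # log10 LOD of a point-of-care assay (~Ct 33)

def qpcr_only_window_days(lod_low, lod_high, growth_logs_per_day):
    """Days during which only the lower-LOD (more sensitive) test is positive."""
    return (lod_high - lod_low) / growth_logs_per_day

print(qpcr_only_window_days(LOD_PCR, LOD_POC, GROWTH_LOGS_PER_DAY))
```

With ~3-4 logs of growth per day and a two-log LOD gap, that works out to roughly half a day, and only frequent testing can exploit a window that narrow.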

So only a high-frequency testing scheme can take advantage of that short window. However, high-frequency testing schemes have a high impact on the reproductive number *regardless* of test LOD. ➡️ Ruling out higher-LOD tests for surveillance purposes would be a mistake. 3/

We modeled infectiousness as a function of viral load. Upon diagnosis, we assume that the remaining infectiousness is removed. Here's what that looks like for the viral load above. (Qualitatively similar to He et al.'s linked empirical results; cf. their Fig 1c.) 4/
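A minimal sketch of that logic (the trajectory shape and thresholds here are assumptions for illustration, not the preprint's fitted curves): take a piecewise-linear log10 viral load, let infectiousness be the excess of log10 load over a threshold, and count isolation at diagnosis as removing whatever infectiousness remains.

```python
# Toy trajectory: log10 load rises 3 logs/day to a peak of 7.5 at day 2.5,
# then declines 0.75 logs/day. All numbers are illustrative assumptions.
def log10_load(t):
    if t < 0:
        return 0.0
    if t < 2.5:
        return 3.0 * t
    return max(7.5 - 0.75 * (t - 2.5), 0.0)

def infectiousness(t, threshold=6.0):
    # assumed: infectiousness proportional to log10 load above ~10^6 copies/ml
    return max(log10_load(t) - threshold, 0.0)

def fraction_removed(t_dx, dt=0.001, t_max=12.5):
    """Fraction of total infectiousness removed by isolating at time t_dx."""
    grid = [i * dt for i in range(int(t_max / dt) + 1)]
    total = sum(infectiousness(t) for t in grid) * dt
    removed = sum(infectiousness(t) for t in grid if t >= t_dx) * dt
    return removed / total

for t_dx in (2.0, 3.0, 4.0):
    print(t_dx, round(fraction_removed(t_dx), 2))
```

Diagnosing before the infectious window removes essentially everything; each day of later diagnosis removes less, which is the per-person effect the thread describes.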

Diagnosis and isolation during surveillance remove infected individuals, with or without contact tracing, and earlier diagnosis removes more infectiousness per person. Here, test frequency, not the test's LOD, really drives the pattern. Lines are medians. 5/

We aggregated these effects over a population and computed the total infectiousness removed by surveillance alone. We assumed that 20% would be symptomatic and self-isolate after symptom onset, if they didn't test pos first. (cf Lombardy 26.1% <60yo) 6/
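The aggregation step can be sketched with a small Monte Carlo (same illustrative toy trajectory and thresholds as above, assumed rather than fitted; the real estimates are in the preprint): each person is tested every q days at a random phase, turns positive once viral load clears the assay's LOD, and is isolated at diagnosis.

```python
import random

# Toy trajectory and thresholds: illustrative assumptions, not fitted values.
def log10_load(t):
    if t < 0:
        return 0.0
    if t < 2.5:
        return 3.0 * t
    return max(7.5 - 0.75 * (t - 2.5), 0.0)

def removed_fraction(q_days, lod_log10, n=5000, dt=0.05, t_max=12.5, seed=1):
    """Mean fraction of infectiousness removed by testing every q_days."""
    rng = random.Random(seed)
    grid = [i * dt for i in range(int(t_max / dt) + 1)]
    inf = [max(log10_load(t) - 6.0, 0.0) for t in grid]  # infectious above 10^6
    total = sum(inf) * dt
    # remaining[i]: infectiousness left from grid[i] onward
    remaining, acc = [0.0] * len(grid), 0.0
    for i in range(len(grid) - 1, -1, -1):
        acc += inf[i] * dt
        remaining[i] = acc
    removed = 0.0
    for _ in range(n):
        t = rng.uniform(0.0, q_days)          # random test phase per person
        while t <= t_max and log10_load(t) < lod_log10:
            t += q_days                       # keep testing until positive
        if t <= t_max:                        # diagnosed: isolate now
            removed += remaining[min(int(t / dt), len(grid) - 1)]
    return removed / (n * total)

for q in (3, 7, 14):
    print(q, round(removed_fraction(q, lod_log10=5.0), 2))
```

Even in this toy, sweeping the test interval moves the removed fraction dramatically; this is only a caricature of the paper's population calculation, but the frequency effect is visible.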

To estimate impacts on disease dynamics, we integrated individual viral loads into (i) a fully-mixed model (N=20k) with constant low importation (like a college) and (ii) an agent-based model (N=8.4M) with NYC contact structure. Testing weekly leads to R<1. Finite population size matters. 7/
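The fully-mixed setup can be caricatured in a few lines (a deterministic toy with assumed parameters; the preprint's models are individual-based with explicit viral loads): surveillance enters as a multiplier that removes some fraction of transmission, and importation keeps a trickle of cases arriving.

```python
# Deterministic toy: fully-mixed SIR with constant importation, one-day steps.
# All parameters are illustrative assumptions, not the preprint's calibration.
def cumulative_infections(n=20000, r0=2.5, infectious_days=5.0,
                          removal=0.0, imports_per_day=1.0, days=120):
    beta = (r0 / infectious_days) * (1.0 - removal)  # surveillance multiplier
    gamma = 1.0 / infectious_days
    s, i, r = n - 1.0, 1.0, 0.0
    for _ in range(days):
        new_inf = min(beta * s * i / n + imports_per_day, s)
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

print(round(cumulative_infections(removal=0.0)))   # uncontrolled epidemic
print(round(cumulative_infections(removal=0.7)))   # R_eff = 0.75: import-driven trickle
```

With removal pushing R_eff below 1, cases stay proportional to importation instead of exploding, which is the qualitative "weekly testing controls outbreaks" result.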

Unfortunately, the story of COVID-19 testing has not just been about test shortages but also about delays in results. So would you rather use a point-of-care assay at 10^5 LOD or leverage a pooling scheme for PCR (10^3 LOD) but with a one-day delay in results? Does it matter? 8/

One nice thing about the viral load + infectiousness models is that it's easy to include explicit sample-to-answer delays. ⏰ Here's an example figure like above, but with an imagined 2-day delay in results—and therefore isolation. Even a 1-day delay can wash out a better LOD. 9/

Again, here are the impacts on individuals for delays of 0, 1, and 2 days. Notice that delays increase the probability that you're diagnosed after the infectious window, if at all (blue pts). This could lead, in some cases, to counterproductive isolation, though better to err on the side of caution, surely. 10/

We similarly estimated the impacts of various {frequency, test, delay} combinations on total impact on infectiousness. For any scenario, compare LOD 10^3 with a 1-day delay to LOD 10^5 with no delay. This suggests a high value to point-of-care assays with rapid turnaround. 11/
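That LOD-vs-delay trade-off can be sketched with the same toy Monte Carlo as above (illustrative assumptions throughout, not the preprint's fitted model): a reporting delay just shifts isolation later along the trajectory, so we can compare a sensitive test with a one-day delay against a less sensitive test with same-day results.

```python
import random

# Toy trajectory (illustrative assumptions, not fitted values).
def log10_load(t):
    if t < 0:
        return 0.0
    if t < 2.5:
        return 3.0 * t
    return max(7.5 - 0.75 * (t - 2.5), 0.0)

def removed_fraction(q_days, lod_log10, delay_days=0.0,
                     n=5000, dt=0.05, t_max=12.5, seed=1):
    """Infectiousness removed when isolation happens delay_days after sampling."""
    rng = random.Random(seed)
    grid = [i * dt for i in range(int(t_max / dt) + 1)]
    inf = [max(log10_load(t) - 6.0, 0.0) for t in grid]
    total = sum(inf) * dt
    remaining, acc = [0.0] * len(grid), 0.0
    for i in range(len(grid) - 1, -1, -1):
        acc += inf[i] * dt
        remaining[i] = acc
    removed = 0.0
    for _ in range(n):
        t = rng.uniform(0.0, q_days)
        while t <= t_max and log10_load(t) < lod_log10:
            t += q_days
        if t <= t_max:
            t_iso = t + delay_days            # results (and isolation) arrive late
            removed += remaining[min(int(t_iso / dt), len(grid) - 1)]
    return removed / (n * total)

# Testing every 3 days: pooled PCR (LOD 10^3) with a 1-day delay vs
# a point-of-care assay (LOD 10^5) with same-day results.
print("PCR, 1-day delay:", round(removed_fraction(3, 3.0, delay_days=1.0), 2))
print("POC, no delay   :", round(removed_fraction(3, 5.0, delay_days=0.0), 2))
```

In this toy parameterization the faster, less sensitive assay comes out ahead, echoing the thread's point that even a one-day delay can wash out a two-log LOD advantage.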

Unsurprisingly, delays impact dynamics too. In the N=20k model with constant low importation, secondary infections can be controlled, but weekly testing WITH delays is not viable. If you're doing campus planning, for instance, our results suggest dx turnaround time is key. 12/

In sum, we find that test limit of detection matters less 📉, while testing frequently and short dx turnaround times matter more 📈. We measured these impacts for individuals, for populations, on R, and in epi models. BUT... 13/

To be clear: tests have different purposes. A clinical diagnostic test is not the same as a surveillance or screening test. @CT_Bergstrom digs into this in a wonderful thread below 👇. Our results are about what Carl calls mitigation. 14/

Now, limitations. (1) Sensitivity is a function of things other than LOD, including sampling technique—Fang et al compared swab/sputum PCR to chest CT 👇. @awyllie13 et al show saliva-based tests could help with sensitivity 👇. 15/ 

Limitations (2) The validity of our estimates will depend on understanding viral kinetics and infectiousness. There's a ton to be learned here, and continued clarification will be high value, especially early in infection. How sensitive are our findings? Well... 16/

...if you want to explore the model yourself, check out this lovely interactive calculator that Sam Zhang built. You can change the surveillance parameters OR alter the assumptions of viral load trajectories, and perform a DIY sensitivity analysis. 📊 17/ 

Finally, this is a preprint, and feedback on the paper would be valuable. There's no point in this work if it's not useful. And, truly lovely to work with the team: @brwilder, Evan Lester, Soraya Shehata, James Burke, @jameshay218, @MilindTambe_AI, @michaelmina_lab & Roy Parker.


