
Re: [Phys-L] Confessions of a Repentant Modeler in a Time of Plague.



On 4/19/20 1:00 PM, David Bowman wrote:

*without* attempting to model any of the complicated underlying
dynamics of a real communicable disease epidemic, (which would
typically involve many coupled independent degrees of freedom, each
obeying their own mutually coupled ODEs, with frequent time dependent
updating of parameters of said degrees of freedom, and which would
have no closed form solution,
[lots of detail snipped]

I consider a simple model for a cumulative sigmoidal growth process
with single dependent variable y for a temporal independent variable,
t, being the real line (or uniformly sampled real line) having the
following 4 aspects which need to be determined by the fitting
procedure.

[yet more details snipped]

There is a lot of middle ground between those extremes.

In particular, let me call attention to a class of models that
are not detailed or complicated, but make a reasonable effort
to be faithful to the underlying physics.

The central objects of interest are the basic reproduction
number R₀ (which is a function of time) and the number of
infected persons as a function of time. Neither of these
is directly observable in real time, so the model has to
infer them. They are connected to each other by a very
simple equation. Each predicts the other, recursively.

I ought to make diagrams to go with this explanation, but
that would require more time than I can afford right now.

A related object of interest is what I call the "contagion
kernel". That captures that idea that after a person is
exposed there is an incubation period (a few days) and
then the person is contagious for a while (perhaps 15 days)
after which they are either recovered or hospitalized, so
that they are no longer able to spread the disease.

I imagine the kernel is normalized so the area under the
curve is unity. We rely on R₀ to control the magnitude.
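
To make that concrete, here is a minimal sketch in Python. The
5-day incubation and 15-day contagious window are the same rough
numbers as above, not fitted values.

    import numpy as np

    incubation = 5     # days before a newly infected person is contagious (rough)
    contagious = 15    # days during which they can spread the disease (rough)

    # Boxcar contagion kernel: zero while incubating, flat while contagious,
    # normalized so the area under the curve is unity.
    kernel = np.zeros(incubation + contagious)
    kernel[incubation:] = 1.0
    kernel /= kernel.sum()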

The number of new infections is calculated as a convolution,
namely the old infections convolved with the aforementioned
kernel. In more prosaic terms: just total up the number of
people who were infected more than 5 but less than 5+15 days
ago, divide by the 15-day window (that's where the normalization
comes in), and multiply by the current R₀. That tells you the
number of new infections today.
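
Continuing the sketch (same imports and kernel as above), the
daily update is one dot product. The history array and the R0
value are placeholders to be supplied by the rest of the model.

    def new_infections_today(history, kernel, R0):
        """One convolution step: past infections weighted by the kernel, times R0.

        history : daily new-infection counts, most recent last; assumed to be
                  at least as long as the kernel.
        """
        window = np.asarray(history[-len(kernel):])
        # Flip the kernel so its incubation end lines up with the most recent
        # days in the window and its contagious end with the older days.
        return R0 * np.dot(window, kernel[::-1])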

This isn't the official definition of R₀. But it comes
to the same thing when R₀ is constant, and if it's not
constant the official definition makes no sense.

That gives us a differential equation. Actually it's simpler
than that; we model it as a discrete recurrence relation.
You can run the prediction out into the future as far as
you like, but the predictions are predicated on assumptions
about R₀ as a function of time. Masks and hunkering down
and similar measures have a big effect on R₀.
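
In the same sketch, "running the prediction out into the future"
is just a loop that appends each day's output back onto the
history; R0_of_t is whatever assumed scenario you want to explore
(the constant 1.5 below is purely illustrative).

    def run_forward(history, kernel, R0_of_t, days):
        """Roll the recurrence forward 'days' steps and return the extended history."""
        history = list(history)
        for t in range(days):
            history.append(new_infections_today(history, kernel, R0_of_t(t)))
        return history

    # Example scenario: seed with a flat stretch of history and a constant R0 of 1.5.
    trajectory = run_forward([1.0] * len(kernel), kernel, lambda t: 1.5, days=60)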

If R₀ is a constant, then:
-- if R₀ is greater than 1, you get exponential growth
-- if R₀ is less than 1, you get exponential decay

If R₀ is changing, it's complicated, but it's not crazy.
You can make reasonable assumptions. A good starting place
is to assume that R₀ is piecewise constant, punctuated
by changes in public policy. That reduces the number of
free parameters. You just need to make a list of events
that could plausibly affect R₀. (You don't want to make
each day's R₀ an independent fitting parameter, because
that would leave you with more parameters than data points.)
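
As a sketch of what piecewise-constant looks like in code: the
breakpoint days come from the list of events, and only the R₀
values are free parameters for the fit. The dates and values
below are made up purely for illustration.

    # Hypothetical schedule: (day a policy change took effect, R0 in force from then on).
    schedule = [(0, 2.5),    # before any intervention (illustrative value)
                (30, 1.3),   # e.g. schools closed (illustrative)
                (45, 0.8)]   # e.g. stay-at-home order (illustrative)

    def R0_of_t(t, schedule=schedule):
        """Piecewise-constant R0: the value from the latest breakpoint at or before day t."""
        value = schedule[0][1]
        for day, r0 in schedule:
            if t >= day:
                value = r0
        return value

This plugs straight into run_forward above, and it keeps the fit
down to a handful of parameters, one per policy era.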

The recent changes in reporting criteria increase the
workload for modelers, no matter how simple or how fancy
your model is. Some jurisdictions handled the changeover
reasonably well, but most have been scandalously incompetent.

Also keep in mind that under present conditions, the number
of "confirmed cases" tells you nothing about the disease,
nothing about the number of /actual/ cases. All it tells
you is that the testing is falling farther and farther behind
where we need it to be.

At present the only observations worth paying attention to
are the reported deaths. The problem is that deaths are
a lagging indicator. They lag infections by 25 or 30 days.

That's on top of the kernel-size: roughly 20 days of kernel plus
25 or 30 days of lag comes to 45 or 50 days, so you need data
going back about 7 weeks in order to jump-start the convolutional
model. Less-laggy reliable data would be a huge help.

We need comprehensive, reliable, prompt testing. We need
100× or 1000× more testing than we presently have. We
need testing the way the Challenger needed reliable O-rings.
Just because you want it, just because your life depends on
it, that doesn't mean you have it.

As always, we should keep eyes on the prize. We should focus
on results, and on the action-items needed to obtain results.
The whole point of modeling is to guide decisions. Lagging
indicators make it difficult to build detailed models.

On the other hand, even the simplest models already tell us
what we need to know. We should have spent the last three
months building capacity: PPE capacity, testing capacity,
contact-tracing capacity, subcritical isolation capacity,
and so on. That time, effort, and cost have been squandered.
The WHO recently published a six-step checklist of things
that need to be in place before it is safe for people to
venture out. We're not even at step 1 yet.

Every country that has successfully suppressed the virus
has done it the same way. All the models say the same thing.

We know what needs to be done. We're just not doing it.