
Re: [Phys-l] "The Truth Wears Off" by Jonah Lehrer in The New Yorker Dec 13, 2010.



At 17:18 -0600 12/27/2010, John Clement wrote:

An interesting question might be whether the phenomena have actually
shifted?

I don't think that most scientists would buy into this, but sometimes it may
actually happen. In biology, for example, a drug that once worked can become
ineffective as resistance evolves. But another distinct possibility is the
problem of systematic error.

The very first measurement of any phenomenon might have some systematic
error which was not discovered. To publish another value one may have to do
a better job, so subsequent measurements may have lower systematic errors.

Sometimes researchers might be accused of falsifying data when in reality
there were factors that were different from subsequent research. In biology
it is often very difficult to control all of the factors, but the same thing
can happen in physics. We did some very accurate neutron cross sections
which disagreed with Karlsruhe. It seemed that they used a multi-shot TAC
(time-to-amplitude converter) which required some very complicated
corrections. But the corrections were wrong, which filled in the valleys of
the cross sections. Their data were considered to be definitive and
published in "The Barn Book", but they tried to get high statistics at the
expense of accuracy. They even admitted they had problems when we talked to
them personally, but it wasn't in writing.

But science works by checking other groups' measurements. So eventually you
hopefully zero in on more accurate results.

There certainly are some measurements which may have been tweaked too much.
An example is the Hawthorne effect. The original data were lost, and I
understand that subsequent studies have not really been able to replicate
the result. But the original paper is so compelling that it is quoted as
being true. The effect may not actually exist, or may be very weak. For
those who are not familiar with it, it is a psychological placebo effect:
when you pay attention to workers, they produce more output.

It seems to me that the explanation needs to be more subtle than just a systematic error, or some degree of inadvertent or unconscious observer bias. (I am assuming here that the effects the article talks about are not the outcome-driven ones that show up every so often--those are usually discovered pretty quickly, and the penalties tend to be severe enough that they don't happen very often.) But the article talks about effects that are strong at the beginning and then get worse with time; I don't recall that any examples were given that went the other way, nor any in which the changes in the results were variable--some better, some worse. What we are looking at is a systematic decline in the measured effectiveness of new drugs as their time on the market passes.

Does this happen only occasionally and with new drugs that come on the market in competition with older drugs? How about drugs that have been used for decades and have a reputation for effectiveness? If we performed similar tests on some older drugs would they show the same effect, compared to the tests that were originally done? Unfortunately, tests like these are unlikely to be done, since that would mostly be an expensive academic exercise that would be difficult to get funding for. But I suspect that might help us gain some insight into why this effect seems to happen.

I was not overly impressed with the explanation the article finally presented. It seemed too much like a standard excuse--"well, you just didn't make your measurements well enough in the beginning," or "your statistics were flawed," or "your sample wasn't properly randomized," and the like. These explanations may be correct, but have they been tested? Did anyone go back and look carefully at the original tests to see if they were fundamentally flawed?

I agree that a real effect is unlikely. Most of these drugs don't involve killing bacteria that can evolve resistance to the drug. In fact, those effects are, I believe, rather easily detected, since bacteria or viruses can evolve relatively rapidly. So it is probably something involving the experimental method, but the article wasn't very clear about just what the initial error is, and why it somehow disappears later when the experiment is repeated by the same investigator. Presumably, the methodology is the same in the retest. If not, how might it (the methodology) evolve with time so as to move the results always in the same direction?
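One mechanism that can produce exactly this one-directional decline without any change in methodology--not something the article or this thread spells out, but a standard candidate explanation--is selective publication: if only the most striking early results get into print, the first published estimates are inflated by noise, while later replications, reported regardless of outcome, drift back toward the true value. A minimal Python sketch, with made-up numbers purely for illustration:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # hypothetical real effect size
NOISE_SD = 1.0      # per-study measurement noise
N_LABS = 200        # number of independent studies

def run_study():
    """One noisy measurement of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# Early literature: only the most striking results get published.
initial = [run_study() for _ in range(N_LABS)]
published_first = [e for e in initial if e > 1.0]  # arbitrary "significance" cut

# Replications: identical method, but reported regardless of outcome.
replications = [run_study() for _ in range(N_LABS)]

print(statistics.mean(published_first))  # inflated by the selection cut
print(statistics.mean(replications))     # near the true effect, 0.2
```

The point is that the "decline" here is an artifact of which early studies we see, not of anything physical or procedural changing between the first test and the retest.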

It's an interesting question, and I think the author of the New Yorker article didn't do a really good job of dealing with it. On the other hand, he isn't a scientist (I suspect), and might not appreciate all the subtleties involved. I hope some follow-on to this article appears that delves deeper into the explanation.

Hugh
--

Hugh Haskell
mailto:hugh@ieer.org
mailto:haskellh@verizon.net

It isn't easy being green.

--Kermit Lagrenouille