
Re: [Phys-L] scientific method diagram



On 01/09/2013 11:15 AM, Folkerts, Timothy J wrote:

> 2) Scientists may be partly to blame for the perceived "linear
> process" of science because that is how the reports are written and
> presented to the scientific community and to the public. While the
> process is indeed the looping process you describe, the report is
> almost always linear:
> * here is our project
> * here are the papers that relate to the final hypothesis
> * here is the experiment we performed and the results
> * here is the analysis
> * here is our conclusion

This gets back to what Larry Woolf said: There is an important distinction
between how science is /done/ and how science is /published/. There are
lots of good reasons why doing and publishing should be different. It is a
fundamental error to confuse them. It is a widespread error, but an error
nonetheless.

Thomas Kuhn famously made essentially the same point. When presenting ideas
(in introductory textbooks or in the technical literature) there are lots of
good reasons to lay things out linearly and logically. History (including
the history of science) is nonlinear and messy. It would be sadomasochistic
to expect students to retrace the actual historical steps.

Consider the distinction:
a) "here's what we did"
b) "here's what we know, and here is the best evidence for it".

Most publications should be of type (b) rather than (a).

I don't see any "blame" here ... except when somebody tries to pretend
they are smarter than they really are by pretending the logical path
is the path they really took. I had a co-author try to pull that
stunt once. He tried to pretend that we /intended/ to discover what
we actually discovered. I insisted on a rewrite. In fact the apparatus
was built for another purpose and we were surprised by the result.
There was a theory that nominally predicted the result, but it was
published in Russian and we didn't know about it until afterwards.
The fact that we had a good /explanation/ for the result does not
mean we /anticipated/ the result.

A related problem arises when classroom posters are published by people
who have read scientific explanations but never actually done any
science, and who don't understand the difference between how science
is /done/ and how it is /published/.

> Reporting null results is important too, and is rarely done.

That's another excellent point.

It's a tricky problem, because there is provably an infinite number
of rejected hypotheses. For example, the hypothesis that 2+2=13 is
almost always rejected. The challenge is to find a set of /interesting/
hypotheses that are rejected.

This point is eloquently stated in the introduction to the book
_Counterexamples in Topology_, where the authors make it clear that
their goal was to collect /interesting/ counterexamples.

This is related to the pedagogical issue of discussing misconceptions.
It's almost always a bad idea, especially in the introductory class.
It is better to teach the right answer and move on. Of course that
includes being clear about the /limitations of validity/ of the
right idea, but that is not at all the same as listing (let
alone detailing) the innumerable wild misconceptions that are
possible.

One of the most important papers I've ever published dealt with an
issue of this type. Once upon a time, a few people were getting
positive results while a *lot* of people were unable to reproduce
the results and especially unable to extend the method to other
applications. Nobody understood what the problem was. The odd
thing was that nobody could publish the fact that they were having
problems. That's partially odd and partially not, because of the
proverb that says whatever you're doing, you can always do it
wrong. If everybody who screwed up an experiment published the
non-result, we would be crushed under the weight.

The interesting thing is that in this case everybody was failing
*for the same reason*. However, nobody knew that until we figured
out the reason and published it (along with the recipe for making
the problem go away). A lot of people were able to fix their
methods and started getting results overnight.

The solution in practice is to rely on less-formal methods of
communication. For example, scientists at conferences spend
only a small fraction of their time going to the scheduled talks.
They spend the rest of the time milling around in the corridors
talking to people. This is an opportunity to say things like
"We tried such-and-such and it doesn't work; does anybody know
why?" to which the reply might be "Hey, I tried the same thing
and I couldn't get it work either."