
Re: [Phys-L] how research is done : exploring a maze using only local information



On 09/21/2015 05:40 PM, Joseph Bellina wrote:

how does one know that one has all the plausible scenarios.

That's a thought-provoking question. I now realize that my
previous statements on the subject have been very unclear,
verging on wrong. When I say it is important to consider
all the plausible scenarios, that does not mean that they
all get considered in detail. Life is too short to permit
exploration of every possibility, so generally you have
to pick something and pursue it for a while, to see how it
turns out. The point remains, however, that you must never
become too wedded to it, never promote it from hypothesis
to assumption. If it's not working out, drop it and try
something else.
-- Too little perseverance is bad.
-- Too much perseverance is bad.

The maze is a good model of this. BTW I recoded the
maze game to ensure that Harvey's wall-banger algorithm
has only a low probability of finding the cheese. The
probability is under program control; I could make it
zero if I wanted. More generally, I now have much finer
control of how gnarly the maze is.
https://www.av8n.com/physics/glorpy-maze.html
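
The actual glorpy-maze code isn't reproduced here, but the idea in the subject line, exploring using only local information, can be sketched in a few lines of Python. The solver sees only the open neighbors of its current cell, backtracks when stuck, and logs every cell it enters, so the side-trips stay on the record. (The maze representation and cell names below are illustrative, not taken from the game.)

```python
def explore(maze, start, cheese):
    """Depth-first exploration using only local information.
    maze maps each cell to the list of cells reachable from it."""
    path = [start]          # current trail back to the start
    visited = {start}       # everywhere we've been
    trail = [start]         # full history, dead ends included
    while path:
        here = path[-1]
        if here == cheese:
            return path, trail
        # Local information only: the open, unvisited neighbors of this cell.
        options = [c for c in maze[here] if c not in visited]
        if options:
            nxt = options[0]
            visited.add(nxt)
            path.append(nxt)
            trail.append(nxt)
        else:
            path.pop()      # dead end: back up one step
            if path:
                trail.append(path[-1])
    return None, trail      # no route to the cheese

# A tiny example maze: 0-1-2 with a dead-end branch at 3.
maze = {0: [1], 1: [0, 3, 2], 2: [1], 3: [1]}
path, trail = explore(maze, 0, 2)
```

The solution `path` comes out clean, but `trail` records the excursion into the dead end at cell 3, which is exactly what the paper maze lets people hide.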

Scientists are better decision-makers than most people,
but even so, they are not immune from making bad decisions.
For example, sometimes they persist in trying to fix an
old model even after the evidence suggests a fresh start
is needed.

what is plausible depends to a great extent on our prior beliefs of
how the world works, so for some all the plausible scenarios may
indeed be not the way the world works, and so each is wrong.

That's true.

Taking another step down that road: As always, the students
are a moving target. What is realistic as a starting point
is not acceptable as an ending point. We need realism about
where they start out, but not fatalism or defeatism about where
they end up.

Realistically, students start out rather clueless about the
real world, so their list of "plausible" hypotheses is off
the mark, and hence they are often wrong in the strict sense
of the word. Our job is to train them to do better.

Meanwhile ... trained scientists are less often wrong. This
is *not* because we know exactly how the experiment will
turn out, and *not* because we are lucky guessers. If we
knew how the experiment would turn out, it would not be worth
doing. The trick is to construct a much longer and better
list of plausible hypotheses. The best way to not make wrong
guesses is to not make guesses.

Doug: You don't want to be surprised?
Giles: A-as a rule, no.
http://www.buffyworld.com/buffy/transcripts/033_tran.html

Another thing that needs to be oh-so-carefully explained
to students is the distinction between /good outcome/ and
/good decision/.
-- A good outcome is *not* the same as a good decision.
-- A bad outcome is *not* the same as a bad decision.

For example, if somebody is offering you 20-to-1 odds on
the toss of an ordinary fair die, it's a good decision
to accept the offer (subject to mild restrictions). You
will get an unfavorable outcome 5/6ths of the time, but
even so, it was a good decision.
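
The arithmetic behind that claim, assuming a 1-unit stake: you name a face, win 20 units with probability 1/6, and lose the stake with probability 5/6.

```python
from fractions import Fraction

# Expected payoff of the die bet: 20-to-1 odds on one face of a fair die.
p_win = Fraction(1, 6)
expected_value = p_win * 20 + (1 - p_win) * (-1)
print(expected_value)        # 5/2: +2.5 units per bet, on average
```

So the outcome is unfavorable 5/6ths of the time, yet the decision pays +2.5 units per bet on average.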

The merit of the decision is based on an /ensemble average/
of outcomes. If we express this using the frequentist
notion of probability(*), that means we have to consider
a huge number of equivalent decisions, and figure how
well they pay off on average. (We can also consider
higher moments of the distribution, such as arise in
connection with the "gambler's ruin" problem.)
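
A sketch of the ensemble average in the frequentist sense: simulate a huge number of equivalent die bets and average the payoffs. Most individual outcomes are losses, but the ensemble mean converges toward the +2.5 computed above.

```python
import random

random.seed(42)              # reproducible ensemble

# Ensemble of 100,000 equivalent decisions: 20-to-1 on one face of a fair die.
N = 100_000
payoffs = [20 if random.randrange(6) == 0 else -1 for _ in range(N)]
mean = sum(payoffs) / N      # converges toward +2.5
losses = payoffs.count(-1) / N   # about 5/6 of outcomes are unfavorable
```

With a finite bankroll, the spread of the payoffs matters too, not just the mean; that is where the higher moments and the gambler's-ruin problem come in.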

To say the same thing the other way:
-- Judging a decision based on a single outcome
(rather than the ensemble) is a bush-league mistake.
-- Judging a decision based on 20/20 hindsight is a
bush-league mistake.
-- Judging a probability distribution based on a small
sample is a bush-league mistake.
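
The third mistake is easy to demonstrate with the same die bet: a sample of only 10 bets frequently yields a negative sample mean, even though the true mean is +2.5. (The sample sizes here are illustrative.)

```python
import random

random.seed(1)

# How often does a 10-bet sample make the +2.5-per-bet die bet look bad?
def sample_mean(n):
    return sum(20 if random.randrange(6) == 0 else -1 for _ in range(n)) / n

small_samples = [sample_mean(10) for _ in range(1000)]
negative_fraction = sum(m < 0 for m in small_samples) / 1000
```

Roughly one time in six, all 10 bets lose and the sample "proves" the bet is a sure loser. Judging the decision from that sample gets it exactly backwards.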

This is one advantage that the computerized maze game
has over a maze printed on paper. A paper maze makes
it too easy to cheat. People figure out the solution
in advance, and then draw it in, pretending that they
didn't even consider the dead-end branches. In contrast,
the computerized maze doesn't let you look ahead, and
it keeps track of where you've been. The side-trips
are there for all to see.

This is why I consider the "history of science" that
you see in typical science books, encyclopedias, etc.
to be fundamentally wrong and fundamentally dishonest.
It portrays the development of modern ideas as an
almost-unbroken string of successes. That is, it
portrays the shortest path through the maze, mostly
concealing the eeeeenormous number of false starts
and dead ends.

This is relevant to the here-and-now, not just history.
Suppose you are managing an R&D lab, and there are 10
hypotheses that need to be explored. You assign 10
teams, one per hypothesis, with instructions to pursue
it as far as they can. At some point it becomes clear
that one of the hypotheses is the winner, and you need
to shut down the other 9 teams. This turns into a test
of your skill as a manager -- and of your integrity.
You must not reward guys just for being on the "winning"
team or penalize guys just for being on "losing" teams.
You /assigned/ them to explore, and if you sent them
down a blind alley that's on you, not on them. If you
screw this up, they will never trust you again, for good
reason.

Of course it is possible that one of the avenues would
have succeeded, but the guys didn't pursue it properly.
Then you get to complain. However, that's not what I'm
talking about here. Remember, 9 of the 10 teams are
going to get shut down, even if they do everything
right.

This is why some R&D labs outperform others, year after
year. In the well-run lab, people will happily volunteer
for reconnaissance missions ... but in the poorly-run
lab they won't, knowing they probably won't be rewarded.

---------------------------

(*) Note: I strongly recommend defining probability using
the set-theoretic approach, rather than the frequentist
approach, especially when discussing issues of principle
... but the frequentist notion is good enough for present
purposes.