
Re: [Phys-l] "Unlearning"



On 09/10/2010 03:12 AM, Dan L. MacIsaac wrote in part:

- the incomplete nature of scientific knowledge (and what "wrong"
means in what context)

- the incomplete nature of individual learning (and what it means to
learn something "right" in what context)

- the use of idealized models to approximate natural phenomena under
restricted domains and ranges (more nature of science; all models are
"wrong" out of context)

- the learning of developmentally appropriate models ......

That's all true, and that raises an important point, more
important than the main point of my previous note.

We all agree that approximations are necessary. There are
at least three levels on which this could be discussed:
a) Sometimes we must make tradeoffs, as when model A is
simpler and more convenient, but model B is more accurate.
I don't consider this a problem; it's just part of the
job. It's not easy, but it's just part of the job of
living in the world.
b) Item (a) gets to be a problem when it is done badly,
e.g. using bad approximations when simpler-and-better
approximations are readily available (as previously
discussed).
c) Much, much more serious problems arise from pretending
that something is The Truth when in fact it's only a
rough approximation.

Some texts suffer from problem (c) much less than others.
For example:

++ When Feynman introduces the idea of energy, using the
parable of Dennis and the blocks, he also discusses
several ways in which the blocks are *not* an apt model.

++ Similarly, when he introduces the notion of electric
and magnetic field lines, he also discusses several ways
in which the lines are *not* an apt model.

Other positive examples abound. This is how things should
be done. In addition to whatever domain-specific knowledge
is being learned, there is a much more important lesson in
general-purpose _thinking_ skills, namely that it is important
to be upfront about the strengths /and weaknesses/ of the
models one is using.

By way of contrast:

-- I have a book here that states in Chapter 1 that
"you must round off the value to the correct number of
significant figures" ... and then goes on to reiterate
this point throughout the text, without reservation,
without any caveats or provisos.

This book's handling of this topic wins the trifecta in
the race to the bottom:
1) Simpler methods are available. (This is a minor
criticism of sig figs in particular.)
2) The simpler methods work /better/, so there is
not even a tradeoff involved. (This is another
minor criticism of sig figs in particular.)
3) There is a _major problem_ with the presentation,
passing off a lousy approximation as if it were
The Rule.
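Points (1) and (2) can be illustrated with a small sketch. This is my
own illustration, not something from the book in question, and the
measured values in it are made up. The point is that the "round off at
every step" rule can destroy information that the simpler rule (carry
full precision, round once when you report the result) preserves:

```python
# Hedged sketch with hypothetical numbers: eagerly rounding to
# "significant figures" versus the simpler rule of carrying full
# precision and rounding only the final reported value.

from math import floor, log10

def round_sig(x: float, n: int) -> float:
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - floor(log10(abs(x))))

# Two hypothetical measurements, each good to about four digits:
a = 2.5434
b = 2.5381

# Eager sig-fig rule: round each input to 3 sig figs before subtracting.
diff_eager = round_sig(a, 3) - round_sig(b, 3)

# Simpler rule: keep full precision, round only when reporting.
diff_full = a - b

print(diff_eager)  # the eager rule has wiped out the difference entirely
print(diff_full)   # the real difference, about 0.0053, survives
```

Subtraction of nearly equal quantities is the classic failure mode:
after eager rounding, both inputs become 2.54 and the difference comes
out exactly zero, even though the data clearly support a nonzero answer.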

Some of these problems arise from ignorance, which is
sometimes excusable ... but sometimes not. I'm not
asking for perfection; I'm just asking for a basic
level of professionalism and due diligence. I don't
want to complain about minor mistakes, because those
are inevitable. Instead, I am complaining about cases
where even modest levels of diligence and critical
thinking would have raised red flags.

- whether anything can be "unlearned" at all

Habits can be changed. For example, when people travel
from Britain to the US, we expect them to learn to drive
on the right side of the road. Some of them struggle
with it more than others.

The point of this note is that in my experience, students
can unlearn (or modify) an old idea more easily if they
were warned -- from the very beginning -- about the
limitations of the idea, so that they never become
unduly attached to it.