
Re: software: no wrong options?



At 10:10 PM 10/9/01 -0400, Ludwik Kowalski wrote:

During a workshop devoted to a computer tool I was
trying to write down the procedural steps one after another. I
always do this because I cannot trust my memory. The
instructor criticized me for this.

That criticism makes no sense to me.

He said something like this:

> You are translating experience which is essentially visual
> and kinesthetic into a textual representation. This prevents
> you from internalizing this experience and from learning
> more effectively.

That doesn't help. That sounds like psychobabble to me.

Significant differences between learning how to use hardware
and how to use software have been clear to me for a long time,
but this was a good articulation, worth thinking about.

I disagree with the premise.

Designing complex hardware is very very similar to designing complex
software. If you're designing trivial stuff, trivial differences will be
noticeable, but who cares?

In learning how to use hardware I developed an attitude of not doing
anything without knowing the consequences, or without
following well-written step-by-step instructions. Thus taking
notes, memorizing steps, and trying to understand were
essential preparation for laboratory work.

That's appropriate when dealing with complex and/or dangerous and/or
valuable things, especially when you have enough expertise to know what the
consequences will be.

At the opposite extreme is the Playskool approach, where we give out crude
plastic screws and crude plastic screwdrivers and let kids fool around and
become familiar with the basic mechanical concepts.

In learning software the emphasis seems to be on "try it and
learn from what happens."

That's the Playskool approach.

"There will be no costly consequences or danger from clicking on wrong
buttons or choosing wrong options." I hope this kind of attitude will not
be transferred to our science labs. How can this be prevented?

I am responsible for some complex software systems. If I choose the "wrong
options", a lot of people could lose their network connectivity, and/or
major corporations could lose all semblance of network security.

During the learning process, there is a phase where the Playskool approach
is appropriate. That's true for mechanical concepts (crude plastic screws
and crude plastic screwdrivers) and it's also true for software
concepts. But it would be quite a rash generalization to think that entire
fields of modern engineering are stuck in the Playskool phase.

The truth is just about diametrically opposite. In the
software business, there is a notion of _proof of correctness_. The proofs
are sometimes quite formal and quite elaborate. Just the other day
somebody asked me "Why did you do XXX in the first version? It seems
somewhat inefficient." I replied that my solution was provably correct,
and that although I knew of many possible optimizations, there was no
chance of proving them correct within the available time budget, so we
didn't implement them. (A couple of years later a colleague did manage to
devise some provably-correct optimizations, which will be incorporated into
later versions -- but everybody agrees I made the right decision about the
first version.)
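The system described above isn't public, but the flavor of the trade-off can be sketched with a toy problem (all names here are hypothetical): a naive implementation whose correctness is obvious because it mirrors the specification directly, next to an optimized variant whose equivalence has to be argued before you trust it.

```python
from itertools import combinations
import random

def has_pair_sum_naive(xs, target):
    """Obviously correct: tests every pair directly, so the code
    restates the specification line for line."""
    return any(a + b == target for a, b in combinations(xs, 2))

def has_pair_sum_fast(xs, target):
    """Optimized O(n) variant; correctness needs an argument
    (each element is checked against the complements seen so far)."""
    seen = set()
    for x in xs:
        if target - x in seen:
            return True
        seen.add(x)
    return False

# A cheap equivalence check over random inputs -- not a proof,
# but the kind of evidence you gather before shipping the fast path.
rng = random.Random(0)
for _ in range(1000):
    xs = [rng.randint(-5, 5) for _ in range(rng.randint(0, 8))]
    t = rng.randint(-10, 10)
    assert has_pair_sum_naive(xs, t) == has_pair_sum_fast(xs, t)
```

Proving the naive version correct is trivial; proving the fast one correct takes an invariant argument about `seen`, which is exactly the extra work that may not fit the time budget for a first version.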

The same is true for complex hardware systems. Imagine what the fault-tree
analysis for a Boeing airliner looks like. The longest mathematical
theorem ever published runs to about 15,000 pages, and the fault tree is a
lot bigger than that, and at least as formal, and at least as detailed.
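The arithmetic at the heart of a fault tree is simple even though the tree itself is enormous: basic-event failure probabilities are combined upward through AND and OR gates. A minimal sketch, assuming independent events (an assumption real analyses must justify) and using made-up probabilities:

```python
from math import prod

def and_gate(probs):
    # AND gate: the output event occurs only if every input fails.
    return prod(probs)

def or_gate(probs):
    # OR gate: the output event occurs unless every input survives.
    return 1 - prod(1 - p for p in probs)

# Hypothetical subsystem: failure requires both redundant pumps to
# fail, OR the (non-redundant) controller to fail.
pump_pair = and_gate([1e-3, 1e-3])      # redundancy: 1e-6
top_event = or_gate([pump_pair, 1e-5])  # controller dominates
print(f"{top_event:.3e}")               # prints 1.100e-05
```

The example shows why such trees matter: redundancy drives the pump branch down to 1e-6, so the single-point controller at 1e-5 dominates the top event, which is precisely the kind of conclusion a fault-tree analysis exists to surface.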