On 09/02/2013 12:07 PM, Brian Blais wrote:

[In case any one of the seriously humor-challenged missed it - this was a wonderful joke!]

> ... wondering what you think of E. T. Jaynes' approach to Bayesian
> inference. He does not make use of set-theoretic definitions, but
> in my reading of him, he does seem to admit that these have
> identical consequences in applications.

In general I don't like terms like "Bayesian" or "Darwinian".

> 1) do you agree?
/snip/
> 2) do you find some *quantitative* improvement using the
> set-theoretic definitions. I mean, is there an actual problem where
> one method works and the other not.
/snip/
> 3) is there some *practical* improvement using the set-theoretic
> definitions. I mean, are there problems that are much easier to
> solve, even if both methods yield the same result in the end?
The development guys had a huuuge software system that was doing OCR with
a 2% error rate, which was the same as people could do on the same data
set, so this was considered quite an achievement.
/snip/ ... one symptom was expressed by John
von Neumann, who was not the village idiot: "With four adjustable
parameters I can fit an elephant, and with five I can wiggle his tail."
Then word got out that my buddies and I were fitting 100,000 adjustable
parameters, with good results.
/snip/

Then word got out that I had a scheme to learn maximum a posteriori (MAP),
not maximum likelihood. This is P(a|b) instead of P(b|a). The statistics
research guys did not believe this was possible. The development guys were
skeptical, but after much inveigling and cajoling they tried my idea, and
the error rate went down from 2% to 0.2%.
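Since the post turns on the distinction between maximum likelihood (maximize P(b|a), the data given the parameters) and MAP (maximize P(a|b), the parameters given the data), here is a minimal sketch of the difference on a toy coin-flip problem. The Beta prior and the function names are my own illustrative assumptions, not anything from the OCR system described above:

```python
# Toy contrast between maximum likelihood (ML) and maximum a posteriori
# (MAP) estimation of a coin's heads probability theta.
# Illustrative sketch only; the prior choice Beta(2, 2) is an assumption.

def ml_estimate(heads, flips):
    # ML maximizes P(data | theta); for a binomial model the
    # maximizer is simply the observed frequency heads / flips.
    return heads / flips

def map_estimate(heads, flips, alpha=2.0, beta=2.0):
    # MAP maximizes P(theta | data) under a Beta(alpha, beta) prior.
    # The posterior is Beta(heads + alpha, flips - heads + beta), whose
    # mode is (heads + alpha - 1) / (flips + alpha + beta - 2).
    return (heads + alpha - 1) / (flips + alpha + beta - 2)

# 9 heads in 10 flips: ML says theta = 0.9, while a mild Beta(2, 2)
# prior pulls the MAP estimate back toward 0.5.
print(ml_estimate(9, 10))    # 0.9
print(map_estimate(9, 10))   # ~0.833
```

With lots of data the two estimates converge, but on small or noisy samples the prior in the MAP estimate regularizes the answer, which is one plausible reading of why it helped here.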