On 09/02/2013 12:07 PM, Brian Blais wrote:
... wondering what you think of E. T. Jaynes' approach to Bayesian
inference. He does not make use of set-theoretic definitions, but
in my reading of him, he does seem to admit that these have
identical consequences in applications.
1) do you agree?
In general I don't like terms like "Bayesian" or "Darwinian".
By way of analogy: I make good use of Newton's laws, but does
that make me a "Newtonian"? I hope not. Am I required to accept
everything that has been said by Newton, or about Newton? I hope
not.
2) do you find some *quantitative* improvement using the
set-theoretic definitions? I mean, is there an actual problem where
one method works and the other does not?
3) is there some *practical* improvement using the set-theoretic
definitions. I mean, are there problems that are much easier to
solve, even if both methods yield the same result in the end?
The answer to (2) and (3) is the same.
Then word got out that I had a scheme to learn maximum a posteriori (MAP)
estimates, not maximum-likelihood estimates. That is, maximizing P(a|b)
instead of P(b|a). The statistics
research guys did not believe this was possible. The development guys were
skeptical, but after much inveigling and cajoling they tried my idea, and
the error rate went down from 2% to 0.2%.
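To make the MAP-versus-maximum-likelihood distinction concrete, here is a minimal sketch using a coin-flip (Bernoulli) model with a Beta prior. This toy scenario, and the particular prior chosen, are illustrative assumptions of mine, not the speech-style system from the anecdote; the point is only how the prior shifts the estimate away from the raw likelihood maximum.

```python
# Toy comparison of maximum-likelihood vs MAP estimation of a coin's
# bias theta, given `heads` successes in `flips` trials.
# (Illustrative example only; not the system described above.)

heads, flips = 2, 3

# Maximum likelihood: maximize P(data | theta). For a Bernoulli model
# the maximizer is just the observed frequency.
theta_ml = heads / flips

# MAP: maximize P(theta | data), which is proportional to
# P(data | theta) * P(theta). With a Beta(alpha, beta) prior the
# posterior mode has the closed form below.
alpha, beta = 2, 2  # assumed prior, mildly favoring theta near 1/2
theta_map = (heads + alpha - 1) / (flips + alpha + beta - 2)

print(theta_ml)   # 0.666...
print(theta_map)  # 0.6 -- pulled toward the prior's mode of 1/2
```

With only three flips, the prior pulls the MAP estimate noticeably toward 1/2; as the data grow, the two estimates converge, which is one reason the practical difference shows up most in small-data or high-dimensional regimes.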