Statistics is big-N logic?

I think I believe one of these things, but I’m not quite sure.
Statistics is just like logic, except with uncertainty.
This would be true if statistics means Bayesian statistics and you buy the Bayesian inductive-logic story: add induction to propositional logic via a conditional credibility operator, and the Cox axioms imply standard probability theory as a consequence. (That is, probability theory is logic with uncertainty. And a good Bayesian thinks probability theory and statistics are the same thing.) Links: Jaynes’ explanation; the SEP article; also Fitelson’s article. (There are negative results too; all I can think of right now is a Halpern article on Cox. Also interesting is Halpern and Koller.)
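As a rough sketch of the Cox-style result (my paraphrase of Jaynes’ presentation; the notation here is mine, not lifted from any of the linked sources):

```latex
% Sketch only: Cox's desiderata (plausibilities are real numbers,
% consistency with Boolean logic, etc.) force any suitably rescaled
% plausibility measure p to obey the product and sum rules, i.e.
% ordinary probability theory.
\begin{align}
  p(A \wedge B \mid C) &= p(A \mid B \wedge C)\, p(B \mid C)
    && \text{(product rule)} \\
  p(A \mid C) + p(\lnot A \mid C) &= 1
    && \text{(sum rule)}
\end{align}
% Bayes' theorem is just the product rule read in both directions:
\begin{equation}
  p(A \mid B \wedge C) = \frac{p(B \mid A \wedge C)\, p(A \mid C)}{p(B \mid C)}
\end{equation}
```

On this picture, deductive logic is the limiting case where every plausibility is 0 or 1.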
Here is a second statement.
Statistics is just like logic, except with a big N.
This is a more data-driven view — the world is full of things and they need to be described. Logical rules can help you describe things, but you also have to deal with averages, correlations, and other things based on counting.
I don’t have any fancy cites or much developed thought on this yet.
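Still, here is a toy illustration of the counting point (a made-up sketch; the data and variable names are hypothetical): with three things you can just assert exact facts about each one, but with a million things you fall back on summaries built out of counting.

```python
import random

# Hypothetical example: describing a large collection of things.
random.seed(0)
N = 1_000_000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
ys = [2.0 * x + random.gauss(0.0, 0.5) for x in xs]

# Small-N, logic-style description: exact statements about individuals,
# e.g. "x[0] is positive and y[0] exceeds x[0]". Fine for three things,
# hopeless for a million.

# Big-N, statistics-style description: abstractions built out of
# counting (sums over the collection), i.e. averages and correlations.
mean_x = sum(xs) / N
mean_y = sum(ys) / N
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / N
sd_x = (sum((x - mean_x) ** 2 for x in xs) / N) ** 0.5
sd_y = (sum((y - mean_y) ** 2 for y in ys) / N) ** 0.5
print(f"mean(x) = {mean_x:.3f}, corr(x, y) = {cov / (sd_x * sd_y):.3f}")
```

Every statistical quantity here bottoms out in a sum over the collection, which is all I really mean by counting.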
Here are two other views I’ve seen…
Johan van Benthem: probability theory is “logic with numbers”. I only saw this mentioned in passing in a subtitle of some lecture notes; this is not his official position or anything. Multi-valued and fuzzy logics can fit this description too. (Is fuzzy logic statistical? I don’t know much about it, other than that the Bayesians claim a weakness of fuzzy logic is that it doesn’t naturally relate to statistics.)
Manning and Schütze: statistics has to do with counting. (In one of the intro chapters of FSNLP). Statistics-as-counting seems more intriguing than statistics-as-aggregate-randomness.
Not sure how all these different possibilities combine or interact.

2 comments to “Statistics is big-N logic?”
Shawn wrote: 26 March 2007 at 4:00 am:
Could you explain what “big N” means to those of us who are statistically ignorant (e.g. me)? Your statement is amazingly opaque to me as is.
Brendan wrote: 27 March 2007 at 6:14 am:
Oh, I’m not sure it means much to anyone besides me :) Whenever there’s an experiment or a study, “N” often refers to the total number of data points (number of subjects, trials, samples, etc.). If you’re building an intelligent system, there’s a difference between having to support learning and inference over big datasets of many different things and only having to handle a small number of things. There is quite a history of sophisticated logical systems that fail to scale to real-world data, while ludicrously simple statistical systems can be surprisingly robust.
The idea is that logical formalisms — boolean algebras, relations, functions, or frames and the like — are good at describing complex structure and relations for small sets of things. But if you have lots of things, you need to introduce abstractions involving counting — averages, covariance, correlations, and the like. The even more vague idea is that, given a nice axiomatic formalism, perhaps there is a way in which statistical notions result from having to consider learning/inference over large quantities of things.

There are approaches to probability theory (Cox/Jaynes, as I understand it) that derive it as a consequence of adding induction/uncertainty to Boolean logic… I was wondering if there might be something analogous for statistics.
