2008/6/21 Wei Dai [EMAIL PROTECTED]:
A different way to break Solomonoff Induction takes advantage of the fact
that it restricts Bayesian reasoning to computable models. I wrote about
this in "is induction unformalizable?" [2] on the everything mailing list.
Abram Demski also made similar points.
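To make the restriction concrete, here is a minimal toy sketch (my own,
hypothetical) of Bayesian prediction over a computable model class. The real
Solomonoff prior weights every program p for a universal machine by 2^-|p|;
this toy stands in all periodic bit patterns of period <= 4 with the same
style of length-based prior:

# Toy sketch: Bayesian prediction restricted to a computable model class.
def hypotheses(max_period=4):
    """Enumerate the computable models: each one is a repeating bit pattern."""
    for period in range(1, max_period + 1):
        for n in range(2 ** period):
            yield tuple((n >> i) & 1 for i in range(period))

def predict_next(observed):
    """Posterior-weighted probability that the next bit is 1."""
    weight_one = weight_total = 0.0
    for pattern in hypotheses():
        prior = 2.0 ** -len(pattern)  # shorter description, higher prior
        consistent = all(bit == pattern[i % len(pattern)]
                         for i, bit in enumerate(observed))
        if consistent:  # likelihood is 1 if consistent, 0 otherwise
            weight_total += prior
            weight_one += prior * pattern[len(observed) % len(pattern)]
    return weight_one / weight_total if weight_total else 0.5

print(predict_next([0, 1, 0, 1, 0, 1]))  # 0.0: every surviving model says 0 next

Note that the predictor itself (predict_next) is not among the hypotheses it
mixes over, which is exactly the kind of self-exclusion at issue below.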
I just read Abram Demski's comments on Loosemore's "Complex Systems,
Artificial Intelligence and Theoretical Psychology" at
http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html
I thought Abram's comments were interesting, and I just wanted to make a few
remarks.
Quick argument for the same point: AIXI is uncomputable, but only
considers computable models. The anthropic principle requires a
rational entity to include itself in all models that are given nonzero
probability. AIXI obviously cannot do so, since it is not itself
computable and therefore appears in none of the models it considers.
Such an argument fails for computable approximations of AIXI, since a
computable agent can in principle appear in its own model class.
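To see why self-inclusion matters, here is a hypothetical toy sketch (not
from the thread): an environment that simulates a computable predictor can
diagonalize against it, while an environment that simulated AIXI this way
would be uncomputable and so lies outside the model class AIXI reasons over:

# Toy sketch: a "diagonal" environment that contains its own predictor.
def predictor(history):
    """Any computable next-bit predictor; majority vote, for example."""
    return int(sum(history) * 2 > len(history))

def diagonal_environment(history):
    """Runs the predictor on the same history and emits the opposite bit."""
    return 1 - predictor(history)

history, errors = [], 0
for _ in range(100):
    guess = predictor(history)
    bit = diagonal_environment(history)
    errors += int(guess != bit)
    history.append(bit)
print(errors)  # 100: wrong on every bit, because the environment contains it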
Jim,
On 6/21/08, Jim Bromer [EMAIL PROTECTED] wrote:
The major problem I have is that writing a really really complicated
computer program is really really difficult.
The ONLY rational approach to this (that I know of) is to construct an
engine that develops and applies machine knowledge.
To be honest, I am not completely satisfied with my conclusion in the
post you refer to. I'm no longer so sure that the fundamental split
between logical and messy methods should occur at the line between
perfect and approximate methods. That is one type of messiness, but
only one type. I think you are
Abram,
A useful midpoint between these views is to decide what knowledge must
distill down to in order to relate pieces of it together and do whatever you
want to do. I did this with Dr. Eliza and realized that I had to have a
column in my DB that contained what people typically say to indicate the
presence of a particular problem.
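A minimal sketch of what such a column amounts to; the problem names and
indicator phrases here are invented for illustration, not Dr. Eliza's
actual contents:

# Toy sketch: map each problem to phrases people typically say about it.
INDICATOR_PHRASES = {
    "sleep disorder": ["can't sleep", "wake up tired", "up all night"],
    "chronic stress": ["always on edge", "can't relax"],
}

def detect_problems(statement):
    """Return every problem whose indicator phrases appear in the statement."""
    text = statement.lower()
    return [problem
            for problem, phrases in INDICATOR_PHRASES.items()
            if any(phrase in text for phrase in phrases)]

print(detect_problems("I just can't sleep, and I'm always on edge."))
# ['sleep disorder', 'chronic stress']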