On 11/8/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
> VLADIMIR NESOV IN HIS  11/07/07 10:54 PM POST SAID
>
> VLADIMIR>>>> "Hutter shows that prior can be selected rather arbitrarily
> without giving up too much"

BTW: there is a point in Hutter's book that I don't fully understand:
the belief contamination theorem. Is the contamination reintroduced at
each cycle in this theorem? (That is the only way it makes sense to me.)
>
> (However, I have read that for complex probability distributions the choice
> of the class of mathematical model you use to model the distribution is part
> of the prior-choosing issue, and can be important, but that did not seem to
> be addressed in the Solomonoff Induction paper.  For example, in some speech
> recognition each speech-frame model has a pre-selected number of
> dimensions, such as FFT bins (or related signal-processing derivatives), and
> each dimension is represented not by a single Gaussian but by a basis
> function comprised of a selected number of Gaussians.)

Yes. The choice of Solomonoff and Hutter is to take a prior distribution
over all computable things (the universal prior).
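
To make the weighting concrete: roughly, the prior mass of a string is
the summed weight 2^-length(p) of all programs p that produce it, so
strings with short generating programs get most of the mass. Below is a
minimal Python sketch of just that weighting; the toy_machine stand-in
for a universal Turing machine is my own made-up assumption, and only
the 2^-length weighting reflects Solomonoff/Hutter.

    from itertools import product

    def toy_machine(program):
        # Hypothetical stand-in for a universal machine: the first bit
        # picks a rule, the remaining bits are echoed (rule 0) or
        # bit-flipped (rule 1). Nothing about this machine is from
        # Hutter; it only gives us something to enumerate over.
        if not program:
            return None
        head, tail = program[0], program[1:]
        if head == "0":
            return tail
        return "".join("1" if b == "0" else "0" for b in tail)

    def universal_prior(x, max_len=12):
        # mass of x ~ sum of 2**-len(p) over programs p with toy_machine(p) == x
        mass = 0.0
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):
                p = "".join(bits)
                if toy_machine(p) == x:
                    mass += 2.0 ** -len(p)
        return mass

    print(universal_prior("0101"))  # here: two length-5 programs, mass 2 * 2**-5
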
>
> It seems to me that when we don't have much frequency data, we humans
> normally make a guess based on the probability of similar things, as
> suggested in the Kemp paper I cited.  It seems to me that is by far the
> most commonsensical approach.  In fact, due to the virtual omnipresence
> of non-literal similarity in everything we see and hear (e.g., the same
> face virtually never hits V1 in exactly the same way), most of our
> probabilistic thinking is dominated by similarity-derived probabilities.
>
I think the main point is this: Bayesian reasoning is about conditional
distributions, and Solomonoff's / Hutter's work is about conditional
complexities. (Although directly taking conditional Kolmogorov
complexity didn't work; there is a paragraph about this in Hutter's
book.) When you build a posterior over TMs from all that vision data
using the universal prior, you are looking for the simplest cause. You
do get "the probability of similar things", because similar things can
be simply transformed into the thing in question, and moreover you get
it summed with the probability of things that are similar in the
induced model space.
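
One standard way to put numbers on "similar things can be simply
transformed into each other" is to approximate complexity with an
ordinary compressor, as in Cilibrasi and Vitanyi's normalized
compression distance. This is only an approximation, with zlib standing
in for Kolmogorov complexity rather than Solomonoff induction itself,
and the test strings below are made up:

    import os
    import zlib

    def C(data):
        # crude proxy for description length: compressed size in bytes
        return len(zlib.compress(data, 9))

    def ncd(x, y):
        # Small when one string is a "short transformation" away from
        # the other, close to 1 when they share no structure.
        cx, cy, cxy = C(x), C(y), C(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"the quick brown fox jumps over the lazy dog" * 3
    b_similar = b"the quick brown fox jumped over the lazy dogs" * 3
    b_unrelated = os.urandom(len(a))

    print(ncd(a, b_similar))    # relatively small: similar strings
    print(ncd(a, b_unrelated))  # close to 1: unrelated strings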

>
> ED>>>> So, from a practical standpoint, which is all I really care about, is
> it a dead end?
>
> Also, do you, or anybody else, know whether Solmononoff (the only way I can
> remember the name is "Soul man on off", like Otis Redding with a microphone
> problem)

You scared me... Check the spelling again; it's like Solomon the king.

> Induction has the ability to deal with deep forms of non-literal similarity
> matching in its complexity calculations.  And if so, how?  And if not, isn't
> it brain dead?  And if it is brain dead, why is such a bright guy as Shane
> Legg spending his time on it?
>
Yes, it is all about non-literal similarity matching: as you said in a
later post, it comes down to finding a library that makes for very short
codes for a class of similar things.
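
For what it's worth, here is a rough sketch of that library idea, again
with zlib as a stand-in for an ideal coder. The log-line "class of
similar things" and the library string are made-up examples; the point
is only that coding the library once, and each item relative to it,
should beat coding every item independently.

    import zlib

    def C(data):
        # crude proxy for description length: compressed size in bytes
        return len(zlib.compress(data, 9))

    # a class of similar things: made-up log lines sharing most structure
    items = [f"user_{i:04d} logged in from host{i % 7} at 12:{i % 60:02d}".encode()
             for i in range(50)]
    # a "library" capturing the shared structure of the class
    library = b"user_ logged in from host at 12:"

    # cost of coding each item on its own
    independent_cost = sum(C(it) for it in items)
    # cost of the library plus each item coded given the library
    # (C(library + item) - C(library) approximates a conditional code length)
    two_part_cost = C(library) + sum(C(library + it) - C(library) for it in items)

    print(independent_cost, two_part_cost)  # the two-part code should come out shorter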

OK I must post now or I'll get lost in other posts ;-)
