Yeah, we use Occam's razor heuristics in Novamente, and they are commonly
used throughout AI.  For instance in evolutionary program learning one uses
a "parsimony pressure" which automatically rates smaller program trees as
more fit...
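
As a toy sketch (not Novamente's actual code, and the coefficient below is an illustrative value), parsimony pressure can be folded into a genetic-programming fitness function by subtracting a penalty proportional to program-tree size, so that among equally accurate programs the smaller one is rated fitter:

```python
# Illustrative sketch of "parsimony pressure" in evolutionary program
# learning. Hypothetical representation: program trees as nested tuples,
# e.g. ('+', 'x', ('*', 'x', 'x')). Raw fitness is penalized in
# proportion to tree size, biasing selection toward smaller programs.

PARSIMONY_COEFF = 0.01  # assumed value; in practice tuned per problem


def tree_size(tree):
    """Count the nodes in a nested-tuple program tree."""
    if not isinstance(tree, tuple):
        return 1  # a leaf (terminal) counts as one node
    # one node for the operator, plus the sizes of all child subtrees
    return 1 + sum(tree_size(child) for child in tree[1:])


def penalized_fitness(raw_fitness, tree):
    """Higher is better: raw fitness minus a size-proportional penalty."""
    return raw_fitness - PARSIMONY_COEFF * tree_size(tree)


# Two programs with identical raw fitness: the smaller tree wins.
small = ('*', 'x', 'x')                          # 3 nodes
big = ('+', ('*', 'x', 'x'), ('-', 'x', 'x'))    # 7 nodes
assert penalized_fitness(1.0, small) > penalized_fitness(1.0, big)
```

This is the simplest form of the idea; real GP systems often use subtler bloat-control schemes (depth limits, lexicographic parsimony), but the Occam-style bias is the same.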

ben

On Nov 8, 2007 12:21 PM, Edward W. Porter <[EMAIL PROTECTED]> wrote:

>  BEN>>>> However, the current form of AIXI-related math theory gives zero
> guidance regarding how to make  a practical AGI.
> ED>>>> Legg's Solomonoff Induction paper did suggest some down-and-dirty
> hacks, such as Occam's razor.  It would seem a Novamente-class machine could
> do a quick backward chaining of preconditions and their probabilities to
> guesstimate probabilities.  That would be a rough function of a complexity
> measure.  But actually it would be something much better, because it would be
> concerned not only with the complexity of elements and/or sub-events and
> their relationships but also with their probabilities and those of their
> relationships.
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>  -----Original Message-----
> *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
> *Sent:* Thursday, November 08, 2007 11:52 AM
> *To:* agi@v2.listbox.com
> *Subject:* Re: [agi] How valuable is Solmononoff Induction for real world
> AGI?
>
>   BEN>>>> [referring to Vlad's statement about AIXI's
> > uncomputability] "Now now, it doesn't require infinite resources -- the
> > AIXItl variant of AIXI only requires an insanely massive amount of
> > resources, more than would be feasible in the physical universe, but not an
> > infinite amount ;-) "
> >
> > ED>>>> So, from a practical standpoint, which is all I really care
> > about, is it a dead end?
> >
>
> "Dead end" would be too strong IMO, though others might disagree.
>
> However, the current form of AIXI-related math theory gives zero guidance
> regarding how to make a practical AGI.  To get practical guidance out of
> that theory would require some additional, extremely profound math
> breakthroughs, radically different in character from the theory as it exists
> right now.  This could happen.  I'm not counting on it, and I've decided not
> to spend time working on it personally, as fascinating as the subject area
> is to me.
>
>
> >  Also, do you, or anybody else, know if Solomonoff (the only way I can
> > remember the name is "Soul man on off", like Otis Redding with a microphone
> > problem) Induction has the ability to deal with deep forms of non-literal
> > similarity matching in its complexity calculations?  And if so, how?  And if
> > not, isn't it brain dead?  And if it is brain dead, why is such a bright
> > guy as Shane Legg spending his time on it?
> >
>
> Solomonoff induction is mentally all-powerful.  But it requires infinite
> computational resources to achieve this ubermentality.
>
> -- Ben G
> ------------------------------
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>
>

