Sure ... but my point is that unless the environment satisfies a certain
Occam-prior-like property, NARS will be useless...
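
To make "Occam-prior-like" concrete: one standard form (Solomonoff's)
weights each candidate program p that reproduces the observations by
2^-length(p), so data with short explanations gets high prior probability.
A toy sketch of just that weighting step, in Python (the bitstring
"programs" here are my own stand-in, not anything from NARS):

    def occam_weight(program_bits):
        """Prior weight 2^-len(p): shorter candidate programs dominate."""
        return 2.0 ** -len(program_bits)

    # Two stand-in hypotheses "explaining" the same observation stream.
    candidates = ["01", "0110100110010110"]
    total = sum(occam_weight(p) for p in candidates)
    for p in candidates:
        # The 2-bit hypothesis gets nearly all of the prior mass.
        print(p, occam_weight(p) / total)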

ben

On Tue, Oct 28, 2008 at 11:52 AM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Ben,
>
> You assert that Pei is forced to make an assumption about the
> regularity of the world to justify adaptation. Pei could also take a
> different tack. He could try to show that *if* a strategy exists
> that can be implemented given the finite resources, NARS will
> eventually find it. Thus, adaptation is justified on a sort of "we
> might as well try" basis. (The proof would involve showing that NARS
> searches the space of finite-state machines that can be implemented
> with the resources at hand, and is more likely to stay longer in
> configurations that give more reward, such that NARS would eventually
> settle on a configuration if that configuration consistently gave the
> highest reward.)
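>
> A toy version of that argument, with a tiny configuration space
> standing in for the set of implementable machines (the reward numbers
> and names are mine, purely illustrative):
>
>     import random
>
>     def reward(config):
>         # Stand-in for environmental reward; config 3 is consistently best.
>         return {0: 0.1, 1: 0.3, 2: 0.2, 3: 0.9}[config]
>
>     def time_fractions(steps=100000, seed=0):
>         rng = random.Random(seed)
>         time_in = [0, 0, 0, 0]
>         config = 0
>         for _ in range(steps):
>             time_in[config] += 1
>             # Leave the current configuration with probability 1 - reward:
>             # high-reward configurations are "stickier", so the walk
>             # settles where reward is consistently highest.
>             if rng.random() > reward(config):
>                 config = rng.randrange(4)
>         return [t / steps for t in time_in]
>
>     print(time_fractions())  # most of the time is spent in configuration 3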
>
> So, some form of learning can take place with no assumptions. The
> problem is that the search space is exponential in the resources
> available, so there is some optimal point at which the system performs
> best (because the amount of resources matches the problem), beyond
> which giving the system more resources would hurt performance (because
> the system must search an unnecessarily large space). In this sense,
> the system's behavior seems counterintuitive: it does not seem to be
> taking advantage of the increased resources.
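>
> To put a rough number on "exponential": counting deterministic machines
> with n states, binary input, and binary output (my own back-of-envelope
> model, just to show the growth) gives n^(2n) transition tables times
> 2^n output labelings. At n = 4 that is 4^8 * 2^4 = 1,048,576 machines;
> at n = 8 it is already 8^16 * 2^8, about 7.2 * 10^16.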
>
> I'm not claiming NARS would have that problem, of course.... just that
> a theoretical no-assumption learner would.
>
> --Abram
>
> On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> >
> > On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang <[EMAIL PROTECTED]> wrote:
> >>
> >> Ben,
> >>
> >> Thanks. So now others can see that I'm not attacking a straw man.
> >>
> >> My solution to Hume's problem, as embedded in the experience-grounded
> >> semantics, is to assume no predictability, but to justify induction
> >> as adaptation. However, that is a separate topic, which I've
> >> explained in my other publications.
> >
> > Right, but justifying induction as adaptation only works if the
> > environment is assumed to have certain regularities which can be
> > adapted to.  In a random environment, adaptation won't work.  So,
> > still, to justify induction as adaptation you have to make *some*
> > assumptions about the world.
> >
> > The Occam prior gives one such assumption: that (to give just one
> > form) sets of observations in the world tend to be producible by
> > short computer programs.
> >
> > For adaptation to successfully carry out induction, *some* property
> > vaguely comparable to this must hold, and I'm not sure whether you
> > have articulated which one you assume, or whether you leave this open.
> >
> > In effect, you implicitly assume something like an Occam prior,
> > because you're saying that a system with finite resources can
> > successfully adapt to the world ... which means that sets of
> > observations in the world *must* be approximately summarizable via
> > subprograms that can be executed within this system.
> >
> > So I argue that, even though it's not your preferred way to think
> > about it, your own approach to AI theory and practice implicitly
> > assumes that some variant of the Occam prior holds in the real world.
> >>
> >>
> >> Here I just want to point out that the original and basic meaning of
> >> Occam's Razor and those two common (mis)usages of it are not
> >> necessarily the same. I fully agree with the former, but not the
> >> latter, and I haven't seen any convincing justification of the
> >> latter. Instead, they are often taken for granted, under the name of
> >> Occam's Razor.
> >
> > I agree that the notion of an Occam prior is a significant conceptual
> > step beyond the original "Occam's Razor" precept enunciated long ago.
> >
> > Also, I note that, for those who posit the Occam prior as a **prior
> > assumption**, there is not supposed to be any convincing
> > justification for it.  The idea is simply this: one must make *some*
> > assumption (explicitly or implicitly) if one wants to do induction,
> > and this is the assumption that some people choose to make.
> >
> > -- Ben G
> >
> >
> >



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein


