On Tue, Oct 28, 2008 at 3:01 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> I believe I could prove that *mathematically*, in order for a NARS system to
> consistently, successfully achieve goals in an environment, that environment
> would need to have some Occam-prior-like property.

Maybe you can, but "to consistently, successfully achieve goals in an
environment" is not in my working definition of "intelligence", so I
don't really mind.

> However, even if so, that doesn't mean such is the best way to think about
> NARS ... that's a different issue.

Exactly. I'm glad we finally agree again. ;-)

Pei
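P.S. The Occam-prior-like property under discussion can be made concrete with a toy sketch: weight each hypothesis by 2^(-description length), so that among hypotheses consistent with the data, the shortest one dominates the posterior. The hypothesis names and bit lengths below are invented purely for illustration.

```python
# Toy Occam-style prior: prior mass halves with every extra bit of
# description length, so short hypotheses dominate the posterior.
# The hypotheses and their lengths are made up for illustration.

hypotheses = {
    # name: (description length in bits, consistent with the data?)
    "constant":     (3, False),
    "linear":       (8, True),
    "lookup-table": (40, True),
}

def occam_weight(length_bits):
    """Unnormalized Occam prior: 2**(-description length)."""
    return 2.0 ** (-length_bits)

# Keep only hypotheses that fit the observations, then normalize.
consistent = {name: occam_weight(bits)
              for name, (bits, fits) in hypotheses.items() if fits}
total = sum(consistent.values())
posterior = {name: w / total for name, w in consistent.items()}

# The 8-bit "linear" hypothesis gets nearly all the mass; the 40-bit
# lookup table, though equally consistent, is penalized exponentially.
print(posterior)
```

This is the sense in which "sets of observations tend to be producible by short computer programs" works as a prior assumption rather than as a theorem.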

> -- Ben G
>
> On Tue, Oct 28, 2008 at 11:58 AM, Pei Wang <[EMAIL PROTECTED]> wrote:
>>
>> Ben,
>>
>> It seems that you agree the issue I pointed out really exists, but
>> just take it as a necessary evil. Furthermore, you think I also
>> assumed the same thing, though I failed to see it. I won't argue
>> against the "necessary evil" part --- as far as you agree that those
>> "postulates" (such as "the universe is computable") are not
>> convincingly justified. I won't try to disprove them.
>>
>> As for the latter part, I don't think you can convince me that you
>> know me better than I know myself. ;-)
>>
>> The following is from
>> http://nars.wang.googlepages.com/wang.semantics.pdf , page 28:
>>
>> If the answers provided by NARS are fallible, in what sense these
>> answers are "better" than arbitrary guesses? This leads us to the
>> concept of "rationality". When infallible predictions cannot be
>> obtained (due to insufficient knowledge and resources), answers based
>> on past experience are better than arbitrary guesses, if the
>> environment is relatively stable. To say an answer is only a summary
>> of past experience (thus no future confirmation guaranteed) does not
>> make it equal to an arbitrary conclusion — it is what "adaptation"
>> means. Adaptation is the process in which a system changes its
>> behaviors as if the future is similar to the past. It is a rational
>> process, even though individual conclusions it produces are often
>> wrong. For this reason, valid inference rules (deduction, induction,
>> abduction, and so on) are the ones whose conclusions correctly
>> (according to the semantics) summarize the evidence in the premises.
>> They are "truth-preserving" in this sense, not in the model-theoretic
>> sense that they always generate conclusions which are immune from
>> future revision.
>>
>> --- so you see, I don't assume adaptation will always be successful,
>> or even successful with a certain probability. You can dislike this
>> conclusion, though you cannot say it is the same as what is assumed
>> by Novamente and AIXI.
>>
>> Pei
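[The quoted passage's claim that valid conclusions "summarize the evidence in the premises" can be sketched with NARS-style truth values, where a conclusion carries a (frequency, confidence) pair computed from evidence counts. The formulas below follow Wang's publications (k is the evidential horizon, here set to 1), but this is an illustrative sketch, not the NARS implementation.]

```python
# Sketch of evidence-summarizing truth values in the NARS style:
# a conclusion's truth is a (frequency, confidence) pair computed
# purely from past evidence, with no claim of immunity to future
# revision.  K is the evidential horizon (k = 1 in Wang's papers).

K = 1.0

def truth_value(positive, total):
    """Summarize evidence: frequency = w+/w, confidence = w/(w+k)."""
    frequency = positive / total
    confidence = total / (total + K)
    return frequency, confidence

# Ten observations, nine positive: high frequency, fair confidence.
f, c = truth_value(9, 10)
print(f, c)   # frequency 0.9, confidence ~0.909

# More evidence raises confidence toward (but never reaching) 1,
# which is the sense in which every conclusion remains revisable.
f2, c2 = truth_value(90, 100)
```

Note the design point: confidence depends only on the amount of evidence collected so far, so no finite experience ever yields certainty — adaptation without a guarantee of success.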
>>
>> On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>> >
>> >
>> > On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang <[EMAIL PROTECTED]>
>> > wrote:
>> >>
>> >> Ben,
>> >>
>> >> Thanks. So the other people now see that I'm not attacking a straw man.
>> >>
>> >> My solution to Hume's problem, as embedded in the experience-grounded
>> >> semantics, is to assume no predictability, but to justify induction as
>> >> adaptation. However, it is a separate topic which I've explained in my
>> >> other publications.
>> >
>> > Right, but justifying induction as adaptation only works if the
>> > environment is assumed to have certain regularities which can be
>> > adapted to.  In a random environment, adaptation won't work.  So,
>> > still, to justify induction as adaptation you have to make *some*
>> > assumptions about the world.
>> >
>> > The Occam prior gives one such assumption: that (to give just one
>> > form) sets of observations in the world tend to be producible by
>> > short computer programs.
>> >
>> > For adaptation to successfully carry out induction, *some* vaguely
>> > comparable property to this must hold, and I'm not sure if you have
>> > articulated which one you assume, or if you leave this open.
>> >
>> > In effect, you implicitly assume something like an Occam prior,
>> > because you're saying that a system with finite resources can
>> > successfully adapt to the world ... which means that sets of
>> > observations in the world *must* be approximately summarizable via
>> > subprograms that can be executed within this system.
>> >
>> > So I argue that, even though it's not your preferred way to think
>> > about it, your own approach to AI theory and practice implicitly
>> > assumes some variant of the Occam prior holds in the real world.
>> >>
>> >>
>> >> Here I just want to point out that the original and basic meaning of
>> >> Occam's Razor and those two common (mis)usages of it are not
>> >> necessarily the same. I fully agree with the former, but not the
>> >> latter, and I haven't seen any convincing justification of the latter.
>> >> Instead, they are often taken as granted, under the name of Occam's
>> >> Razor.
>> >
>> > I agree that the notion of an Occam prior is a significant
>> > conceptual step beyond the original "Occam's Razor" precept
>> > enunciated long ago.
>> >
>> > Also, I note that, for those who posit the Occam prior as a **prior
>> > assumption**, there is not supposed to be any convincing
>> > justification for it.  The idea is simply that: one must make
>> > *some* assumption (explicitly or implicitly) if one wants to do
>> > induction, and this is the assumption that some people choose to
>> > make.
>> >
>> > -- Ben G
>> >
>> >
>> >
>> > ________________________________
>> > agi | Archives | Modify Your Subscription
>>
>>
>> -------------------------------------------
>> agi
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>> Modify Your Subscription: https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "A human being should be able to change a diaper, plan an invasion, butcher
> a hog, conn a ship, design a building, write a sonnet, balance accounts,
> build a wall, set a bone, comfort the dying, take orders, give orders,
> cooperate, act alone, solve equations, analyze a new problem, pitch manure,
> program a computer, cook a tasty meal, fight efficiently, die gallantly.
> Specialization is for insects."  -- Robert Heinlein
>
>


