Ben wrote: "I remain convinced that probability theory is a proper
foundation for uncertain inference in an AGI context, whereas Pei remains
convinced of the opposite....So, this is really the essential issue, rather
than the particularities of the algebra..."

But, please, don't stop discussing that algebra.  This is the most fun I've
had on an e-mail list in years!

Cheers,
Brad

Ben Goertzel wrote:
> 
> 
> On Tue, Sep 23, 2008 at 9:28 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> 
>     On Tue, Sep 23, 2008 at 7:26 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>     > Wow! I did not mean to stir up such an argument between you two!!
> 
>     Abram: This argument has been going on for about 10 years, with some
>     "on" periods and "off" periods, so don't feel responsible for it ---
>     you just raised the right topic at the right time to turn it "on"
>     again. ;-)
> 
> 
> 
> Correct ... Pei and I worked together on the same AI project for a few
> years (1998-2001) and had related arguments in person many times during
> that period, and have continued the argument off and on over email...
> 
> It has been an interesting and worthwhile discussion, from my point of
> view anyway, but neither of us has really convinced the other...
> 
> I remain convinced that probability theory is a proper foundation for
> uncertain inference in an AGI context, whereas Pei remains convinced of
> the opposite ...
> 
> So, this is really the essential issue, rather than the particularities
> of the algebra...
> 
> The reason this is a subtle point is roughly as follows (this is my
> view; Pei's surely differs).
> 
> I think it's mathematically and conceptually clear that, for a system
> with unbounded resources, probability theory is the right way to reason.
> However, if you look at Cox's axioms
> 
> http://en.wikipedia.org/wiki/Cox%27s_theorem
> 
> you'll see that the third one (consistency) cannot reasonably be
> expected of a system with severely bounded computational resources...
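For anyone following along without clicking through, Cox's conditions can be paraphrased roughly as follows (my summary of the linked Wikipedia article, not part of Ben's message):

```latex
% Cox's conditions on a real-valued plausibility p(. | .), paraphrased:
% 1. Divisibility/comparability: the plausibility of a proposition A
%    given background knowledge C is a single real number p(A | C).
% 2. Common sense: plausibilities track the logic of the propositions:
%    p(not A | C) is a function of p(A | C), and p(A and B | C) is a
%    function of p(B | C) and p(A | B and C).
% 3. Consistency: if a plausibility can be derived in more than one
%    way, every derivation must yield the same value.
% Cox's theorem: any p meeting these is, up to rescaling, a probability:
\[
  p(A \land B \mid C) = p(A \mid B \land C)\, p(B \mid C),
  \qquad
  p(A \mid C) + p(\lnot A \mid C) = 1 .
\]
```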
> 
> So the question, conceptually, is: if a cognitive system can only
> approximately obey Cox's third axiom, then is it really sensible for
> the system to explicitly approximate probability theory ... or not?
> Because there is no way for the system to *exactly* follow probability
> theory....
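To see why the third axiom is the sticking point, here is a toy illustration (mine, not Ben's; rounding stands in for bounded resources): a reasoner that truncates intermediate plausibilities can derive the very same conjunction two ways and get two different answers, which is exactly a consistency violation.

```python
# Toy resource-bounded reasoner: the product rule, but with every
# intermediate result rounded to one decimal place (a crude stand-in
# for limited precision / limited computation).

def approx_and(p, q):
    """Plausibility of a conjunction of independent propositions,
    truncated to bounded precision."""
    return round(p * q, 1)

p_a, p_b, p_c = 0.8, 0.6, 0.9  # plausibilities; independence assumed

path1 = approx_and(approx_and(p_a, p_b), p_c)  # group as (A and B) and C
path2 = approx_and(p_a, approx_and(p_b, p_c))  # group as A and (B and C)

# The two derivations of P(A and B and C) disagree: 0.5 vs 0.4 --
# the same conclusion reached two ways yields two different values.
print(path1, path2)
```

With exact arithmetic both groupings would of course agree; only the truncation breaks them apart.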
> 
> There is not really any good theory of what reasoning math a system
> should (implicitly or explicitly) emulate given limited resources...
> Pei has his hypothesis, I have mine ... I'm pretty confident I'm right,
> but I can't prove it ... nor can he prove his view...
> 
> Lacking a comprehensive math theory of these things, the proof is gonna
> be in the pudding ...
> 
> And, it is quite possible IMO that both approaches can work, though
> they will not fit into the same AGI systems.  That is, an AGI system in
> which NARS would be an effective component would NOT necessarily look
> the same as an AGI system in which PLN would be an effective
> component...
> 
> Along these latter lines:
> One thing I do like about using a reasoning system with a probabilistic
> foundation is that it lets me very easily connect my reasoning engine
> with other cognitive subsystems also based on probability theory ...
> say, a Hawkins-style hierarchical perception network (which is based on
> Bayes nets) ... MOSES for probabilistic evolutionary program learning,
> etc.  Probability theory is IMO a great "lingua franca" for connecting
> different AI components into an integrative whole...
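As a concrete sketch of that "lingua franca" idea (my toy code, not from PLN or any of the systems named; the component names are hypothetical): when every subsystem reports likelihoods, Bayes' rule gives one uniform way to fuse their outputs.

```python
def fuse(prior, reports):
    """Fuse likelihood reports (P(obs|H), P(obs|not H)) from several
    independent subsystems with a prior P(H), via Bayes' rule.
    Returns the posterior P(H | all observations)."""
    p_h, p_not_h = prior, 1.0 - prior
    for like_h, like_not_h in reports:
        p_h *= like_h          # evidence bearing on H from one component
        p_not_h *= like_not_h  # evidence bearing on not-H from the same one
    return p_h / (p_h + p_not_h)

# Hypothetical components: a perceptual hierarchy and a program
# learner both weigh in on the same hypothesis H.
posterior = fuse(
    prior=0.5,
    reports=[(0.9, 0.2),   # "perception": strong evidence for H
             (0.7, 0.4)],  # "learner": weaker evidence for H
)
print(posterior)  # roughly 0.89
```

The point is only that both components speak the same probabilistic language, so no translation layer is needed between them.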
> 
> -- Ben G
> 
> 


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com
