Richard Loosemore <[EMAIL PROTECTED]> said:

... your tangled(*) system might be just as vulnerable
to the problem as those thousands upon thousands of examples of complex
systems that are *not* understandable...

To the best of my knowledge, nobody has *ever* used "intuitive
understanding" to second-guess the stability of an artificial complex
system in which those four factors were all present in the elements in a
tightly coupled way.

So that is all we have as a reply to the complex systems problem:
engineers saying that they think they can just use "intuitive
understanding" to get around it.

Richard Loosemore

-----------------------
I don't wish to sound petty about this, but your description (or anybody's description) of the effects of the kind of complexity you are talking about would have to be developed using "intuitive understanding" as well. So the problems implied by your view (which must itself be intuitive) may not all be insurmountable.

I appreciate your bringing us up to speed on your view about this. I think I agree with the premise that AGI would have to be complex (or exhibit some aspect of complexity) and that this complexity will be difficult to comprehend "intuitively". But that does not mean we will never be able to develop, and learn to effectively utilize, devices of the kind we are talking about now.

Jim Bromer
