Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread DEREK ZAHN
Bob Mottram writes: Some things can be not so long as others. ... Thanks for taking the time for such in-depth descriptions, but I am still not clear what you are getting at. Much of what you write is a context in which the meaning of a term might have been learned, sometimes with multiple v

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Bob Mottram
On 01/05/07, DEREK ZAHN <[EMAIL PROTECTED]> wrote: what exactly do you think my internal simulation processes might be doing when I read the following sentence from your email? >In short, imagery from visual, acoustic and other sensory modalities give life through simulation to the basic skelet

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread DEREK ZAHN
To elaborate a bit: It seems likely to me that our minds work with the mechanisms of perception when appropriate -- that is, when the concepts are not far from sensory modalities. This type of concept is basically all that animals have and is probably most of what we have. Somehow, though, we

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread DEREK ZAHN
Bob Mottram writes: When you're reading a book or an email I think what you're doing is tying your internal simulation processes to the stream of words. Then it would be crucial to understand these simulation processes. For some very visual things I think I can follow what I think you are talking about
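
[Editorial sketch] A toy illustration of the word-tied simulation process discussed above, in Python. The percept table, feature vectors and blending rule are illustrative assumptions only, not anyone's actual architecture:

    import numpy as np

    # Hypothetical store of perceptual prototypes: each word maps to a small
    # feature vector standing in for remembered visual/acoustic experience.
    PERCEPTS = {
        "red": np.array([1.0, 0.0, 0.0]),
        "ball": np.array([0.2, 0.9, 0.1]),
        "rolls": np.array([0.1, 0.3, 0.8]),
    }

    def simulate(word_stream, decay=0.7):
        """Blend each word's perceptual prototype into a running simulation state."""
        state = np.zeros(3)
        for word in word_stream:
            percept = PERCEPTS.get(word)
            if percept is None:
                continue  # abstract or unknown word: nothing concrete to imagine
            state = decay * state + (1.0 - decay) * percept
        return state

    print(simulate("the red ball rolls".split()))

The open question in the thread is exactly what the real analogue of PERCEPTS and the blending step would be; the sketch only fixes the shape of the claim.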

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Bob Mottram
On 01/05/07, Mike Tintner <[EMAIL PROTECTED]> wrote: There is no choice about all this. You do not have an option to have a pure language AGI - if you wish any brain to understand the world, and draw further connections about the world, it HAS to operate with graphics and images. Period. Plato's

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread DEREK ZAHN
Mike Tintner writes: And... by now you should get the idea. And the all-important thing here is that if you want to TEST or question the above sentence, the only way to do it successfully is to go back and look at the reality. If you wanted to argue, "well look at China, they're rocketing wi

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Mike Tintner
...through is to go and LOOK at China. Of course, lazy people - philosophers - have always wanted to do it all - advance knowledge - by just playing with words - but it doesn't work. ----- Original Message - From: "Derek Zahn" <[EMAIL PROTECTED]> To: Sent: Tuesday, May 01, 20

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Mark Waser
ike Tintner" <[EMAIL PROTECTED]> To: Sent: Monday, April 30, 2007 9:38 PM Subject: **SPAM** Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI I should point out something amazing that has gone on here in all these conversations re language & images. No one see

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Derek Zahn
Mike Tintner writes: It goes ALL THE WAY. Language is backed by SENSORY images - the whole range. ALL your assumptions about how language can't be cashed out by images and graphics will be similarly illiterate - or, literally, UNIMAGINATIVE. I don't doubt that the visual and other sensory s

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Mike Tintner
MD: What does "warm" look like? How about "angry" or "happy"? Can you draw a picture of "abstract" or "indeterminate"? I understand (I think) where you are coming from, and I agree wholeheartedly - up to the point where you seem to imply that a picture of something is the totality of its charact

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Jean-Paul Van Belle
You're mostly correct about the word symbols (barring onomatopoeic words such as bang, hum, clipclop, boom, hiss, howl, screech, fizz, murmur, clang, buzz, whine, tinkle, sizzle and twitter, as well as prefixes, suffixes and derived wordforms, which all allow one to derive some meaning). However you are NOT correct

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Dougherty
On 4/30/07, Mike Tintner <[EMAIL PROTECTED]> wrote: The linguistic sign bears NO RELATION WHATSOEVER to the signified. true The only signs that bear relation to, and to some extent reflect, reality and real things are graphics [maps/cartoons/geometry/icons etc] and images [photos, statues,

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Tintner
...expensive and laborious, have suddenly become very cheap, and are becoming ever cheaper. - Original Message - From: "Bob Mottram" <[EMAIL PROTECTED]> To: Sent: Monday, April 30, 2007 11:37 PM Subject: Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Bob Mottram
On 30/04/07, Mike Dougherty <[EMAIL PROTECTED]> wrote: graphics, image, redrawn, visualizations - all indicative of a high degree of visual-spatial thinking. I'm curious, are your own AGI efforts modelled on this mode of thought? I ask because I wonder if the machine intelligence we build w

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mark Waser
From: "Charles D Hixson" <[EMAIL PROTECTED]> To: Sent: Monday, April 30, 2007 3:56 PM Subject: **SPAM** Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI Bob Mottram wrote: On 30/04/07, *Mike Tintner* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Charles D Hixson
Bob Mottram wrote: On 30/04/07, *Mike Tintner* <[EMAIL PROTECTED]> wrote: Best example I can think of is William Calvin saying something like: "the conscious mind is clearly designed to deal with problematic decisions, where existing solutions won't work.

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mark Waser
agi@v2.listbox.com Sent: Monday, April 30, 2007 8:33 AM Subject: **SPAM** Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI obvious rejoinder: how can you have "correct" handling of uncertainty? Perhaps you mean "effective/most effective available"

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Dougherty
On 4/30/07, Mike Tintner <[EMAIL PROTECTED]> wrote: it is in the human brain. Every concept must be a tree, which can continually be added to and fundamentally altered. Every symbolic concept must be grounded in a set of graphics and images, which are provisional and can continually be redrawn.
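
[Editorial sketch] Read concretely, the "every concept must be a tree, grounded in provisional, redrawable images" claim quoted above could look something like the following minimal Python sketch. The class and field names are invented for illustration; this is not a description of any poster's system:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Concept:
        """A symbolic concept: a revisable tree grounded in provisional images."""
        name: str
        children: List["Concept"] = field(default_factory=list)
        groundings: List[str] = field(default_factory=list)  # e.g. image/sketch ids

        def add_child(self, child: "Concept") -> None:
            self.children.append(child)  # the tree can always be extended

        def redraw(self, old: str, new: str) -> None:
            """Replace one provisional grounding with a newer image."""
            self.groundings = [new if g == old else g for g in self.groundings]

    animal = Concept("animal", groundings=["sketch_quadruped_v1"])
    animal.add_child(Concept("dog", groundings=["photo_dog_017"]))
    animal.redraw("sketch_quadruped_v1", "sketch_quadruped_v2")

Nothing here is ever final: both the tree structure and the groundings stay open to revision, which is the property the post insists on.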

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Bob Mottram
On 30/04/07, Mike Tintner <[EMAIL PROTECTED]> wrote: Best example I can think of is William Calvin saying something like: "the conscious mind is clearly designed to deal with problematic decisions, where existing solutions won't work. The smartest mind is the one that can find the correct answer to those problems."

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Tintner
...can find the correct answer to those problems." Well, that's a definite self-contradiction. There is no correct answer to problematic decisions, only a calculated gamble. - Original Message - From: Benjamin Goertzel To: agi@v2.listbox.com Sent: Monday, April 30, 2007

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Benjamin Goertzel
Your reactions please about to what extent any modern AGI incorporates uncertainty and provisionality of knowledge, and the need for rightness of other forms of AI. Well, a number of modern AGI designs (Novamente, NARS) are specifically founded on uncertain logic systems... in which correc
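
[Editorial sketch] For readers unfamiliar with what "founded on uncertain logic systems" means in practice, here is a minimal sketch of NARS-style (frequency, confidence) truth values and their revision rule, assuming an evidential horizon K = 1; the actual machinery in NARS and in Novamente's probabilistic logic is considerably richer than this:

    K = 1.0  # evidential horizon (an assumption of this sketch)

    def to_evidence(frequency, confidence):
        """Convert a (frequency, confidence) truth value into evidence counts."""
        total = K * confidence / (1.0 - confidence)
        return frequency * total, total  # (positive evidence, total evidence)

    def revise(tv1, tv2):
        """Merge two independent judgements of the same statement."""
        pos1, tot1 = to_evidence(*tv1)
        pos2, tot2 = to_evidence(*tv2)
        pos, tot = pos1 + pos2, tot1 + tot2
        return pos / tot, tot / (tot + K)  # back to (frequency, confidence)

    # Two uncertain observations of the same statement: the merged belief is
    # more confident than either source, but its confidence never reaches 1.
    print(revise((0.9, 0.45), (0.8, 0.30)))

The point relevant to the thread is that a belief in such a system is never simply "right": confidence grows with evidence but stays strictly below 1, so every conclusion remains provisional.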

[agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Tintner
This exchange below & that with Mike D brought into focus another key issue of AGI on which I'd like comments back. My impression is: AI has been strangled by a rationalistic desire to be RIGHT - to get the right answer every time. This can be called "psychological/behavioural monism." This desire is ma