Re: [agi] more interesting stuff

2003-02-26 Thread Shane Legg
Kevin wrote: Kevin's random babbling follows: Is there a working definition of exactly what complexity is? It seems quite subjective to me. But setting that aside for the moment... I think the situation is similar to that with the concept of intelligence, in the sense that it means

[agi] Loebner prize

2003-02-26 Thread Ben Goertzel
A funny article... http://www.salon.com/tech/feature/2003/02/26/loebner_part_one/index.html

[agi] swarm intelligence

2003-02-26 Thread Kevin
An interesting approach to AI by modeling ant colony behavior: http://www.openp2p.com/pub/a/p2p/2003/02/21/bonabeau.html Peace, Kevin
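For readers curious what the ant-colony idea looks like in code, here is a minimal toy sketch (my own illustration, not taken from the Bonabeau interview): simulated ants never message each other directly, but coordinate through a shared pheromone field that they deposit into and read from locally.

import random

# Toy 1-D stigmergy sketch: ants coordinate only through a shared
# pheromone field; there is no direct ant-to-ant communication at all.
SIZE, FOOD_CELL, STEPS, N_ANTS = 30, 25, 2000, 20
EVAPORATION, DIFFUSION = 0.8, 0.1

pheromone = [0.0] * SIZE
ants = [random.randrange(SIZE) for _ in range(N_ANTS)]
food_visits = 0

for _ in range(STEPS):
    for i, pos in enumerate(ants):
        left = pheromone[pos - 1] if pos > 0 else 0.0
        right = pheromone[pos + 1] if pos < SIZE - 1 else 0.0
        if (left or right) and random.random() < 0.8:
            step = -1 if left > right else 1      # climb the local gradient
        else:
            step = random.choice([-1, 1])         # otherwise wander
        pos = min(max(pos + step, 0), SIZE - 1)
        if pos == FOOD_CELL:
            pheromone[pos] += 1.0                 # mark the find for the others
            food_visits += 1
            pos = random.randrange(SIZE)          # wander off and keep searching
        ants[i] = pos
    # the shared field evaporates and diffuses a little each step
    pheromone = [(pheromone[c]
                  + DIFFUSION * ((pheromone[c - 1] if c > 0 else 0.0)
                                 + (pheromone[c + 1] if c < SIZE - 1 else 0.0)))
                 * EVAPORATION
                 for c in range(SIZE)]

print("food visits:", food_visits)
print("pheromone peak at cell:", pheromone.index(max(pheromone)))

In a typical run the pheromone peak ends up at the food cell and visits become more frequent once a trail exists; crude, but it conveys the flavor of the approach.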

Re: [agi] swarm intelligence

2003-02-26 Thread SMcClenahan
I have often thought this is the best architecture for an AGI (or brain for a mind). But those who have tried to implement massively parallel systems modelling neurons obviously haven't been successful, at least not yet. So as programmers, we perform tricks and come up with algorithms to try

Re: [agi] Loebner prize

2003-02-26 Thread SMcClenahan
On the serious side, is Loebner doing anything nowadays with regard to AI/AGI research? Other than promoting his competition for what seems like his own personal special interests ... cheers, Simon A funny article...

RE: [agi] Loebner prize

2003-02-26 Thread Ben Goertzel
Loebner is not himself an AI researcher, so far as I know. -- Ben G

Re: [agi] swarm intelligence

2003-02-26 Thread Brad Wyble
The limitation in multi-agent systems is usually the degree of interaction they can have. The bandwidth between ants, for example, is fairly low even when they are in direct contact, let alone 1 inch apart. This limitation keeps their behavior relatively simple, simple relative to what you

Re: [agi] swarm intelligence

2003-02-26 Thread Bill Hibbard
On Wed, 26 Feb 2003, Brad Wyble wrote: The limitation in multi-agent systems is usually the degree of interaction they can have. The bandwidth between ants, for example, is fairly low even when they are in direct contact, let alone 1 inch apart. This limitation keeps their behavior

Re: [agi] really cool

2003-02-26 Thread Stephen Reed
Regarding the Google voice search technology... Maybe there is no IP/phone number mapping whatsoever, as the web page says that the system has very limited capacity. So the search results page could be shared among all users. I imagine that the Google service is aimed at cell phone users and that the (to

Re: [agi] swarm intelligence

2003-02-26 Thread SMcClenahan
But hopefully the limited bandwidth of communication is compensated for by the power of parallel processing. So long as communication between ants or processing nodes is not completely blocked, some sort of intelligence should self-organize; then it's just a matter of time. As programmers or engineers we
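One way to make the low-bandwidth-plus-parallelism intuition concrete is gossip averaging, a toy example of my own (not anything proposed in the thread, and certainly not intelligence): every node holds a single number and, each round, exchanges just one value with one random peer, yet the whole population converges on the global average with no coordinator at all.

import random

# Gossip-averaging sketch: per round, each node swaps its value with one
# random peer and both adopt the pairwise average.  The per-pair bandwidth
# is tiny, but many such exchanges per round are enough for all nodes to
# agree on the global mean.
random.seed(1)
values = [random.uniform(0.0, 100.0) for _ in range(1000)]
true_mean = sum(values) / len(values)

for _ in range(50):
    for i in range(len(values)):
        j = random.randrange(len(values))     # one random peer
        avg = (values[i] + values[j]) / 2.0   # one tiny exchange
        values[i] = values[j] = avg           # both adopt the average

spread = max(values) - min(values)
print("true mean:", round(true_mean, 3))
print("estimates agree to within:", round(spread, 9))

It does at least show global information emerging from many low-bandwidth local exchanges running in parallel, which is the property the ant analogy leans on.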

RE: [agi] swarm intelligence

2003-02-26 Thread Ben Goertzel
But hopefully the limited bandwidth of communication is compensated for by the power of parallel processing. So long as communication between ants or processing nodes is not completely blocked, some sort of intelligence should self-organize; then it's just a matter of time. As programmers or engineers

Re: [agi] swarm intelligence

2003-02-26 Thread Brad Wyble
But hopefully the limited bandwidth of communication is compensated for by the power of parallel processing. So long as communication between ants or processing nodes is not completely blocked, some sort of intelligence should self-organize; then it's just a matter of time. As programmers or

Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-26 Thread Brad Wyble
Just to pick a point, Eliezer defines Seed AI as "Artificial Intelligence designed for self-understanding, self-modification, and recursive self-enhancement." I do not agree with you that pure Seed AI is a know-nothing baby. I was perhaps a bit extreme in my word choice, but I do not believe

Re: [agi] Loebner prize

2003-02-26 Thread Ed Heflin
A funny article... http://www.salon.com/tech/feature/2003/02/26/loebner_part_one/index.html Ben,

RE: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-26 Thread Stephen Reed
On Wed, 26 Feb 2003, Ben Goertzel wrote: Cyc seems moderately strong on declarative knowledge (though I think it misses most of the fine-grained declarative knowledge that helps us cope with the real world... it focuses on relatively abstract levels of declarative knowledge...) Agreed on the