On Sat, Aug 2, 2008 at 11:42 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>
> Hector....
>
> you say
>
>  In other words, there is nothing to do about AI or AGI but to look at the
>> systems we have already around. I do think that any of those simple systems
>> such as CA can achieve AGI of the kind we expect without having to do
>> anything else! From my point of view it is just a matter of technological
>> sophistication, of providing the necessary elements and interfaces to the
>> real world and not of theoretical foundations or the reinvention of new
>> algorithms. If the first is what AGI is focusing on, I think it is heading
>> to the right direction, but it would be a mistake of the same kind to think
>> of the AGI field as building AGI from a bottom-up approach than to think of
>> the Game of Life as designed or engineered. Your term "AGI design" and
>> "designing AGI" discourages me, though.
>>
>
> and I respectfully but radically disagree ;-)
>
> I understand the appeal of this "artificial life" type approach to AGI, and
> I do think it can conceivably work, but I doubt very very much it will be
> the first approach to AGI to succeed.  I think the computational resource
> requirements for that approach will be quite huge compared to other
> approaches.
>
> I think that an engineering based approach will succeed first, just as we
> succeeded in building airplanes first, rather than evolving a birdlike
> flying machine out of a prebiotic molecular soup...
>


Perhaps you are right, and an engineering-based approach will succeed first;
whatever happens, it is very exciting and I am looking forward to it,
contributing from my own field as well. But that doesn't change what I believe
will turn out to be the case: in the end, AGI will have been engineered
successfully because it was already there, just as happened with the Game of
Life, rather than because we were clever enough to reinvent what intelligence
is, as was somehow done with airplanes. I think current AI did for intelligence
what airplanes did for flying: we don't mind how we fly, but we do mind how
machines think or don't think (at least as a matter of research for AGI). That
we haven't been able to create intelligence of the kind seen in humans is
because we are still trying to build airplanes. While airplanes can just keep
flying as they do, unlike birds, current approaches from AI and AGI to general
intelligence will just keep producing airplane-type solutions to a
non-airplane problem.

I simply don't see why intelligence would need to be of a different type than
all the sophisticated systems already around us (in the way airplanes are so
different from how birds fly), yet I think that is exactly what all current
research on AI and AGI stands for. Research on complexity theory and complex
systems is doing better, but it still seems to miss what the real problem is,
and therefore how we should go about figuring it out for agi (I don't want to
write AGI, so as not to mix it up with the current field of research). Believe
it or not, I think systems such as the Game of Life and rule 110 (both capable
of universal computation) are all that is needed for agi. Disregarding matters
of computational efficiency, I think it is just a matter of time before someone
looks in the right direction at the right (almost any) system, in the right
place at the right time (the time when we figure out how to empower those
systems to interact as we do), just as it would take time to evolve a birdlike
flying machine out of a prebiotic molecular soup (though not as much, since we
are talking about different time scales), and then sees that almost anything
can produce the agi we were so eager for. Perhaps it will turn out to be
OpenCog, just as the Game of Life turned out to be capable of universal
computation.
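As an aside (this sketch is mine, not part of the original exchange): rule 110 is the elementary one-dimensional, two-state cellular automaton that Matthew Cook proved Turing-complete, which is the sense in which it is invoked above. Its entire update rule fits in the eight bits of the number 110, so a minimal simulation looks like this:

```python
# Rule 110: each cell's next state is determined by its 3-cell
# neighborhood; the 8 possible neighborhoods index into the bits
# of the rule number 110 (binary 01101110).

RULE = 110

def step(cells):
    """Advance one generation, treating cells beyond the edges as 0."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # pack left, center, right neighbors into a 3-bit index 0..7
        neighborhood = (padded[i] << 2) | (padded[i + 1] << 1) | padded[i + 2]
        out.append((RULE >> neighborhood) & 1)
    return out

# start from a single live cell at the right edge and print a few rows
cells = [0] * 15 + [1]
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The point of the argument above is that something this small already supports universal computation; everything further is, in this view, "technological sophistication" in how the system is interfaced, not new theory.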

In fact, what I am saying is that your OpenCog approach is likely capable of
agi, but it will not produce the expected output until what I am describing
and looking for happens. And even when you have the strong impression that you
are designing OpenCog, I still think it is just another sufficiently powerful
general system capable of general intelligence, and therefore something more
like a discovery than an invention.

What a piece of work OpenCog is; no doubt it looks very interesting. I do
strongly agree with Minsky's opinion on your wiki page (it turns out to be
pretty much what I am saying above):

No one has tried to make a thinking machine. The bottom line is that we
really haven't progressed too far toward a truly intelligent machine. We
have collections of dumb specialists in small domains; the true majesty of
general intelligence still awaits our attack. We have got to get back to the
deepest questions of AI and general intelligence and quit wasting time on
little projects that don't contribute to the main goal.

--- *Marvin Minsky* on AGI (as interviewed in Hal's Legacy, Edited by David
Stork, 2000)



>
>
> What started this thread was a discussion on the OpenCog mailing list (
> opencog.org) of a highly specific AGI design I've proposed called
> OpenCogPrime... see
>
> www.opencog.org/wiki/OpenCogPrime:WikiBook
>
> thx
> Ben
>
>  ------------------------------
>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



-- 
Hector Zenil http://zenil.mathrix.org


