--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Benjamin Goertzel wrote:
> > 
> > 
> >     Has anyone noticed that people who have studied AGI all their lives,
> >     like
> >     Kurzweil and Minsky, aren't trying to build one?
> > 
> > 
> >     -- Matt Mahoney, [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>
> > 
> > 
> > 
> > I don't get it ... some of us who have studied AGI all our lives ARE
> trying
> > to build one...
> 
> Yeah, ditto.
> 
> I don't think the people who stopped did so because the TASK is 
> incredibly hard; I think it is because, using their techniques, their 
> VERSION of the task was impossibly hard.
> 
> In fact, I have a sneaking suspicion that the task could turn out to be 
> not nearly as hard as is widely assumed (it might even be easy).  The 
> reason I say that is that there is a glaringly different approach to AI 
> that no one has tried in earnest, but which, on those few occasions that 
> anyone came near to trying it, worked astonishingly well.  That approach 
> (basically the connectionist one) only stopped working when people 
> inadvertently tried to 'improve' it to make it more rigorous.
> 
> We'll see.

We'll see.  Everyone thinks they know how to solve the problem, but then
nobody can even agree on what the problem is.

If a robot can solve Rubik's cube, is it smarter than a thermostat?
http://my.fit.edu/~pierrel/RASSL-Images/RASSL-Pics/RUBOT.mpg


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email