My exchange with Steve illustrates the point I made.
Steve began basically by asserting:
“Gee, AGI is so hard technologically... we’ll never be able to wrap our heads 
around it for yonks... I certainly can’t.”
And I said,
“No, it isn’t. Here’s a true AGI problem - an example of an AGI’s function: 
a slime mould/real-world robot must make its way through a maze that, unlike 
narrow AIs in a similar position, it *doesn’t already know how to navigate*. 
Slime moulds - all real-world AGIs/animals - can do that. And slime moulds 
are relatively uncomplicated technologically.”
He didn’t give it a moment’s thought, because he simply doesn’t know how to 
think about AGI functions and problems.
He went back to the maths, because that’s what he knows. He hasn’t the 
slightest idea whether maths is really relevant to real AGI problems, because 
he doesn’t know what those problems are. But maths and associated subjects are 
all he knows how to think about, so that’s what he thinks about.
And Steve is typical of all AGI-ers.


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now