On 21.08.2012 20:17, Ben Goertzel wrote:
There is talk of them spinning off their rationality stuff into a
separate org so as to regain more focus on Friendly AI

However, my understanding is that they intend to focus for a while on
Friendly AI-related theory, not implementing anything substantial until
they've come to what they consider an appropriate level of theoretical
understanding.



I look upon the aptly named and surprisingly well-funded Singhilarity Institute with some amusement. There seem to be a number of confusions going on in relation to:

 - chaotic neural systems
 - the origins of concepts
 - the sensitivity to initial conditions or mutations of feedback systems
 - distinctions between types of machines (does it matter where the ghost is?)
 - the capabilities of existing technology compared to human brains
 - the influence of existing narrow AI on human affairs


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
