Aaron Hosford wrote:
http://www.rec.ri.cmu.edu/about/news/11_01_minds.php
This project's approach is to use 3D simulation to detect and classify behavior, and then generate symbolic information about the events that were observed. I'm encouraged to see someone doing work on this stage of cognition, as I see perception as the "missing link" that's stopping AGI from developing.
I wonder, will a certain naysayer feel vindicated that someone else sees simulation as vital to intelligence (and is using it to solve precisely the problems he says it's needed to solve), or will he be annoyed that the ultimate form the information takes is symbolic, which is compatible with semantic nets or any number of other existing AGI approaches?
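The pipeline the linked project describes (simulate, classify observed behavior, emit symbolic descriptions) can be illustrated with a toy sketch. Everything here is invented for illustration, not taken from the CMU project: the function names, the distance-ratio thresholds, and the event-tuple format are all assumptions.

```python
import math

# Hypothetical sketch of a perception pipeline: watch simulated 3D
# positions over time, classify a relative behavior, and emit a
# symbolic event tuple. All names and thresholds are illustrative.

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_behavior(track_a, track_b):
    """Classify the relative behavior of two tracked objects.

    Compares inter-object distance at the start and end of the
    observation window and returns a symbolic label.
    """
    d_start = distance(track_a[0], track_b[0])
    d_end = distance(track_a[-1], track_b[-1])
    if d_end < d_start * 0.5:
        return "approach"
    if d_end > d_start * 2.0:
        return "retreat"
    return "hold"

def observe(track_a, track_b, name_a, name_b):
    """Produce a symbolic event tuple from raw simulated trajectories."""
    return (classify_behavior(track_a, track_b), name_a, name_b)

# Two simulated tracks: "agent" moves toward a stationary "target".
agent = [(10.0, 0.0, 0.0), (6.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
target = [(0.0, 0.0, 0.0)] * 3

event = observe(agent, target, "agent", "target")
print(event)  # → ('approach', 'agent', 'target')
```

The point of the sketch is the last step: the output is a symbolic tuple, which is exactly the form that semantic nets and other classic AGI representations can consume.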
The approach you are describing is, on one hand, a revolutionary advance along the lines of what I've been advocating for the last 6-8 months (which is usually the case)...
But, at the same time, tying it to classic approaches is a crippling limitation, because it robs the system of the kinds of introspection that are the hallmark of human cognition. Intelligence is a 'turtles all the way down' kind of thing: the cortex is an organized network of essentially one type of algorithm applied to itself again and again. So you would need to generalize your simulation layer so that it can work in abstract domains, in addition to the domains for which graphics-programming techniques are already reasonably well developed.
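The "one algorithm applied to itself again and again" idea can be made concrete with a minimal sketch. The abstraction step here is a toy pairwise chunking function, invented purely for illustration; the claim it models is only that the same operation, stacked on its own output, yields a hierarchy of increasingly abstract structures.

```python
# Hedged illustration of "turtles all the way down": one abstraction
# step applied repeatedly to its own output. The chunk() step is a toy
# stand-in for whatever the cortical algorithm actually is.

def chunk(layer):
    """Apply one abstraction step: pair up adjacent items."""
    return [(layer[i], layer[i + 1]) for i in range(0, len(layer) - 1, 2)]

def abstract(signal, depth):
    """Apply the same chunking step `depth` times, keeping every layer."""
    layers = [signal]
    for _ in range(depth):
        layers.append(chunk(layers[-1]))
    return layers

layers = abstract(["a", "b", "c", "d"], 2)
print(layers[1])  # → [('a', 'b'), ('c', 'd')]
print(layers[2])  # → [(('a', 'b'), ('c', 'd'))]
```

Note that `chunk` never inspects what its inputs are; raw signals and previously chunked structures are treated identically, which is the property a generalized simulation layer would need in order to operate on abstract domains as well as 3D ones.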
--
E T F N H E D E D
Powers are not rights.

AGI Archives: https://www.listbox.com/member/archive/303/=now
