Hi Brad,
really excited about Novamente as an AGI system, we'll need splashy demos. They will come in time, don't worry ;-) .... We have specifically chosen to
Looking forward to it as ever :) I can understand your frustration with this state of affairs. Getting people to buy into your theoretical framework requires a major time investment on their part.
This is why my own work stays within the bounds of conventional experimental and psychological research. I speak the same language as everyone else, so it's easy to cross-pollinate ideas. Of course, this is also why SOAR and similar architectures have such appeal despite their limitations. Because the SOAR community speaks a common language, it's possible (in theory) for the whole group to make faster progress than if each member had their own pet architecture.
This synergy is very real, but may be outweighed by SOAR's limitations.
And I hope my comments didn't seem to be "dissing" Deb Roy's work. It's really good stuff, and was among the more interesting work at this conference, for sure.
Not at all, I think we're in general agreement about the value of his work.
Now, I understand well that the human brain is a mess with a lot of complexity, a lot of different parts doing diverse things. However, what I think Minsky's architecture does is explicitly embed, in his AI design, a diversity of phenomena that are better thought of as emergent. My argument with him then comes down to a series of detailed arguments as to whether this or that particular cognitive phenomenon:
a) is explicitly encoded or emergent in human cognitive neuroscience
b) is better explicitly encoded, or coaxed to emerge, from an AI system
In each case, it's a judgment call, and some cases are better understood based on current AI or neuroscience knowledge than others. But I think Minsky has a consistent, very strong bias toward explicit encoding. This is the same kind of bias underlying Cyc and a lot of GOFAI.
Whether something is explicit or emergent depends only on your perspective of what counts as explicit. I'll assume you mean anatomically explicit in some way (where "anatomical" refers to features of both neurophysiology and box/arrow design).
With this assumption, I think (b) follows from (a). Evolution tends to find efficient solutions, so if evolution has explicitly encoded these behaviors, that's likely the best way to do it, at least as far as we'll be able to determine with our "stupid human brains" :)
There's certainly a huge preponderance of evidence that our brains have leaned towards specific anatomically explicit solutions to problems in the domains that we can examine easily (near the motor and sensory areas).
Of course, in many cases these anatomically explicit solutions are emergent from developmental processes, but I still think they should be considered explicit.
-Brad
