Hi,

> Looking forward to it as ever :) I can understand your frustration with
> this state of affairs. Getting people to buy into your theoretical
> framework requires a major time investment on their part.
>
> This is why my own work stays within the bounds of conventional
> experimental and psychological research. I speak the same language as
> everyone else, and so it's easy to cross-pollinate ideas. Of course, this
> is also why SOAR and similar architectures have such appeal despite their
> limitations. Because the SOAR community is speaking the same language to
> one another, it's possible (in theory) for the whole of them to make
> faster progress than if they each had their own pet architecture.
Yes, this issue of specialized languages is a hard problem for AGI work. This is one reason that, when hiring people for Novamente projects, I have a bias toward former Webmind-ers.... Even though Novamente is a quite different software system and mathematical framework from Webmind, it's based on the same sort of "conceptual language", and the folks who worked at Webmind are used to that language.

I noticed at this conference that different researchers were using basic words like "knowledge" and "representation" and "learning" and "evolution" in very different ways -- which makes communication tricky! When Pei Wang and I worked together in 1998-2001, we spent about a month initially just establishing a common language in which we could communicate well enough to really understand what our agreements and disagreements were...

> Whether something is explicit or emergent depends only on your
> perspective of what counts as explicit. I'll assume you mean anatomically
> explicit in some way (where anatomical refers to features of both
> neurophysiology and box/arrow design).

In an AI context, it means whether something exists explicitly in the source code, rather than coming about dynamically, as an indirect result of the source code, in the bit-patterns in RAM created by the running executable...

> With this assumption, I think b follows from a. Evolution has
> always looked for the efficient solution, so if evolution has explicitly
> encoded these behaviors, it's likely the best way to do it, at least as
> far as we'll be able to determine with our "stupid human brains" :)
>
> There's certainly a huge preponderance of evidence that our brains have
> leaned towards specific anatomically explicit solutions to
> problems in the domains that we can examine easily (near the motor and
> sensory areas).
>
> Of course, in many cases these anatomically explicit solutions are
> emergent from developmental processes, but I still think they should be
> considered explicit.

Agreed.
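To make the explicit-vs-emergent distinction concrete, here is a toy sketch of my own (purely hypothetical -- nothing to do with the Webmind or Novamente codebases). In the first function the behavior is literally written in the source code; in the second, the source code only specifies a simple local update rule, and the global pattern the system settles into exists nowhere in the code, only in the run-time state:

```python
# Explicit: the stimulus-response mapping is spelled out in the source.
def explicit_reflex(stimulus):
    if stimulus == "hot":
        return "withdraw"
    return "ignore"


# Emergent: the code states only a local majority rule on a ring of cells;
# the stable global pattern the system converges to is not written anywhere
# in the source -- it arises dynamically in the bit-patterns of the state.
def emergent_settle(state, steps=50):
    n = len(state)
    for _ in range(steps):
        state = [
            1 if (state[(i - 1) % n] + state[i] + state[(i + 1) % n]) >= 2
            else 0
            for i in range(n)
        ]
    return state
```

For example, the alternating pattern [1, 0, 1, 0, 1] settles into the all-ones pattern -- a fact you can only discover by running the dynamics, not by reading the rule.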
And I think that sensorimotor stuff is more likely to be explicit rather than emergent in the brain.... And that, in coding an AI system, it's hopeless to try to make too much of cognition explicit rather than emergent -- but the same statement probably doesn't hold for perception & action...

-- Ben G
