On Mon, Mar 19, 2007 at 11:19:17AM -0400, Richard Loosemore wrote:

> I have been noticing this problem for some time, and have started
> trying to draw people's attention to it.
>
> One reason that I have become sensitized to it is that I am in the
> (relatively) unusual position of having been a fully paid-up member of
> five different scientific/technical professions -- physics, cognitive
> science, parapsychology, artificial intelligence and software
> engineering -- and I have recently come to realize that there are
> quite extraordinary differences in attitude among people trained in
> these areas. Differences, mind you, that make a huge difference in the
> way that research gets carried out.
I can absolutely confirm this (my background, too, is somewhat unusual).

> [I say this as a plug, of course: I have presented just such evidence,
> in (among other places) the AGIRI workshop last year, and it has been
> met with astonishing outbursts of irrational scorn. I have never seen
> anyone make so many lame excuses to try to destroy an argument].

I've by now become quite used to that, too.

> But now, even if they do have that attitude, why don't they just
> believe that a nice, mathematical approach will eventually yield a
> true AGI, or human-level intelligence?
>
> The answer to that question, I believe, is threefold:
>
> 1) They do not have much of a clue about how to solve some aspects of
> the AGI puzzle (unsupervised learning mechanisms that can ground their
> systems in a completely autonomous way, for example), so they see an
> immense gulf between where they are and where we would like them to
> get to. They don't know how to cross that gulf.

I think it's worse. I would call it problem agnosia.

> 2) They try to imagine some of their favorite AI mechanisms being
> extended in order to cope with AGI, and they immediately come up
> against problems that seem insurmountable. A classic case of this is
> the Friendliness Problem that is the favorite obsession of the SIAI
> crowd: if you buy the standard AI concept of how to drive an AI (the
> goals and supergoals method, or what I have referred to before as the
> "Goal Stack" approach), what you get is an apparently dangerous
> situation: the AI could easily go berserk, and they cannot see any way
> to fix it.

And denial is just a river in Egypt.

> 3) Part of the philosophy of at least some of these Neat-AI folks is
> that human cognitive systems are trash. This ought not to make any

Yes, they don't understand it, yet they hold this strong sentiment
about something they know little about. Deeply irrational.

> difference (after all, they could ignore human cognition and still
> believe that AGI is achievable by other means), but I suspect that it
> exerts a halo effect: they associate the idea of building a complete
> human-level AI with the thought of having to cope with all the messy
> details involved in actually getting a real system to work, and that
> horrifies them. It isn't math. It's dirty.

Strangely, I like dirty, intractable things. It's the added challenge
that makes things interesting.

> Put these things together and you can see why they believe that AGI
> is either not going to happen, or should not happen.
>
> Overall, I believe this attitude problem is killing progress. Quite
> seriously, I think that someone is eventually going to put money
> behind an AGI project that is specifically NOT staffed with
> mathematicians, and all of a sudden everyone is going to wonder why
> that project is making rapid progress.

I wouldn't wonder at all. AI has remained a sterile wasteland for many
decades. I expected nice things from the ALife people, but they never
got enough momentum.

> Unfortunately, this is a deep potential well we are in. It will take
> more than one renegade researcher to dig us out of it.

The hardware is getting better, though. It puts tools into irreverent
hands, into the hands of teenagers and young adults who are not spoiled
by preconceptions about How It's Being Done.
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
