Eric B. Ramsay wrote:
If the Novamente design is able to produce an AGI with only 10-20 programmers in 3 to 10 years at a cost of under $10 million, then this represents such a paltry expense to some companies (Google, for example) that it would seem to me that the thing to do is share the design with them and go for it (Google could R&D this with no impact on their shareholders even if it fails). The potential of an AGI is so enormous that the cost (risk)/benefit ratio swamps anything Google (or others) could possibly be working on. If the concept behind Novamente is truly compelling enough, it should be no problem to make a successful pitch.

Eric B. Ramsay

[WARNING!  Controversial comments.]


When you say "If the concept behind Novamente is truly compelling enough", this is the point at which your suggestion hits a brick wall.

What could be "compelling" about a project (Novamente or any other)? Artificial Intelligence is not a field that rests on a firm theoretical basis, because there is no science that says "this design should produce an intelligent machine because intelligence is KNOWN to be x and y and z, and this design unambiguously will produce something that satisfies x and y and z".

Every single AGI design in existence is a Suck It And See design: we will only know whether a design is correct once it has been built and it works. Before that, the best any outside investor can do is use their gut instinct to decide whether they think it will work.

Now, my own argument to investors is that the only situation in which we can do better than saying "my gut instinct says that my design will work" is when we actually base our work on a foundation that gives objective reasons for believing in it. And the only approach I know of that allows that kind of objective measure is to take the design of a known intelligent system (the human cognitive system) and stay as close to it as possible. That is precisely what I am trying to do, and I know of no other project that is trying to do it (including the neural emulation projects like Blue Brain, which are not pitched at the cognitive level and therefore have many handicaps).

I have other, much more compelling reasons for staying close to human cognition (namely the complex systems problem and the problem of guaranteeing friendliness), but this objective-validation factor is one of the most important.

My pleas that more people do what I am doing fall on deaf ears, unfortunately, because the AI community is heavily biased against the messy empiricism of psychology. An interesting situation: the personal psychology of AI researchers may be what is keeping the field in Dead Stop mode.




Richard Loosemore


