Er, you don't ask that in AGI. The general culture here is not to recognize the crux, or the "test" of AGI. You are the first person here to express the basic requirement of any creative project. You should only embark on a true creative project - in the sense of committing to it - if you have a creative "idea", i.e. if you have a provisional definition of the problem and a partial solution to it, one that will make people say, "Yes, that might work." (Many more ideas will of course usually be required.) It's one of the most extraordinary phenomena that everyone, but everyone, involved in the creative community of AGI resists doing that and has extensive rationalisations of why they're not doing it. Every AGI system builder has several "ideas" about how to do *other* things that may be auxiliary to AGI, like searching more efficiently, or logics to deal with uncertainty, but no one has offered a "crux" idea.

I believe that you're misrepresenting the situation. I would guess that most people on this list have an idea that they are pursuing because they believe it has a chance at creating general intelligence.

Some here are research students or professional academics, who enjoy the spirit of discussion but are careful about disclosing the specifics of their own ideas until they have first been 'timestamped' by publication.

Others have already made their position clear on this list and in publication, but it seems that you're rejecting their ideas as not "creative" enough. That doesn't mean nobody has offered a "crux" idea, it just means that nobody has offered an idea that you believe in. Personally, I'm optimistic about many of the ideas that have been floated here. I know that many will fail, and that only one can be the *first* to create AGI; but I see sufficient cause here for people to commit to their ideas and embark on a true creative project to test those ideas and discover which ones are workable and which aren't. Not everybody is sufficiently well staffed and funded to lay out a roadmap for their entire project today, but I'm sure everybody has an idea of how their work fits into a big picture, where they ultimately see it going, and how their work relates to AGI.

I get the impression from this posting, and your earlier posting about a "Simple mathematical test of cog sci" that you see intelligence as something "crazy and spontaneous" (to use your words) - something almost magical. With that position, it would seem logical for you to expect a solution to AGI to also appear magical.

While human intelligence is impressive, I don't think it is inherently magical. If you look at a timeline of evolution, you'll see that it took billions of years to evolve multi-cellular life, hundreds of millions of years to evolve mammals, but the evolutionary time difference between us and apes or even between us and mice is, by comparison, very small. Creating human-like intelligence doesn't appear to take much extra work (for evolution) once you can do mouse-like intelligence.

I like to think about Deep Blue a lot. Prior to Deep Blue, I'm sure that there were people who, like you, complained that nobody had offered a "crux" idea that could make a truly intelligent computer chess system. In the end, Deep Blue appeared to win largely by brute-force computing power. What I find most interesting is that Kasparov didn't say he was beaten by a particularly strong computer chess system, but claimed to see deep intelligence and creativity in the machine's play. That is, he didn't think Deep Blue was merely a slightly better version of the other chess systems; he felt it had something else. He was surprised by the way the machine was playing, and even accused the IBM team of cheating.

I'm certainly not saying that Deep Blue exhibited general intelligence or that it was anything more than a powerful move-searching machine (with well-designed heuristics); but the fact that Kasparov had played many computer systems before, yet saw an exceptional intelligence in Deep Blue, suggests to me that intelligence isn't magical, but is something that can emerge when a suitable mechanism is performed at sufficient scale. Look at our own brains, for example: while a single neuron is not yet 100% understood, each neuron appears to perform a minimal computation, and when these computations are combined in the billions they create an extremely robust intelligence. Brute-force search or assembling millions of neurons might not seem like "crux" ideas, but when they are used towards a coherent vision, it is possible to create something that appears to be deeply intelligent.

I don't know how the first successful AGI will work - it may be based on a special logic, a search algorithm, a neural network, a vast knowledge base, some new mechanisms, or a hybrid combination of several approaches. I think, however, that we have seen many plausible ideas that are being pursued in the hope of creating something powerful. Some of these, I'm sure, are doomed to fail; but we don't really know which ones they are until we try them. Either way, it doesn't seem fair for you to say that nobody has offered a "crux" idea. Should we all sit around doing nothing until someone has a "Eureka" moment that brings mystic enlightenment to us all and solves every aspect of general intelligence?

The first thing is that you need a definition of the problem, and therefore a test of AGI. And there is nothing even agreed about that - although I think most people know what is required. This was evident in Richard's response to ATMurray's recent declaration of his "AGI" system. Richard clearly knew pretty well why that system failed the AGI "test", but he didn't have an explicit definition of the test at his fingertips.


I don't think you need a precise definition of AGI to create an AGI, any more than you need a precise definition of consumer desire in order to create a successful product. You can evaluate (and compare) systems by their performance in challenging environments. You can use increasingly challenging environments in order to evaluate progress.

I'm not arguing against a definition or stringent tests - they may be of real value. I'm only saying that it is possible to make meaningful progress using benchmarks and performance-based evaluations while philosophers continue to grapple with defining intelligence.
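To make the idea concrete, here is a minimal sketch of that kind of performance-based evaluation: run each system up a ladder of increasingly challenging environments and measure how far it climbs, with no definition of intelligence required. Everything here (the agents, the environments, the pass threshold) is a hypothetical illustration, not a reference to any real AGI system.

```python
# Hedged sketch: comparing systems by performance on a ladder of
# increasingly hard environments, rather than by a definition of
# intelligence. All names and numbers here are illustrative.

def evaluate(agent, environments):
    """Run an agent up a ladder of environments, easiest first.

    Each environment scores the agent in [0, 1]. Evaluation stops at
    the first environment the agent fails, so "progress" is simply how
    far up the ladder the agent climbs.
    """
    scores = []
    for env in environments:
        score = env(agent)
        scores.append(score)
        if score < 0.5:  # arbitrary pass threshold for this sketch
            break
    return scores

# Toy "environments": just scoring functions of the agent.
ladder = [
    lambda a: 1.0,                      # trivial task: everyone passes
    lambda a: a.get("skill", 0.0),      # harder: depends on the agent
    lambda a: a.get("skill", 0.0) ** 2, # hardest: skill matters even more
]

# Toy "agents", reduced to a single skill parameter.
weak = {"skill": 0.25}
strong = {"skill": 0.75}

print(evaluate(weak, ladder))    # fails at the second rung: [1.0, 0.25]
print(evaluate(strong, ladder))  # climbs all three: [1.0, 0.75, 0.5625]
```

The point of the sketch is that the two systems are ranked by where they stop climbing, which is meaningful even though nothing in the code defines what "intelligence" is.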

In fact, maybe it will remain a challenge to define intelligence until we have more than one example to use in formulating our definition - until we have an AGI to compare and contrast with human intelligence.

-Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93701669-974916