Benjamin Johnston wrote:

First, I think there is a world of difference between passionate researchers at the beginning of the field, in 1956, and passionate researchers in 2008 who have a half-century of other people's mistakes to learn from. The secret of success is to try and fail, then to try again with a fresh outlook. That exactly fits the new AGI crowd.


Are you suggesting that the early researchers (not just the 50s, but also the 60s, 70s and 80s) weren't learning from each other's mistakes? If so, I think you need to address why this generation is able to learn from past mistakes when prior generations couldn't.

When something is wrong in a research field, and the thing that is wrong is fairly small, all you need to do is propose a fix and people will realize the value of the fix and adopt it.

But when something deeper is wrong, a person who spots the issue cannot simply propose a fix, because the fix will require the established people to downgrade their expertise and go back to school, to some extent.

Now, in the last five decades I think that people have looked for small fixes (being reluctant to think that there could be anything drastically wrong with what they were doing), and as a result there has been a tendency to *not* learn from mistakes. To use an extreme metaphor, they have chosen to see the tilt on the upper deck of the Titanic as a reason to rearrange the deck chairs.

I believe that the AGI community has become impatient with the deckchair reconfiguration and is looking for more substantial things to fix. That is what I am trying to convey when I say that they will be able to learn more from past mistakes.


Second, when you say that "a better argument would be to point to a fundamental technological or methodological change that makes AGI finally credible" I must say that I could not agree more. That is *exactly* what I have tried to do in my project, because I have pointed out a problem with the way that old-style AI has been carried out, and that problem is capable of neatly explaining why the early optimism produced nothing. I have also suggested a solution to that problem, pointed out that the solution has never been tried before (so it has the virtue of not being a failure yet!), and also pointed out that the proposed solution resembles some previous approaches that did have sporadic, spectacular success (the early work in connectionism).

However, in my Open Letter post, I did not want to emphasize my own work (I will do that elsewhere on the website), but instead point out some general facts about all AGI projects. Perhaps I should have also said "And I have an approach that is dramatically new....", but I felt that would have weakened the points I was trying to make.


I don't think you need to say that.

Hypothetically, if you were to believe that your own project is the only one with a chance of success, then encouraging broad investment in AGI would surely only discredit the field when the money is later found to have been wasted with no results.


In contrast, I think that you believe there are many people with good ideas and that there is "change in the air": researchers really are starting to tackle AGI in interesting and promising new ways; and that you are just one of many groups with fresh and plausible ideas for building an AGI.

So, why not try to pinpoint and express such changes (and their cause) in your letter?

Oh, I don't believe everyone is on the right track, not by any means: I really do believe in the problem I have described, and everyone needs to take it seriously.

The reason I would encourage investors to embrace all the different projects is that I expect to eventually get through to other people and make them understand the situation; at that stage, I think they will all adopt the methodology I have proposed and start making much better progress on their own projects.

Having said that, I will be taking steps to explain why an investor should back my project in particular. It is just that the letter was aimed at a more general point: the inability of investors to tell which approach is better, and whether any of the approaches is better than what has come before.



Finally, I do have to point out that you made no comment on the second part of the post, where I explained that *any* investor who put money into any project in the AGI arena would be injecting a shot of adrenalin into the whole field, causing it to attract attention and so stimulate further investment. That is a very important point: in any other field of investment the last thing you want to do is to provoke other investors into funding rivals to your own investment, but in AGI nothing could be better. If that shot of adrenalin into the AGI field caused one of the projects to succeed, the result would be a massive technology surge that would benefit everyone, not just the individual investor. There is no other investment opportunity where so much "trickle-down" could be generated by the success of one company.

That argument lessens the risk of the investor's money being wasted.


I didn't comment on the second part of the email because I don't want to get involved in arguments about singularity.

Since you bring up the second part, I will add some comments about your claims that an investor wouldn't/shouldn't care if they back a "bunch of idiots... [that] burn all the cash and produce nothing", because it will create buzz and lead to many projects. You say that if "just one of them" succeeds then all AGI investors will personally benefit (even if they weren't investing in the particular company that succeeds).

Your assumption here is that the chance of "just one of them" succeeding is quite good. This brings us right back to my comments on the first half of your letter: you haven't really offered an investor a convincing argument for why the chances are good that *somebody* will succeed this time, when talented, passionate people have been trying for years and failing.

You're also bringing up themes related to singularity, and I don't think this is necessary. I think (and many others here have expressed a similar opinion) that AGI can have many benefits and applications even if it takes a very long time to get to super-intelligence. If you can justify investment in AGI even under a pessimistic outlook (while still allowing for the possibility of radical change), then you would have a much better case.

And finally, I really don't think it is good advice to say "oh, just throw money about, and create buzz, without a care for who receives it". If AGI researchers were to receive huge amounts of money (and buzz) but end up failing again, the reputation of the field and the chance of finding *any* investment could be severely hurt. Historians looking over the years following a failed AGI buzz (bubble?) may call it the AGI Winter.


It sounds very suspect when you tell investors not to be overly discriminating with their money. It would be much better to encourage investors to follow the field and understand it - invest their time, rather than their money - so that when they find something they believe in (even if it takes 10 years to find it), they will be the ones on the cutting edge. If they find something today, then that is fantastic.


I hope you do revise and improve your open letter. If we want AGI to be taken seriously, then it is our responsibility to engage the public with as much professionalism and polish as possible.

Agreed.

I will certainly follow it up with more detail.

Perhaps that more detailed approach to investors will answer some of your concerns.



Richard Loosemore





