On Nov 18, 2007, at 7:06 PM, Benjamin Goertzel wrote:
> Navigating complex social and business situations requires a quite
> different set of capabilities than creating AGI. Potentially they
> could be combined in the same person, but one certainly can't assume
> that would be the case.
I completely agree. But if we assume that AGI requires some
respectable amount of funding, as many people seem to posit, then
building it will require a person with broader skills than the
stereotypical computer science nerd. In that case, maybe AGI is not
accessible to someone who is unwilling or unable to be anything but a
computer science nerd. As if the pool of viable AGI researchers were
not small enough already.
> And, I don't think it's fair to say that "if you're smart enough to
> solve AGI, you should be able to quickly make a pile of money doing
> some kind of more marketable technical-computer-science, and fund the
> AGI yourself." This assumes a lot of things, for instance that AGI is
> the same sort of problem as technical-computer-science problems, so
> that if someone can do AGI better than others, they must be able to
> do technical-computer-science better than others too. But I actually
> don't think this is true; I think that AGI demands a different sort
> of thinking.
I'm not so sure about this. All hard problems seem to receive similar
sentiments until they are actually solved. I do think that AGI is a
relatively hard problem even among the "hard problems", but there are
other computer science problems that had thousands of pages of
literature devoted to them without much progress, yet turned out to be
relatively simple once someone finally solved them. That 20/20
hindsight thing. To the extent that there is any special sauce in AGI,
I expect it will look like one of these cases.
Solving computer science problems is a pretty general skill, in part
because it is a pretty shallow field in most important respects. To
use AI research as an example, it is composed of only a handful of
fundamental ideas from which a myriad of derivatives and mashups have
been created. Most other problems in computer science have the same
feature, and when problems get solved it is because someone looked at
the handful of fundamentals and ignored the vast bodies of derivative
products which add nothing new. Vast quantities of research do not
equate to a significant quantity of ideas. AI is a little more complex
than some other topics, but it is still far simpler at the level of
fundamentals than some people make it out to be.
People are incapable of solving AGI for the same reason they are
incapable of solving any of the other interesting computer science
problems, which was the point I was making obliquely. It is not a
different skill, it is the same skill that the vast majority of all
computer science people are incompetent at. And AGI is a particularly
hard problem, even for that tiny minority of people capable of solving
real problems in computer science.
If you cannot solve interesting computer science problems that are
likely to be simpler, then it is improbable that you'll ever be able
to solve really hard interesting problems like AGI (or worse,
Friendly AGI). I don't mean to disparage anyone doing AGI research,
but if they are incapable of solving the easy problems, why should
anyone expect them to solve the hard problems?
> Again, AGI savvy may well come combined with great
> technical-computer-science savvy, but one can't assume that this must
> be the case. And, turning technical-computer-science savvy into a lot
> of $$ is by no means easy and requires either a lot of luck or an
> uncommon business savvy...
Definitely, that requires practice and skill. But someone who develops
that skill will be able to attract commercial interest in their AGI
prototype at a far earlier stage than someone who does not.
The question is which costs less, developing the business skills or
developing an AGI to the point where you don't need business skills?
One might be able to make an argument either way, but I suspect the
former is closer to the truth. The optimal path is rarely the path
anyone is most comfortable with.
> Look back at history, after all. Babbage was smart enough to create a
> computer, but evidently didn't have the right kind of smarts to
> actually get it done. Leibniz, before him, was smart enough to create
> a mechanical calculator (he designed one), but also didn't seem to
> have the right kind of smarts to actually get it done.
The venture investment environment is far more favorable today, at
least in the US, than back then. But this is not really disagreeing
with my point in any case. Are you arguing that there was an
unambiguous market for these products at the time the inventors came
up with the ideas? And if so, why was it so hard to convince everyone
else? No one that I know of is claiming that there is no market for
AGI today.
If someone had an AGI as thoroughly designed and spec'ed as Babbage's
or Leibniz's machines, they would have little problem selling it, but
the reality is that we do not have an AGI market full of Babbages and
Leibnizes; we have an AGI market full of wannabes who aspire to being
Babbage or Leibniz. That is a distinction with a difference, and the
cases are not analogous. Babbage and Leibniz competently designed
things for which there was no market. A market exists for AGI; there
simply have been no Babbages around to meet it.
Cheers,
J. Andrew Rogers
-----
This list is sponsored by AGIRI: http://www.agiri.org/email