On Mon, Jun 23, 2008 at 12:50 AM, William Pearson <[EMAIL PROTECTED]> wrote:
> 2008/6/22 Vladimir Nesov <[EMAIL PROTECTED]>:
>
>>
>> Two questions:
>> 1) Do you know enough to estimate which scenario is more likely?
>
> Well since intelligence explosions haven't happened previously in our
> light cone, it can't be a simple physical pattern, so I think
> non-exploding intelligences have the evidence for being simpler on
> their side.

This message that I'm currently writing hasn't happened previously in
our light cone either. By your argument, that is evidence that it is
more difficult to write than to recreate life on Earth and human
intellect, which is clearly false for all practical purposes. You
should state that argument more carefully in order for it to make
sense.


> So we might find them more easily. I also think I have
> solid reasoning to think intelligence exploding is unlikely, which
> requires paper length rather than post length. So I think I do, but
> should I trust my own rationality?

But not too much, especially when the argument is not technical (which
is clearly the case for questions such as this one). If the argument is
sound, you should be able to convince the seed AI crowd too, even
against their confirmation bias. If you can't convince them, then
either they are idiots, or the argument is not good enough, which means
it's probably wrong, and so you yourself shouldn't place too high a
stake on it.


> Getting a bunch of people together to argue for both paths seems like
> a good bet at the moment.

Yes, if it leads to a good estimate of which methodology is more
likely to succeed.


>> 2) What does this difference change for research at this stage?
>
> It changes the focus of research from looking for simple principles of
> intelligence (that can be improved easily on the fly), to one that
> expects intelligence creation to be a societal process over decades.
>
> It also makes secrecy no longer be the default position. If you take
> the intelligence explosion scenario seriously you won't write anything
> in public forums that might help other people make AI. As bad/ignorant
> people might get hold of it and cause the first explosion.
>

I agree, but that works only if you know that the answer is correct,
and (which you didn't address, and which is critical for these issues)
that you won't build a doomsday machine as a result of your efforts,
even if this particular path turns out to be more feasible.

If you want to achieve artificial flight, you can start a research
project that will try to figure out the fundamental principles of
flying and may last a thousand years, or you can take a shortcut:
climb the highest cliff in the world (no easy feat either) and jump
from it, thus achieving limited flight. Even if you have a good
argument that cliff-climbing is a simpler technology than
aerodynamics, choosing to climb is the wrong conclusion.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/