--- Bryan Bishop <[EMAIL PROTECTED]> wrote:

> On Monday 10 December 2007, Matt Mahoney wrote:
> > The worst case scenario is that AI wipes out all life on earth, and
> > then itself, although I believe at least the AI is likely to survive.
> 
> http://lifeboat.com/ex/ai.shield

SIAI has not yet solved the friendliness problem.  I posted my views earlier
at  http://www.mattmahoney.net/singularity.html  To summarize, friendliness is
not a stable goal once computers start creating smarter versions of
themselves.  Recursive self-improvement is an experimental, competitive,
evolutionary process that favors rapid reproduction and acquisition of
computing resources, not service to humans.

> Re: how much computing power is needed for ai. My worst-case scenario 
> accounts for nearly any finite computing power, via the production of 
> semiconductor silicon wafer tech. Now, if the dx on the number of 
> nodes is too low, we may have to start making factories that build 
> factories that build factories that build factories, etc. etc., which 
> would exponentially increase the rate of production of computational 
> nodes, and supposedly there is in fact some finite limit of 
> computational bruteforce required, yes?

A human brain sized neural network requires about 10^15 bits of memory and
10^16 operations per second.  The Internet already has enough computing power
to simulate a few thousand brains.  The threshold for a singularity is to
surpass the collective intelligence of all 10^10 human brains on Earth.
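The figures above can be checked with back-of-envelope arithmetic.  The
per-brain numbers (10^15 bits, 10^16 ops/s) and the 10^10 population are
from the text; the Internet's aggregate compute (~10^19 ops/s, i.e. roughly
10^9 hosts at ~10^10 ops/s each) is an assumed 2007-era figure I supply for
illustration, not a claim from the text:

```python
# Back-of-envelope check of the estimates above.
BRAIN_OPS = 1e16      # ops/s per human brain (from the text)
HUMAN_BRAINS = 1e10   # human population scale (from the text)
INTERNET_OPS = 1e19   # ASSUMED aggregate Internet compute, ops/s

brains_now = INTERNET_OPS / BRAIN_OPS                 # brains simulable today
shortfall = (HUMAN_BRAINS * BRAIN_OPS) / INTERNET_OPS # gap to all of humanity

print(f"Brains simulable now: {brains_now:.0f}")       # ~1000
print(f"Factor short of 10^10 brains: {shortfall:.0e}")  # ~1e7
```

Under these assumptions the Internet today is a few thousand brains, about
seven orders of magnitude short of the singularity threshold.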

Moore's law allows you to estimate when this will happen, but keep in mind
that to double the number of components in a computer, you must also double
their reliability, since any single component failure can halt the machine.
In a fault tolerant network, the second requirement is dropped, so the
process is faster.
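A rough sketch of the Moore's-law timeline, assuming (my assumption, not
the text's) a shortfall factor of about 10^7 between current Internet
compute and 10^10 brains, and a conventional 18-month doubling period:

```python
import math

# Sketch of the Moore's-law estimate for reaching 10^10 brains of compute.
SHORTFALL = 1e7       # ASSUMED factor by which current compute trails the goal
DOUBLING_YEARS = 1.5  # ASSUMED Moore's-law doubling period

doublings = math.log2(SHORTFALL)   # doublings needed to close the gap
years = doublings * DOUBLING_YEARS

print(f"Doublings needed: {doublings:.1f}")   # ~23.3
print(f"Years at Moore's law: {years:.0f}")   # ~35
```

On these assumptions the crossover lands a few decades out; a fault tolerant
network, which drops the reliability requirement, would get there sooner.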


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email