Matt Mahoney wrote:
> --- Tom McCabe <[EMAIL PROTECTED]> wrote:
>
>> These questions, although important, have little to do
>> with the feasibility of FAI.
>
> These questions are important because AGI is coming, friendly or not.  Will
> our AGIs cooperate or compete?  Do we upload ourselves?
> ...
> -- Matt Mahoney, [EMAIL PROTECTED]
It's not absolutely certain that AGI is coming. However, if it isn't, we will probably kill ourselves off because of too much power enclosed in too small a box. (Interstellar dispersion is the one obvious alternative solution. It would need to be *SLOW* because of energetic considerations, but that doesn't mean it's not feasible.)

OTOH: AGI, in some form or other, is quite probable. If not "designed from scratch", then as an amalgamation of the minds of several different people (hundreds? thousands? more?) linked by neural connectors. Possibly not linked in permanently...possibly only linked in as part of a game. There are lots of possibilities here, because once I invoke "neural connectors" there are LOTS of different things that might turn out to be what gets developed. Primitive neural connectors exist, but they aren't very useful yet, unless you're a quadriplegic. Anyway, neural connectors are not an "it might arrive some time in the future" technology. They're here right now. They're just not very powerful.

Time frame, however, is somewhat interesting. It appears that if the approach contains a large component of genetic programming, it is over a decade away from general access (i.e., ownership of sufficient computing resources). There may, however, be many ways to trim these requirements. Communication can often substitute for local computation. Also, there may be more efficient approaches than genetic programming. (There had better be better ways than CURRENT genetic programming.) Note that most people who feel they are close have not only large amounts of local hardware, but also a complex scheme of processing that is only partially dependent on genetic programming, if at all.
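To make the compute argument concrete, here is a minimal sketch of the kind of genetic-programming loop I have in mind: a toy symbolic-regression example in Python. The target function, population size, depth limit, and mutation-only scheme are all arbitrary illustrative choices, not anyone's actual AGI design. Even at this toy scale, nearly all of the time goes into fitness evaluations (population size times generations times test cases), which is exactly where the large hardware requirements come from.

# Toy genetic programming: evolve small arithmetic expression trees to
# approximate f(x) = x*x + x.  Mutation-only, no crossover, for brevity.
import random

OPS = ['+', '-', '*']
TERMS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    # A leaf is a terminal; an internal node is an (op, left, right) tuple.
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def fitness(tree):
    # Squared error against the target over a handful of sample points.
    # This inner loop is what dominates the compute budget.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=3):
    if random.random() < 0.2:
        return random_tree(depth)          # replace this subtree outright
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, mutate(left, depth - 1), mutate(right, depth - 1))
    return tree

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)
    survivors = population[:50]            # keep the best quarter
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]
print('best error:', fitness(min(population, key=fitness)))

Scale the population, the expression size, and the fitness tests up toward anything like intelligent behavior and the cost grows explosively; that is the sense in which general access on this path looks to be more than a decade off.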

Say that someone were to, tomorrow, start up a newly modified program that was as intelligent as an average human. How long would that program run before the creator turned it off and tinkered with its internals? How much learning would it need to acquire before the builder was convinced that "This is something I shouldn't turn off without its permission"? Given this, what would tend to evolve? And what does it mean to be "as intelligent as an average human" when we are talking about something that doesn't have an average human's growth experiences?

I feel that this whole area is one in which a great deal of uncertainty is the only proper approach. I'm quite certain that my current program is less intelligent than a housefly...but I wouldn't want to defend any particular estimate as to "How much less intelligent?".

