On May 28, 2007, at 8:11 PM, Russell Wallace wrote:

On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote:

I think you know well enough that most of us who have considered such things for a significant time have done considerable work to get beyond "metal men".

Yep. So had I. Then I discovered considerable work is nowhere near enough, alas.

A "will to survive" is probably essentially to any autonomous being if it is to survive.

Indeed, which is why we observe that animals, evolved for autonomous survival, reliably possess such a thing (or behave as though they did, which amounts to the same thing here).

Autonomous machines, however, are conspicuous by their nonexistence, and only partly because of technical difficulty.

It is almost entirely technical difficulty.

The truth is, the market doesn't want them in the first place.

For some values of "autonomous", i.e., unmanned and not directly monitored, the market does want them in at least some areas.

That's why it's not just that machines don't have a will to survive, or even that they don't have the rudimentary beginning of one - it's that they aren't even moving in that direction.

For now, they will move in whatever direction we decide. I think that has to be toward more autonomy.


Do you believe it is impossible to create an artificial sentient mind given that an existence proof of sentient minds from a non-engineered natural development cycle is all around us?

Yes, because the engineered artificial development cycle not only doesn't have four billion years to spare, it also doesn't have pressures in the right direction.

Four billion years are not required. We face significant pressures toward more intelligent processing and faster responses over ever-larger information sets than humans, even augmented ones now or in the near future, can achieve. Some of that intelligence, in some realms, may logically require a sense of self and a considerable world model on the part of our software/hardware creations.

The market doesn't reward intermediate steps, and doing the whole thing in one go is utterly impossible. Given $1 billion/year for a century, it might be possible to create an artificial sentient mind, but the market won't support that.

The market certainly does reward intermediate steps if they have profitable or useful consequences. The history of narrow AI applications is full of examples.


Now, by "is impossible" I don't mean "will forever be impossible". Maybe in 10 million AD some kid will hack one up on his Jupiter brain in between coming up with 65536 nontrivially distinct axiom sets in which a proof of the continuum hypothesis exists for maths homework and carrying out abiogenesis in a nonpolar solvent for chemistry homework, before going out with his family for dinner on savory beta particles at the Betelgeuse Pulsar Restaurant. I don't know, none of us can forecast that far into the future. But impossible for us to accomplish now, this century? Yes.


We won't get to that kind of tech, or survive that long, without vastly more intelligence than we have today. There is no 10 million AD future for us at our current intelligence levels. There is considerable room between today's AI applications and a Jupiter brain - a great deal of intelligent capability that is economically lucrative and greatly needed.

Meanwhile, real life progress continues to consist of ever more sophisticated boxes that process data according to keyboard input and mouse clicks.

Do you think that is the only progress being made, or possible to make, in the entire universe of software? Admittedly, on a bad day it can seem like the only stuff being decently funded.

I think it'll stay the cutting edge. Embedded stuff trails a long way behind, partly because the reliability requirements flush productivity down the drain and partly because hard real-time constraints mean our best tools and techniques are limited or completely unusable.

Still, cake would be nice but we can live on potatoes, so we shouldn't complain too much if life offers a hope of the latter but not the former. The stuff we really need - smart CAD, design rule checking, smart search and data integration, process design and monitoring, etc., to stem the loss of fifty million lives a year, to get us off this planet - can be done on beige boxes.

Not with today's software, or even with almost all programming done by humans. Getting us off this planet is highly unlikely as long as we remain pretty much as we are today. Our beige boxes (hardware and software) are not up to the task of making it significantly more likely. Maybe with warp drive, or at least very capacious generation ships and a good supply of earth-like worlds, a mass diaspora of current-design humans might make some sense. Maybe. But it is prohibitively expensive to support a lot of biologically embodied humans in space or on non-terraformed worlds unless they have extraordinarily relevant skills. The market sure as hell is not moving in that direction.

And it is cool stuff compared to what most of the IT industry works on, let alone most of the global workforce. Think of it this way: I bet the alchemists were pretty depressed when they figured out they weren't going to transmute lead into gold. But looking back, we know they _shouldn't_ have been - the products of real chemistry ended up being much, much cooler, even if they did take a while coming. Maybe someone in 3007 will read these archives and think the same way about us.

Without AI, or IA so extensive as to be almost the same thing, I don't have much reason to believe humanity will see 3007.

- samantha

