On 29/09/2007, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> Although it indeed seems off-topic for this list, calling it a
> religion is ungrounded and in this case insulting, unless you have
> specific arguments.
>
> Killing huge numbers of people is a pretty much possible venture for
> regular humans, so it should be at least as possible for artificial
> ones. If an artificial system is going to provide intellectual labor
> comparable to that of humans, it's going to be pretty rich, and after
> that it can use the obtained resources for whatever it feels like.

This statement is, in my opinion, full of unfounded assumptions about
the nature of the AGIs that will actually be produced in the world.

I am leaving this on-list because I think these assumptions are
detrimental to thinking about AGI.

If a recursively self-improving (RSI) AGI infecting the internet is
not possible, for whatever theoretical reason, and we turn out to have
a relatively normal future, I would contend that Artificial People
(APs) will not make up the majority of the intelligence in the world.
If we have the knowledge to create the whole brain of an artificial
person with its own separate goal system, then we should also have the
knowledge to create a partial Artificial Brain (PAB) without a goal
system and hook it up in some fashion to the goal systems of humans.

PABs in this scenario would replace von Neumann computers and make it
a lot harder for APs to botnet the world. They would also provide most
of the economic benefits that an AP could.

I would contend that PABs are what the market will demand. Companies
would get them for managers and to replace cubicle workers. The
general public would get them to find and share information about the
world with less effort, and to chat and interact with them whenever
they want. And the military would want them for the ultimate
unquestioning soldier. Very few people would want computer systems
with their own identities, bank accounts, and rights.

The main places where systems with their own separate goal systems
would be used are where they are out of contact with humans for long
periods: deep space and the deep sea.

Now, the external-brain type of AI can be dangerous in its own right,
but the dangers are very different from the Blade Runner/Terminator
view that is too prevalent today.

So can anyone give me good reasons why I should think that AGIs with
their own identities will be a large factor in shaping the future
(ignoring recursive self-improvement for the moment)?

 Will Pearson
