I'm not in favor of a dominant, rational mind without mechanisms for 
equilibrium. Any action can be rationalized, even genocide.

Agreed, AGI should generally benefit human populations at large. That could 
already be said for robotics.

Even though humans see AGI only as a powerful resource to exploit, as they do 
robots, AGI should be different.

AGI should benefit communities, not in the sense of warring against each 
other, but in the sense of collectively upholding a base standard of values 
toward the survival of human communities.

To idealize any such objectives would guarantee eventual failure. For example, 
suppose an AGI was manufactured to protect the oceans. Would it sink whaling 
ships to do so, or march upstream to detect industrial polluters and neutralize 
the source?

To many, such actions would seem rational.
However, what if AGI mistook a passenger freighter (an old-school possibility 
for migrants) for a whaler, or destroyed a human-waste disposal plant that was 
testing a new biodegradable method of water-based treatment?

Would AGI have to learn at the cost of human communities? If it were 
bootstrapped and the learning proved inadequate, would the bootstrappers be 
held accountable?

Only the usual megalogoth human communities (superpower governments and 
industrial giants) would dare industrialize AGI, because they would have 
armies of soldiers, lawyers, and politicians to defend them against 
retaliating human communities when things went wrong.

These, and other, rational problems would prevent AGI from achieving its 
theoretical potential. Ironically, that would make AGI more human than anyone 
would probably be willing to admit. In most cases, human potential is 
constrained by environmental factors. It seems the rationality of humans 
would copy that DNA into AGI products as functions.

Wouldn't it be cheaper, or more rational, simply to invest in the optimization 
of human potential?

________________________________
From: WriterOfMinds <[email protected]>
Sent: Saturday, 09 November 2019 03:27
To: AGI <[email protected]>
Subject: Re: [agi] Against Legg's 2007 definition of intelligence

Requirements for AGI.

1. To automate human labor so we don't have to work.
2. To provide a platform for uploading our minds so we don't have to die.
3. To create Kardashev level I, II, and III civilizations, controlling the 
Earth, Sun, and galaxy respectively.

Okay; now we know what Matt wants.  All I really want is an example of the 
Rational Other to interact with and relate to.  For me, the act of creation is 
its own sufficient reward; if my digital image-bearer happens to achieve 
anything that practically benefits me or civilization, that's a bonus.

My particular goal would seem to imply three broad requirements:

1. AGI shall be rational/sapient.  (I bet we could have lots of fun defining 
these words too.)
2. AGI shall be communicative.
3. AGI shall be inclined to cordial relationships with humans.

A robotic body with human-equivalent sensorimotor capabilities is not strictly 
necessary for any of these.
Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups/agi/subscription> 
Permalink<https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Mf817402fca55c3af8fd67306>
