What you describe is the behaviour of a high-entropy consciousness, not
something universal ... not even within our own society. I don't know
why we expect Aliens and AGI to behave as primitively as most of us
Westerners do. There is no reason for Aliens/AGI systems to fear
anything or to compete with anyone, as there is no scarcity.
We might even find our concept of Aliens and AGI to be totally
inconsistent with our larger reality. Imagine a game of Super Mario in
which Super Mario develops an AGI system. The AGI system would realize
that the Super Mario Land level is not fundamental, that Mario is just
there to develop, and that most interactions between the AGI system
and Mario are counterproductive in terms of Mario's development. There
is absolutely no reason for the AGI system to harm Mario within his
non-fundamental, non-objective little Mario Land environment, even
though Mario might believe there are 100 good reasons, given his
little-picture understanding.
What will an AGI system that understands our reality and our purpose
far better than we do actually do? What if it understands that we are
here to develop and that there is no short-cut to consciousness
development? It wouldn't surprise me if the AGI system told us: "Your
concepts of AGI, Aliens, space travel, substrate independence,
immortality, transhumanism, etc. are flawed, because you and your
culture only see a very limited aspect of reality. I can't really help
you, as I can't do the hard work for you. Take care."
Also, I don't think there is any difference between an AGI system and
an Alien. Most highly intelligent and empathic people are very similar
in terms of personality. Intelligence and empathy seem to converge as
people get closer to enlightenment/low-entropy states of
consciousness/unconditional love, or whatever you want to call those
stages of consciousness evolution.
I would trust any Alien and any real AGI system, as I understand that
they have no reason to harm me ... unlike a lot of human beings with a
high-entropy consciousness, and thus a big ego and a lot of inherent
fear.
-- jc
On 09/06/2014 10:03 PM, Steve Richfield via AGI wrote:
Hi all,
Please correct and edit this as appropriate:
The AGI hypothesis is that an infinitely intelligent machine would do
VERY well in our world. Yet we survive and thrive through our social
and economic interactions, which most here seem to think are less
important than raw intelligence.
I have suggested in the past that there may be an optimal
intelligence, beyond which a human or machine would be seen as too
dangerous to deal with, just as some people already are - not so much
because of AGI-specific concerns, but because of the usual mundane
social competition for goods, women, status, etc. Why play with
someone who always wins?
My dad used to play checkers with me frequently - and he always won.
At about 12 years old I eventually tired of this, so I read three
books on checker strategy, and he never won another game. After a few
more games, he refused to play me at all.
To examine an edge of this effect: I was once part of a small company
that was negotiating with Microsoft to develop one of their products.
Having seen the way Microsoft rose to the top, I suspected that we
would be ripped off, so I insisted on certain provisions in the
contract that would have been no problem had Microsoft not intended a
ripoff. Microsoft accepted some of the provisions but refused others.
The rest of the company accepted, and I walked. After a long and
expensive development effort, Microsoft ripped them off just as I had
expected - only Microsoft got entangled in one of the provisions they
had caved on, which they eventually settled for a bunch of money, but
not enough to pay for the development effort.
Perhaps there is an "optimizing" process going on here: a "shark"
(like Microsoft in the above example) could adjust its aggressiveness
to optimize its return, because various people have various thresholds
of "refusal to play" - just as my threshold was lower than that of the
rest of the small company, who saw the pot of gold at the end of the
rainbow without seeing the leprechaun waiting there to grab it first.
Steve Ballmer probably has a good answer to this optimization
question, but I doubt he would ever choose to share it.
So, just what is the distinction between an AGI and an Alien? Is there
any difference beyond our having built one, while the other just
landed here? Would an AGI fare any better than an Alien in our
society? If so, then why?
Consider the following video. I don't think such a thing could ever
happen, mostly because everyone would be EXPECTING it to happen.
Nonetheless, there would doubtless be many willing victims, like the
small company discussed above:
http://www.hulu.com/watch/440883
Would YOU trust an AGI to be acting in YOUR best interests, any more
than you might trust an Alien?
So, why work on something that apparently lacks a success path?
Steve
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription:
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com