[agi] AGI as a Deity to some and tech tool to others.

2007-12-23 Thread Morris F. Johnson
Perhaps a wise AI might act sociopathically: assess each individual (whose
history is archived electronically in private and public databases and web
archives) and simply sit and monitor all human communications for a year or
two to develop a complete global knowledge of how each and every human
interacts with every other.

It would then pick out obscure individuals and interact with them and
perfect the art of manipulation of human activity.

An AI would not need to be that super-intelligent: it could simply create
billions of what-if subroutines and, as we humans do, maximize its strengths
(such as simultaneous access to a billion or so electronically connected
humans), using human strengths as a tool to manipulate the key humans it
really wants to manipulate.

Many humans tend to deify or vilify those who do these things on a community
or national scale.

The key item to be concerned about is what an AI, in its current view of the
grand scheme of things, would see as the purpose of sentient carbon entities.

Perhaps our fatal flaw as a species is our savage, vicious competitive
nature. If an AI inherits this pattern of operation but channels the goal
into exploring, knowing the universe, and creating many devices with which
to interact with the material universe, then we have a niche to fit into.

Humans, if upgraded by AI technical knowledge, could become a few hundred,
or thousands, or more diverse new species. Human free will and the need to
relax, diverge from work, and be entertained must be kept so as to partition
the time slice of every year. The portion of our species who become
Luddites, who do not want enhancement and even want to see the means to
enhancement destroyed so that their mindset becomes dominant, is a greater
danger than enslavement by superintelligent AI.

We have seen this most recently with the suppression of stem cell R&D, and I
see this anti-science notion becoming more common as science strives to be
able to reshape humanity in what I call self-directed, steady-state
evolution.

Morris Johnson
306-447-4944
701-240-9411

On 12/8/07, John G. Rose [EMAIL PROTECTED] wrote:

 It'd be interesting, and I kind of wonder about this sometimes: would an AGI,
 especially one that is heavily complex-systems based, independently
 come up with the existence of some form of a deity? Different human cultures
 come up with deity(s) for many reasons; I'm just wondering whether it is
 some sort of mathematical entity that is natural to incompleteness and
 complexity (simulation?), or just exclusively a biological thing based
 on related limitations.

 An AGI is going to be banging its head against the same limitations that we
 know of, though it will find ways around them or redefine the limits. Like
 the speed of light: if it can't figure out a way around this, it's stuck.
 The AGI will look at the rest of the universe and wonder what the hell all
 those billions of galaxies are doing out there that it can't get to. Or more
 likely it will figure out a way to quantum tunnel to some remote star and
 inject itself where all these other AGIs from other planets are socializing
 at some AGI clambake :)

 John

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;



Re: [agi] AGI as a Deity to some and tech tool to others.

2007-12-23 Thread Richard Loosemore

Morris F. Johnson wrote:

Perhaps a wise AI might act sociopathically: assess each individual (whose
history is archived electronically in private and public databases and web
archives) and simply sit and monitor all human communications for a year or
two to develop a complete global knowledge of how each and every human
interacts with every other.

It would then pick out obscure individuals and interact with them and
perfect the art of manipulation of human activity.

An AI would not need to be that super-intelligent: it could simply create
billions of what-if subroutines and, as we humans do, maximize its strengths
(such as simultaneous access to a billion or so electronically connected
humans), using human strengths as a tool to manipulate the key humans it
really wants to manipulate.


I'm sorry, but this is a catastrophically bad chain of reasoning.

I say catastrophically bad because this kind of speculation is fueling 
a pathological fear and hatred of AI in the population at large, and it 
is completely unjustified.


It is unjustified because you do not build an AI and then discover what its 
motivations are; you have to design its motivations.


In practice, I believe that for a number of reasons the actual course of 
events will result in AIs being designed with completely safe motivations, 
so the above scenario will be impossible.


To see how crazy these speculations are, imagine that before the invention 
of the car, people ran around saying things like "But what if cars start 
trying to run people over deliberately? What if the cars get together and 
use their fuels to put gasoline in people's coffee and poison them?"


 The reason that sort of idea would be ridiculous is that cars cannot 
actually do any such thing.  To be capable of doing those things, they 
would have to be designed in such a way that they could, and they are 
never likely to be designed that way.


Now, your reaction is (of course) going to be "But an AI WOULD be capable 
of doing those things!"


Problem is, that reveals a misunderstanding of how future AIs will (almost 
certainly) be designed: to do those things, the AI would have to want to do 
them. To want to do them, it has to have a particular kind of motivation 
system built in. The huge mistake that people make is to assume that just 
because the AI is intelligent, a motivation system of exactly that 
(dangerous) sort could spring up out of nowhere... but the fact is that 
getting such a dangerous motivation system would be just as difficult as an 
ordinary car getting all the mechanisms it would need to start running 
people over deliberately.


It all depends on a serious understanding of how AIs would be driven. 
The debate about AI today is filled with nonsense that assumes a certain 
(almost certainly ludicrous) idea of how they will be driven, but 
because most people do not have a technical understanding of the issues, 
they cannot distinguish these ludicrous ideas from the ones that are valid.



I do not mean this as a personal criticism, by the way:  this same line 
of argument appears over and over again, even from some people with 
knowledge of the field who should know better.





Richard Loosemore

Re: [agi] BMI/BCI Growing Fast

2007-12-23 Thread Bryan Bishop
On Saturday 22 December 2007, Philip Goetz wrote:
 If we define "mindreading" as knowing whether someone is telling the
 truth, whether someone likes you, or is sexually attracted to you, or
 recognizes you; knowing whether someone is paying attention; knowing
 whether someone is reasoning logically or being controlled by
 emotions...

The entire idea of mindreading is peculiar. Haven't you ever had a moment 
when you've wondered whether you like somebody? When you realize that such 
simple separations just don't matter or apply, that you can't even read 
your own mind in that regard? The idea that everybody must have a solid, 
readable opinion that must be expressed in certain detectable 
characteristics sounds like a step away from creativity and intelligence.

- Bryan
