Re: [agi] Pure reason is a disease.

2007-05-26 Thread Jiri Jelinek

Mark,


> If Google came along and offered you $10 million for your AGI, would you
> give it to them?

No, I would sell services.


> How about the Russian mob for $1M and your life and the
> lives of your family?

How about the FBI? No? So maybe selling him a messed-up version for $2M
and then hiring a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design AGI, you
are likely to figure out how to deal with such a guy. ;-)


> Or, what if your advisor tells you that unless you upgrade him so that he
> can take actions, it is highly probable that someone else will create a
> system in the very near future that will be able to take actions and won't
> have the protections that you've built into him.

I would just let the system explain what actions it would then take.


> I suggest preventing potential harm by making the AGI's top-level
> goal to be Friendly
> (and unlike most, I actually have a reasonably implementable idea of what is
> meant by that).

Tell us about it. :)


> sufficiently sophisticated AGI will act as if it experiences pain

So could such an AGI then be forced by torture to break rules it
otherwise would not want to break?  Can you give me an example of
something that would cause the pain? What do you think the AGI will
do when in extreme pain? BTW, it's just bad design from my
perspective.


> I don't see your point unless you're arguing that there is something
> special about using chemicals for global environment settings rather
> than some other method (in which case I
> would ask "What is that something special and why is it special?").

Two points I was trying to make:
1) A sophisticated general intelligence system can work fine without the
ability to feel pain.
2) The von Neumann architecture lacks components known to support the pain
sensation.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] The Importance of Pride & Shame for AGI

2007-05-26 Thread Mike Tintner
Marvin Minsky: "Memes. Once you have a species with social transmission
of information, then evolution continues on a higher level.
The idea here is that the emotions of Pride and Shame
evolved as a result of the competition -- inside our species --
between different sets of contagious ideas."

MT: I think pride and shame evolved vastly earlier and for a much more basic - 
indeed basic to AGI/ robotics - reason. 

(There is also, I think, considerable evidence now about the existence of these 
emotions in lower animals. See this week's New Scientist for one. Animals 
clearly seem to take pride in physical achievements and demonstrations of 
physical skill).

The reason for their early evolution is this:

any emotional system exists primarily to reward animals - or any agent, including
a robotic one - with emotions of happiness/sadness for success/failure in achieving
goals; that is fundamental now to cognitive science.  However, any emotional
system has to make an agent happy/sad regardless of *how* those goals are achieved
(or missed). And that actually is the case. If someone gives you $100,000 today, as a
bequest, without your having done anything to achieve it, you will, almost
certainly, still feel happy.

An emotional system, therefore, I suggest, has to - and does - give animals and
agents additional feelings according to whether they achieve goals through their
own efforts or not. Then animals and humans feel more or less happy and good about
themselves (pride), or sad and bad about themselves (shame). If someone
offers Marvin $100,000 today for the paperback rights to EM, he will almost
certainly feel happy AND proud.

An emotional system has to do this because any agent learning to succeed and
survive in a problematic world has to learn the difference between goals
reached through its own efforts and goals reached through luck.
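
To make that distinction concrete, here is a minimal toy sketch (my own
illustration, not anything proposed in this thread; the Outcome fields and the
numeric values are assumptions purely for exposition): the outcome alone drives
a happiness/sadness signal, while a separate pride/shame signal is produced only
when the outcome is attributed to the agent's own effort.

```python
# Toy sketch: outcome-based "happiness" vs. effort-attributed "pride/shame".
from dataclasses import dataclass

@dataclass
class Outcome:
    goal_achieved: bool   # did the goal succeed?
    self_caused: bool     # was the result due to the agent's own actions (vs. luck)?

def emotional_update(outcome: Outcome) -> dict:
    """Return the two reward-like signals the argument distinguishes."""
    happiness = 1.0 if outcome.goal_achieved else -1.0        # felt regardless of cause
    pride_shame = happiness if outcome.self_caused else 0.0   # only for self-caused outcomes
    return {"happiness": happiness, "pride_shame": pride_shame}

# The $100,000 bequest: happy, but not proud.
print(emotional_update(Outcome(goal_achieved=True, self_caused=False)))
# Selling the paperback rights to EM: happy AND proud.
print(emotional_update(Outcome(goal_achieved=True, self_caused=True)))
```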

No doubt animal pride and shame take much simpler forms - that's why I use the
words feeling good/bad about oneself - but I'd be surprised if even, say, my
favourite lowly worm doesn't feel them.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] The Importance of Pride & Shame for AGI

2007-05-26 Thread J Storrs Hall, PhD
On Saturday 26 May 2007 06:38:37 am Mike Tintner wrote:

 The reason for their early evolution is this:
 
 any emotional system exists primarily to reward animals or any agent, 
including robotic, with emotions of happiness/ sadness for success/ failure 
in achieving goals ...

Yep. In fact, the only emotion Tommy could be said to have would be pride, 
in the sense of a positive value associated with copying some action.
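
As a toy reading of "a positive value associated with copying some action"
(purely my own sketch - nothing here reflects Tommy's actual design), the
pride-like signal could be as simple as:

```python
def imitation_value(observed_action: str, produced_action: str) -> float:
    """Return a pride-like positive value only when the copy succeeds."""
    return 1.0 if produced_action == observed_action else 0.0

print(imitation_value("wave", "wave"))   # 1.0 - successful copy, "pride"
print(imitation_value("wave", "grasp"))  # 0.0 - failed copy, no positive value
```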

However, it's generally dangerous (in the sense of likely to be wrong) to
reason about evolutionary phenomena with a simple linear model. There are too
many things like the peacock's tail that are thought to have evolved at
least in part as a signal to peahens of how much useless weight the
peacock has the strength to carry around. Strong emotions in humans are
similarly thought to be constraint devices - difficult for the individual
involved to override - that serve various purposes in social settings. See the
game-theory discussion in Beyond AI, and also check out Steven Pinker's How
the Mind Works.

Josh


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-26 Thread Mark Waser

> > If Google came along and offered you $10 million for your AGI, would you
> > give it to them?
> No, I would sell services.

:-)  No.  That wouldn't be an option.  $10 million or nothing (and they'll
go off and develop it themselves).



> > How about the Russian mob for $1M and your life and the
> > lives of your family?
> How about the FBI? No? So maybe selling him a messed-up version for $2M
> and then hiring a skilled pro who would make sure he would *never*
> bother AGI developers again? If you are smart enough to design AGI, you
> are likely to figure out how to deal with such a guy. ;-)

Nice fantasy world . . . . How are you going to do any of that stuff after
they've already kidnapped you?  No one is smart enough to handle that
without extensive pre-existing preparations -- and you're too busy with
other things.



> > Or, what if your advisor tells you that unless you upgrade him so that he
> > can take actions, it is highly probable that someone else will create a
> > system in the very near future that will be able to take actions and won't
> > have the protections that you've built into him.
> I would just let the system explain what actions it would then take.

And he would (truthfully) explain that using you as an interface to the
world (and all the explanations that would entail) would slow him down
enough that he couldn't prevent catastrophe.



> Tell us about it. :)

July (as previously stated)



> So could such an AGI then be forced by torture to break rules it
> otherwise would not want to break?  Can you give me an example of
> something that would cause the pain? What do you think the AGI will
> do when in extreme pain? BTW, it's just bad design from my
> perspective.

Of course.  Killing 10 million people.  Put *much* shorter deadlines on
figuring out its responses.  Kill a single person to avoid the killing of
another ten million.  And I believe that your perspective is way too
limited.  To me, what you're saying is equivalent to saying that an
engine that produces excess heat is just a bad design.



> Two points I was trying to make:
> 1) A sophisticated general intelligence system can work fine without the
> ability to feel pain.
> 2) The von Neumann architecture lacks components known to support the pain
> sensation.


Prove to me that 2) is true.  What component do you have that can't exist in 
a von Neumann architecture?  Hint:  Prove that you aren't just a simulation 
on a von Neumann architecture.


Further, prove that pain (or, preferably, sensation in general) isn't an
emergent property of sufficient complexity.  My argument is that you
unavoidably get sensation before you get complex enough to be generally
intelligent.


   Mark

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-05-26 Thread Mark Waser

> > I think it is a serious mistake for anyone to say that machines
> > cannot in principle experience real feelings.
> We are complex machines, so yes, machines can, but my PC cannot, even
> though it can power AGI.


Agreed, your PC cannot feel pain.  Are you sure, however, that an entity 
hosted/simulated on your PC doesn't/can't?  Once again, prove that you/we 
aren't just simulations on a sufficiently large and fast PC.  (I know that I 
can't and many really smart people say they can't either). 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Opensource Business Model

2007-05-26 Thread YKY (Yan King Yin)

Hi Ben and others,

Let's analyse the opensource vs closed-source issue in more detail...
(limericks are not arguments!)

1.  I guess the biggest turn-off about opensource is that it may allow
competitors to look at our source and steal our ideas / algorithms.  I'm
aware of this but I still advocate opensource, 'cause I think the situation
here is *special*.  AGI is an exceptionally hard problem and we do not have
enough people (researchers and programmers).  Opensource will allow us to
recruit people *much* more easily since a lot of people can look at our
stuff and decide whether they like it or not.  Thus, we'll be able to work
less hard and achieve the same goal.

The problem of copycats stealing our ideas becomes insignificant once
people realize how hard the whole deal is.  When there's so much innovative
work to be done, people are less likely to take the trouble of starting
clones.

Moreover, the business model I propose tries to make it easier for people to
contribute and make modifications.  I guess one reason why people start new
projects is that they are *dissatisfied* with existing ones.

2.  Let's not use the "opensource will lead to a 'bad' singularity" line of
argument, 'cause that's rather unrealistic and unprofessional.  Quite the
contrary: an AGI-for-the-masses scenario is much less likely to become
catastrophic than an oligarchic / totalitarian one.

The bottom line is that we cannot use AGI to save humanity any more than we
can save all the species in the ecosystem.  AGI will improve the lives of
many people, and that's the best we can do.

Basing a company's mission on false arguments may drive potential partners
away.  Moreover, because of self-deceptive thinking you may miss some good
alternatives.

3.  Note that I'm not advocating for free-AGI.  The two things that I most
care about are:  a) my own pay / company shares; and b) I hope our
project will have the biggest market share.

4.  If you have really good AGI ideas, they will earn you a lot of shares in
the new project.  So there's no need to panic!

[ PS  Please don't use stereotypes like assimilation or borg on me.  I'm
not intent on making humanity a homogeneous race without diversity.  People
often make those assumptions because of my weird look or that I'm Chinese =(
]

YKY

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

RE: [agi] Opensource Business Model

2007-05-26 Thread John G. Rose
I like open source for a lot of things and have used quite a bit of it.  A
problem with open source and AGI is that if the AGI is going to work, there's
going to be some really cutting-edge stuff with lots of elbow grease in
there that I'm not sure too many people want to expose.  For a basic AGI
with a standard design, though, yes, an open source project could be set up.
And for this I would recommend some sort of plugin model (see the sketch
below).  But you do have to look at the number of open source projects that
exist and the number that lie dormant and die.  Many, many die, and many
others are eclipsed by better open source projects doing the same thing.
Building an instant biz on some non-existent AGI project, though - I can't
comment on that.  Usually open source projects get running for a while and
then a business or business ecosystem forms around them.  For AGI I picture
3 or 4 that get started, maybe more once it heats up, then a couple die off
or merge.  Though in some spaces you see dozens of successful projects.
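
For concreteness, here is a minimal sketch of what such a plugin model might
look like (illustration only - the CognitiveModule/Core names and the interface
are my assumptions, not a design from this thread): a small core that registers
independently contributed modules and lets each one read and update a shared
state.

```python
from abc import ABC, abstractmethod

class CognitiveModule(ABC):
    """Interface every contributed plugin must implement."""
    name: str = "unnamed"

    @abstractmethod
    def process(self, state: dict) -> dict:
        """Read the shared state and return updates to merge back in."""

class Core:
    def __init__(self) -> None:
        self.modules: list[CognitiveModule] = []

    def register(self, module: CognitiveModule) -> None:
        self.modules.append(module)

    def step(self, state: dict) -> dict:
        # Each plugin sees the current state and contributes its updates.
        for module in self.modules:
            state.update(module.process(state))
        return state

# Example contributed plugin: a trivial keyword-spotting "perception" module.
class KeywordSpotter(CognitiveModule):
    name = "keyword-spotter"

    def process(self, state: dict) -> dict:
        text = state.get("input", "")
        return {"salient": [w for w in text.split() if w.isupper()]}

core = Core()
core.register(KeywordSpotter())
print(core.step({"input": "the AGI list DISCUSSES plugin MODELS"}))
```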

 

John

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Opensource Business Model

2007-05-26 Thread YKY (Yan King Yin)

The problem seems to be that there's not enough incentive for people to
merge projects and work collaboratively, so they tend to spawn new projects
based on superficial differences.  I'll think about that.

Maybe if the project is modular, people will be more likely to work within
the project, since they won't want to duplicate all the other modules.  Also,
the free-opensource model may actually encourage spawning, because
its license allows free copying.  Why not make a license where users can
see the source but may not modify it or start new projects *except*
within the official one?

I like software that I can see the source of.  But I think we can control
the way it's modified and still not lose its appeal...

YKY

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e