Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread wannabe
When people discuss the ethics of the treatment of artificially intelligent
agents, it's almost always with the presumption that the key issue is the
agent's subjective level of suffering.  This isn't the only possible
consideration.

One other consideration is our stance relative to that agent.  Are we just
acting in a selfish way, using the agent as simply a means to achieve our
goals?  I'll just leave that idea open, as there are traditions that see
value in de-emphasizing greed and personal acquisitiveness.

Another consideration is the inherent value of self-determination.  This
stands apart from any suffering that might be caused by being a completely
controlled subject.  One of the problems with slavery was precisely that
things simply work better when you let people decide things for themselves.
Similarly, granting an artificial agent autonomy for its own sake may
simply be more effective than keeping it a controlled subject.

So I don't think the consciousness of an artificially intelligent agent
is even necessary for considering the ethics of our stance towards it.
We can consider our own emotional position and the inherent value of
independent thinking.
andi





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

Mark Waser wrote:

>>>> An understanding of what consciousness actually is, for
>>>> starters.
>>>
>>> It is a belief.
>>
>> No it is not.
>> And that statement ("It is a belief") is a cop-out theory.
>
> An understanding of what consciousness is requires a consensus
> definition of what it is.
>
> For most people, it seems to be an undifferentiated mess that includes
> all of attentional components, intentional components, understanding
> components, and, frequently, experiential components (i.e. qualia).


This mess was cleaned up a great deal when Chalmers took the simple step 
of dividing it into the 'easy' problems and the hard problem (which is 
the last one on your list).  The easy problems do not have any 
philosophical depth to them;  the hard problem seems to be a 
philosophical chasm.


You are *very* correct to say that "An 'understanding' of what 
consciousness is requires a consensus definition of what it is."  My 
goal is to get a consensus definition, which then contains within it the 
explanation also.  But, yes, if my explanation does not also include a 
definition that satisfies everyone as a good consensus definition, then 
it does not work.


That is why Matt's "it is a belief" is not an explanation:  it leaves so 
many questions unanswered that it will never make it as a consensus 
definition/explanation.


We will see.  My paper on the subject is almost finished.



Richard Loosemore




> If you only buy into the first three and do it in a very concrete
> fashion, consciousness (and ethics) isn't all that tough.
>
> Or you can follow Alice and start debating the real meaning of the
> third and whether or not the fourth truly exists in anyone except yourself.
>
> Personally, if something has a will (intentionality/goals) that it can
> focus effectively (attentional and understanding), I figure that you'd
> better start treating it ethically for your own long-term self-interest.
>
> Of course, that then begs the question of what ethics is . . . . but I
> think that that is pretty easy to solve as well . . . .







Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:

>>>> Would a program be conscious if it passes the Turing
>>>> test? If not, what else is required?
>>>
>>> No.
>>>
>>> An understanding of what consciousness actually is, for
>>> starters.
>>
>> It is a belief.
>
> No it is not.
>
> And that statement ("It is a belief") is a cop-out theory.

No. Depending on your definition of consciousness, there is either an objective 
test for it or not. If consciousness results in an observable difference in 
behavior, then a machine that passes the Turing test must be conscious because 
there is no observable difference between it and a human. Or, if consciousness 
is not observable, then you must admit that the brain does something that 
cannot be explained by the known (computable) laws of physics. You conveniently 
avoid this inconsistency by refusing to define what you mean by consciousness. 
That is a cop-out.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

Matt Mahoney wrote:

> --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
>
>>>>> Would a program be conscious if it passes the Turing
>>>>> test? If not, what else is required?
>>>>
>>>> No.
>>>>
>>>> An understanding of what consciousness actually is, for
>>>> starters.
>>>
>>> It is a belief.
>>
>> No it is not.
>>
>> And that statement ("It is a belief") is a cop-out theory.
>
> No. Depending on your definition of consciousness, there is either an
> objective test for it or not. If consciousness results in an
> observable difference in behavior, then a machine that passes the
> Turing test must be conscious because there is no observable
> difference between it and a human. Or, if consciousness is not
> observable, then you must admit that the brain does something that
> cannot be explained by the known (computable) laws of physics. You
> conveniently avoid this inconsistency by refusing to define what you
> mean by consciousness. That is a cop-out.


Your 'belief' explanation is a cop-out because it does not address any 
of the issues that need to be addressed for something to count as a 
definition or an explanation of the facts that need to be explained.


My proposal is being written up now and will be available at the end of 
tomorrow.  It does address all of the facts that need to be explained.




Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

Matt Mahoney wrote:

> --- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote:
>
>>> Do you agree that there is no test to distinguish a
>>> conscious human from a philosophical zombie, thus no way to
>>> establish whether zombies exist?
>>
>> Disagree.
>
> What test would you use?

A sophisticated assessment of the mechanisms inside the cognitive system.

>>> Would a program be conscious if it passes the Turing
>>> test? If not, what else is required?
>>
>> No.
>>
>> An understanding of what consciousness actually is, for
>> starters.
>
> It is a belief.

No it is not.

And that statement ("It is a belief") is a cop-out theory.




Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser

>>> An understanding of what consciousness actually is, for
>>> starters.
>>
>> It is a belief.
>
> No it is not.
> And that statement ("It is a belief") is a cop-out theory.


An understanding of what consciousness is requires a consensus definition 
of what it is.


For most people, it seems to be an undifferentiated mess that includes all 
of attentional components, intentional components, understanding components, 
and, frequently, experiential components (i.e. qualia).


If you only buy into the first three and do it in a very concrete fashion, 
consciousness (and ethics) isn't all that tough.


Or you can follow Alice and start debating the real meaning of the third 
and whether or not the fourth truly exists in anyone except yourself.


Personally, if something has a will (intentionality/goals) that it can focus 
effectively (attentional and understanding), I figure that you'd better 
start treating it ethically for your own long-term self-interest.


Of course, that then begs the question of what ethics is . . . . but I think 
that that is pretty easy to solve as well . . . . 







Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:

> Your 'belief' explanation is a cop-out because it
> does not address any of the issues that need to be addressed
> for something to count as a definition or an explanation of
> the facts that need to be explained.

As I explained, animals that have no concept of death have nevertheless evolved 
to fear most of the things that can kill them. Humans have learned to associate 
these things with death, and invented the concept of consciousness as the large 
set of features which distinguishes living humans from dead humans. Thus, 
humans fear the loss or destruction of consciousness, which is equivalent to 
death.

Consciousness, free will, qualia, and good and bad are universal human beliefs. 
We should not confuse them with truth by asking the wrong questions. Thus, 
Turing sidestepped the question of "Can machines think?" by asking instead "Can 
machines appear to think?" Since we can't (by definition) distinguish doing 
something from appearing to do something, it makes no sense for us to make this 
distinction.

Likewise, asking if it is ethical to inflict simulated pain on machines is 
asking the wrong question. Evolution favors the survival of tribes that 
practice altruism toward other tribe members and teach these ethical values to 
their children. This does not mean that certain practices are good or bad. If 
there were such a thing, then there would be no debate about war, abortion, 
euthanasia, capital punishment, or animal rights, because these questions could 
be answered experimentally.

The question is not "How should machines be treated?" The question is "How will 
we treat machines?"

> My proposal is being written up now and will be available
> at the end of tomorrow.  It does address all of the facts
> that need to be explained.

I am looking forward to reading it.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser
> This does not mean that certain practices are good or bad. If there were
> such a thing, then there would be no debate about war, abortion,
> euthanasia, capital punishment, or animal rights, because these questions
> could be answered experimentally.


Given a goal and a context, there is absolutely such a thing as good or bad. 
The problem with the examples that you cited is that you're attempting to 
generalize to a universal answer across contexts (though I would argue that 
there is a useful universal goal), which is nonsensical.  All of this can be 
answered both logically and experimentally if you just ask the right 
question instead of engaging in vacuous hand-waving about how tough it all 
is after you've mindlessly expanded your problem beyond solution.











Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Trent Waddington
On Wed, Nov 12, 2008 at 8:58 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
> As I explained, animals that have no concept of death have nevertheless
> evolved to fear most of the things that can kill them. Humans have learned to
> associate these things with death, and invented the concept of consciousness
> as the large set of features which distinguishes living humans from dead
> humans. Thus, humans fear the loss or destruction of consciousness, which is
> equivalent to death.

So you're saying you're not a heavy drinker, eh?

Trent




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread John LaMuth

*** Reality check ***

Consciousness is an emergent spectrum of subjectivity spanning 600 million 
years of evolution involving mega-trillions of competing organisms, probably 
selecting for obscure quantum effects/efficiencies.

Our puny engineering/coding efforts could never approach this - not even in 
a million years.

An outwardly pragmatic language simulation, however, is very do-able.

John LaMuth

www.forebrain.org

www.emotionchip.net








Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Mark Waser [EMAIL PROTECTED] wrote:

> > This does not mean that certain practices are good
> > or bad. If there were such a thing, then there would be no
> > debate about war, abortion, euthanasia, capital punishment,
> > or animal rights, because these questions could be answered
> > experimentally.
>
> Given a goal and a context, there is absolutely such a
> thing as good or bad. The problem with the examples that you
> cited is that you're attempting to generalize to a
> universal answer across contexts (though I would argue that
> there is a useful universal goal), which is nonsensical.  All
> of this can be answered both logically and experimentally if
> you just ask the right question instead of engaging in
> vacuous hand-waving about how tough it all is after
> you've mindlessly expanded your problem beyond solution.

That's what I just said. You have to ask the right questions.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

Matt Mahoney wrote:

> --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
>
>> Your 'belief' explanation is a cop-out because it does not address
>> any of the issues that need to be addressed for something to count
>> as a definition or an explanation of the facts that need to be
>> explained.
>
> As I explained, animals that have no concept of death have
> nevertheless evolved to fear most of the things that can kill them.
> Humans have learned to associate these things with death, and
> invented the concept of consciousness as the large set of features
> which distinguishes living humans from dead humans. Thus, humans fear
> the loss or destruction of consciousness, which is equivalent to
> death.
>
> Consciousness, free will, qualia, and good and bad are universal
> human beliefs. We should not confuse them with truth by asking the
> wrong questions. Thus, Turing sidestepped the question of "Can
> machines think?" by asking instead "Can machines appear to think?"
> Since we can't (by definition) distinguish doing something from
> appearing to do something, it makes no sense for us to make this
> distinction.


The above two paragraphs STILL do not address any of the issues that 
need to be addressed for something to count as a definition, or an 
explanation of the facts that need to be explained.




Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

John LaMuth wrote:

> *** Reality check ***
>
> Consciousness is an emergent spectrum of subjectivity spanning 600 million
> years of evolution involving mega-trillions of competing organisms,
> probably selecting for obscure quantum effects/efficiencies.
>
> Our puny engineering/coding efforts could never approach this - not even
> in a million years.
>
> An outwardly pragmatic language simulation, however, is very do-able.
>
> John LaMuth


It is not.

And we can.



Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Colin Hales



[EMAIL PROTECTED] wrote:

> When people discuss the ethics of the treatment of artificially intelligent
> agents, it's almost always with the presumption that the key issue is the
> agent's subjective level of suffering.  This isn't the only possible
> consideration.
>
> One other consideration is our stance relative to that agent.  Are we just
> acting in a selfish way, using the agent as simply a means to achieve our
> goals?  I'll just leave that idea open, as there are traditions that see
> value in de-emphasizing greed and personal acquisitiveness.
>
> Another consideration is the inherent value of self-determination.  This
> stands apart from any suffering that might be caused by being a completely
> controlled subject.  One of the problems with slavery was precisely that
> things simply work better when you let people decide things for themselves.
> Similarly, granting an artificial agent autonomy for its own sake may
> simply be more effective than keeping it a controlled subject.
>
> So I don't think the consciousness of an artificially intelligent agent
> is even necessary for considering the ethics of our stance towards it.
> We can consider our own emotional position and the inherent value of
> independent thinking.
> andi

I'm inclined to agree - this will be an issue in the future. If you 
have a robot helper and someone comes by and beats it to death in 
front of your kids, who have some kind of attachment to it... a 
relationship... then crime (i) may be said to be the psychological 
damage to the children, and crime (ii) is the murder itself and whatever 
one knows of the suffering inflicted on the robot helper. Ethicists are 
gonna have all manner of novelty to play with.

cheers
colin





RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread John G. Rose
> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
>
> John LaMuth wrote:
>> *** Reality check ***
>>
>> Consciousness is an emergent spectrum of subjectivity spanning 600 million
>> years of evolution involving mega-trillions of competing organisms,
>> probably selecting for obscure quantum effects/efficiencies.
>>
>> Our puny engineering/coding efforts could never approach this - not even
>> in a million years.
>>
>> An outwardly pragmatic language simulation, however, is very do-able.
>>
>> John LaMuth
>
> It is not.
>
> And we can.

I thought what he said was more or less a good description. Out of 600
million years there may be only a fraction of that which is an improvement,
but it's still there.

How do you know, beyond a reasonable doubt, that any other being is
conscious?

At some point you have to trust that others of the same species are
conscious; you bring them into your recursive loop of consciousness
component mix.

A primary component of consciousness is a self-definition. Conscious
experience is unique to the possessor. It is more than a belief that the
possessor herself is conscious, but others who appear conscious may be just
that: appearing to be conscious. Though at some point there is enough
feedback between individuals and/or a group to share conscious experience.

Still, though, is it really necessary for an AGI to be conscious, except for
delivering warm fuzzies to its creators? Doesn't that complicate things?
Shouldn't the machines/computers be slaves to man? Or will they be
equal/superior? It's a dog-eat-dog world out there.

I just want things to be taken care of with no issues. Consciousness brings
issues. Intelligence and consciousness are separate.

John


