Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-18 Thread Harry Chesley
Trent Waddington wrote: As I believe the "is that consciousness?" debate could go on forever, I think I should make an effort here to save this thread. Setting aside the objections of vegetarians and animal lovers, many hard-nosed scientists decided long ago that jamming things into the brains

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread Jiri Jelinek
On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED] wrote: there are many computer systems now, domain-specific intelligent ones where their life is more important than mine. Some would say that the battle is already lost. For now, it's not really your life (or interest) vs the

RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread John G. Rose
From: Jiri Jelinek [mailto:[EMAIL PROTECTED] On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED] wrote: there are many computer systems now, domain-specific intelligent ones where their life is more important than mine. Some would say that the battle is already lost. For now,

Why consciousness is hard to define (was Re: [agi] Ethics of computer-based cognitive experimentation)

2008-11-14 Thread Matt Mahoney
--- On Fri, 11/14/08, Colin Hales [EMAIL PROTECTED] wrote: Try running yourself with empirical results instead of metabelief (belief about belief). You'll get someplace, i.e. you'll resolve the inconsistencies. When inconsistencies are testably absent, no matter how weird the answer, it will

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread Matt Mahoney
--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote: Matt Mahoney wrote: If you don't define consciousness in terms of an objective test, then you can say anything you want about it. We don't entirely disagree about that. An objective test is absolutely crucial. I believe where

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread Colin Hales
Dear Matt, Try running yourself with empirical results instead of metabelief (belief about belief). You'll get someplace, i.e. you'll resolve the inconsistencies. When inconsistencies are *testably* absent, no matter how weird the answer, it will deliver maximally informed choices. Not facts.

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread Trent Waddington
As I believe the "is that consciousness?" debate could go on forever, I think I should make an effort here to save this thread. Setting aside the objections of vegetarians and animal lovers, many hard-nosed scientists decided long ago that jamming things into the brains of monkeys and the like is

RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread John G. Rose
From: Jiri Jelinek [mailto:[EMAIL PROTECTED] On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote: is it really necessary for an AGI to be conscious? Depends on how you define it. If you think it's about feelings/qualia then - no - you don't need that [potentially

RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread John G. Rose
From: Richard Loosemore [mailto:[EMAIL PROTECTED] I thought what he said was a good description more or less. Out of 600 million years there may be only a fraction of that which is an improvement but it's still there. How do you know, beyond a reasonable doubt, that any other being

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Your 'belief' explanation is a cop-out because it does not address any of the issues that need to be addressed for something to count

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Tue, 11/11/08, Colin Hales [EMAIL PROTECTED] wrote: I'm inclined to agree - this will be an issue in the future... if you have a robot helper and someone comes by and beats it to death in front of your kids, who have some kind of attachment to it...a relationship... then crime (i)

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
This thread has gone back and forth several times concerning the reality of consciousness. So at the risk of extending it further unnecessarily, let me give my view, which seems self-evident to me, but I'm sure isn't to others (meaning they may reasonably disagree with me, not that they're

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore
John G. Rose wrote: From: Richard Loosemore [mailto:[EMAIL PROTECTED] John LaMuth wrote: Reality check *** Consciousness is an emergent spectrum of subjectivity spanning 600 mill. years of evolution involving mega-trillions of competing organisms, probably selecting for obscure quantum

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore
Matt Mahoney wrote: --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Your 'belief' explanation is a cop-out because it does not address any of the issues that need to be addressed for

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore
Jiri Jelinek wrote: On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote: is it really necessary for an AGI to be conscious? Depends on how you define it. Hmm, interesting angle. Everything you say from this point on seems to be predicated on the idea that a person

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Jiri Jelinek
Richard, Everything you say from this point on seems to be predicated on the idea that a person can *choose* to define it any way they want There are some good-to-stick-with rules for definitions http://en.wikipedia.org/wiki/Definition#Rules_for_definition_by_genus_and_differentia but (even

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore
Jiri Jelinek wrote: Richard, Everything you say from this point on seems to be predicated on the idea that a person can *choose* to define it any way they want There are some good-to-stick-with rules for definitions

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote: 1) I'm talking about the hard question of consciousness. 2) It is real, as it clearly influences our thoughts. On the other hand, though it feels subjectively like it is qualitatively different from other aspects of the world, it

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread John LaMuth
- Original Message - From: John G. Rose [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, November 11, 2008 11:41 PM Subject: RE: [agi] Ethics of computer-based cognitive experimentation I thought what he said was a good description more or less. Out of 600 million years

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread John LaMuth
- Original Message - From: Richard Loosemore [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, November 12, 2008 9:05 AM Subject: Re: [agi] Ethics of computer-based cognitive experimentation One of the main conclusions of the paper I am writing now is that you

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Jiri Jelinek
On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote: is it really necessary for an AGI to be conscious? Depends on how you define it. If you think it's about feelings/qualia then - no - you don't need that [potentially dangerous] crap + we don't know how to implement it anyway.

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread John LaMuth
] To: agi@v2.listbox.com Sent: Wednesday, November 12, 2008 1:36 PM Subject: Re: [agi] Ethics of computer-based cognitive experimentation John LaMuth wrote: - Original Message - From: Richard Loosemore [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, November 12, 2008 9:05 AM

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
Matt Mahoney wrote: 2) It is real, as it clearly influences our thoughts. On the other hand, though it feels subjectively like it is qualitatively different from other aspects of the world, it probably isn't (but I'm open to being wrong here). The correct statement is that you believe it is

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Colin Hales
Matt Mahoney wrote: snip ... accepted because the theory makes predictions that can be tested. But there are absolutely no testable predictions that can be made from a theory of consciousness. -- Matt Mahoney, [EMAIL PROTECTED] This is simply wrong. It is difficult but you can test for

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Wed, 11/12/08, Colin Hales [EMAIL PROTECTED] wrote: It is difficult but you can test for it objectively by demanding that an entity based on your 'theory of consciousness' deliver an authentic scientific act on the a-priori unknown using visual experience for scientific evidence. So

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote: Matt Mahoney wrote: 2) It is real, as it clearly influences our thoughts. On the other hand, though it feels subjectively like it is qualitatively different from other aspects of the world, it probably isn't (but I'm open to

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Colin Hales
Matt Mahoney wrote: --- On Wed, 11/12/08, Colin Hales [EMAIL PROTECTED] wrote: It is difficult but you can test for it objectively by demanding that an entity based on your 'theory of consciousness' deliver an authentic scientific act on the a-priori unknown using visual experience for

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
Matt Mahoney wrote: If you don't define consciousness in terms of an objective test, then you can say anything you want about it. We don't entirely disagree about that. An objective test is absolutely crucial. I believe where we disagree is that I expect there to be such a test one day, while

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread wannabe
When people discuss the ethics of the treatment of artificial intelligent agents, it's almost always with the presumption that the key issue is the subjective level of suffering of the agent. This isn't the only possible consideration. One other consideration is our stance relative to that

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore
Mark Waser wrote: An understanding of what consciousness actually is, for starters. It is a belief. No it is not. And that statement (It is a belief) is a cop-out theory. An understanding of what consciousness is requires a consensus definition of what it is. For most people, it seems to

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Would a program be conscious if it passes the Turing test? If not, what else is required? No. An understanding of what consciousness actually is, for starters. It is a belief. No it is not. And that

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore
Matt Mahoney wrote: --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Would a program be conscious if it passes the Turing test? If not, what else is required? No. An understanding of what consciousness actually is, for starters. It is a belief. No it is not. And that

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore
Matt Mahoney wrote: --- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote: Do you agree that there is no test to distinguish a conscious human from a philosophical zombie, thus no way to establish whether zombies exist? Disagree. What test would you use? A sophisticated

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser
An understanding of what consciousness actually is, for starters. It is a belief. No it is not. And that statement (It is a belief) is a cop-out theory. An understanding of what consciousness is requires a consensus definition of what it is. For most people, it seems to be an

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Your 'belief' explanation is a cop-out because it does not address any of the issues that need to be addressed for something to count as a definition or an explanation of the facts that need to be explained. As I explained,

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser
:58 PM Subject: **SPAM** Re: [agi] Ethics of computer-based cognitive experimentation --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Your 'belief' explanation is a cop-out because it does not address any of the issues that need to be addressed for something to count

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Trent Waddington
On Wed, Nov 12, 2008 at 8:58 AM, Matt Mahoney [EMAIL PROTECTED] wrote: As I explained, animals that have no concept of death have nevertheless evolved to fear most of the things that can kill them. Humans have learned to associate these things with death, and invented the concept of

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread John LaMuth
: [agi] Ethics of computer-based cognitive experimentation Mark Waser wrote: An understanding of what consciousness actually is, for starters. It is a belief. No it is not. And that statement (It is a belief) is a cop-out theory. An understanding of what consciousness is requires a consensus

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Mark Waser [EMAIL PROTECTED] wrote: This does not mean that certain practices are good or bad. If there was such a thing, then there would be no debate about war, abortion, euthanasia, capital punishment, or animal rights, because these questions could be answered

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore
Matt Mahoney wrote: --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Your 'belief' explanation is a cop-out because it does not address any of the issues that need to be addressed for something to count as a definition or an explanation of the facts that need to be explained.

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore
John LaMuth wrote: Reality check *** Consciousness is an emergent spectrum of subjectivity spanning 600 mill. years of evolution involving mega-trillions of competing organisms, probably selecting for obscure quantum effects/efficiencies. Our puny engineering/coding efforts could never

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Colin Hales
[EMAIL PROTECTED] wrote: When people discuss the ethics of the treatment of artificial intelligent agents, it's almost always with the presumption that the key issue is the subjective level of suffering of the agent. This isn't the only possible consideration. One other consideration is our

RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread John G. Rose
From: Richard Loosemore [mailto:[EMAIL PROTECTED] John LaMuth wrote: Reality check *** Consciousness is an emergent spectrum of subjectivity spanning 600 mill. years of evolution involving mega-trillions of competing organisms, probably selecting for obscure quantum

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-10 Thread Richard Loosemore
Matt Mahoney wrote: --- On Fri, 11/7/08, Richard Loosemore [EMAIL PROTECTED] wrote: The question of whether a test is possible at all depends on the fact that there is a coherent theory behind the idea of consciousness. Would you agree that consciousness is determined by a large set of

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-10 Thread Matt Mahoney
--- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote: Do you agree that there is no test to distinguish a conscious human from a philosophical zombie, thus no way to establish whether zombies exist? Disagree. What test would you use? Would a program be conscious if it passes

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-10 Thread Colin Hales
Matt Mahoney wrote: --- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote: Do you agree that there is no test to distinguish a conscious human from a philosophical zombie, thus no way to establish whether zombies exist? Disagree. What test would you use? The

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-07 Thread Richard Loosemore
Matt Mahoney wrote: --- On Wed, 11/5/08, Richard Loosemore [EMAIL PROTECTED] wrote: In the future (perhaps the near future) it will be possible to create systems that will have their own consciousness. *Appear* to have consciousness, or do you have a test? Yes. But the test depends on an

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-07 Thread Matt Mahoney
--- On Fri, 11/7/08, Richard Loosemore [EMAIL PROTECTED] wrote: The question of whether a test is possible at all depends on the fact that there is a coherent theory behind the idea of consciousness. Would you agree that consciousness is determined by a large set of attributes that are

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-06 Thread YKY (Yan King Yin)
On Thu, Nov 6, 2008 at 12:55 AM, Harry Chesley [EMAIL PROTECTED] wrote: Personally, I'm not making an AGI that has emotions... So you take the view that, despite our minimal understanding of the basis of emotions, they will only arise if designed in, never spontaneously as an emergent

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread YKY (Yan King Yin)
On Wed, Nov 5, 2008 at 7:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Personally, I'm not making an AGI that has emotions, and I doubt if emotions are generally desirable in AGIs, except when the goal is to make human companions (and I wonder why people need them anyway, given that there're

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Mike Tintner
YKY: I just want to point out that AGI-with-emotions is not a necessary goal of AGI. Which AGI (as distinct from narrow AI) problems do *not* involve *incalculable and possibly unmanageable risks*? - a) risks that the process of problem-solving will be interminable? b) risks that the agent does not

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Mike Tintner
YKY, As I was saying, before I so rudely interrupted myself - re the narrow AI vs AGI problem difference: *the syllogistic problems of logic - is Aristotle mortal? etc. - which you mainly use as examples - are narrow AI problems, which can be solved according to precise rules; however:

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Harry Chesley
On 11/4/2008 2:53 PM, YKY (Yan King Yin) wrote: Personally, I'm not making an AGI that has emotions... So you take the view that, despite our minimal understanding of the basis of emotions, they will only arise if designed in, never spontaneously as an emergent property? So you can safely

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Harry Chesley
On 11/4/2008 3:31 PM, Matt Mahoney wrote: To answer your (modified) question, consciousness is detected by the activation of a large number of features associated with living humans. The more of these features are activated, the greater the tendency to apply ethical guidelines to the target

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Matt Mahoney
--- On Wed, 11/5/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: On Wed, Nov 5, 2008 at 7:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Personally, I'm not making an AGI that has emotions, and I doubt if emotions are generally desirable in AGIs, except when the goal is to make human

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Matt Mahoney
--- On Wed, 11/5/08, Harry Chesley [EMAIL PROTECTED] wrote: If I understand correctly, you're saying that there is no such thing as objective ethics, and that our subjective ethics depend on how much we identify/empathize with another creature. I grant this as a possibility, in which case I

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Richard Loosemore
Harry Chesley wrote: On 11/4/2008 3:31 PM, Matt Mahoney wrote: To answer your (modified) question, consciousness is detected by the activation of a large number of features associated with living humans. The more of these features are activated, the greater the tendency to apply ethical

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Matt Mahoney
--- On Wed, 11/5/08, Richard Loosemore [EMAIL PROTECTED] wrote: In the future (perhaps the near future) it will be possible to create systems that will have their own consciousness. *Appear* to have consciousness, or do you have a test? Stepping back for the moment, the entire question of

[agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread Harry Chesley
The question of when it's ethical to do AGI experiments has bothered me for a while. It's something that every AGI creator has to deal with sooner or later if you believe you're actually going to create real intelligence that might be conscious. The following link is a blog essay on the subject,

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread YKY (Yan King Yin)
On Wed, Nov 5, 2008 at 6:05 AM, Harry Chesley [EMAIL PROTECTED] wrote: The question of when it's ethical to do AGI experiments has bothered me for a while. It's something that every AGI creator has to deal with sooner or later if you believe you're actually going to create real intelligence

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread Matt Mahoney
--- On Tue, 11/4/08, Harry Chesley [EMAIL PROTECTED] wrote: The question of when it's ethical to do AGI experiments has bothered me for a while. That's because you're asking the wrong question. Don't confuse belief with truth. The question is: what ethical guidelines will we (not should we)

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread Matt Mahoney
--- On Tue, 11/4/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Personally, I'm not making an AGI that has emotions, and I doubt if emotions are generally desirable in AGIs, except when the goal is to make human companions (and I wonder why people need them anyway, given that there're so

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread Matt Mahoney
--- On Tue, 11/4/08, Trent Waddington [EMAIL PROTECTED] wrote: On Wed, Nov 5, 2008 at 9:31 AM, Matt Mahoney [EMAIL PROTECTED] wrote: As a second example, the video game Grand Theft Auto allows you to have simulated sex with prostitutes and then beat them to death to get your money back.