Jiri Jelinek wrote:
On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED] wrote:
there are many computer systems now, domain-specific intelligent ones where their life is more important than mine. Some would say that the battle is already lost.
For now, it's not really your life (or interest) vs the
Dear Matt,
Try running yourself with empirical results instead of metabelief (belief about belief). You'll get someplace, i.e. you'll resolve the inconsistencies. When inconsistencies are *testably* absent, no matter how weird the answer, it will deliver maximally informed choices. Not facts.
Trent Waddington wrote:
As I believe the "is that consciousness?" debate could go on forever, I think I should make an effort here to save this thread.
Setting aside the objections of vegetarians and animal lovers, many hard-nosed scientists decided long ago that jamming things into the brains of monkeys and the like is
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
I thought what he said was a good description more or less. Out of 600 million years there may be only a fraction of that which is an improvement, but it's still there.
How do you know, beyond a reasonable doubt, that any other being
--- On Tue, 11/11/08, Colin Hales [EMAIL PROTECTED] wrote:
I'm inclined to agree - this will be an issue in the
future... if you have a robot helper and someone comes by
and beats it to death in front of your kids, who
have some kind of attachment to it... a relationship... then crime (i)
This thread has gone back and forth several times concerning the reality
of consciousness. So at the risk of extending it further unnecessarily,
let me give my view, which seems self-evident to me, but I'm sure isn't
to others (meaning they may reasonably disagree with me, not that
they're
Jiri Jelinek wrote:
On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote:
is it really necessary for an AGI to be conscious?
Depends on how you define it.
Hmm... interesting angle. Everything you say from this point on seems to be predicated on the idea that a person
Richard,
Everything you say from this point on seems to be predicated on the idea that
a person can *choose* to define it any way they want
There are some good-to-stick-with rules for definitions
http://en.wikipedia.org/wiki/Definition#Rules_for_definition_by_genus_and_differentia
but (even
--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote:
1) I'm talking about the hard question of
consciousness.
2) It is real, as it clearly influences our thoughts. On
the other hand, though it feels subjectively like it is
qualitatively different from other aspects of the world, it
- Original Message -
From: John G. Rose [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, November 11, 2008 11:41 PM
Subject: RE: [agi] Ethics of computer-based cognitive experimentation
I thought what he said was a good description more or less. Out of 600 million years
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 12, 2008 9:05 AM
Subject: Re: [agi] Ethics of computer-based cognitive experimentation
One of the main conclusions of the paper I am writing now is that you
Jiri Jelinek wrote:
On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote:
is it really necessary for an AGI to be conscious?
Depends on how you define it. If you think it's about feelings/qualia then - no - you don't need that [potentially dangerous] crap + we don't know how to implement it anyway.
Matt Mahoney wrote:
2) It is real, as it clearly influences our thoughts. On the other
hand, though it feels subjectively like it is qualitatively
different from other aspects of the world, it probably isn't (but
I'm open to being wrong here).
The correct statement is that you believe it is
Matt Mahoney wrote:
snip ... accepted because the theory makes predictions that can be tested.
But there are absolutely no testable predictions that can be made from a theory of
consciousness.
-- Matt Mahoney, [EMAIL PROTECTED]
This is simply wrong.
It is difficult but you can test for
--- On Wed, 11/12/08, Colin Hales [EMAIL PROTECTED] wrote:
It is difficult but you can test for it objectively by
demanding that an entity based on your 'theory of
consciousness' deliver an authentic scientific act on
the a-priori unknown using visual experience for scientific
evidence.
So
Harry Chesley wrote:
Matt Mahoney wrote:
If you don't define consciousness in terms of an objective test, then you can say anything you want about it.
We don't entirely disagree about that. An objective test is absolutely crucial. I believe where we disagree is that I expect there to be such a test one day, while
When people discuss the ethics of the treatment of artificially intelligent agents, it's almost always with the presumption that the key issue is the subjective level of suffering of the agent. This isn't the only possible consideration.
One other consideration is our stance relative to that
Matt Mahoney wrote:
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Would a program be conscious if it passes the Turing
test? If not, what else is required?
No.
An understanding of what consciousness actually is, for
starters.
It is a belief.
No it is not.
And that
Matt Mahoney wrote:
--- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Do you agree that there is no test to distinguish a
conscious human from a philosophical zombie, thus no way to
establish whether zombies exist?
Disagree.
What test would you use?
A sophisticated
Mark Waser wrote:
An understanding of what consciousness actually is, for starters.
It is a belief.
No it is not.
And that statement (It is a belief) is a cop-out theory.
An understanding of what consciousness is requires a consensus definition of what it is.
For most people, it seems to be an
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Your 'belief' explanation is a cop-out because it
does not address any of the issues that need to be addressed
for something to count as a definition or an explanation of
the facts that need to be explained.
As I explained,
On Wed, Nov 12, 2008 at 8:58 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
As I explained, animals that have no concept of death have nevertheless
evolved to fear most of the things that can kill them. Humans have learned to
associate these things with death, and invented the concept of
--- On Tue, 11/11/08, Mark Waser [EMAIL PROTECTED] wrote:
This does not mean that certain practices are good
or bad. If there was such a thing, then there would be no
debate about war, abortion, euthanasia, capital punishment,
or animal rights, because these questions could be answered
John LaMuth wrote:
Reality check ***
Consciousness is an emergent spectrum of subjectivity spanning 600 million years of evolution involving mega-trillions of competing organisms, probably selecting for obscure quantum effects/efficiencies.
Our puny engineering/coding efforts could never
Matt Mahoney wrote:
--- On Wed, 11/5/08, Richard Loosemore [EMAIL PROTECTED] wrote:
In the future (perhaps the near future) it will be possible
to create systems that will have their own consciousness.
*Appear* to have consciousness, or do you have a test?
Yes.
But the test depends on an
--- On Fri, 11/7/08, Richard Loosemore [EMAIL PROTECTED] wrote:
The question of whether a test is possible at all depends
on the fact that there is a coherent theory behind the idea
of consciousness.
Would you agree that consciousness is determined by a large set of attributes
that are
YKY: I just want to point out that AGI-with-emotions is not a necessary goal of AGI.
Which AGI as distinct from narrow AI problems do *not* involve *incalculable and possibly unmanageable risks*?
a) risks that the process of problem-solving will be interminable?
b) risks that the agent does not
YKY,
As I was saying, before I so rudely interrupted myself - re the narrow AI vs AGI problem difference:
* the syllogistic problems of logic - is Aristotle mortal? etc. - which you mainly use as examples - are narrow AI problems, which can be solved according to precise rules (see the sketch after this message);
however:
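To make that point concrete, here is a minimal sketch, in Python, of a syllogism being answered by mechanically applying a precise rule; the encoding, facts, and rule are my illustrative assumptions, not anything from the thread:

# A minimal sketch of the claim above: a syllogism is answered by
# mechanically applying a precise "All X are Y" rule. The facts,
# rules, and names here are illustrative assumptions only.
facts = {("man", "Aristotle")}   # "Aristotle is a man"
rules = [("man", "mortal")]      # "All men are mortal"

def infer(facts, rules):
    """Apply each 'All X are Y' rule once to the known facts."""
    derived = set(facts)
    for premise, conclusion in rules:
        derived |= {(conclusion, x) for p, x in facts if p == premise}
    return derived

# "Is Aristotle mortal?" follows directly:
print(("mortal", "Aristotle") in infer(facts, rules))  # True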
On 11/4/2008 2:53 PM, YKY (Yan King Yin) wrote:
Personally, I'm not making an AGI that has emotions...
So you take the view that, despite our minimal understanding of the
basis of emotions, they will only arise if designed in, never
spontaneously as an emergent property? So you can safely
On 11/4/2008 3:31 PM, Matt Mahoney wrote:
To answer your (modified) question, consciousness is detected by the
activation of a large number of features associated with living
humans. The more of these features are activated, the greater the
tendency to apply ethical guidelines to the target
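Read literally, that description suggests a weighted-feature score; the sketch below, in Python, is only one possible reading of it, with feature names and weights that are hypothetical illustrations rather than anything proposed in the thread:

# A minimal sketch of the feature-activation idea above: the more
# human-associated features an entity activates, the higher the score,
# and the stronger the tendency to apply ethical guidelines to it.
# Feature names and weights are hypothetical illustrations.
HUMAN_FEATURES = {
    "uses_language": 0.3,
    "reports_pain": 0.3,
    "has_face": 0.2,
    "recognizes_self": 0.2,
}

def ethical_weight(activated_features):
    """Sum the weights of the human-associated features the entity activates."""
    return sum(w for f, w in HUMAN_FEATURES.items() if f in activated_features)

print(ethical_weight({"uses_language", "reports_pain"}))  # 0.6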
--- On Wed, 11/5/08, Harry Chesley [EMAIL PROTECTED] wrote:
If I understand correctly, you're saying that there is
no such thing as objective ethics, and that our subjective
ethics depend on how much we identify/empathize with another
creature. I grant this as a possibility, in which case I
--- On Wed, 11/5/08, Richard Loosemore [EMAIL PROTECTED] wrote:
In the future (perhaps the near future) it will be possible
to create systems that will have their own consciousness.
*Appear* to have consciousness, or do you have a test?
Stepping back for the moment, the entire question of
The question of when it's ethical to do AGI experiments has bothered me
for a while. It's something that every AGI creator has to deal with
sooner or later if you believe you're actually going to create real
intelligence that might be conscious. The following link is a blog essay
on the subject,
--- On Tue, 11/4/08, Harry Chesley [EMAIL PROTECTED] wrote:
The question of when it's ethical to do AGI experiments
has bothered me for a while.
That's because you're asking the wrong question. Don't confuse belief with
truth. The question is: what ethical guidelines will we (not should we)
--- On Tue, 11/4/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Personally, I'm not making an AGI that has emotions, and I doubt if
emotions are generally desirable in AGIs, except when the goal is to
make human companions (and I wonder why people need them anyway, given
that there're so
--- On Tue, 11/4/08, Trent Waddington [EMAIL PROTECTED] wrote:
On Wed, Nov 5, 2008 at 9:31 AM, Matt Mahoney
[EMAIL PROTECTED] wrote:
As a second example, the video game Grand Theft Auto
allows you to have simulated sex with prostitutes and then
beat them to death to get your money back.