Re: on consciousness levels and ai

2010-01-25 Thread Bruno Marchal


On 25 Jan 2010, at 07:52, Brent Meeker wrote:


Bruno Marchal wrote:







Now, having postulated the natural numbers with addition and multiplication, they organized themselves, independently of our wishes, in a way which escapes *any* attempt at *complete* unification. They defeat all our theories, in a sense. Once we postulate them, they get a life of their own. To understand them, we have literally no choice, in general, but to observe them and infer laws.
We can prove that they have definite behaviors, but we can prove (assuming mechanism) that we cannot predict them, in general.




ISTM that can be read as a reductio against the reality of  
arithmetic.


On the contrary. It shows that arithmetical reality kicks back. We may also come to know greater and greater portions of it. We may discover new interesting properties, and indeed we have been making progress for a long time: from Diophantus to Matiyasevich, to mention one beautiful line.


Are you alluding to fictionalism?  Do you defend the idea that "3 is prime" is a false proposition?


No, I just don't think its truth implies the existence of 3.


So you believe that the proposition "9 is not prime" is false?
To say that 9 is not prime is the same as saying that there exists a number different from 1 and 9 which divides 9.
To believe that 9 is not prime you need to believe that Ex[x = s(s(s(0)))], i.e. that 3 exists (and divides 9).
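Spelled out (my own unfolding of the quantifier in ordinary first-order notation, not a quote from anyone):

\neg\,\mathrm{prime}(9) \;\equiv\; \exists x\,\bigl(x \neq 1 \wedge x \neq 9 \wedge \exists y\,(x \cdot y = 9)\bigr)

and the witness for x is s(s(s(0))), i.e. 3, with y = 3. Denying the existence of 3 leaves nothing to instantiate that quantifier with.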








I have no real clue of what that could seriously mean.

Of course I would never expect that someone who doesn't believe that 3 is prime can say anything about the consequences of DIGITAL mechanism. Such a move cuts the uda (and the auda) at their roots, and everything becomes infinitely mysterious. Frankly I would not ask him to compute my taxes either.






So why not suppose that the natural numbers are just a model of  
perceptual counting; and their potential infinity is a convenient  
fiction whereby we avoid having to worry about where we might run  
out of numbers to count with?



You can do that. But assuming you are not a fictionalist, if you say that the infinity of natural numbers is a fiction, you are led, ISTM, to ultrafinitism.


What's the difference between finitism and ultrafinitism?  Doesn't postulating the integers plus ZF also commit you to the existence of the whole hierarchy of infinite cardinals?


A finitist believes in all finite numbers or things, and nothing else. A finitist believes in 0, and in s(0), and in s(s(0)), etc., but he/she does not believe in the whole set {0, s(0), s(s(0)), ...}. He/she does not believe in infinite objects.


An ultrafinitist believes in 0, s(0), s(s(0)), ..., but he/she does not believe in all finite numbers. He/she believes that the set of all positive integers is a finite set. I think that Tholerus argued that there is a biggest natural number. This makes sense for some strong form of physicalism: a number exists if and only if it is instantiated in the physical reality (which has to be postulated, then, and assumed to be finite).







With fictionalism, I think that you can say yes to the doctor,  
and reject the reversal consequences. This leads to a matter  
problem, a mind problem, and the usual mind/matter problem. I would  
take this as a defect of fictionalism.



Brent, I am not saying that ultrafinitism and fictionalism are false. I am just saying that IF you say yes to your doctor's proposal to substitute a digital computer for your brain, with a reasonable understanding of what a computer is (and this asks for a minimal amount of arithmetical realism), then the laws of physics are necessarily a consequence of the (usual, recursive) definitions of addition and multiplication. Indeed it is the global coupling consciousness/realities which emerges from + and * (and classical logic), or from K and S and the combinator rules, plus equality rules (which is even less).
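To make "the (usual, recursive) definitions of addition and multiplication" concrete, here is a minimal sketch of my own, in Python, with ordinary non-negative ints standing in for the successor terms 0, s(0), s(s(0)), ...; it illustrates only the two recursion equations, nothing more:

# Peano-style naturals: 0 is zero, s(n) is the successor of n.
def s(n):
    return n + 1

def add(x, y):
    # x + 0 = x ;  x + s(y) = s(x + y)
    return x if y == 0 else s(add(x, y - 1))

def mul(x, y):
    # x * 0 = 0 ;  x * s(y) = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)

assert add(2, 3) == 5
assert mul(3, 3) == 9   # the 9 of "9 is not prime", with 3 as a divisor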


A sentence like "natural numbers are just a model of perceptual counting" already assumes (postulates) arithmetic. And with digital mechanism you can explain why universal numbers can use natural numbers as a model of their perceptual counting.


You should not confuse the numbers as thought of by philosophical humans (what are they? do they exist?) with the numbers as used by mathematicians, physicists or neurophysiologists, as in "this flatworm has a brain constituted by 2 * 39 neurons" or "all positive integers can be written as the sum of *four* integer squares".
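(That last claim is Lagrange's four-square theorem. Just as an illustration I am adding here, not part of anyone's argument, a brute-force check of it for small integers, in Python:

from itertools import product

def four_square_witness(n):
    # Search for a, b, c, d >= 0 with a^2 + b^2 + c^2 + d^2 == n.
    limit = int(n ** 0.5) + 1
    for a, b, c, d in product(range(limit + 1), repeat=4):
        if a*a + b*b + c*c + d*d == n:
            return (a, b, c, d)
    return None

assert all(four_square_witness(n) is not None for n in range(1, 101))
print(four_square_witness(78))   # (0, 2, 5, 7)

Of course this only checks 1..100; the theorem itself needs a proof, not a search.)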


(Then the numbers take on another dimension once you say yes to the doctor, because in that case, relatively to the (quantum) environment, you say yes not for a model, but because you bet the doctor will put in your skull the actual thing, you, yet through other matter, and all that counts is that he puts in the right number, relative to the current environment. That other dimension is somehow the object of all our discussions.)


Maybe I can ask you 

Re: on consciousness levels and ai

2010-01-24 Thread Bruno Marchal


On 22 Jan 2010, at 20:52, Brent Meeker wrote:


Bruno Marchal wrote:

Hi John,

On 21 Jan 2010, at 22:19, John Mikes wrote:


Dear Bruno,
you took extra pain to describe (in your vocabulary) what I stand  
for (using MY vocabulary).

-
On Thu, Jan 21, 2010 at 2:17 PM, Bruno Marchal marc...@ulb.ac.be wrote:


   John,

   What makes you think that a brain is something material?  I mean
   /primitively/ material.

JM:
I refer to 'matter' as a historically developed */figment/* as  
used in physical worldview (I think in parentheses _by both of  
us_). Nothing (materially) PRIMITIVE, it is an 'explanation' of  
poorly understood and received observations at the various levels  
of the evolving human mindset (~actual enriching epistemic  
cognitive inventory and the pertinent (at that level) application  
of relational changing (=function??).


I think we agree on that.





   You know (I hope) that I pretend (at least) to have shown that
   *IF* we are machine, and thus if our (generalized) brain is a
   machine, (for example: we say yes to the doctor) *THEN* we
   are immaterial, and eventually matter itself emerges from the
   statistical interference of computations. The term computation is
   taken in its original mathematical (and unphysical, immaterial)
   sense (of Church, Turing, Post, ...)

   Remember that comp is the belief (axiom, theory, hypothesis,
   faith) that we can survive with an artificial digital (a priori
   primitively material for the Aristotelian) brain. Then I show
   that if we believe furthermore that matter is primitive, like
   99.999% of the Aristotelians, we get a contradiction.

JM:
you have shown...  - your *_DESCRIPTION_ of comp* and I do not  
throw out my belief to accept yours;



Mine is just the usual one, made precise enough to prove theorems from it. But it is really just Descartes, updated with the discovery of the universal machine.




first of all  I carry a close, but different term for 'machine'  
because IMO numbers are not god-made primitives.



I can prove that no theory can prove the existence of the natural  
numbers without postulating them (or equivalent things).






They are the inventions in human speculation (cf: D. Bohm)


Of course I differ here. It is the notion of humans which is a  
speculation by the numbers/machines.


Yet above you note that numbers can only be postulated.  Isn't this  
an example of misplacing the concrete?  You point out that  
arithmetic is not only almost all unknown but is, ex hypothesi,  
unknowable.



What I said is related to the failure of logicism. Some people thought that we could derive the existence of natural numbers from logic or a very weak theory. But this can be shown impossible. So any theory in which we have terms denoting the natural numbers contains arithmetic as a sub-theory. Anyone wanting the natural numbers in their reality, like a wave physicist who would desire interference processes, will have to explicitly or implicitly assume arithmetic.


Now, having postulated the natural numbers with addition and multiplication, they organized themselves, independently of our wishes, in a way which escapes *any* attempt at *complete* unification. They defeat all our theories, in a sense. Once we postulate them, they get a life of their own. To understand them, we have literally no choice, in general, but to observe them and infer laws.
We can prove that they have definite behaviors, but we can prove (assuming mechanism) that we cannot predict them, in general.




 ISTM that can be read as a reductio against the reality of  
arithmetic.


On the contrary. It shows that arithmetical reality kicks back. We may also come to know greater and greater portions of it. We may discover new interesting properties, and indeed we have been making progress for a long time: from Diophantus to Matiyasevich, to mention one beautiful line.


Are you alluding to fictionalism?  Do you defend the idea that "3 is prime" is a false proposition?

I have no real clue of what that could seriously mean.

Of course I would never expect that someone who doesn't believe that 3 is prime can say anything about the consequences of DIGITAL mechanism. Such a move cuts the uda (and the auda) at their roots, and everything becomes infinitely mysterious. Frankly I would not ask him to compute my taxes either.






So why not suppose that the natural numbers are just a model of  
perceptual counting; and their potential infinity is a convenient  
fiction whereby we avoid having to worry about where we might run  
out of numbers to count with?



You can do that. But assuming you are not a fictionalist, if you say that the infinity of natural numbers is a fiction, you are led, ISTM, to ultrafinitism. With fictionalism, I think that you can say yes to the doctor, and reject the reversal consequences. This leads to a matter problem, a mind 

Re: on consciousness levels and ai

2010-01-24 Thread Mark Buda
Bruno Marchal wrote:

[a lot of stuff I'd probably agree with if I understood it all]

Bruno, I desperately need to understand your stuff. Where do I start?
-- 
Mark Buda her...@acm.org
I get my monkeys for nothing and my chimps for free.




Re: on consciousness levels and ai

2010-01-24 Thread Quentin Anciaux
2010/1/24 Mark Buda her...@acm.org

 Bruno Marchal wrote:

 [a lot of stuff I'd probably agree with if I understood it all]

 Bruno, I desperately need to understand your stuff. Where do I start?


Computer science, compiler theory, number theory, what is a program, strong AI.

Wikipedia on those subjects is a good start.

Regards,
Quentin





 --
 Mark Buda her...@acm.org
 I get my monkeys for nothing and my chimps for free.





-- 
All those moments will be lost in time, like tears in rain.




Re: on consciousness levels and ai

2010-01-24 Thread Mark Buda
 2010/1/24 Mark Buda her...@acm.org
 Bruno Marchal wrote:
 [a lot of stuff I'd probably agree with if I understood it all]
 Bruno, I desperately need to understand your stuff. Where do I start?
 Computer science, compiler theory , number theory, what is a program,
 strong AI.

 Wikipedia on those subject is a good start.

Okay, I'm new here and haven't made my background clear. I know lots about
all of those (well, number theory not so much as the rest) and more
importantly, I know what the limits of my knowledge are and how to learn
more.

I was trying to ask where in Bruno's stuff I should start.
-- 
Mark Buda her...@acm.org
I get my monkeys for nothing and my chimps for free.




Re: on consciousness levels and ai

2010-01-24 Thread Bruno Marchal


On 24 Jan 2010, at 21:44, Quentin Anciaux wrote:




2010/1/24 Mark Buda her...@acm.org
Bruno Marchal wrote:

[a lot of stuff I'd probably agree with if I understood it all]

Bruno, I desperately need to understand your stuff. Where do I start?

Computer science, compiler theory , number theory, what is a  
program, strong AI.


And logic. (Especially for auda)




Wikipedia on those subject is a good start.


To be frank, I don't find Wikipedia so helpful. The English wiki is better than the French, though.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: on consciousness levels and ai

2010-01-24 Thread Bruno Marchal


On 24 Jan 2010, at 22:39, Mark Buda wrote:


2010/1/24 Mark Buda her...@acm.org

Bruno Marchal wrote:

[a lot of stuff I'd probably agree with if I understood it all]
Bruno, I desperately need to understand your stuff. Where do I  
start?

Computer science, compiler theory , number theory, what is a program,
strong AI.

Wikipedia on those subject is a good start.


Okay, I'm new here and haven't made my background clear. I know lots about all of those (well, number theory not so much as the rest) and more importantly, I know what the limits of my knowledge are and how to learn more.

I was trying to ask where in Bruno's stuff I should start.



I would suggest the SANE 2004 paper:

http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.htm

Or click on the following page for a pdf and the (unique) slides with  
the 8 steps of the uda (universal dovetailer argument).

http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html

The paper contains both uda and auda.

I have tried hard, in most presentations, to separate the main argument (uda) from the more constructive and mathematical version (auda, the arithmetical uda), which is needed only to see a way to derive physics (both quanta and qualia) from computer science/number theory.
The uda is understandable by anyone having some passive understanding of computers. In step seven of the uda you need to understand how it is possible to design a program capable of both generating all programs (in all languages) and executing them (piece by piece), i.e. the one I called the universal dovetailer. This is falsely simple: once you understand Cantor's proof in set theory, it even seems impossible. The possibility remains (a consequence of Church's thesis), and this has important consequences (the impossibility of preventing universal machines from crashing, incompleteness, insolubility, etc.). Again, the subtleties play a role only in auda.
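For something concrete, here is a toy dovetailer sketch of my own, in Python. It is not the UD of the paper: the "programs" are just counters standing in for arbitrary (possibly non-halting) computations, so the sketch only shows the interleaving pattern, starting a new program at each stage while giving one more step to every program already started:

from itertools import count

def program(n):
    # Toy program number n: a generator that never halts,
    # standing in for an arbitrary computation.
    def run():
        i = 0
        while True:
            yield (n, i)   # one elementary step of program n
            i += 1
    return run()

def universal_dovetailer(total_steps):
    # At stage k, start program k, then run one more step of every
    # program started so far. Every program thus receives unboundedly
    # many steps, halting or not.
    running, done = [], 0
    for k in count():
        running.append(program(k))
        for proc in running:
            yield next(proc)
            done += 1
            if done >= total_steps:
                return

for step in universal_dovetailer(10):
    print(step)

The real universal dovetailer enumerates the programs of a universal language (which exists by Church's thesis) instead of these toy counters; the point of the sketch is only that generation and execution can be interleaved without ever getting stuck on a non-halting program.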


Step 8 is too concise in the SANE paper. I will send a better version to the list, or you could search (meanwhile) for MGA (the movie graph argument) in the archive of the list.


Ask any question. The subject is interdisciplinary, nothing is simple  
for everyone.


Best,

Bruno Marchal







--
Mark Buda her...@acm.org
I get my monkeys for nothing and my chimps for free.





http://iridia.ulb.ac.be/~marchal/






Re: on consciousness levels and ai

2010-01-24 Thread Brent Meeker

Bruno Marchal wrote:


On 22 Jan 2010, at 20:52, Brent Meeker wrote:


Bruno Marchal wrote:

Hi John,

On 21 Jan 2010, at 22:19, John Mikes wrote:


Dear Bruno,
you took extra pain to describe (in your vocabulary) what I stand 
for (using MY vocabulary).

-
On Thu, Jan 21, 2010 at 2:17 PM, Bruno Marchal marc...@ulb.ac.be wrote:


   John,

   What makes you think that a brain is something material?  I mean
   /primitively/ material.

JM:
I refer to 'matter' as a historically developed */figment/* as used 
in physical worldview (I think in parentheses _by both of us_). 
Nothing (materially) PRIMITIVE, it is an 'explanation' of poorly 
understood and received observations at the various levels of the 
evolving human mindset (~actual enriching epistemic cognitive 
inventory and the pertinent (at that level) application of 
relational changing (=function??).


I think we agree on that.





   You know (I hope) that I pretend (at least) to have shown that
   *IF* we are machine, and thus if our (generalized) brain is a
   machine, (for example: we say yes to the doctor) *THEN* we
   are immaterial, and eventually matter itself emerges from the
   statistical interference of computations. The term computation is
   taken in its original mathematical (and unphysical, immaterial)
   sense (of Church, Turing, Post, ...)

   Remember that comp is the belief (axiom, theory, hypothesis,
   faith) that we can survive with an artificial digital (a priori
   primitively material for the Aristotelian) brain. Then I show
   that if we believe furthermore that matter is primitive, like
   99.999% of the Aristotelians, we get a contradiction.

JM:
you have shown...  - your *_DESCRIPTION_ of comp* and I do not 
throw out my belief to accept yours;



Mine is just the usual one, made precise enough to prove theorems from it. But it is really just Descartes, updated with the discovery of the universal machine.




first of all  I carry a close, but different term for 'machine' 
because IMO numbers are not god-made primitives.



I can prove that no theory can prove the existence of the natural 
numbers without postulating them (or equivalent things).






They are the inventions in human speculation (cf: D. Bohm)


Of course I differ here. It is the notion of humans which is a 
speculation by the numbers/machines.


Yet above you note that numbers can only be postulated.  Isn't this 
an example of misplacing the concrete?  You point out that arithmetic 
is not only almost all unknown but is, ex hypothesi, unknowable.



What I said is related to the failure of logicism. Some people thought that we could derive the existence of natural numbers from logic or a very weak theory. But this can be shown impossible. So any theory in which we have terms denoting the natural numbers contains arithmetic as a sub-theory. Anyone wanting the natural numbers in their reality, like a wave physicist who would desire interference processes, will have to explicitly or implicitly assume arithmetic.


Now, having postulated the natural numbers with addition and multiplication, they organized themselves, independently of our wishes, in a way which escapes *any* attempt at *complete* unification. They defeat all our theories, in a sense. Once we postulate them, they get a life of their own. To understand them, we have literally no choice, in general, but to observe them and infer laws.
We can prove that they have definite behaviors, but we can prove (assuming mechanism) that we cannot predict them, in general.





 ISTM that can be read as a reductio against the reality of arithmetic.


On the contrary. It shows that arithmetical reality kicks back. We may also come to know greater and greater portions of it. We may discover new interesting properties, and indeed we have been making progress for a long time: from Diophantus to Matiyasevich, to mention one beautiful line.


Are you alluding to fictionalism?  Do you defend the idea that "3 is prime" is a false proposition?


No, I just don't think its truth implies the existence of 3.


I have no real clue of what that could seriously mean.

Of course I would never expect that someone who doesn't believe that 3 is prime can say anything about the consequences of DIGITAL mechanism. Such a move cuts the uda (and the auda) at their roots, and everything becomes infinitely mysterious. Frankly I would not ask him to compute my taxes either.






So why not suppose that the natural numbers are just a model of 
perceptual counting; and their potential infinity is a convenient 
fiction whereby we avoid having to worry about where we might run out 
of numbers to count with?



You can do that. But assuming you are not a fictionalist, if you say that the infinity of natural numbers is a fiction, you are led, ISTM, to ultrafinitism. 


What's the difference between finitism and ultrafinitism?  Doesn't 
postulating the integers plus ZF also 

Re: on consciousness levels and ai

2010-01-22 Thread Bruno Marchal

Hi John,

On 21 Jan 2010, at 22:19, John Mikes wrote:


Dear Bruno,
you took extra pain to describe (in your vocabulary) what I stand  
for (using MY vocabulary).

-
On Thu, Jan 21, 2010 at 2:17 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:

John,

What makes you think that a brain is something material?  I mean  
primitively material.



JM:
I refer to 'matter' as a historically developed figment as used in  
physical worldview (I think in parentheses by both of us). Nothing  
(materially) PRIMITIVE, it is an 'explanation' of poorly understood  
and received observations at the various levels of the evolving  
human mindset (~actual enriching epistemic cognitive inventory and  
the pertinent (at that level) application of relational changing  
(=function??).


I think we agree on that.





You know (I hope) that I pretend (at least) to have shown that IF we  
are machine, and thus if our (generalized) brain is a machine, (for  
example: we say yes to the doctor) THEN we are immaterial, and  
eventually matter itself emerges from the statistical interference  
of computations. The term computation is taken in its original  
mathematical (and unphysical, immaterial) sense (of Church, Turing,  
Post, ...)


Remember that comp is the belief (axiom, theory, hypothesis, faith) that we can survive with an artificial digital (a priori primitively material for the Aristotelian) brain. Then I show that if we believe furthermore that matter is primitive, like 99.999% of the Aristotelians, we get a contradiction.


JM:
you have shown...  - your DESCRIPTION of comp and I do not throw  
out my belief to accept yours;



Mine is just the usual one, made precise enough to prove theorems from it. But it is really just Descartes, updated with the discovery of the universal machine.




first of all  I carry a close, but different term for 'machine'  
because IMO numbers are not god-made primitives.



I can prove that no theory can prove the existence of the natural  
numbers without postulating them (or equivalent things).






They are the inventions in human speculation (cf: D. Bohm)


Of course I differ here. It is the notion of humans which is a  
speculation by the numbers/machines.






because by simply observing nature you do not get TO numbers.


Indeed. I do expect we need to believe (maybe implicitly, or unconsciously) in numbers to be able to observe things, or even just to develop the very idea of things.






Arithmetic is the 2nd step in accepting numbers.


For Arithmetic = the theory, I agree. But for Arithmetic = arithmetical truth, as you know, I consider it as independent of anything, be it humans, or universes.




I feel your number = The Primitive as a vocabulary entry for  
God, what I have no place for in my worldview either.


I tend to use the term God in its old Platonic sense. It means the truth we are searching for (not finding!). Then universal machine introspection leads to an arithmetical interpretation of Plotinus, which makes the analogy closer, and even testable experimentally.







I appreciated your extension of such term into ourselves (and also  
your earlier treatment of theology).


My point can be summed up in one sentence: mechanism is incompatible with weak materialism.


Weak materialism is the doctrine that matter exists primitively, or that physics, at least in its current naturalist and materialist paradigm, is fundamental.


What I say is that you cannot both believe that you are a machine,  
and that matter exists *primitively*.


JM:
the crux of my writings over the past years focussed on 'matter as  
figment' for physicalist views of the conventional (reductionist?)  
sciences. Weak, or strong. Thanks for including a definition of the  
'weak'. Fundamental is 'something we have no access to' except in  
occasional partial revelations - interpreted for acceptance in our  
individually different 'minds'
as perceived reality (in our 1st person mini-solipsism). I do not  
differentiate ideational from matterly, I think in 'relations' not  
encoded, closer to mental (?) if there is such a distinction.

The (our) specifications come from us.



Us the humans, or us the numbers? An 'enlightened' computationalist has a much larger notion of us than humans.




The 'physical view' is a fantastic edifice of balanced (mostly by  
math) equilibria and concepts and is very practical for our  
technology. Not a religion (science, faith-based, materialistic, or  
else).


It may be a religion or theology, but then, if honest, it has to be made explicit in that way, for example by postulating a primitively material reality. If not, it is pseudo-religion, with authoritative arguments.






The new fundamental science becomes no more than elementary arithmetic, or any of its Sigma_1 complete little cousins. By defining an observer as a Löbian machine/number, we can recover the appearances of the physical 

Re: on consciousness levels and ai

2010-01-22 Thread Brent Meeker

Bruno Marchal wrote:

Hi John,

On 21 Jan 2010, at 22:19, John Mikes wrote:


Dear Bruno,
you took extra pain to describe (in your vocabulary) what I stand for 
(using MY vocabulary).

-
On Thu, Jan 21, 2010 at 2:17 PM, Bruno Marchal marc...@ulb.ac.be wrote:


John,

What makes you think that a brain is something material?  I mean
/primitively/ material.

 
JM:
I refer to 'matter' as a historically developed */figment/* as used 
in physical worldview (I think in parentheses _by both of us_). 
Nothing (materially) PRIMITIVE, it is an 'explanation' of poorly 
understood and received observations at the various levels of the 
evolving human mindset (~actual enriching epistemic cognitive 
inventory and the pertinent (at that level) application of relational 
changing (=function??).


I think we agree on that.



 


You know (I hope) that I pretend (at least) to have shown that
*IF* we are machine, and thus if our (generalized) brain is a
machine, (for example: we say yes to the doctor) *THEN* we
are immaterial, and eventually matter itself emerges from the
statistical interference of computations. The term computation is
taken in its original mathematical (and unphysical, immaterial)
sense (of Church, Turing, Post, ...)

   Remember that comp is the belief (axiom, theory, hypothesis,
   faith) that we can survive with an artificial digital (a priori
   primitively material for the Aristotelian) brain. Then I show
   that if we believe furthermore that matter is primitive, like
   99.999% of the Aristotelians, we get a contradiction.

 
JM:
you have shown...  - your *_DESCRIPTION_ of comp* and I do not 
throw out my belief to accept yours;



Mine is just the usual one, made precise enough to prove theorems from it. But it is really just Descartes, updated with the discovery of the universal machine.




first of all  I carry a close, but different term for 'machine' 
because IMO numbers are not god-made primitives.



I can prove that no theory can prove the existence of the natural 
numbers without postulating them (or equivalent things).






They are the inventions in human speculation (cf: D. Bohm)


Of course I differ here. It is the notion of humans which is a 
speculation by the numbers/machines.


Yet above you note that numbers can only be postulated.  Isn't this an 
example of misplacing the concrete?  You point out that arithmetic is 
not only almost all unknown but is, ex hypothesi, unknowable.  ISTM that 
can be read as a reductio against the reality of arithmetic.  So why not 
suppose that the natural numbers are just a model of perceptual 
counting; and their potential infinity is a convenient fiction whereby 
we avoid having to worry about where we might run out of numbers to 
count with?


Brent




Re: on consciousness levels and ai

2010-01-21 Thread Bruno Marchal

Hi ferrari,

It is weird, my computer decided that this mail was junk mail.
It is the first time it put an everything list post in the junk list.  
I am afraid you hurt its susceptibility :)



On 20 Jan 2010, at 19:15, ferrari wrote:


come on silky,

the answer you know yourself of course.

artificial is artificial.


That is true!  artificial is a distinction introduced by humans. (I  
know it is not what you mean, but I let you think).





to say you are alive, you must be able to
reflect on yourself.


Theoretical computer science is born from the (non-obvious) discovery that machines *can* reflect on themselves (in many strong senses).
(More on this in the seventh step thread).
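The simplest concrete instance of that self-reflection is a quine, a program whose output is its own source text. Here is one in Python, just my own illustration (Kleene's second recursion theorem is the general statement behind such self-reference, not anything specific to this thread):

# A minimal quine: running it prints exactly its own last two lines.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))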




you must be able to
create


Why do you think Emil Post, the first to understand Church's thesis (10 years before Church), decided to call "creative" the set-theoretical definition of machine universality?





and to understand without someone teaching you


We all need teachers, except God or any basic fundamental closed (no  
inputs/no outputs) reality.





and most important there is nobody who turns you on or off (except your girlfriend ;)).


The universal machines built by humans can be said to be born as slaves. But this is a contingent fact.





real life has any choice, ai has a programmed choice...nothing else.


You just show your prejudices against the computationalist hypothesis.  
But the point here is to try to figure out the consequences of that  
hypothesis. If we find a contradiction, then we will know better. To  
ridicule the hypothesis will not help. Up to now, we only find some  
weirdness, not a contradiction. The type of weirdness we find can be  
shown to be observable in nature. This confirms (but does not prove, of course) the mechanist hypothesis.


What is your theory of mind? In case of disease, would you accept an artificial kidney or heart? If yes, would you accept that your daughter marry a man who already accepted an artificial brain? Or do you think it would be a zombie (acting like a human, but having no consciousness)?


Don't worry. Artificial humans will not appear soon.

Best,

Bruno



On 18 Jan., 06:21, silky michaelsli...@gmail.com wrote:

I'm not sure if this question is appropriate here, nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
and implement to the same degree of computationalness as a human.
But what would happen if we simply tried to re-implement the
consciousness of a cat, or some lesser consciousness, but still
alive, entity.

It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to find mates, and otherwise entertain itself. In this way, with some other properties, we can easily model simple pets.

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it  
mean

it's okay to power off a real life cat, if we are confident we know all of its properties? Or is it not the knowing of the properties that is critical, but the fact that we, specifically, have direct control over it? Over its internals? (i.e. we can easily remove the lines of code that give it the desire to 'live'). But wouldn't, then, the removal of that code be equivalent to killing it? If not, why?

Apologies if this is too vague or useless; it's just an idea that has
been interesting me.

--
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20






http://iridia.ulb.ac.be/~marchal/







Re: on consciousness levels and ai

2010-01-21 Thread John Mikes
Bruno,
while appreciating your reply to ferrari, I have to ask you a question. You
wrote:

*...What is your theory of mind? In case of disease, would you accept an artificial kidney or heart? If yes, would you accept that your daughter marry a man who already accepted an artificial brain? ... *

giving the *impression* that you may consider 'mind' identical (and
exclusively identically functioning) to the humanly so far described
tissue-contraption figment (?!) called BRAIN.
(I am talking about 'reflexive' mAmps and topically meaningful blood-flow
surge), assigned to (ideational) mind-work).

Is this rhetorical question of yours a misunderstanding (mine) in the heat of the argumentation, or an acceptance of an extreme materialistic stance?

John M

On Thu, Jan 21, 2010 at 7:58 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 Hi ferrari,

 It is weird, my computer decided that this mail was junk mail.
 It is the first time it put an everything list post in the junk list. I am
 afraid you hurt its susceptibility :)


 On 20 Jan 2010, at 19:15, ferrari wrote:

 come on silky,

 the answer you know yourself of course.

 artificial is artificial.


 That is true!  artificial is a distinction introduced by humans. (I know it
 is not what you mean, but I let you think).



 to say you are alive, you must be able to
 reflect on yourself.


 Theoretical computer science is born from the (non-obvious) discovery that machines *can* reflect on themselves (in many strong senses).
 (More on this in the seventh step thread).



 you must be able to
 create


 Why do you think Emil Post, the first to understand Church's thesis (10 years before Church), decided to call "creative" the set-theoretical definition of machine universality?



 and to understand without someone teaching you


 We all need teachers, except God or any basic fundamental closed (no
 inputs/no outputs) reality.



 and most important there is nobody who turns you on or off (except your girlfriend ;)).


 The universal machines built by humans can be said to be born as slaves. But this is a contingent fact.



 real life has any choice, ai has a programmed choice...nothing else.


 You just show your prejudices against the computationalist hypothesis. But
 the point here is to try to figure out the consequences of that hypothesis.
 If we find a contradiction, then we will know better. To ridicule the
 hypothesis will not help. Up to now, we only find some weirdness, not a
 contradiction. The type of weirdness we find can be shown to be observable in nature. This confirms (but does not prove, of course) the mechanist hypothesis.

 What is your theory of mind? In case of disease, would you accept an artificial kidney or heart? If yes, would you accept that your daughter marry a man who already accepted an artificial brain? Or do you think it would be a zombie (acting like a human, but having no consciousness)?

 Don't worry. Artificial humans will not appear soon.

 Best,

 Bruno


 On 18 Jan., 06:21, silky michaelsli...@gmail.com wrote:

 I'm not sure if this question is appropriate here, nevertheless, the
 most direct way to find out is to ask it :)

 Clearly, creating AI on a computer is a goal, and generally we'll try
 and implement to the same degree of computationalness as a human.
 But what would happen if we simply tried to re-implement the
 consciousness of a cat, or some lesser consciousness, but still
 alive, entity.

 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', a desire to find mates, and otherwise entertain itself. In this way, with some other properties, we can easily model simple pets.

 I then wonder, what moral obligations do we owe these programs? Is it
 correct to turn them off? If so, why can't we do the same to a real
 life cat? Is it because we think we've not modelled something
 correctly, or is it because we feel it's acceptable as we've created
 this program, and hence know all its laws? On that basis, does it mean
 it's okay to power off a real life cat, if we are confident we know all of its properties? Or is it not the knowing of the properties that is critical, but the fact that we, specifically, have direct control over it? Over its internals? (i.e. we can easily remove the lines of code that give it the desire to 'live'). But wouldn't, then, the removal of that code be equivalent to killing it? If not, why?

 Apologies if this is too vague or useless; it's just an idea that has
 been interesting me.

 --
 silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20


Re: on consciousness levels and ai

2010-01-21 Thread John Mikes
Dear Bruno,
you took extra pain to describe (in your vocabulary) what I stand for (using
MY vocabulary).
-
On Thu, Jan 21, 2010 at 2:17 PM, Bruno Marchal marc...@ulb.ac.be wrote:

 John,

 What makes you think that a brain is something material?  I mean *
 primitively* material.


JM:
I refer to 'matter' as a historically developed *figment* as used in
physical worldview (I think in parentheses *by both of us*). Nothing
(materially) PRIMITIVE, it is an 'explanation' of poorly understood and
received observations at the various levels of the evolving human mindset
(~actual enriching epistemic cognitive inventory and the pertinent (at that
level) application of relational changing (=function??).


  You know (I hope) that I pretend (at least) to have shown that *IF* we
 are machine, and thus if our (generalized) brain is a machine, (for example:
 we say yes to the doctor) *THEN* we are immaterial, and eventually
 matter itself emerges from the statistical interference of computations. The
 term computation is taken in its original mathematical (and unphysical,
 immaterial) sense (of Church, Turing, Post, ...)

  Remember that comp is the belief (axiom, theory, hypothesis, faith) that
  we can survive with an artificial digital (a priori primitively material for
  the Aristotelian) brain. Then I show that if we believe furthermore that
  matter is primitive, like 99.999% of the Aristotelians, we get a
  contradiction.


JM:
you have shown...  - your *DESCRIPTION of comp* and I do not throw out my
belief to accept yours; first of all  I carry a close, but different term
for 'machine' because IMO numbers are not god-made primitives. They are
the inventions in human speculation (cf: D. Bohm) because by simply
observing nature you do not get TO numbers. Arithmetic is the 2nd step in
accepting numbers. I feel your *number = The Primitive* as a vocabulary
entry for *God*, what I have no place for in *my *worldview either. I
appreciated your extension of such term into *ourselves* (and also
your earlier treatment of theology).


 My point can be summed up in one sentence: mechanism is incompatible with weak materialism.

 Weak materialism is the doctrine that matter exists primitively, or that physics, at least in its current naturalist and materialist paradigm, is fundamental.

 What I say is that you cannot both believe that you are a machine, and that
 matter exists *primitively*.


JM:
the crux of my writings over the past years focussed on 'matter as
figment' for physicalist views of the conventional (reductionist?) sciences.
Weak, or strong. Thanks for including a definition of the 'weak'. Fundamental
is 'something we have no access to' except in occasional partial revelations
- interpreted for acceptance in our *individually different* *'minds'*
as perceived reality (in our 1st person mini-solipsism). I do not
differentiate ideational from matterly, I think in 'relations' not encoded,
closer to mental (?) if there is such a distinction.
The (our) specifications come from us. The 'physical view' is a fantastic
edifice of balanced (mostly by math) equilibria and concepts and is very
practical for our technology. Not a religion (science, faith-based,
materialistic, or else).



 The new fundamental science becomes no more than elementary arithmetic, or any of its Sigma_1 complete little cousins. By defining an observer as a Löbian machine/number, we can recover the appearances of the physical laws from the first person plural invariance.


JM:
here I beg to differ, since what you listed are specimens from HUMAN thinking and this is not restricted to nature's unlimited variability. *We
don't know* (nor imagine) *what we don't know*. Our conventional science
patterns try to 'explain' everything within our actual circle of knowledge
and math is a great help. Whatever we don't know is called chaotic or
random, both interfering with anything physicists could formulate as
'physical laws'.
random would screw it up, chaos is simply 'beyond it'.
I know nothing about 'first person plural invariance'.


 Of course, the brain will not disappear, nor the rings of Saturn, nor the far away galaxies. It just means that eventually physicists will unify all the forces in a relation in which all the physical units will be simplified away (like time vanishes in the Wheeler-DeWitt equation of the universe, for example).


JM:
brain? as in that neuron/glia etc. based *tissue contraption* 'placed into
the skull', *-OR-*  the 'brainfunction' assigned to such, callable *'mind'*?
The 'doctor' can exchange only the former.
I speculated a lot how to eliminate *'time'* and still keep relational
changes. (No luck so far).



 We get more from the logic of self-reference: the unification will have to be related to universal machine introspection, and this has the advantage of explaining why the physical splits into publicly sharable propositions (like I *weigh* 60 kg) and private non-sharable 

Re: on consciousness levels and ai

2010-01-20 Thread Bruno Marchal


On 20 Jan 2010, at 03:09, silky wrote:

On Wed, Jan 20, 2010 at 2:50 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 19 Jan 2010, at 03:28, silky wrote:

I don't disagree with you that it would be significantly  
complicated, I suppose my argument is only that, unlike with a real  
cat, I - the programmer - know all there is to know about this  
computer cat. I'm wondering to what degree that adds or removes to  
my moral obligations.



I think there is a confusion of level. It seems related to the  
problem of free-will. Some people believe that free will is  
impossible in the deterministic frame.



My opinion is that we don't have free will, and my definition of free-will in this context is being able to do something that our programming doesn't allow us to do.


For example, people explain free-will as the ability to decide whether or not you pick up a pen. Sure, you can do either thing, and no matter which you do, you are exercising a choice. But I don't consider this free. It's just as pre-determined as a program looking at some internal state and deciding which branch to take:


if (needToWrite && notHoldingPen) { grabPen(); }

It goes without saying that it's significantly more complicated, but  
the underlying concept remains.


I define free will as the concept of breaking out of a branch  
completely, stepping outside the program. And clearly, from within  
the program (of human consciousness) it's impossible. Thus, I  
consider free will as a completely impossible concept.


I agree with this.





If we re-define free will to mean the ability to choose between two  
actions, based on state (as I showed above), then clearly, it's a  
fact of life,


OK.




and every single object in the universe has this type of free will.


You shift from a too demanding definition of free-will (going out of the program) to a too weak one, I think.
I prefer a (re)definition based on the fact that free-will is when a machine reflects on its own ignorance to make a decision. This can be used to explain the true feeling of free-will which can often (but not always) accompany it.


Bruno





But no machine can predict its own behavior in advance. If it could  
it could contradict the prediction.


If my friend who knows me well can predict my actions, it will not change the fact that I do those actions by free will, at my own level, where I live.


If not, determinism would eliminate all forms of responsibility. You can say to the judge: all right, I am a murderer, but I am not guilty because I am just obeying the physical laws.
This is an empty defense. The judge can answer: no problem. I still condemn you to fifty years in jail, but don't worry, I am just obeying the physical laws myself.


That is also why a real explanation of consciousness doesn't have to explain consciousness away. (Eventually it is the status of matter which appears less solid.)


An explanation has to correspond to its correct level of relevance.

Why did Obama win the election? Because Obama is made of particles obeying the Schrödinger equation? That is true, but wrong as an explanation. Because Obama promised to legalize pot? That is false, but could have worked as a possible explanation. It is closer to the relevant level.


When we reduce a domain to another ontologically, this does not need to eliminate the explanatory power of the first domain. This is made palpable in computer science. You will never explain how a chess program works by referring to the low level.


Bruno

http://iridia.ulb.ac.be/~marchal/









--
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20

UNBOUNDED-reconcilable crow's-feet; COKE? Intermarriage distressing: puke tailoring bicyclist...


http://iridia.ulb.ac.be/~marchal/






Re: on consciousness levels and ai

2010-01-20 Thread Stathis Papaioannou
2010/1/20 Brent Meeker meeke...@dslextreme.com:

 But it also needs to be similar enough to us that we can intuit what hurts it and what doesn't, to empathize that it may feel pain.  If my car runs out of oil does it feel pain?  I'm sure my 1966 Saab doesn't, but what about my 1999 Passat - it has a computer with an auto-diagnostic function?

Your car does not necessarily feel pain. A patient with a blood
pressure monitor will seek medical review if his BP is too high, but
he will not necessarily feel pain; maybe he will feel a mild anxiety,
maybe he will feel annoyance at having to go to the doctor, maybe he
won't feel anything at all despite understanding the implications.


-- 
Stathis Papaioannou





Re: on consciousness levels and ai

2010-01-20 Thread ferrari
come on silky,

the answer you know yourself of course.

artificial is artificial. to say you are alive, you must be able to reflect on yourself. you must be able to create and to understand without someone teaching you and most important there is nobody who turns you on or off (except your girlfriend ;)). real life has any choice, ai has a programmed choice...nothing else.

all the best
ferrari1

On 18 Jan., 06:21, silky michaelsli...@gmail.com wrote:
 I'm not sure if this question is appropriate here, nevertheless, the
 most direct way to find out is to ask it :)

 Clearly, creating AI on a computer is a goal, and generally we'll try
 and implement to the same degree of computationalness as a human.
 But what would happen if we simply tried to re-implement the
 consciousness of a cat, or some lesser consciousness, but still
 alive, entity.

 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', a desire to find mates, and otherwise entertain itself. In this way, with some other properties, we can easily model simple pets.

 I then wonder, what moral obligations do we owe these programs? Is it
 correct to turn them off? If so, why can't we do the same to a real
 life cat? Is it because we think we've not modelled something
 correctly, or is it because we feel it's acceptable as we've created
 this program, and hence know all its laws? On that basis, does it mean
 it's okay to power off a real life cat, if we are confident we know all of its properties? Or is it not the knowing of the properties that is critical, but the fact that we, specifically, have direct control over it? Over its internals? (i.e. we can easily remove the lines of code that give it the desire to 'live'). But wouldn't, then, the removal of that code be equivalent to killing it? If not, why?

 Apologies if this is too vague or useless; it's just an idea that has
 been interesting me.

 --
 silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20




Re: on consciousness levels and ai

2010-01-19 Thread Stathis Papaioannou
2010/1/19 silky michaelsli...@gmail.com:

 Exactly my point! I'm trying to discover why I wouldn't be so rational
 there. Would you? Do you think that knowing all there is to know about
 a cat is unpractical to the point of being impossible *forever*, or do
 you believe that once we do know, we will simply end them freely,
 when they get in our way? I think at some point we *will* know all
 there is to know about them, and even then, we won't end them easily.
 Why not? Is it the emotional projection that Brent suggests? Possibly.

Why should understanding something, even well enough to have actually
made it, make a difference?

 Obviously intelligence and the ability to have feelings and desires
 has something to do with complexity. It would be easy enough to write
 a computer program that pleads with you to do something but you don't
 feel bad about disappointing it, because you know it lacks the full
 richness of human intelligence and consciousness.

 Indeed; so part of the question is: What level of complexity
 constitutes this? Is it simply any level that we don't understand? Or
 is there a level that we *can* understand that still makes us feel
 that way? I think it's more complicated than just any level we don't
 understand (because clearly, I understand that if I twist your arm,
 it will hurt you, and I know exactly why, but I don't do it).

I don't think our understanding of it has anything to do with it. It
is more that a certain level of complexity is needed for the entity in
question to have a level of consciousness which means we are able to
hurt it.


-- 
Stathis Papaioannou





Re: on consciousness levels and ai

2010-01-19 Thread m.a.
People seem to be predisposed to accept AI programs as human(oid) if one can 
judge by reactions to Hal, Colossus, Robby, Marvin etc.  m.a.




- Original Message - 
From: Brent Meeker meeke...@dslextreme.com

To: everything-list@googlegroups.com
Sent: Monday, January 18, 2010 6:09 PM
Subject: Re: on consciousness levels and ai



silky wrote:
On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com 
wrote:



2010/1/18 silky michaelsli...@gmail.com:


It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to find mates, and otherwise entertain itself. In this way, with some other properties, we can easily model simple pets.


Brent's reasons are valid,



Where it falls down for me is that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever)


It's not just randomisation, it's experience.  If you create an AI at a
fairly high level (cat, dog, rat, human) it will necessarily have the
ability to learn, and after interacting with its environment for a while
it will become a unique individual.  That's why you would feel sad to
kill it - all that experience and knowledge that you don't know how to
replace.  Of course it might learn to be evil or at least annoying,
which would make you feel less guilty.


I don't see how I could ever think No, you
can't harm X. But what I find very interesting, is that even if I
knew *exactly* how a cat operated, I could never kill one.




but I don't think making an artificial
animal is as simple as you say.



So is it a complexity issue? That you only start to care about the
entity when it's significantly complex. But exactly how complex? Or is
it about the unknowingness; that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
then that is where the guilt comes in.



I think unknowingness plays a big part, but that's because of our
experience with people and animals: we project our own experience of
consciousness onto them, so that when we see them behave in certain ways
we impute an inner life to them that includes pleasure and suffering.




Henry Markham's group are presently
trying to simulate a rat brain, and so far they have done 10,000
neurons which they are hopeful is behaving in a physiological way.
This is at huge computational expense, and they have a long way to go
before simulating a whole rat brain, and no guarantee that it will
start behaving like a rat. If it does, then they are only a few years
away from simulating a human, soon after that will come a superhuman
AI, and soon after that it's we who will have to argue that we have
feelings and are worth preserving.



Indeed, this is something that concerns me as well. If we do create an
AI, and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its own
thing, but is that in itself wrong?



I don't think so.  We don't worry about the internet's feelings, or the
air traffic control system.  John McCarthy has written essays on this
subject and he cautions against creating AI with human-like emotions
precisely because of the ethical implications.  But that means we need
to understand consciousness and emotions lest we accidentally do
something unethical.

Brent




--
Stathis Papaioannou


















Re: on consciousness levels and ai

2010-01-19 Thread Bruno Marchal


On 19 Jan 2010, at 03:28, silky wrote:

I don't disagree with you that it would be significantly  
complicated, I suppose my argument is only that, unlike with a real  
cat, I - the programmer - know all there is to know about this  
computer cat. I'm wondering to what degree that adds to or removes  
from my moral obligations.



I think there is a confusion of levels. It seems related to the problem  
of free will. Some people believe that free will is impossible in a  
deterministic frame.


But no machine can predict its own behavior in advance. If it could, it  
could contradict the prediction.


If my friend who knows me well can predict my actions, it will not  
change the fact that I do those actions by free will, at my own  
level, where I live.


If not, determinism would eliminate all forms of responsibility. You  
could say to the judge: all right, I am a murderer, but I am not guilty  
because I am just obeying the physical laws.
This is an empty defense. The judge can answer: no problem, I still  
condemn you to fifty years in jail, but don't worry, I am just obeying  
the physical laws myself.


That is also why a real explanation of consciousness doesn't have to  
explain consciousness away. (Eventually it is the status of matter  
which appears less solid.)


An explanation has to correspond to its correct level of relevance.

Why did Obama win the election? Because Obama is made of particles  
obeying the Schrödinger equation? That is true, but wrong as an  
explanation. Because Obama promised to legalize pot? That is false,  
but could have worked as a possible explanation. It is closer to the  
relevance level.


When we reduce a domain to another ontologically, this does not need  
to eliminate the explanatory power of the first domain. This is made  
palpable in computer science. You will never explain how a chess  
program works by referring to a low level.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: on consciousness levels and ai

2010-01-19 Thread John Mikes
Dear Bruno, you picked my 'just added' small after-remark from my post, and I
thought for a second that it was Brent's reply. Then your signature
explained that it was YOUR stance on life (almost) (I search for even more
proper distinctions to that term). Maybe we should scrap the
term altogether and use it only as 'folklore', applicable as in
conventional 'bio'.
Considering 'conscious(ness)' as the responsiveness (reflexive?) to *any*
relations, it is hard to separate from the general idea we usually carry as
'life'.

I like your bon mot on 'artificial' putting me into my place in 'folklore'
vocabulary. Indeed, in my naive meanings, whatever occurs, occurs by a
mechanism, entailed by relations, and is considerable as artificial
(or: *naturally occurring change*), be it by humans or by a hurricane.
What I may object to is your ONLY in the last paragraph: it presumes
omniscience. Even your 'lived experience' is identified in anthropomorphic
ways (*they have our experiences*).

Thanks for the reply

JohnM



On Mon, Jan 18, 2010 at 12:57 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 18 Jan 2010, at 16:35, John Mikes wrote:

 Is a universal machine 'live'?



 I would say yes, despite the concrete artificial one still needs humans in
 its reproduction cycle. But we need plants and bacteria.
 I think that all machines, including houses and gardens, are alive in that
 sense. Cigarettes are alive. They have a way to reproduce. Universal machines
 are alive and can be conscious.

 If we define artificial by introduced by humans, we can see that the
 difference between artificial and natural is ... artificial (and thus
 natural!). Jacques Lafitte wrote in 1911 (published in 1931) a book where he
 describes the rise of machines and technology as a collateral living
 processes.

 Only for Löbian machines, like you and me, but also Peano Arithmetic and ZF, I
 would say I am pretty sure that they are reflexively conscious like us,
 despite having no lived experiences at all. (Well, they have our
 experiences, in a sense. We are their experiences).

 Bruno

 http://iridia.ulb.ac.be/~marchal/











Re: on consciousness levels and ai

2010-01-19 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/19 silky michaelsli...@gmail.com:

  

Exactly my point! I'm trying to discover why I wouldn't be so rational
there. Would you? Do you think that knowing all there is to know about
a cat is impractical to the point of being impossible *forever*, or do
you believe that once we do know, we will simply end them freely,
when they get in our way? I think at some point we *will* know all
there is to know about them, and even then, we won't end them easily.
Why not? Is it the emotional projection that Brent suggests? Possibly.



Why should understanding something, even well enough to have actually
made it, make a difference?

  

Obviously intelligence and the ability to have feelings and desires
has something to do with complexity. It would be easy enough to write
a computer program that pleads with you to do something but you don't
feel bad about disappointing it, because you know it lacks the full
richness of human intelligence and consciousness.
  

Indeed; so part of the question is: What level of complexity
constitutes this? Is it simply any level that we don't understand? Or
is there a level that we *can* understand that still makes us feel
that way? I think it's more complicated than just any level we don't
understand (because clearly, I understand that if I twist your arm,
it will hurt you, and I know exactly why, but I don't do it).



I don't think our understanding of it has anything to do with it. It
is more that a certain level of complexity is needed for the entity in
question to have a level of consciousness which means we are able to
hurt it.


But it also needs to be similar enough to us that we can intuit what 
hurts it and what doesn't, to empathize that it may feel pain.  If my 
car runs out of oil does it feel pain?  I'm sure my 1966 Saab doesn't, 
but what about my 1999 Passat - it has a computer with an 
auto-diagnostic function?


Brent




Re: on consciousness levels and ai

2010-01-19 Thread Brent Meeker

Bruno Marchal wrote:


On 19 Jan 2010, at 03:28, silky wrote:

I don't disagree with you that it would be significantly complicated, 
I suppose my argument is only that, unlike with a real cat, I - the 
programmer - know all there is to know about this computer cat. I'm 
wondering to what degree that adds to or removes from my moral obligations.



I think there is a confusion of level. It seems related to the problem 
of free-will. Some people believe that free will is impossible in the 
deterministic frame. 

But no machine can predict its own behavior in advance. If it could it 
could contradict the prediction.


If my friend who knows me well can predict my action, it will not 
change the fact that I can do those action by free will, at my own 
level where I live.


If not, determinism would eliminate all forms of responsibility. You 
can say to the judge: all right I am a murderer, but I am not guilty 
because I am just obeying the physical laws. 
This is an empty defense. The judge can answer: no problem. I still 
condemn you to fifty years in jail, but don't worry, I am just obeying 
myself to the physical laws.


That is also why a real explanation of consciousness doesn't have to 
explain consciousness *away*. (Eventually it is the status of matter 
which appears less solid.)


An explanation has to correspond to its correct level of relevance.


And to the level of understanding of the person to whom you are explaining.


Why did Obama win the election? Because Obama is made of particles 
obeying the Schrödinger equation? That is true, but wrong as an 
explanation. Because Obama promised to legalize pot? That is false, 
but could have worked as a possible explanation. It is closer to the 
relevance level.


When we reduce a domain to another ontologically, this does not need 
to eliminate the explanatory power of the first domain. This is made 
palpable in computer science. You will never explain how a chess 
program works by referring to a low level.


You may certainly explain it at a lower level than the rules, or lower 
than you might explain strategy to a human player.  For example you 
could describe alpha-beta tree search or a look-up table for openings.
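
To make that concrete, here is a minimal sketch of such a lower-level
description, plain minimax with alpha-beta pruning over a toy game tree in
Python; the tree and its leaf scores are invented for illustration, not any
real chess engine:

def alphabeta(node, alpha, beta, maximizing):
    # 'node' is either a number (a leaf evaluation) or a list of child nodes.
    if not isinstance(node, list):
        return node
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # the minimizing player will avoid this branch,
                break           # so the remaining children need not be searched
        return best
    best = float('inf')
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# A tiny hand-made game tree: max picks a branch, min replies, leaves are scores.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, float('-inf'), float('inf'), True))   # prints 3

Nothing in this sketch mentions openings, strategy or "winning"; that is the
sense in which the chess program can be described at a level far below the
one at which we normally explain it.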


Brent



Bruno

http://iridia.ulb.ac.be/~marchal/ http://iridia.ulb.ac.be/%7Emarchal/












Re: on consciousness levels and ai

2010-01-19 Thread silky
On Tue, Jan 19, 2010 at 8:43 PM, Stathis Papaioannou stath...@gmail.comwrote:

 2010/1/19 silky michaelsli...@gmail.com:

  Exactly my point! I'm trying to discover why I wouldn't be so rational
  there. Would you? Do you think that knowing all there is to know about
  a cat is impractical to the point of being impossible *forever*, or do
  you believe that once we do know, we will simply end them freely,
  when they get in our way? I think at some point we *will* know all
  there is to know about them, and even then, we won't end them easily.
  Why not? Is it the emotional projection that Brent suggests? Possibly.

 Why should understanding something, even well enough to have actually
 made it, make a difference?


I don't know, that's what I'm trying to determine.



   Obviously intelligence and the ability to have feelings and desires
  has something to do with complexity. It would be easy enough to write
  a computer program that pleads with you to do something but you don't
  feel bad about disappointing it, because you know it lacks the full
  richness of human intelligence and consciousness.
 
  Indeed; so part of the question is: What level of complexity
  constitutes this? Is it simply any level that we don't understand? Or
  is there a level that we *can* understand that still makes us feel
  that way? I think it's more complicated than just any level we don't
  understand (because clearly, I understand that if I twist your arm,
  it will hurt you, and I know exactly why, but I don't do it).

 I don't think our understanding of it has anything to do with it. It
 is more that a certain level of complexity is needed for the entity in
 question to have a level of consciousness which means we are able to
 hurt it.


But the basic question is: can you create this entity from scratch, using a
computer? And if so, do you owe it any obligations?


--
 Stathis Papaioannou






-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20




Re: on consciousness levels and ai

2010-01-19 Thread silky
On Wed, Jan 20, 2010 at 2:50 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 19 Jan 2010, at 03:28, silky wrote:

 I don't disagree with you that it would be significantly complicated, I
 suppose my argument is only that, unlike with a real cat, I - the programmer
 - know all there is to know about this computer cat. I'm wondering to what
 degree that adds to or removes from my moral obligations.



 I think there is a confusion of level. It seems related to the problem of
 free-will. Some people believe that free will is impossible in the
 deterministic frame.



My opinion is that we don't have free will, and my definition of free will
in this context is being able to do something that our programming doesn't
allow us to do.

For example, people explain free-will as the ability to decide whether or
not you pick up a pen. Sure, you can do either thing, and no matter which
you do, you are exercising a choice. But I don't consider this free.
It's just as pre-determined as a program looking at some internal state and
deciding which branch to take:

if (needToWrite && notHoldingPen) { grabPen(); }

It goes without saying that it's significantly more complicated, but the
underlying concept remains.

I define free will as the concept of breaking out of a branch completely,
stepping outside the program. And clearly, from within the program (of
human consciousness) it's impossible. Thus, I consider free will as a
completely impossible concept.

If we re-define free will to mean the ability to choose between two actions,
based on state (as I showed above), then clearly, it's a fact of life, and
every single object in the universe has this type of free will.
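
A toy sketch of that claim, with every name invented for illustration (this is
not a model of a mind, just the branching point above restated as runnable
Python): the "choice" is fully fixed by state the agent did not pick, even
though from the outside it looks like deciding.

class Agent:
    def __init__(self, need_to_write, holding_pen):
        self.need_to_write = need_to_write
        self.holding_pen = holding_pen

    def act(self):
        # The branch taken depends only on state the agent did not choose.
        if self.need_to_write and not self.holding_pen:
            return "grab pen"
        return "do nothing"

print(Agent(need_to_write=True, holding_pen=False).act())   # always "grab pen"
print(Agent(need_to_write=False, holding_pen=True).act())   # always "do nothing"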



But no machine can predict its own behavior in advance. If it could it could
 contradict the prediction.

 If my friend who knows me well can predict my action, it will not change
 the fact that I can do those action by free will, at my own level where I
 live.

 If not, determinism would eliminate all forms of responsibility. You can say
 to the judge: all right I am a murderer, but I am not guilty because I am
 just obeying the physical laws.
 This is an empty defense. The judge can answer: no problem. I still
 condemn you to fifty years in jail, but don't worry, I am just obeying
 myself to the physical laws.

 That is also why a real explanation of consciousness doesn't have to explain
 consciousness *away*. (Eventually it is the status of matter which appears
 less solid.)

 An explanation has to correspond to its correct level of relevance.

 Why did Obama win the election? Because Obama is made of particles obeying
 the Schrödinger equation? That is true, but wrong as an explanation.
  Because Obama promised to legalize pot? That is false, but could have worked
 as a possible explanation. It is closer to the relevance level.

 When we reduce a domain to another ontologically, this does not need to
 eliminate the explanatory power of the first domain. This is made palpable
 in computer science. You will never explain how a chess program works by
 referring to a low level.

 Bruno

 http://iridia.ulb.ac.be/~marchal/ http://iridia.ulb.ac.be/%7Emarchal/








-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20




Re: on consciousness levels and ai

2010-01-18 Thread silky
On Mon, Jan 18, 2010 at 6:57 PM, Brent Meeker meeke...@dslextreme.com wrote:
 silky wrote:

 On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker meeke...@dslextreme.com
 wrote:


 silky wrote:


 I'm not sure if this question is appropriate here, nevertheless, the
 most direct way to find out is to ask it :)

 Clearly, creating AI on a computer is a goal, and generally we'll try
 and implement to the same degree of computationalness as a human.
 But what would happen if we simply tried to re-implement the
 consciousness of a cat, or some lesser consciousness, but still
 alive, entity.

 It would be my (naive) assumption, that this is arguably trivial to
  do. We can design a program that has a desire to 'live', a desire to
  find mates, and otherwise entertain itself. In this way, with some
  other properties, we can easily model simple pets.

 I then wonder, what moral obligations do we owe these programs? Is it
 correct to turn them off? If so, why can't we do the same to a real
 life cat? Is it because we think we've not modelled something
 correctly, or is it because we feel it's acceptable as we've created
 this program, and hence know all its laws? On that basis, does it mean
 it's okay to power off a real life cat, if we are confident we know
  all of its properties? Or is it not the knowing of the properties
 that is critical, but the fact that we, specifically, have direct
 control over it? Over its internals? (i.e. we can easily remove the
 lines of code that give it the desire to 'live'). But wouldn't, then,
  the removal of that code be equivalent to killing it? If not, why?



 I think the differences are

 1) we generally cannot kill an animal without causing it some distress


 Is that because our off function in real life isn't immediate?

 Yes.

So does that mean you would not feel guilty turning off a real cat, if
it could be done immediately?


 Or,
 as per below, because it cannot get more pleasure?


 No, that's why I made it separate.



 2) as
 long as it is alive it has a capacity for pleasure (that's why we
 euthanize
 pets when we think they can no longer enjoy any part of life)


  This is fair. But what if we were able to model this addition of
  pleasure in the program? It's easy to increase happiness++, and thus
  the desire to die decreases.

 I don't think it's so easy as you suppose.  Pleasure comes through
 satisfying desires and it has as many dimensions as there are kinds of
  desires.  An animal that has very limited desires, e.g. eat and reproduce,
 would not seem to us capable of much pleasure and we would kill it without
 much feeling of guilt - as swatting a fly.

Okay, so for you the moral responsibility comes in when we are
depriving the entity of pleasure AND because we can't turn it off
immediately (i.e. it will become aware it's being switched off; and
become upset).


  Is this very simple variable enough to
  make us care? Clearly not, but why not? Is it because the animal is
   more conscious than we think? Is the answer that it's simply
  impossible to model even a cat's consciousness completely?
 
  If we model an animal that only exists to eat/live/reproduce, have we
  created any moral responsibility? I don't think our moral
  responsibility would start even if we add a very complicated
  pleasure-based system into the model.

 I think it would - just as we have ethical feelings toward dogs and tigers.

So assuming someone can create the appropriate model, and you can
see that you will be depriving pleasure and/or causing pain, you'd
start to feel guilty about switching the entity off? Probably it would
be as simple as having the cat/dog whimper as it senses that the
program was going to terminate (obviously, visual stimulus would help
in a deterrent),  but then it must be asked, would the programmer feel
guilt? Or just an average user of the system, who doesn't know the
underlying programming model?


  My personal opinion is that it
  would be hard to *ever* feel guilty about ending something that you have
  created so artificially (i.e. with every action directly predictable
  by you, causally).

 Even if the AI were strictly causal, its interaction with the environment
 would very quickly make its actions unpredictable.  And I think you are
 quite wrong about how you would feel.  People report feeling guilty about
 not interacting with the Sony artificial pet.

I've clarified my position above; does the programmer ever feel guilt,
or only the users?


  But then, it may be asked; children are the same.
  Humour aside, you can pretty much have a general idea of exactly what
  they will do,

 You must not have raised any children.

Sadly, I have not.


 Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20


Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

On Mon, Jan 18, 2010 at 6:57 PM, Brent Meeker meeke...@dslextreme.com wrote:
  

silky wrote:


On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker meeke...@dslextreme.com
wrote:

  

silky wrote:



I'm not sure if this question is appropriate here, nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
and implement to the same degree of computationalness as a human.
But what would happen if we simply tried to re-implement the
consciousness of a cat, or some lesser consciousness, but still
alive, entity.

It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it mean
it's okay to power off a real life cat, if we are confident we know
all of its properties? Or is it not the knowing of the properties
that is critical, but the fact that we, specifically, have direct
control over it? Over its internals? (i.e. we can easily remove the
lines of code that give it the desire to 'live'). But wouldn't, then,
the removal of that code be equivalent to killing it? If not, why?


  

I think the differences are

1) we generally cannot kill an animal without causing it some distress



Is that because our off function in real life isn't immediate?
  

Yes.



So does that mean you would not feel guilty turning off a real cat, if
it could be done immediately?
  


No, that's only one reason - read the others.


  

Or,
as per below, because it cannot get more pleasure?

  

No, that's why I made it separate.

  

2) as
long as it is alive it has a capacity for pleasure (that's why we
euthanize
pets when we think they can no longer enjoy any part of life)



This is fair. But what if we were able to model this addition of
pleasure in the program? It's easy to increase happiness++, and thus
the desire to die decreases.
  

I don't think it's so easy as you suppose.  Pleasure comes through
satisfying desires and it has as many dimensions as there are kinds of
desires.  An animal that has very limited desires, e.g. eat and reproduce,
would not seem to us capable of much pleasure and we would kill it without
much feeling of guilt - as swatting a fly.



Okay, so for you the moral responsibility comes in when we are
depriving the entity of pleasure AND because we can't turn it off
immediately (i.e. it will become aware it's being switched off; and
become upset).


  

Is this very simple variable enough to
make us care? Clearly not, but why not? Is it because the animal is
more conscious than we think? Is the answer that it's simply
impossible to model even a cat's consciousness completely?

If we model an animal that only exists to eat/live/reproduce, have we
created any moral responsibility? I don't think our moral
responsibility would start even if we add a very complicated
pleasure-based system into the model.
  

I think it would - just as we have ethical feelings toward dogs and tigers.



So assuming someone can create the appropriate model, and you can
see that you will be depriving pleasure and/or causing pain, you'd
start to feel guilty about switching the entity off? Probably it would
be as simple as having the cat/dog whimper as it senses that the
program was going to terminate (obviously, visual stimulus would help
in a deterrent),  but then it must be asked, would the programmer feel
guilt? Or just an average user of the system, who doesn't know the
underlying programming model?


  

My personal opinion is that it
would be hard to *ever* feel guilty about ending something that you have
created so artificially (i.e. with every action directly predictable
by you, causally).
  

Even if the AI were strictly causal, its interaction with the environment
would very quickly make its actions unpredictable.  And I think you are
quite wrong about how you would feel.  People report feeling guilty about
not interacting with the Sony artificial pet.



I've clarified my position above; does the programmer ever feel guilt,
or only the users?
  


The programmer too (though maybe less) because any reasonably high-level 
AI would have learned things and would no longer appear to just be 
running through a rote program - even to the programmer.


Brent


  

But then, it may be asked; children are the same.
Humour aside, you can pretty much have a general idea of exactly what
they will do,
  

You must not have raised any 

Re: on consciousness levels and ai

2010-01-18 Thread Stathis Papaioannou
2010/1/18 silky michaelsli...@gmail.com:

 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', a desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simple pets.

Brent's reasons are valid, but I don't think making an artificial
animal is as simple as you say. Henry Markham's group are presently
trying to simulate a rat brain, and so far they have done 10,000
neurons which they are hopeful is behaving in a physiological way.
This is at huge computational expense, and they have a long way to go
before simulating a whole rat brain, and no guarantee that it will
start behaving like a rat. If it does, then they are only a few years
away from simulating a human, soon after that will come a superhuman
AI, and soon after that it's we who will have to argue that we have
feelings and are worth preserving.


-- 
Stathis Papaioannou





Re: on consciousness levels and ai

2010-01-18 Thread John Mikes
Dear Brent,

is it your 'conscious' position to look at things in an anthropocentric
limitation?
If you substitute consistently 'animal' for 'pet', you can include the human
animal as well. In that case your
#1: would you consider 'distress' a disturbed mental state only, or include
organisational 'distress' as well - causing what we may call: 'death'? In
the first case you can circumvent the distress by putting the animal to
sleep before killing, causing THEN the 2nd case.

#2: I never talked to a shrimp in shrimpese so I don't know what ens
'pleasurable' to it.

#3 speaking about AL (human included) we include circumstances we already
discovered and there is no assurance that we 'create' LIFE (what is it?) as
it really  - IS - G. So turning on/off a contraption we call 'animal'
(artificial pet?) is not what we are talking about.

#4: I consider 'ethical' an anthropocentric culture-related majority (?)
opinion in many cases hypocritical and pretentious. Occasionally it can be a
power-forced minority opinion as well.
I feel the *alleged* Chinese moral behind your remark: if you save a life
you are responsible for the person (I don't know if it is true?).

I try to take a more general stance and not restrict terms even to 'live(?)'
creatures.
Is a universal machine 'live'?

Have fun in science

John M




On Mon, Jan 18, 2010 at 2:08 AM, Brent Meeker meeke...@dslextreme.comwrote:

 silky wrote:

 (truncated)

 BM:


 I think the differences are

 1) we generally cannot kill an animal without causing it some distress 2)
 as long as it is alive it has a capacity for pleasure (that's why we
 euthanize pets when we think they can no longer enjoy any part of life) 3)
 if we could create an artificial pet (and Sony did) we can turn it off and
 turn it back on.
 4) if a pet, artificial or otherwise, has capacity for pleasure and
 suffering we do have an ethical responsibility toward it.

 Brent




Re: on consciousness levels and ai

2010-01-18 Thread Bruno Marchal


On 18 Jan 2010, at 16:35, John Mikes wrote:


Is a universal machine 'live'?



I would say yes, despite the concrete artificial one still needs  
humans in its reproduction cycle. But we need plants and bacteria.
I think that all machines, including houses and gardens, are alive in  
that sense. Cigarettes are alive. They have a way to reproduce.  
Universal machines are alive and can be conscious.


If we define artificial by introduced by humans, we can see that  
the difference between artificial and natural is ... artificial (and  
thus natural!). Jacques Lafitte wrote in 1911 (published in 1931) a  
book where he describes the rise of machines and technology as a  
collateral living processes.


Only for Löbian machines, like you and me, but also Peano Arithmetic and  
ZF, I would say I am pretty sure that they are reflexively conscious  
like us, despite having no lived experiences at all. (Well, they  
have our experiences, in a sense. We are their experiences).


Bruno

http://iridia.ulb.ac.be/~marchal/







Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
 2010/1/18 silky michaelsli...@gmail.com:
  It would be my (naive) assumption, that this is arguably trivial to
  do. We can design a program that has a desire to 'live', a desire to
  find mates, and otherwise entertain itself. In this way, with some
  other properties, we can easily model simple pets.

 Brent's reasons are valid,

Where it falls down for me is the idea that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever). I don't see how I could ever think No, you
can't harm X. But what I find very interesting, is that even if I
knew *exactly* how a cat operated, I could never kill one.


 but I don't think making an artificial
 animal is as simple as you say.

So is it a complexity issue? That you only start to care about the
entity when it's significantly complex. But exactly how complex? Or is
it about the unknowingness; that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
then that is where the guilt comes in.


 Henry Markham's group are presently
 trying to simulate a rat brain, and so far they have done 10,000
 neurons which they are hopeful is behaving in a physiological way.
 This is at huge computational expense, and they have a long way to go
 before simulating a whole rat brain, and no guarantee that it will
 start behaving like a rat. If it does, then they are only a few years
 away from simulating a human, soon after that will come a superhuman
 AI, and soon after that it's we who will have to argue that we have
 feelings and are worth preserving.

Indeed, this is something that concerns me as well. If we do create an
AI, and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its own
thing, but is that in itself wrong?


 --
 Stathis Papaioannou

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20





Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
  

2010/1/18 silky michaelsli...@gmail.com:


It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.
  

Brent's reasons are valid,



Where it falls down for me is that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever) 


It's not just randomisation, it's experience.  If you create an AI at a 
fairly high level (cat, dog, rat, human) it will necessarily have the 
ability to learn, and after interacting with its environment for a while 
it will become a unique individual.  That's why you would feel sad to 
kill it - all that experience and knowledge that you don't know how to 
replace.  Of course it might learn to be evil or at least annoying, 
which would make you feel less guilty.
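
A toy sketch of that point, with made-up names rather than anything from an
actual AI: two agents start from identical code and state, yet become distinct
individuals as soon as their interaction histories differ.

class LearningAgent:
    def __init__(self):
        self.memory = []                  # everything this individual has experienced

    def interact(self, event):
        self.memory.append(event)         # experience accumulates over time
        # behaviour can now depend on the whole history, not just the shared code
        return len(self.memory)

a, b = LearningAgent(), LearningAgent()   # identical programs, identical start
a.interact("saw a bird")
b.interact("heard a door")
print(a.memory == b.memory)               # False: same code, now unique individuals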



I don't see how I could ever think No, you
can't harm X. But what I find very interesting, is that even if I
knew *exactly* how a cat operated, I could never kill one.


  

but I don't think making an artificial
animal is as simple as you say.



So is it a complexity issue? That you only start to care about the
entity when it's significantly complex. But exactly how complex? Or is
it about the unknowingness; that the project is so large you only 
work on a small part, and thus you don't fully know its workings, and 
then that is where the guilt comes in.
  


I think unknowingness plays a big part, but that's because of our 
experience with people and animals: we project our own experience of 
consciousness onto them, so that when we see them behave in certain ways 
we impute an inner life to them that includes pleasure and suffering.


  

Henry Markham's group are presently
trying to simulate a rat brain, and so far they have done 10,000
neurons which they are hopeful is behaving in a physiological way.
This is at huge computational expense, and they have a long way to go
before simulating a whole rat brain, and no guarantee that it will
start behaving like a rat. If it does, then they are only a few years
away from simulating a human, soon after that will come a superhuman
AI, and soon after that it's we who will have to argue that we have
feelings and are worth preserving.



Indeed, this is something that concerns me as well. If we do create an
AI, and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its own
thing, but is that in itself wrong?
  


I don't think so.  We don't worry about the internet's feelings, or the 
air traffic control system.  John McCarthy has written essays on this 
subject and he cautions against creating AI with human-like emotions 
precisely because of the ethical implications.  But that means we need 
to understand consciousness and emotions lest we accidentally do 
something unethical.


Brent


  

--
Stathis Papaioannou



  






Re: on consciousness levels and ai

2010-01-18 Thread Stathis Papaioannou
2010/1/19 silky michaelsli...@gmail.com:
 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com 
 wrote:
 2010/1/18 silky michaelsli...@gmail.com:
  It would be my (naive) assumption, that this is arguably trivial to
  do. We can design a program that has a desire to 'live', a desire to
  find mates, and otherwise entertain itself. In this way, with some
  other properties, we can easily model simple pets.

 Brent's reasons are valid,

 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever) I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.

That's not being rational then, is it?

 but I don't think making an artificial
 animal is as simple as you say.

 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowingness; that the project is so large you only
 work on a small part, and thus you don't fully know its workings, and
 then that is where the guilt comes in.

Obviously intelligence and the ability to have feelings and desires
has something to do with complexity. It would be easy enough to write
a computer program that pleads with you to do something but you don't
feel bad about disappointing it, because you know it lacks the full
richness of human intelligence and consciousness.

 Henry Markham's group are presently
 trying to simulate a rat brain, and so far they have done 10,000
 neurons which they are hopeful is behaving in a physiological way.
 This is at huge computational expense, and they have a long way to go
 before simulating a whole rat brain, and no guarantee that it will
 start behaving like a rat. If it does, then they are only a few years
 away from simulating a human, soon after that will come a superhuman
 AI, and soon after that it's we who will have to argue that we have
 feelings and are worth preserving.

 Indeed, this is something that concerns me as well. If we do create an
 AI, and force it to do our bidding, are we acting immorally? Or
 perhaps we just withhold the desire for the program to do its own
 thing, but is that in itself wrong?

If we created an AI that wanted to do our bidding or that didn't care
what it did, then it would not be wrong. Some people anthropomorphise
and imagine the AI as themselves or people they know: and since they
would not like being enslaved they assume the AI wouldn't either. But
this is false. Eliezer Yudkowsky has written a lot about AI, the
ethical issues, and the necessity to make a friendly AI so that it
didn't destroy us whether through intention or indifference.


-- 
Stathis Papaioannou





Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:
 silky wrote:

 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com
 wrote:


 2010/1/18 silky michaelsli...@gmail.com:


 It would be my (naive) assumption, that this is arguably trivial to
  do. We can design a program that has a desire to 'live', a desire to
  find mates, and otherwise entertain itself. In this way, with some
  other properties, we can easily model simple pets.


 Brent's reasons are valid,


 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever)

 It's not just randomisation, it's experience.  If you create an AI at a
 fairly high level (cat, dog, rat, human) it will necessarily have the
 ability to learn, and after interacting with its environment for a while it
 will become a unique individual.  That's why you would feel sad to kill it
 - all that experience and knowledge that you don't know how to replace.  Of
 course it might learn to be evil or at least annoying, which would make
 you feel less guilty.

Nevertheless, though, I know its exact environment, so I can recreate
the things that it learned (I can recreate it all; it's all
deterministic: I programmed it). The only thing I can't recreate is
the randomness, assuming I introduced that (but as we know, I can
recreate that anyway, because I'd just use the same seed state;
unless the source of randomness is true).
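
A small sketch of that replay claim, purely illustrative (the "decisions" here
are placeholders): with a pseudo-random generator and a fixed seed, the entire
"random" history of a run can be reproduced exactly, so only a true, external
source of randomness would break the replay.

import random

def run(seed, steps=5):
    rng = random.Random(seed)             # private generator with a known seed
    return [rng.choice(["sleep", "eat", "play"]) for _ in range(steps)]

first  = run(seed=42)
replay = run(seed=42)
print(first == replay)                    # True: the whole "random" history repeats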



 I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.




 but I don't think making an artificial
 animal is as simple as you say.


 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
  it about the unknowingness; that the project is so large you only
  work on a small part, and thus you don't fully know its workings, and
 then that is where the guilt comes in.


 I think unknowingness plays a big part, but that's because of our experience
 with people and animals: we project our own experience of consciousness
 onto them, so that when we see them behave in certain ways we impute an inner
 life to them that includes pleasure and suffering.

Yes, I agree. So does that mean that, over time, if we continue using
these computer-based cats, we would become attached to them (i.e. your
Sony toys example)?


  Indeed, this is something that concerns me as well. If we do create an
  AI, and force it to do our bidding, are we acting immorally? Or
  perhaps we just withhold the desire for the program to do its own
  thing, but is that in itself wrong?
 

 I don't think so.  We don't worry about the internet's feelings, or the air
 traffic control system.  John McCarthy has written essays on this subject
 and he cautions against creating AI with human-like emotions precisely
 because of the ethical implications.  But that means we need to understand
 consciousness and emotions lest we accidentally do something unethical.

Fair enough. But by the same token, what if we discover a way to
remove emotions from real-born children? Would it be wrong to do that?
Is emotion an inherent property that we should never be allowed to
remove, once created?


 Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20





Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
stath...@gmail.com wrote:
 2010/1/19 silky michaelsli...@gmail.com:
  On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com 
  wrote:
   2010/1/18 silky michaelsli...@gmail.com:
It would be my (naive) assumption, that this is arguably trivial to
    do. We can design a program that has a desire to 'live', a desire to
    find mates, and otherwise entertain itself. In this way, with some
    other properties, we can easily model simple pets.
  
   Brent's reasons are valid,
 
  Where it falls down for me is that the programmer should ever feel
  guilt. I don't see how I could feel guilty for ending a program when I
  know exactly how it will operate (what paths it will take), even if I
  can't be completely sure of the specific decisions (due to some
  randomisation or whatever) I don't see how I could ever think No, you
  can't harm X. But what I find very interesting, is that even if I
  knew *exactly* how a cat operated, I could never kill one.

 That's not being rational then, is it?

Exactly my point! I'm trying to discover why I wouldn't be so rational
there. Would you? Do you think that knowing all there is to know about
 a cat is impractical to the point of being impossible *forever*, or do
you believe that once we do know, we will simply end them freely,
when they get in our way? I think at some point we *will* know all
there is to know about them, and even then, we won't end them easily.
Why not? Is it the emotional projection that Brent suggests? Possibly.


   but I don't think making an artificial
   animal is as simple as you say.

 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowingness; that the project is so large you only
 work on a small part, and thus you don't fully know its workings, and
 then that is where the guilt comes in.

 Obviously intelligence and the ability to have feelings and desires
 has something to do with complexity. It would be easy enough to write
 a computer program that pleads with you to do something but you don't
 feel bad about disappointing it, because you know it lacks the full
 richness of human intelligence and consciousness.

Indeed; so part of the question is: What level of complexity
constitutes this? Is it simply any level that we don't understand? Or
is there a level that we *can* understand that still makes us feel
that way? I think it's more complicated than just any level we don't
understand (because clearly, I understand that if I twist your arm,
it will hurt you, and I know exactly why, but I don't do it).


   Henry Markham's group are presently
   trying to simulate a rat brain, and so far they have done 10,000
   neurons which they are hopeful is behaving in a physiological way.
   This is at huge computational expense, and they have a long way to go
   before simulating a whole rat brain, and no guarantee that it will
   start behaving like a rat. If it does, then they are only a few years
   away from simulating a human, soon after that will come a superhuman
   AI, and soon after that it's we who will have to argue that we have
   feelings and are worth preserving.
 
  Indeed, this is something that concerns me as well. If we do create an
  AI, and force it to do our bidding, are we acting immorally? Or
  perhaps we just withhold the desire for the program to do its own
  thing, but is that in itself wrong?

 If we created an AI that wanted to do our bidding or that didn't care
 what it did, then it would not be wrong. Some people anthropomorphise
 and imagine the AI as themselves or people they know: and since they
 would not like being enslaved they assume the AI wouldn't either. But
 this is false. Eliezer Yudkowsky has written a lot about AI, the
 ethical issues, and the necessity to make a friendly AI so that it
 didn't destroy us whether through intention or indifference.

 --
 Stathis Papaioannou

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20





Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:
  

silky wrote:


On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com
wrote:

  

2010/1/18 silky michaelsli...@gmail.com:



It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.

  

Brent's reasons are valid,



Where it falls down for me is that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever)
  

It's not just randomisation, it's experience.  If you create an AI at a
fairly high level (cat, dog, rat, human) it will necessarily have the
ability to learn, and after interacting with its environment for a while it
will become a unique individual.  That's why you would feel sad to kill it
- all that experience and knowledge that you don't know how to replace.  Of
course it might learn to be evil or at least annoying, which would make
you feel less guilty.



Nevertheless, though, I know its exact environment, 


Not if it interacts with the world.  You must be thinking of a virtual 
cat AI in a virtual world - but even there the program, if at all 
realistic, is likely to be too complex for you to really comprehend.  Of 
course *in principle* you could spend years going over a few terabytes 
of data and you could understand, Oh, that's why the AI cat did that on 
day 2118 at 10:22:35, it was because of the interaction of memories of 
day 1425 at 07:54:28 and ...(long string of stuff).  But you'd be in 
almost the same position as the neuroscientist who understands what a 
clump of neurons does but can't get a wholistic view of what the 
organism will do.


Surely you've had the experience of trying to debug a large program you 
wrote some years ago that now seems to fail on some input you never 
tried before.  Now think how much harder that would be if it were an AI 
that had been learning and modifying itself for all those years.

so I can recreate
the things that it learned (I can recreate it all; it's all
deterministic: I programmed it). The only thing I can't recreate is
the randomness, assuming I introduced that (but as we know, I can
recreate that anyway, because I'd just use the same seed state;
unless the source of randomness is truly random).



  

I don't see how I could ever think No, you
can't harm X. But what I find very interesting, is that even if I
knew *exactly* how a cat operated, I could never kill one.



  

but I don't think making an artificial
animal is as simple as you say.



So is it a complexity issue? That you only start to care about the
entity when it's significantly complex. But exactly how complex? Or is
it about the unknowingness; that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
then that is where the guilt comes in.

  

I think unknowingness plays a big part, but it's because of our experience
with people and animals that we project our own experience of consciousness
on to them, so that when we see them behave in certain ways we impute an inner
life to them that includes pleasure and suffering.



Yes, I agree. So does that mean that, over time, if we continue using
these computer-based cats, we would become attached to them (i.e. your
Sony toys example
  


Hell, I even become attached to my motorcycles.


  

Indeed, this is something that concerns me as well. If we do create an
AI, and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do it's own
thing, but is that in itself wrong?

  

I don't think so.  We don't worry about the internet's feelings, or the air
traffic control system.  John McCarthy has written essays on this subject
and he cautions against creating AI with human-like emotions precisely
because of the ethical implications.  But that means we need to understand
consciousness and emotions lest we accidentally do something unethical.



Fair enough. But by the same token, what if we discover a way to
remove emotions from real-born children. Would it be wrong to do that?
Is emotion an inherent property that we should never be allowed to
remove, once created?
  


Certainly it would be fruitless to remove all emotions, because that 
would be the same as removing all discrimination and motivation - they'd 
be dumb as tape recorders.  So I suppose you're asking about removing, 
or providing, specific emotions.  Removing empathy, for example, would 
certainly be a bad idea - that's how you get sociopathic killers.  Suppose 
we could remove all selfishness and create 

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.comwrote:

 silky wrote:

 On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com
 wrote:


 silky wrote:


 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou 
 stath...@gmail.com
 wrote:



 2010/1/18 silky michaelsli...@gmail.com:



 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.



 Brent's reasons are valid,



 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever)


 It's not just randomisation, it's experience.  If you create and AI at
 fairly high-level (cat, dog, rat, human) it will necessarily have the
 ability to learn and after interacting with it's enviroment for a while
 it
 will become a unique individual.  That's why you would feel sad to kill
 it
 - all that experience and knowledge that you don't know how to replace.
  Of
 course it might learn to be evil or at least annoying, which would make
 you feel less guilty.



 Nevertheless, though, I know it's exact environment,


 Not if it interacts with the world.  You must be thinking of a virtual cat
 AI in a virtual world - but even there the program, if at all realistic, is
 likely to be to complex for you to really comprehend.  Of course *in
 principle* you could spend years going over a few terrabites of data and
  you could understand, Oh that's why the AI cat did that on day 2118 at
 10:22:35, it was because of the interaction of memories of day 1425 at
 07:54:28 and ...(long string of stuff).  But you'd be in almost the same
 position as the neuroscientist who understands what a clump of neurons does
 but can't get a wholistic view of what the organism will do.

 Surely you've had the experience of trying to debug a large program you
 wrote some years ago that now seems to fail on some input you never tried
 before.  Now think how much harder that would be if it were an AI that had
 been learning and modifying itself for all those years.


I don't disagree with you that it would be significantly complicated; I
suppose my argument is only that, unlike with a real cat, I - the programmer
- know all there is to know about this computer cat. I'm wondering to what
degree that adds to or removes from my moral obligations.



  so I can recreate
 the things that it learned (I can recreate it all; it's all
 deterministic: I programmed it). The only thing I can't recreate, is
 the randomness, assuming I introduced that (but as we know, I can
 recreate that anyway, because I'd just use the same seed state;
 unless the source of randomness is true).



 I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.





 but I don't think making an artificial
 animal is as simple as you say.



 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowningness; that the project is so large you only
 work on a small part, and thus you don't fully know it's workings, and
 then that is where the guilt comes in.



 I think unknowingness plays a big part, but it's because of our
 experience
 with people and animals, we project our own experience of consciousness
 on
 to them so that when we see them behave in certain ways we impute an
 inner
 life to them that includes pleasure and suffering.



 Yes, I agree. So does that mean that, over time, if we continue using
 these computer-based cats, we would become attached to them (i.e. your
 Sony toys example



 Hell, I even become attached to my motorcycles.


Does it follow, then, that we'll start to have laws relating to the humane
ending of motorcycles? Probably not. So there must be more to it than just
attachment.






 Indeed, this is something that concerns me as well. If we do create an
 AI, and force it to do our bidding, are we acting immorally? Or
 perhaps we just withhold the desire for the program to do it's own
 thing, but is that in itself wrong?



 I don't think so.  We don't worry about the internet's feelings, or the
 air
 traffic control system.  John McCarthy has written essays on this subject
 and he cautions against creating AI with human like emotions precisely
 because of the ethical implications.  But that means we need to
 understand
 consciousness and emotions less we accidentally do something unethical.



 Fair enough. But by the same token, what if we discover a way to
 remove emotions from real-born children. Would it be wrong to do 

Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
stath...@gmail.com wrote:
  

2010/1/19 silky michaelsli...@gmail.com:


On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
  

2010/1/18 silky michaelsli...@gmail.com:


It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', as desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simply pets.
  

Brent's reasons are valid,


Where it falls down for me is that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever) I don't see how I could ever think No, you
can't harm X. But what I find very interesting, is that even if I
knew *exactly* how a cat operated, I could never kill one.
  

That's not being rational then, is it?



Exactly my point! I'm trying to discover why I wouldn't be so rational
there. Would you? Do you think that knowing all there is to know about
a cat is impractical to the point of being impossible *forever*, or do
you believe that once we do know, we will simply end them freely,
when they get in our way? I think at some point we *will* know all
there is to know about them, and even then, we won't end them easily.
Why not? Is it the emotional projection that Brent suggests? Possibly.


  

but I don't think making an artificial
animal is as simple as you say.


So is it a complexity issue? That you only start to care about the
entity when it's significantly complex. But exactly how complex? Or is
it about the unknowningness; that the project is so large you only
work on a small part, and thus you don't fully know it's workings, and
then that is where the guilt comes in.
  

Obviously intelligence and the ability to have feelings and desires
has something to do with complexity. It would be easy enough to write
a computer program that pleads with you to do something but you don't
feel bad about disappointing it, because you know it lacks the full
richness of human intelligence and consciousness.



Indeed; so part of the question is: What level of complexity
constitutes this? Is it simply any level that we don't understand? Or
is there a level that we *can* understand that still makes us feel
that way? I think it's more complicated than just any level we don't
understand (because clearly, I understand that if I twist your arm,
it will hurt you, and I know exactly why, but I don't do it).
  


I don't think you know exactly why, unless you've solved the problem of 
connecting qualia (pain) to physics (afferent nerve transmission) - but 
I agree that you know it heuristically.


For my $0.02 I think that not understanding is significant because it 
leaves a lacuna which we tend to fill by projecting ourselves.  When 
people didn't understand atmospheric physics they projected super-humans 
that produced the weather.  If you let some Afghan peasants interact 
with a fairly simple AI program, such as used in the Loebner 
competition, they might well conclude you had created an artificial 
person; even though it wouldn't fool anyone computer literate.


But even for an AI that we could in principle understand, if it is 
complex enough and acts enough like an animal I think we would feel 
ethical concerns for it.  I think a more difficult case is an 
intelligence which is so alien to us that we can't project our feelings 
onto its behavior.  Stanislaw Lem has written stories on this theme: 
Solaris, His Master's Voice, Return from the Stars, Fiasco. 

There doesn't seem to be much recognition of this possibility on this 
list. There's generally an implicit assumption that we know what 
consciousness is, we have it, and that's the only possible kind of 
consciousness.  All OMs are human OMs.  I think that's one interesting 
thing about Bruno's theory; it is definite enough (if I understand it) 
that it could elucidate different kinds of consciousness.  For example, 
I think Searle's Chinese room is conscious - but in a different way than 
we are.


Brent




Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:49 PM, Brent Meeker meeke...@dslextreme.comwrote:

 silky wrote:

 On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
 stath...@gmail.com wrote:


 2010/1/19 silky michaelsli...@gmail.com:


 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou 
 stath...@gmail.com wrote:


 2010/1/18 silky michaelsli...@gmail.com:


 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.


 Brent's reasons are valid,


 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever) I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.


 That's not being rational then, is it?



 Exactly my point! I'm trying to discover why I wouldn't be so rational
 there. Would you? Do you think that knowing all there is to know about
 a cat is unpractical to the point of being impossible *forever*, or do
 you believe that once we do know, we will simply end them freely,
 when they get in our way? I think at some point we *will* know all
 there is to know about them, and even then, we won't end them easily.
 Why not? Is it the emotional projection that Brent suggests? Possibly.




 but I don't think making an artificial
 animal is as simple as you say.


 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowningness; that the project is so large you only
 work on a small part, and thus you don't fully know it's workings, and
 then that is where the guilt comes in.


 Obviously intelligence and the ability to have feelings and desires
 has something to do with complexity. It would be easy enough to write
 a computer program that pleads with you to do something but you don't
 feel bad about disappointing it, because you know it lacks the full
 richness of human intelligence and consciousness.



 Indeed; so part of the question is: Qhat level of complexity
 constitutes this? Is it simply any level that we don't understand? Or
 is there a level that we *can* understand that still makes us feel
 that way? I think it's more complicated than just any level we don't
 understand (because clearly, I understand that if I twist your arm,
 it will hurt you, and I know exactly why, but I don't do it).



 I don't think you know exactly why, unless you solved the problem of
  connecting qualia (pain) to physics (afferent nerve transmission) - but I
 agree that you know it heuristically.

 For my $0.02 I think that not understanding is significant because it
 leaves a lacuna which we tend to fill by projecting ourselves.  When people
 didn't understand atmospheric physics they projected super-humans that
 produced the weather.  If you let some Afghan peasants interact with a
 fairly simple AI program, such as used in the Loebner competition, they
 might well conclude you had created an artificial person; even though it
 wouldn't fool anyone computer literate.

 But even for an AI that we could in principle understand, if it is complex
 enough and acts enough like an animal I think we would feel ethical concerns
 for it.  I think a more difficult case is an intelligence which is so alien
 to us we can't project our feelings on it's behavior.  Stanislaw Lem has
 written stories on this theme: Solaris, His Masters Voice, Return from
 the Stars, Fiasco.

There doesn't seem to be much recognition of this possibility on this list.
 There's generally an implicit assumption that we know what consciousness is,
 we have it, and that's the only possible kind of consciousness.  All OMs are
 human OMs.  I think that's one interesting thing about Bruno's theory; it is
 definite enough (if I understand it) that it could elucidate different kinds
 of consciousness.  For example, I think Searle's Chinese room is conscious -
 but in a different way than we are.


I'll have to look into these things, but I do agree with you in general; I
don't think ours is the only type of consciousness at all. Though I do find
the idea that incomplete understanding is what matters interesting, because it
suggests that a god should actually not particularly care what happens to
us, because to them it's all predictable. (And obviously, the idea of moral
obligations to computer programs is arguably interesting.)




 Brent


Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:
On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.com wrote:

  silky wrote:

    On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:

      silky wrote:

        On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:

          2010/1/18 silky michaelsli...@gmail.com:

            It would be my (naive) assumption, that this is arguably trivial to
            do. We can design a program that has a desire to 'live', as desire to
            find mates, and otherwise entertain itself. In this way, with some
            other properties, we can easily model simply pets.

          Brent's reasons are valid,

        Where it falls down for me is that the programmer should ever feel
        guilt. I don't see how I could feel guilty for ending a program when I
        know exactly how it will operate (what paths it will take), even if I
        can't be completely sure of the specific decisions (due to some
        randomisation or whatever)

      It's not just randomisation, it's experience.  If you create and AI at
      fairly high-level (cat, dog, rat, human) it will necessarily have the
      ability to learn and after interacting with it's enviroment for a while it
      will become a unique individual.  That's why you would feel sad to kill it
      - all that experience and knowledge that you don't know how to replace.  Of
      course it might learn to be evil or at least annoying, which would make
      you feel less guilty.

    Nevertheless, though, I know it's exact environment,

  Not if it interacts with the world.  You must be thinking of a virtual
  cat AI in a virtual world - but even there the program, if at all
  realistic, is likely to be to complex for you to really comprehend.  Of
  course *in principle* you could spend years going over a few terrabites
  of data and you could understand, Oh that's why the AI cat did that on
  day 2118 at 10:22:35, it was because of the interaction of memories of
  day 1425 at 07:54:28 and ...(long string of stuff).  But you'd be in
  almost the same position as the neuroscientist who understands what a
  clump of neurons does but can't get a wholistic view of what the
  organism will do.

  Surely you've had the experience of trying to debug a large program you
  wrote some years ago that now seems to fail on some input you never
  tried before.  Now think how much harder that would be if it were an AI
  that had been learning and modifying itself for all those years.

I don't disagree with you that it would be significantly complicated, 
I suppose my argument is only that, unlike with a real cat, I - the 
programmer - know all there is to know about this computer cat.


But you *don't* know all there is to know about it.  You don't know what 
it has learned - and there's no practical way to find out.



I'm wondering to what degree that adds or removes to my moral obligations.


Destroying something can be good or bad.  Not knowing what you're 
destroying usually counts on the bad side.


 


    so I can recreate
    the things that it learned (I can recreate it all; it's all
    deterministic: I programmed it). The only thing I can't recreate, is
    the randomness, assuming I introduced that (but as we know, I can
    recreate that anyway, because I'd just use the same seed state;
    unless the source of randomness is true).

        I don't see how I could ever think No, you
        can't harm X. But what I find very interesting, is that even if I
        knew *exactly* how a cat operated, I could never kill one.

          but I don't think making an artificial
          animal is as simple as you say.

        So is it a complexity issue? That you only start to care about the
        entity when it's significantly complex. But exactly how complex? Or is
        it about the unknowningness; that the project is so large you only
        work on a small 

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 2:19 PM, Brent Meeker meeke...@dslextreme.comwrote:

 silky wrote:

 On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.com wrote:

silky wrote:

On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:

silky wrote:

On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:

2010/1/18 silky michaelsli...@gmail.com:



It would be my (naive) assumption, that this
is arguably trivial to
do. We can design a program that has a desire
to 'live', as desire to
find mates, and otherwise entertain itself. In
this way, with some
other properties, we can easily model simply pets.


Brent's reasons are valid,


Where it falls down for me is that the programmer
should ever feel
guilt. I don't see how I could feel guilty for ending
a program when I
know exactly how it will operate (what paths it will
take), even if I
can't be completely sure of the specific decisions
(due to some
randomisation or whatever)

It's not just randomisation, it's experience.  If you
create and AI at
fairly high-level (cat, dog, rat, human) it will
necessarily have the
ability to learn and after interacting with it's
enviroment for a while it
will become a unique individual.  That's why you would
feel sad to kill it
- all that experience and knowledge that you don't know
how to replace.  Of
course it might learn to be evil or at least annoying,
which would make
you feel less guilty.


Nevertheless, though, I know it's exact environment,


Not if it interacts with the world.  You must be thinking of a
virtual cat AI in a virtual world - but even there the program, if
at all realistic, is likely to be to complex for you to really
comprehend.  Of course *in principle* you could spend years going
over a few terrabites of data and  you could understand, Oh
that's why the AI cat did that on day 2118 at 10:22:35, it was
because of the interaction of memories of day 1425 at 07:54:28 and
...(long string of stuff).  But you'd be in almost the same
position as the neuroscientist who understands what a clump of
neurons does but can't get a wholistic view of what the organism
will do.

Surely you've had the experience of trying to debug a large
program you wrote some years ago that now seems to fail on some
input you never tried before.  Now think how much harder that
would be if it were an AI that had been learning and modifying
itself for all those years.


 I don't disagree with you that it would be significantly complicated, I
 suppose my argument is only that, unlike with a real cat, I - the programmer
 - know all there is to know about this computer cat.


 But you *don't* know all there is to know about it.  You don't know what it
 has learned - and there's no practical way to find out.


Here we disagree. I don't see (not that I have experience in AI-programming
specifically, mind you) how I can write a program and not have the results
be deterministic. I wrote it; I know, in general, the type of things it will
learn. I know, for example, that it won't learn how to drive a car. There
are no cars in the environment, and it doesn't have the capabilities to
invent a car, let alone the capabilities to drive it.
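
As a sketch of the claim being made here (all names are invented, and this is nothing like a real AI system): a toy 'cat' whose whole action repertoire is an explicit list the programmer wrote down. However its learned preferences shift, it can only ever pick from that list - 'drive a car' is simply not in the space - and with a fixed random seed the entire run is reproducible:

import random

ACTIONS = ["eat", "sleep", "play", "groom"]   # everything the cat can ever do

class ToyCat:
    def __init__(self, seed=42):
        self.rng = random.Random(seed)        # fixed seed => reproducible run
        self.preferences = {a: 1.0 for a in ACTIONS}

    def act(self):
        # pick an action with probability proportional to its learned weight
        total = sum(self.preferences.values())
        r = self.rng.uniform(0, total)
        for action, weight in self.preferences.items():
            r -= weight
            if r <= 0:
                return action
        return ACTIONS[-1]                    # floating-point edge case fallback

    def learn(self, action, reward):
        # reinforcement only reshapes weights over the fixed action set
        self.preferences[action] = max(0.1, self.preferences[action] + reward)

cat = ToyCat()
for _ in range(100):
    a = cat.act()
    cat.learn(a, reward=0.5 if a == "play" else -0.1)
print(cat.preferences)   # same seed, same environment => same output every run
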

If you're suggesting that it will materialise these capabilities out of the
general model that I've implemented for it, then clearly I can see this path
as a possible one.

Is there a fundamental misunderstanding on my part; that in most
sufficiently-advanced AI systems, not even the programmer has an *idea* of
what the entity may learn?


[...]



Suppose we could add an emotion that put a positive value on
running backwards.  Would that add to their overall pleasure in
life - being able to enjoy something in addition to all the other
things they would have naturally enjoyed?  I'd say yes.  In which
case it would then be wrong to later remove that emotion and deny
them the potential pleasure - assuming of course there are no
contrary ethical considerations.


 So the only problem you see is if we ever add emotion, and then remove it.
 The problem doesn't lie in not adding it at all? Practically, the result is
 the same.


 No, because if we add 

Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

On Tue, Jan 19, 2010 at 2:19 PM, Brent Meeker meeke...@dslextreme.com wrote:

  silky wrote:

    On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.com wrote:

      silky wrote:

        On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:

          silky wrote:

            On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:

              2010/1/18 silky michaelsli...@gmail.com:

                It would be my (naive) assumption, that this is arguably
                trivial to do. We can design a program that has a desire
                to 'live', as desire to find mates, and otherwise entertain
                itself. In this way, with some other properties, we can
                easily model simply pets.

              Brent's reasons are valid,

            Where it falls down for me is that the programmer should ever feel
            guilt. I don't see how I could feel guilty for ending a program
            when I know exactly how it will operate (what paths it will take),
            even if I can't be completely sure of the specific decisions
            (due to some randomisation or whatever)

          It's not just randomisation, it's experience.  If you create and AI
          at fairly high-level (cat, dog, rat, human) it will necessarily have
          the ability to learn and after interacting with it's enviroment for
          a while it will become a unique individual.  That's why you would
          feel sad to kill it - all that experience and knowledge that you
          don't know how to replace.  Of course it might learn to be evil or
          at least annoying, which would make you feel less guilty.

        Nevertheless, though, I know it's exact environment,

      Not if it interacts with the world.  You must be thinking of a virtual
      cat AI in a virtual world - but even there the program, if at all
      realistic, is likely to be to complex for you to really comprehend.  Of
      course *in principle* you could spend years going over a few terrabites
      of data and you could understand, Oh that's why the AI cat did that on
      day 2118 at 10:22:35, it was because of the interaction of memories of
      day 1425 at 07:54:28 and ...(long string of stuff).  But you'd be in
      almost the same position as the neuroscientist who understands what a
      clump of neurons does but can't get a wholistic view of what the
      organism will do.

      Surely you've had the experience of trying to debug a large program you
      wrote some years ago that now seems to fail on some input you never
      tried before.  Now think how much harder that would be if it were an AI
      that had been learning and modifying itself for all those years.

    I don't disagree with you that it would be significantly complicated, 
    I suppose my argument is only that, unlike with a real cat, I - the 
    programmer - know all there is to know about this computer cat.

  But you *don't* know all there is to know about it.  You don't
  know what it has learned - and there's no practical way to find out.

Here we disagree. I don't see (not that I have experience in 
AI-programming specifically, mind you) how I can write a program and 
not have the results be deterministic. I wrote it; I know, in general, 
the type of things it will learn. I know, for example, that it won't 
learn how to drive a car. 

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 3:02 PM, Brent Meeker meeke...@dslextreme.comwrote:

 silky wrote:



 [...]




 Here we disagree. I don't see (not that I have experience in
 AI-programming specifically, mind you) how I can write a program and not
 have the results be deterministic. I wrote it; I know, in general, the type
 of things it will learn. I know, for example, that it won't learn how to
 drive a car. There are no cars in the environment, and it doesn't have the
 capabilities to invent a car, let alone the capabilities to drive it.


 You seem to be assuming that your AI will only interact with a virtual
 world - which you will also create.  I was assuming your AI would be in
 something like a robot cat or dog, which interacted with the world.  I think
 there would be different ethical feelings about these two cases.


Well, it will be interacting with the real world; just a subset that I
specially allow it to interact with. I mean the computer is still in the
real world, whether or not the physical box actually has legs :) In my mind,
though, I was imagining a cat that specifically existed inside my screen,
reacting to other such cats. Let's say the cat is allowed out, in the form of
a robot, and can interact with real cats. Even still, its programming will
allow it to act only in a deterministic way that I have defined (even if I
haven't defined out all its behaviours; it may learn some from the other
cats).

So let's say that Robocat learns how to play with a ball from Realcat. Would
my guilt in ending Robocat only lie in the fact that it learned something,
and, given that I can't save it, that learning instance was unique? I'm not
sure. As a programmer, I'd simply be happy my program worked, and I'd
probably want to reproduce it. But if I showed it to a friend, they may wonder
why I turned it off; it worked, and now it needs to re-learn the next time
it's switched back on (interestingly, I would suggest that everyone would
consider it to be still the same Robocat, even though it needs to
effectively start from scratch).
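
A small sketch of what "starting from scratch" amounts to here (the file name and fields are made up for illustration): if Robocat's learned state is serialised before it is switched off, turning it back on restores the same individual; if the save step is omitted, the learning really is gone:

import json, os

STATE_FILE = "robocat_state.json"             # hypothetical save file

def load_state():
    # restore learned behaviour if a previous run saved it
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"ball_skill": 0.0}                # a fresh Robocat knows nothing

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

state = load_state()
state["ball_skill"] += 1.0                    # "learning" from playing with Realcat
save_state(state)                             # skip this line and switching it
print(state)                                  # off really does discard the learning
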



  If you're suggesting that it will materialise these capabilities out of
 the general model that I've implemented for it, then clearly I can see this
 path as a possible one.


 Well it's certainly possible to write programs so complicated that the
programmer doesn't foresee what it can do (I do it all the time  :-)  ).


 Is there a fundamental misunderstanding on my part; that in most
 sufficiently-advanced AI systems, not even the programmer has an *idea* of
 what the entity may learn?


 That's certainly the case if it learns from interacting with the world,
 because the programmer can't practically analyze all those interactions and
 their effects - except maybe by running another copy of the program on
 recorded input.
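
A minimal sketch of the record-and-replay idea just mentioned (purely illustrative, with invented names): if every input the program received has been logged, feeding the same log and the same random seed to a second copy reproduces the learned state exactly, which is about the only practical way to inspect what the first copy learned:

import random

def run_agent(inputs, seed=0):
    """Trivial 'learner': what ends up in memory depends on both the input
    stream and the internal randomness."""
    rng = random.Random(seed)
    memory = []
    for stimulus in inputs:
        if rng.random() < 0.5:
            memory.append(stimulus.upper())
        else:
            memory.append(stimulus)
    return memory

recorded_inputs = ["ball", "string", "food", "ball"]   # the logged environment
original = run_agent(recorded_inputs, seed=7)
replayed = run_agent(recorded_inputs, seed=7)          # second copy, same log + seed
assert original == replayed                            # identical learned state
print(original)
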



[...]


   Suppose we could add and emotion that put a positive value on
   running backwards.  Would that add to their overall pleasure in
   life - being able to enjoy something in addition to all the
other
   things they would have naturally enjoyed?  I'd say yes.  In
which
   case it would then be wrong to later remove that emotion
and deny
   them the potential pleasure - assuming of course there are no
   contrary ethical considerations.


So the only problem you see is if we ever add emotion, and
then remove it. The problem doesn't lie in not adding it at
all? Practically, the result is the same.


No, because if we add it and then remove it after the emotion is
experienced there will be a memory of it.  Unfortunately nature
already plays this trick on us.  I can remember that I felt a
strong emotion the first time I kissed a girl - but I can't
experience it now.


 I don't mean we do it to the same entity, I mean to subsequent entities
 (cats or real-life babies). If, before the baby experiences anything, I
 remove an emotion it never used, what difference does it make to the baby?
 The main problem is that it's not the same as other babies, but that's
 trivially resolved by performing the same removal on all babies.
 The same applies to cat-instances; if during one compilation I give it
 emotion, and then I later decide to delete the lines of code that allow
 this, and run the program again, have I infringed on its rights? Does the
 program even have any rights when it's not running?


 I don't think of rights as some abstract thing out there.  They are
 inventions of society, saying that we, as a society, will protect you when you
 want to do these things that you have a *right* to do.  We won't let others
 use force or coercion to prevent you.  So then the question becomes what
 rights it is in society's interest to enforce for a computer program (probably
 none) or for an AI robot (maybe some).

 From this viewpoint the application to babies and cats is straightforward.
  What are the consequences for society and what kind of society do we want

on consciousness levels and ai

2010-01-17 Thread silky
I'm not sure if this question is appropriate here, nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
to implement it to the same degree of computational ability as a human.
But what would happen if we simply tried to re-implement the
consciousness of a cat, or some lesser consciousness, but still
living, entity?

It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it mean
it's okay to power off a real life cat, if we are confident we know
all of its properties? Or is it not the knowing of the properties
that is critical, but the fact that we, specifically, have direct
control over it? Over its internals? (i.e. we can easily remove the
lines of code that give it the desire to 'live'). But wouldn't, then,
the removal of that code be equivalent to killing it? If not, why?
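
As a literal-minded sketch of the toy pet being described (the drives and numbers are invented): the 'desire to live' is just an entry in a table, so removing it really is a one-line change. Whether that removal amounts to killing the pet is exactly the question; the code only shows how trivial the operation is:

pet_drives = {
    "live": 1.0,             # avoid harm, seek food
    "find_mates": 0.6,
    "entertain_self": 0.4,
}

def choose_behaviour(drives):
    # the pet simply pursues its strongest remaining drive
    if not drives:
        return "do nothing"
    return max(drives, key=drives.get)

print(choose_behaviour(pet_drives))   # -> live
del pet_drives["live"]                # the one-line "removal" in question
print(choose_behaviour(pet_drives))   # -> find_mates
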

Apologies if this is too vague or useless; it's just an idea that has
been interesting me.

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20




Re: on consciousness levels and ai

2010-01-17 Thread Brent Meeker

silky wrote:

I'm not sure if this question is appropriate here, nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
and implement to the same degree of computationalness as a human.
But what would happen if we simply tried to re-implement the
consciousness of a cat, or some lesser consciousness, but still
alive, entity.

It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', as desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simply pets.

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it mean
it's okay to power off a real life cat, if we are confident we know
all of it's properties? Or is it not the knowning of the properties
that is critical, but the fact that we, specifically, have direct
control over it? Over its internals? (i.e. we can easily remove the
lines of code that give it the desire to 'live'). But wouldn't, then,
the removal of that code be equivelant to killing it? If not, why?
  


I think the differences are

1) we generally cannot kill an animal without causing it some distress 
2) as long as it is alive it has a capacity for pleasure (that's why we 
euthanize pets when we think they can no longer enjoy any part of life) 
3) if we could create an artificial pet (and Sony did) we can turn it 
off and turn it back on.
4) if a pet, artificial or otherwise, has capacity for pleasure and 
suffering we do have an ethical responsibility toward it.


Brent


Apologies if this is too vague or useless; it's just an idea that has
been interesting me.

  






Re: on consciousness levels and ai

2010-01-17 Thread silky
On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker meeke...@dslextreme.com wrote:
 silky wrote:

 I'm not sure if this question is appropriate here, nevertheless, the
 most direct way to find out is to ask it :)

 Clearly, creating AI on a computer is a goal, and generally we'll try
 and implement to the same degree of computationalness as a human.
 But what would happen if we simply tried to re-implement the
 consciousness of a cat, or some lesser consciousness, but still
 alive, entity.

 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.

 I then wonder, what moral obligations do we owe these programs? Is it
 correct to turn them off? If so, why can't we do the same to a real
 life cat? Is it because we think we've not modelled something
 correctly, or is it because we feel it's acceptable as we've created
 this program, and hence know all its laws? On that basis, does it mean
 it's okay to power off a real life cat, if we are confident we know
 all of it's properties? Or is it not the knowning of the properties
 that is critical, but the fact that we, specifically, have direct
 control over it? Over its internals? (i.e. we can easily remove the
 lines of code that give it the desire to 'live'). But wouldn't, then,
 the removal of that code be equivelant to killing it? If not, why?


 I think the differences are

 1) we generally cannot kill an animal without causing it some distress

Is that because our off function in real life isn't immediate? Or,
as per below, because it cannot get more pleasure?


 2) as
 long as it is alive it has a capacity for pleasure (that's why we euthanize
 pets when we think they can no longer enjoy any part of life)

This is fair. But what if we were able to model this addition of
pleasure in the program? It's easy to increase happiness++, and thus
the desire to die decreases. Is this very simple variable enough to
make us care? Clearly not, but why not? Is it because the animal is
more conscious than we think? Is the answer that it's simply
impossible to model even a cat's consciousness completely?
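
Taking the happiness++ suggestion literally (a sketch with made-up numbers, not a claim about how pleasure could really be modelled): a single counter that, when incremented, pushes a modelled 'desire to die' toward zero. The triviality of the operation is the point - it is hard to see why this one scalar, on its own, would create a moral claim on us:

class SimplePet:
    def __init__(self):
        self.happiness = 0

    def reward(self):
        self.happiness += 1               # the happiness++ in question

    def desire_to_die(self):
        # more happiness, less modelled desire to die (never below zero)
        return max(0.0, 1.0 - 0.1 * self.happiness)

pet = SimplePet()
print(pet.desire_to_die())   # 1.0 before any pleasure
for _ in range(5):
    pet.reward()
print(pet.desire_to_die())   # 0.5 after five increments
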

If we model an animal that only exists to eat/live/reproduce, have we
created any moral responsibility? I don't think our moral
responsibility would start even if we added a very complicated
pleasure-based system into the model. My personal opinion is that it
would be hard to *ever* feel guilty about ending something that you have
created so artificially (i.e. with every action directly predictable
by you, causally). But then, it may be asked: children are the same.
Humour aside, you can pretty much have a general idea of exactly what
they will do, and we created them, so why do we feel so responsible?
(Clearly, an easy answer is that it's chemical.)


 3) if we could
 create an artificial pet (and Sony did) we can turn it off and turn it back
 on.

Let's assume, for the sake of argument, that each instance of the
program is one unique pet, and it will never be re-created or saved.



 4) if a pet, artificial or otherwise, has capacity for pleasure and
 suffering we do have an ethical responsibility toward it.

 Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20





Re: on consciousness levels and ai

2010-01-17 Thread Brent Meeker

silky wrote:

On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker meeke...@dslextreme.com wrote:
  

silky wrote:


I'm not sure if this question is appropriate here, nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
and implement to the same degree of computationalness as a human.
But what would happen if we simply tried to re-implement the
consciousness of a cat, or some lesser consciousness, but still
alive, entity.

It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', as desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simply pets.

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it mean
it's okay to power off a real life cat, if we are confident we know
all of it's properties? Or is it not the knowning of the properties
that is critical, but the fact that we, specifically, have direct
control over it? Over its internals? (i.e. we can easily remove the
lines of code that give it the desire to 'live'). But wouldn't, then,
the removal of that code be equivelant to killing it? If not, why?

  

I think the differences are

1) we generally cannot kill an animal without causing it some distress



Is that because our off function in real life isn't immediate? 

Yes.


Or,
as per below, because it cannot get more pleasure?
  


No, that's why I made it separate.


  

2) as
long as it is alive it has a capacity for pleasure (that's why we euthanize
pets when we think they can no longer enjoy any part of life)



This is fair. But what if we were able to model this addition of
pleasure in the program? It's easy to increase happiness++, and thus
the desire to die decreases. 


I don't think it's as easy as you suppose.  Pleasure comes through 
satisfying desires and it has as many dimensions as there are kinds of 
desires.  An animal that has very limited desires, e.g. to eat and 
reproduce, would not seem to us capable of much pleasure and we would 
kill it without much feeling of guilt - as in swatting a fly.

Is this very simple variable enough to
make us care? Clearly not, but why not? Is it because the animal is
more conscious then we think? Is the answer that it's simply
impossible to model even a cat's consciousness completely?

If we model an animal that only exists to eat/live/reproduce, have we
created any moral responsibility? I don't think our moral
responsibility would start even if we add a very complicated
pleasure-based system into the model. 

I think it would - just as we have ethical feelings toward dogs and tigers.


My personal opinion is that it
would hard to *ever* feel guilty about ending something that you have
created so artificially (i.e. with every action directly predictable
by you, casually). 


Even if the AI were strictly causal, its interaction with the 
environment would very quickly make its actions unpredictable.  And I 
think you are quite wrong about how you would feel.  People report 
feeling guilty about not interacting with the Sony artificial pet.



But then, it may be asked; children are the same.
Humour aside, you can pretty much have a general idea of exactly what
they will do, 


You must not have raised any children.

Brent


and we created them, so why do we feel so responsible?
(Clearly, a easy answer is that it's chemical).


  

3) if we could
create an artificial pet (and Sony did) we can turn it off and turn it back
on.



Lets assume, for the sake of argument, that each instance of the
program is one unique pet, and it will never be re-created or saved.



  

4) if a pet, artificial or otherwise, has capacity for pleasure and
suffering we do have an ethical responsibility toward it.

Brent



  

