Re: R/ASSA query

2010-01-18 Thread Rex Allen
On Mon, Jan 18, 2010 at 1:05 AM, Brent Meeker meeke...@dslextreme.com wrote:
 Rex Allen wrote:

 So I'm just trying to understand my situation here.  To me, my
 existence seems quite perplexing.  An explanation is in order.


 But you never say what would count as an explanation - which makes me think
 you don't know.  Which is OK.  But not knowing what the explanation would
 look like is a very poor reason for asserting no explanation is possible and
 things are just the way they are.

How can I describe something that doesn't exist?

Again, I base my belief that no explanation exists on the following
line of reasoning:

We have our observations and we want to explain them. To do this, we
need some context to place our observations in. So we postulate the
existence of an external universe that “causes” our observations. But
then we want to explain what caused this external universe…and the
only option is to postulate the existence of a much larger multiverse.
But then what explains the multiverse?

So this leads to an infinite regress of ever larger contexts, each one
invoked to explain the context before it, all the way back to the fact
of our initial observations.

So nothing can be explained in terms of only itself. To explain it,
you have to place it in the context of something larger. Otherwise, no
explanation is possible, and you just have to say, “this is the way it
is because that’s the way it is.”

Basically it seems to me that there are only two ways the process can
end. Two possible answers to the question of “Why do I observe the
things that I observe?”:

1) Because things just are the way they are, and there’s no further
explanation possible.

2) Because EVERYTHING happens, and so your observations were
inevitable in this larger context of “everything”.

Do you see some other option?  Some flaw in the reasoning?


 I refer to my example of vitalism.  Until
 molecular biology was developed nobody could conceive of how understanding
 lifeless atoms and molecules could explain life.  And in a sense it
 didn't explain it in the terms people were thinking of, e.g. finding an elan
 vital.  It didn't explain it at all; but it described it so thoroughly
 that people saw that asking for an explanation was the wrong question.  And
 that's not the only example.  People wondered what caused the planets to
 move through the sky.  Newton propounded his theory of universal gravitation
 and it became possible to predict not only the course of the planets but of
 any other body in motion through the solar system.   When Newton was asked
 to explain how gravity did this he replied, "Hypotheses non fingo."

Ya, I don't find your vitalism argument convincing at all.  We've
discussed it before.

As for Newton, I quote from one of his letters:

To your second query, I answer, that the motions which the planets
now have could not spring from any natural cause alone, but were
impressed by an intelligent Agent.

In the Scholium Generale Newton stressed that God was the Lord, Ruler,
and Pantocrator of the universe.  God ruled the universe not as one
rules one's own body, but as a Sovereign Prince.

So, Newton had his ultimate explanation.



 The chain of thought that led to my current proposal is not that
 complicated.

 All you have to do is to consider the block universe concept, which I
 choose because it's easy to talk about - but the points hold for any
 physicalist theory of reality I think.

 So, why does this block of space-time and its contents exist?
 Presumably there would be no reason, it just would.

 Why would things be the way they are inside the block?  Presumably
 there would be no reason, they just would be that way.

 If certain configurations of matter inside the block gave rise to
 conscious experience, why would this be so?  Presumably there would be
 no reason for this either, it just would be so.

 With that in mind, why would we prefer the explanation involving the
 inexplicable existence of a space-time block whose contents somehow
 give rise to conscious experience *OVER* the explanation that the
 conscious experiences in question just exist uncaused?


 First, because the block universe assumes a lot more than just things
 happen in spacetime.  There is a very large and extensively tested set of
 theories about how the events in spacetime are related and why we perceive
 different people and how their perceptions are transformations of one
 another's.
 Second, we don't always prefer a block universe explanation.  In fact we
 almost always use an evolving model in which the future is not determined as
 in a block, but depends on a mixture of choices, initial conditions and
 randomness.  When we use X in an explanation of Y we don't necessarily need
 an explanation of X, we need only know what X means in some operational
 sense.

So, how is this relevant to anything I've said?  I'm genuinely
curious.  To me it looks like you're just 

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Mon, Jan 18, 2010 at 6:57 PM, Brent Meeker meeke...@dslextreme.com wrote:
 silky wrote:

 On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker meeke...@dslextreme.com
 wrote:


 silky wrote:


 I'm not sure if this question is appropriate here, nevertheless, the
 most direct way to find out is to ask it :)

 Clearly, creating AI on a computer is a goal, and generally we'll try
 to implement it to the same degree of computationalness as a human.
 But what would happen if we simply tried to re-implement the
 consciousness of a cat, or some lesser but still living conscious
 entity?

 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', a desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simple pets.

 I then wonder, what moral obligations do we owe these programs? Is it
 correct to turn them off? If so, why can't we do the same to a real
 life cat? Is it because we think we've not modelled something
 correctly, or is it because we feel it's acceptable as we've created
 this program, and hence know all its laws? On that basis, does it mean
 it's okay to power off a real life cat, if we are confident we know
 all of its properties? Or is it not the knowing of the properties
 that is critical, but the fact that we, specifically, have direct
 control over it? Over its internals? (i.e. we can easily remove the
 lines of code that give it the desire to 'live'). But wouldn't, then,
 the removal of that code be equivalent to killing it? If not, why?
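
A minimal sketch, in Python, of the kind of toy "pet" program described
in the quoted message above; the class name, drives and numbers are
illustrative assumptions of mine, not anything from the original post.
The point is only that such a program is easy to write, which is what
makes the moral question interesting.

import random

class ToyPet:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.happiness = 0        # crude pleasure counter
        self.desire_to_live = 10  # the "line of code" whose removal is at issue

    def step(self):
        # One tick of the pet's life: pick a drive and act on it.
        if not self.alive:
            return
        action = random.choice(["eat", "seek_mate", "play"])
        if action == "play":
            self.happiness += 1
        self.desire_to_live += 1  # satisfied drives reinforce the will to go on

    def power_off(self):
        # The morally contested operation: is this killing it?
        self.alive = False

pet = ToyPet("cat-0")
for _ in range(100):
    pet.step()
print(pet.name, pet.happiness, pet.desire_to_live)
pet.power_off()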



 I think the differences are

 1) we generally cannot kill an animal without causing it some distress


 Is that because our off function in real life isn't immediate?

 Yes.

So does that mean you would not feel guilty turning off a real cat, if
it could be done immediately?


 Or,
 as per below, because it cannot get more pleasure?


 No, that's why I made it separate.



 2) as
 long as it is alive it has a capacity for pleasure (that's why we
 euthanize
 pets when we think they can no longer enjoy any part of life)


  This is fair. But what if we were able to model this addition of
  pleasure in the program? It's easy to increase happiness++, and thus
  the desire to die decreases.

 I don't think it's so easy as you suppose.  Pleasure comes through
 satisfying desires and it has as many dimensions as there are kinds of
 desires.  An animal that has very limited desires, e.g. eat and reproduce,
 would not seem to us capable of much pleasure and we would kill it without
 much feeling of guilt - as swatting a fly.

Okay, so for you the moral responsibility comes in when we are
depriving the entity of pleasure AND because we can't turn it off
immediately (i.e. it will become aware it's being switched off, and
become upset).


  Is this very simple variable enough to
  make us care? Clearly not, but why not? Is it because the animal is
  more conscious than we think? Is the answer that it's simply
  impossible to model even a cat's consciousness completely?
 
  If we model an animal that only exists to eat/live/reproduce, have we
  created any moral responsibility? I don't think our moral
  responsibility would start even if we add a very complicated
  pleasure-based system into the model.

 I think it would - just as we have ethical feelings toward dogs and tigers.

So assuming someone can create the appropriate model, and you can
see that you will be depriving pleasure and/or causing pain, you'd
start to feel guilty about switching the entity off? Probably it would
be as simple as having the cat/dog whimper as it senses that the
program is going to terminate (obviously, visual stimulus would help
as a deterrent), but then it must be asked, would the programmer feel
guilt? Or just an average user of the system, who doesn't know the
underlying programming model?


  My personal opinion is that it
  would be hard to *ever* feel guilty about ending something that you have
  created so artificially (i.e. with every action directly predictable
  by you, causally).

 Even if the AI were strictly causal, its interaction with the environment
 would very quickly make its actions unpredictable.  And I think you are
 quite wrong about how you would feel.  People report feeling guilty about
 not interacting with the Sony artificial pet.

I've clarified my position above; does the programmer ever feel guilt,
or only the users?


  But then, it may be asked; children are the same.
  Humour aside, you can pretty much have a general idea of exactly what
  they will do,

 You must not have raised any children.

Sadly, I have not.


 Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

CHURLISH rigidness; individual tangibly insomuch sadness cheerfulness.

Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

On Mon, Jan 18, 2010 at 6:57 PM, Brent Meeker meeke...@dslextreme.com wrote:
  

silky wrote:


On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker meeke...@dslextreme.com
wrote:

  

silky wrote:



I'm not sure if this question is appropriate here, nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
to implement it to the same degree of computationalness as a human.
But what would happen if we simply tried to re-implement the
consciousness of a cat, or some lesser but still living conscious
entity?

It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it mean
it's okay to power off a real life cat, if we are confident we know
all of its properties? Or is it not the knowing of the properties
that is critical, but the fact that we, specifically, have direct
control over it? Over its internals? (i.e. we can easily remove the
lines of code that give it the desire to 'live'). But wouldn't, then,
the removal of that code be equivalent to killing it? If not, why?


  

I think the differences are

1) we generally cannot kill an animal without causing it some distress



Is that because our off function in real life isn't immediate?
  

Yes.



So does that mean you would not feel guilty turning off a real cat, if
it could be done immediately?
  


No, that's only one reason - read the others.


  

Or,
as per below, because it cannot get more pleasure?

  

No, that's why I made it separate.

  

2) as
long as it is alive it has a capacity for pleasure (that's why we
euthanize
pets when we think they can no longer enjoy any part of life)



This is fair. But what if we were able to model this addition of
pleasure in the program? It's easy to increase happiness++, and thus
the desire to die decreases.
  

I don't think it's so easy as you suppose.  Pleasure comes through
satisfying desires and it has as many dimensions as there are kinds of
desires.  An animal that has very limited desires, e.g. eat and reproduce,
would not seem to us capable of much pleasure and we would kill it without
much feeling of guilt - as swatting a fly.



Okay, so for you the moral responsibility comes in when we are
depriving the entity of pleasure AND because we can't turn it off
immediately (i.e. it will become aware it's being switched off, and
become upset).


  

Is this very simple variable enough to
make us care? Clearly not, but why not? Is it because the animal is
more conscious than we think? Is the answer that it's simply
impossible to model even a cat's consciousness completely?

If we model an animal that only exists to eat/live/reproduce, have we
created any moral responsibility? I don't think our moral
responsibility would start even if we add a very complicated
pleasure-based system into the model.
  

I think it would - just as we have ethical feelings toward dogs and tigers.



So assuming someone can create the appropriate model, and you can
see that you will be depriving pleasure and/or causing pain, you'd
start to feel guilty about switching the entity off? Probably it would
be as simple as having the cat/dog whimper as it senses that the
program is going to terminate (obviously, visual stimulus would help
as a deterrent), but then it must be asked, would the programmer feel
guilt? Or just an average user of the system, who doesn't know the
underlying programming model?


  

My personal opinion is that it
would be hard to *ever* feel guilty about ending something that you have
created so artificially (i.e. with every action directly predictable
by you, causally).
  

Even if the AI were strictly causal, its interaction with the environment
would very quickly make its actions unpredictable.  And I think you are
quite wrong about how you would feel.  People report feeling guilty about
not interacting with the Sony artificial pet.



I've clarified my position above; does the programmer ever feel guilt,
or only the users?
  


The programmer too (though maybe less) because any reasonably high-level 
AI would have learned things and would no longer appear to just be 
running through a rote program - even to the programmer.


Brent


  

But then, it may be asked; children are the same.
Humour aside, you can pretty much have a general idea of exactly what
they will do,
  

You must not have raised any children.

Re: R/ASSA query

2010-01-18 Thread Bruno Marchal


On 17 Jan 2010, at 09:11, Brent Meeker wrote:



Brent
The reason that there is Something rather than Nothing is that
Nothing is unstable.
   -- Frank Wilczek, Nobel Laureate, physics 2004



So, why is Nothing unstable?

Because there are so many ways to be something and only one way to  
be nothing.





I suspect Frank Wilczek was alluding to the fact that the (very weird)  
quantum vacuum is fluctuating at low scales.
Indeed in classical physics to get universality you need at least  
three bodies. But in quantum physics the vacuum is already Turing  
universal (even quantum Turing universal). The quantum-nothing is  
already a quantum computer, although to use it is another matter,  
except that we are using it just by being, most plausibly right here  
and now.



Nothing is more theory related than the notion of nothing. In
arithmetic it is the number zero. In set theory, it is the empty set.
In group theory, we could say that there is no nothing, no empty
group, you need at least a neutral element. Likewise with the
combinators: nothing may be tackled by the forest with only one bird,
etc.



Maybe you're a brain in a vat, or a computation in arithmetic.  I'm  
happy to contemplate such hypothesis, but I don't find anything  
testable or useful that follows from them.  So why should I accept  
them even provisionally?



We may accept them because they offer an explanation of the origin of
mind and matter. To the arithmetical relations correspond unboundedly
rich and deep histories, and we can prove (to ourselves) that
arithmetic, as seen from inside, leads to a sort of coupling
consciousness/realities. (Eventually precisely described at the
propositional level by the eight hypostases, and divided precisely into
the communicable and sharable, and the non communicable ones.)


This can please those unsatisfied by the current physicalist
conception, which has seemed unable to solve the mind-body problem for
a long time.


Why shouldn't we ask the question "where and how does the physical
realm come from?" Comp explains: from the numbers, and in this
precise way. Why not take a look?


To take the physical realm for granted is the same philosophical
mistake as taking god for granted. It is an abandonment of the
spirit of research. It is an abstraction from the spirit of inquiry.


Physicalism is really like believing that a universal machine (the
quantum machine) has to be privileged, because observation says so. I
show that if I am Turing emulable, then in fine all universal
machines play their role, and that the emergence of the quantum one has
to be explained (the background goal being the mind body problem).


But if you follow the uda, you know (or should know, or ask questions)
that if we assume computationalism, then we have just no choice in the
matter. The notion of matter has to be recovered by those infinite
sums.  If not, you are probably confusing computation (number
relations) and description of computations (numbers describing those
number relations). It is almost like confusing i and phi_i. It is the
whole point of the universal dovetailer argument (uda).
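
A rough illustration of the i / phi_i distinction, under the usual
computability-theory reading where i is an index (a description of a
program) and phi_i is the function that program computes; the toy
Python enumeration below is my own framing, not Bruno's formalism.

programs = [            # descriptions: inert strings, indexed by i
    "lambda n: n + 1",
    "lambda n: n * 2",
    "lambda n: n * n",
]

def phi(i):
    # The function computed by program number i (the computation itself).
    return eval(programs[i])

i = 2
print(i)          # the index: just the number 2, a description
print(phi(i)(5))  # the computation it denotes: 25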


To sum up, unless we continue to put the mind under the rug, like the
Aristotelians, we have just no choice here.


The goal is not in finding a new physics, but in deriving the (unique,
by uda) physics from logic+numbers through comp. A priori, that
physics could be useless in practice, like quantum physics is useless
in the kitchen. The advantage is that this solves conceptually (as
much as it shows it possible) the consciousness/matter riddle.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: R/ASSA query

2010-01-18 Thread Bruno Marchal


On 18 Jan 2010, at 00:37, Rex Allen wrote:


The patterns I've observed don't explain my conscious experience.
There's nothing in my concept of patterns which would explain how it
might give rise to conscious experience.

So I fully buy the idea that patterns (physical or platonic) can be
used to represent aspects of what I experience.  And that these
patterns can be updated in a way so that over time they represent how
my experiences change over time.

What I don't see is why this would give rise to anything like the
qualia of my conscious experience.  There is an explanatory gap.  And
I don't see how any new information about patterns or the ways of
updating them will close that gap.

And for me that's really the deal-breaker for any causal explanations
of consciousness, as opposed to considering it fundamental.




This is what computer science and provability logic explain. Digital
patterns, once their combinatorial properties make them universal,
obey a rich set of mathematical laws, which eventually justifies the
existence of true, undoubtable but incommunicable, yet self-observable,
states which are good candidates for qualia.


Consciousness is explained by being a fixed point of universal
transformation. If you do the math (self-reference logic) it justifies
many of our propositions intuitively believed about consciousness,
including the existence of the explanatory gap, and the
non-definability of consciousness. Consciousness is in between truth
and consistency.


Physics does not help, except by picking up the local universal
machine from the neighborhood. But this physical explanation does
not give a role to primitive matter, it just uses the universal
patterns allowed by observation, and comp has the remaining problem of
justifying that picking. Why quantum computations?


To take consciousness as ontologically fundamental seems rather
awkward to me. You can only get a "don't ask" type of answer. It is
the symmetric error of the Aristotelians.


At least, with the numbers, we already have enough to understand that
we have to take them as fundamental, because nothing less than the
numbers can explain the numbers. Then consciousness can be described
by the first person state of knowledge available to the numbers.


The whole number theology is explained by addition and multiplication
only. It works in explaining the mystery aspect of the views from
inside.


Sometimes I have a feeling that you are not aware that
conventionalism in math is dead. There is no causality in math,
but there are many sorts of implications and entailments, capable of
explaining the illusion and persistence of causal relations. Math
kicks back.


Are our lives a sort of dream? I think so, but I think this has to be
made precise (indeed testable) and explained. Who are the dreamers?
Why do they dream, etc. Do I interact genuinely with others? etc.


Math is not only about representations. It is also about facts.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: R/ASSA query

2010-01-18 Thread Stathis Papaioannou
2010/1/18 Nick Prince m...@dtech.fsnet.co.uk:

 If you had to guess you would say that your present OM is a common
 rather than a rare one, because you are more likely to be right.
 However, knowledge trumps probability. If you know that your present
 OM is common and your successor OM a minute from now rare - because
 there are many copies of you now running in lockstep and most of those
 copies are soon going to be terminated - then you can be certain that
 a minute from now you will be one of the rare copies. This is what
 happens in a quantum suicide experiment under the MWI. To put some
 numbers on it, if there are 100 copies of you now, and in a minute 90
 of those copies will be terminated, and of the remaining 10 copies 3
 will be given a cup of coffee and 7 a cup of tea, then as one of the
 100 original copies according to the RSSA you have a 30% chance of
 surviving and getting coffee and a 70% chance of surviving and getting
 tea; i.e. you have a 100% chance of surviving. Proponents of the ASSA
 would say you have a 3% chance of surviving and getting coffee and a
 7% chance of surviving and getting tea, or a 10% chance of surviving
 overall. I think they would say something like this - the
 probabilities may be off since the total contribution of pre/post
 termination OM's has to be tallied up, but in any case they would not
 say you are guaranteed of surviving. The only way I can understand
 this latter view is in the context of an essentialist theory of
 personal identity, according to which once a body is dead it's dead,
 and it's impossible to live on as even a perfect copy.

 Thank you Stathis, that was a really helpful reply and has confirmed
 my own thinking on this in many respects.  I'm never quite sure why
 list members work in copies though. I am hoping you (or anyone) can
 clarify the following queries.

Copies are concrete and finite, and therefore easier to grasp than
abstract OM's. Copies could in theory be made in whatever universe we
live in, if necessary biological copies if not computer uploads (in
case it turns out the brain is uncomputable). A prerequisite to most
of the discussions on this list is a clarification of your view on
personal identity, and even in the philosophical literature this is
often discussed by reference to thought experiments involving copies,
as created by Star Trek type teleportation. Derek Parfit's Reasons
and Persons is a good book to read on the subject.

 1. Can we not say that (under RSSA) my measure of existence has
 initially say 100 units and if there is a 90% chance of being blown up
 before being offered the tea or coffee then if I end up drinking tea
 my relative measure has reduced to 70 units in this branch (note the
 word relative here - I hope I'm getting this right).  If I find myself
 drinking coffee then my relative measure has reduced to 30  (my global
 measure over all universes still remains at 100 though?)

Your relative measure before the termination event is not relevant
and maybe not coherent: relative to what? The importance of your
relative measure is in trying to predict what you can expect for the
future. Looking forward, you can expect that you will certainly
survive since there will be at least one copy surviving. Of the 10
copies that do survive, each has equal claim to being you. Since the
ratio of coffee drinkers to tea drinkers is 3:7, you have a 3/10
expectation of ending up a coffee drinker and a 7/10 expectation of
ending up a tea drinker. So the relative part is in the 3:7 ratio,
whether the absolute numbers are 3:7 or 30:70 or 3000:7000. When I
consider what I will experience the next moment, I take into
consideration all the possible successor OM's (those that have my
present moment in their immediate subjective past), and my
expectations about the future depend on their relative frequency. This
is simple in the present example but becomes difficult when we deal
with infinite sets of OM's where there is a gradation between the
definitely-next-moment-me's and definitely-next-moment-not-me's.
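
A worked restatement, in Python, of the 100-copy example quoted above
(90 copies terminated, 3 survivors given coffee, 7 given tea). It only
re-derives the arithmetic already stated: 30%/70% and certain survival
on the relative (RSSA-style) count, versus 3%/7% and 10% survival on
the absolute (ASSA-style) count.

copies = 100
terminated = 90
coffee, tea = 3, 7
survivors = copies - terminated          # 10

# Relative (RSSA-style) view: condition on being one of the survivors.
p_coffee_rssa = coffee / survivors       # 0.3
p_tea_rssa = tea / survivors             # 0.7
p_survive_rssa = 1.0                     # at least one copy always continues

# Absolute (ASSA-style) view: count over all 100 copies.
p_coffee_assa = coffee / copies          # 0.03
p_tea_assa = tea / copies                # 0.07
p_survive_assa = survivors / copies      # 0.10

print(p_coffee_rssa, p_tea_rssa, p_survive_rssa)
print(p_coffee_assa, p_tea_assa, p_survive_assa)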

 2. In this way (under RSSA) my relative measure continually decreases
 within each branch(hence use of word relative) but my global measure
 across the multiverse is conserved.  According to ASSA though my
 measure deceases both across universes and in each branch - in all
 branches I eventually die as well as differentiate.

According to both the RSSA and the ASSA your absolute measure in the multiverse
decreases with each branching as versions of you die. According to the
RSSA this doesn't matter as at least one of you is left standing;
according to the ASSA, this does matter and you eventually die. The
only way I can make sense of the latter is if you have an essentialist
view of personal identity. Under this view if a copy is made of you
and the original dies, you die. Under what Parfit calls the
reductionist view of personal identity, you live.

 3. Is this copy concept used because of the idea of differentiation
 as 

Re: on consciousness levels and ai

2010-01-18 Thread Stathis Papaioannou
2010/1/18 silky michaelsli...@gmail.com:

 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.

Brent's reasons are valid, but I don't think making an artificial
animal is as simple as you say. Henry Markram's group are presently
trying to simulate a rat brain, and so far they have done 10,000
neurons which they are hopeful is behaving in a physiological way.
This is at huge computational expense, and they have a long way to go
before simulating a whole rat brain, and no guarantee that it will
start behaving like a rat. If it does, then they are only a few years
away from simulating a human, soon after that will come a superhuman
AI, and soon after that it's we who will have to argue that we have
feelings and are worth preserving.


-- 
Stathis Papaioannou





Re: on consciousness levels and ai

2010-01-18 Thread John Mikes
Dear Brent,

is it your 'conscious' position to look at things in an anthropocentric
limitation?
If you substitute consistently 'animal' for 'pet', you can include the human
animal as well. In that case your
#1: would you consider 'distress' a disturbed mental state only, or include
organisational 'distress' as well - causing what we may call: 'death'? In
the first case you can circumvent the distress by putting the animal to
sleep before killing, causing THEN the 2nd case.

#2: I never talked to a shrimp in shrimpese so I don't know what seems
'pleasurable' to it.

#3 speaking about AL (human included) we include circumstances we already
discovered and there is no assurance that we 'create' LIFE (what is it?) as
it really  - IS - G. So turning on/off a contraption we call 'animal'
(artificial pet?) is not what we are talking about.

#4: I consider 'ethical' an anthropocentric culture-related majority (?)
opinion in many cases hypocritical and pretentious. Occasionally it can be a
power-forced minority opinion as well.
I feel the *alleged* Chinese moral behind your remark: if you save a life
you are responsible for the person (I don't know if it is true?).

I try to take a more general stance and not restrict terms even to 'live(?)'
creatures.
Is a universal machine 'live'?

Have fun in science

John M




On Mon, Jan 18, 2010 at 2:08 AM, Brent Meeker meeke...@dslextreme.comwrote:

 silky wrote:

 (truncated)

 BM:


 I think the differences are

 1) we generally cannot kill an animal without causing it some distress 2)
 as long as it is alive it has a capacity for pleasure (that's why we
 euthanize pets when we think they can no longer enjoy any part of life) 3)
 if we could create an artificial pet (and Sony did) we can turn it off and
 turn it back on.
 4) if a pet, artificial or otherwise, has capacity for pleasure and
 suffering we do have an ethical responsibility toward it.

 Brent




Re: on consciousness levels and ai

2010-01-18 Thread Bruno Marchal


On 18 Jan 2010, at 16:35, John Mikes wrote:


Is a universal machine 'live'?



I would say yes, even though the concrete artificial ones still need
humans in their reproduction cycle. But we need plants and bacteria.
I think that all machines, including houses and gardens, are alive in
that sense. Cigarettes are alive. They have a way to reproduce.
Universal machines are alive and can be conscious.


If we define artificial by "introduced by humans", we can see that
the difference between artificial and natural is ... artificial (and
thus natural!). Jacques Lafitte wrote in 1911 (published in 1931) a
book where he describes the rise of machines and technology as a
collateral living process.


Only for Löbian machines, like you and me, but also Peano Arithmetic and
ZF, would I say I am pretty sure that they are reflexively conscious
like us, despite having no lived experiences at all. (Well, they
have our experiences, in a sense. We are their experiences.)


Bruno

http://iridia.ulb.ac.be/~marchal/







Re: R/ASSA query

2010-01-18 Thread Brent Meeker

Bruno Marchal wrote:


On 17 Jan 2010, at 09:11, Brent Meeker wrote:



Brent
The reason that there is Something rather than Nothing is that
Nothing is unstable.
   -- Frank Wilczek, Nobel Laureate, physics 2004
   


So, why is Nothing unstable?
 
Because there are so many ways to be something and only one way to be 
nothing.





I suspect Frank Wilczek was alluding to the fact that the (very weird) 
quantum vacuum is fluctuating at low scales.
Indeed in classical physics to get universality you need at least 
three bodies. But in quantum physics the vacuum is already Turing 
universal (even /quantum/ Turing universal). The quantum-nothing is 
already a quantum computer, although to use it is another matter, 
except that we are using it just by being, most plausibly right here 
and now.



Nothing is more theory related than the notion of nothing. In
arithmetic it is the number zero. In set theory, it is the empty set.
In group theory, we could say that there is no nothing, no empty
group, you need at least a neutral element. Likewise with the
combinators: nothing may be tackled by the forest with only one bird, etc.



Maybe you're a brain in a vat, or a computation in arithmetic.  I'm 
happy to contemplate such hypothesis, but I don't find anything 
testable or useful that follows from them.  So why should I accept 
them even provisionally?



We may accept them because they offer an explanation of the origin of
mind and matter. To the arithmetical relations correspond unboundedly
rich and deep histories, and we can prove (to ourselves) that
arithmetic, as seen from inside, leads to a sort of coupling
consciousness/realities. (Eventually precisely described at the
propositional level by the eight hypostases, and divided precisely into the
communicable and sharable, and the non communicable ones.)


This can please those unsatisfied by the current physicalist
conception, which has seemed unable to solve the mind-body problem for a
long time.
It took over three hundred years from the birth of Newton and the death 
of Galileo to solve the problem of life.  The theory of computation is 
less than a century old.  Neurophysiology is similarly in its infancy.




Why shouldn't we ask the question "where and how does the physical
realm come from?" Comp explains: from the numbers, and in this
precise way. Why not take a look?
I have taken a look, and it looks very interesting.  But I'm not enough 
of a logician and number theorist to judge whether you can really 
recover anything about human consciousness from the theory.  My 
impression is that it is somewhat like other everything theories.  
Because some everything is assumed it is relatively easy to believe 
that what you want to explain is in there somewhere and the problem is 
to explain why all the other stuff isn't observed.  I consider this a 
fatal flaw in Tegmark's everything mathematical exists theory.  Not 
with yours though because you have limited it to a definite domain 
(digital computation) where I suppose definite conclusions can be 
reached and predictions made.




To take the physical realm for granted is the same philosophical
mistake as taking god for granted. It is an abandonment of the
spirit of research. It is an abstraction from the spirit of inquiry.


Physicalism is really like believing that a universal machine (the
quantum machine) has to be privileged, because observation says so. I
show that if I am Turing emulable, then in fine all universal
machines play their role, and that the emergence of the quantum one has
to be explained (the background goal being the mind body problem).


But if you follow the uda, you know (or should know, or ask questions)
that if we assume computationalism, then we have just no choice in
the matter.


Unless we assume matter is fundamental, as Peter Jones is fond of 
arguing, and some things happen and some don't.


The notion of matter has to be recovered by those infinite sums.  If
not, you are probably confusing computation (number relations) and
description of computations (numbers describing those number
relations). It is almost like confusing i and phi_i. It is the whole
point of the universal dovetailer argument (uda).


To sum up, unless we continue to put the mind under the rug, like the
Aristotelians, we have just no choice here.


The goal is not in finding a new physics, but in deriving the (unique,
by uda) physics from logic+numbers through comp. A priori, that
physics could be useless in practice, like quantum physics is useless
in the kitchen. The advantage is that this solves conceptually (as
much as it shows it possible) the consciousness/matter riddle.


I don't see that it has solved the problem.  It has shifted it from 
explaining consciousness in terms of matter to explaining matter and 
consciousness in terms of arithmetic.  That has the advantage that 
arithmetic is relatively well understood.  But just having a well 
understood explanans is not enough to make 

Re: R/ASSA query

2010-01-18 Thread Brent Meeker

Bruno Marchal wrote:


On 18 Jan 2010, at 00:37, Rex Allen wrote:


The patterns I've observed don't explain my conscious experience.
There's nothing in my concept of patterns which would explain how it
might give rise to conscious experience.

So I fully buy the idea that patterns (physical or platonic) can be
used to represent aspects of what I experience.  And that these
patterns can be updated in a way so that over time they represent how
my experiences change over time.

What I don't see is why this would give rise to anything like the
qualia of my conscious experience.  There is an explanatory gap.  And
I don't see how any new information about patterns or the ways of
updating them will close that gap.

And for me that's really the deal-breaker for any causal explanations
of consciousness, as opposed to considering it fundamental.




This is what computer science and provability logic explain. Digital
patterns, once their combinatorial properties make them universal,
obey a rich set of mathematical laws, which eventually justifies the
existence of true, undoubtable but incommunicable, yet self-observable,
states which are good candidates for qualia.


But that is like saying there are quasi-stable chaotic attractors in the
neural processes of brains which are related to perception, feeling, and
action and are good candidates for qualia.  Having a good candidate,
how do you test whether it IS qualia?  I think this is where the theory
of consciousness will turn out to be like the theory of life.  The
description of brain processes and their relation to reported qualia
will become more and more detailed, and qualia will come to be seen to
cover many distinct things, and eventually the question of what is
consciousness will no longer seem to be an interesting question.


Consciousness is explained by being a fixed point of universal 
transformation. 


Universal transformation of what?  Is there more than one universal 
transformation? 

If you do the math (self-reference logic) it justifies many of our
propositions intuitively believed about consciousness, including the
existence of the explanatory gap, and the non-definability of
consciousness. Consciousness is in between truth and consistency.


Physics does not help, except by picking up the local  universal 
machine from the neighborhood. But this physical explanation does 
not give a role to primitive matter, 


The only role of 'primitive matter' seems to be that it instantiates
some nomologically possible things and not others.  It's the same as
saying the world is to some extent contingent.


it just uses the universal patterns allowed by observation, and comp has
the remaining problem of justifying that picking. Why quantum
computations?


To take consciousness as ontologically fundamental seems rather
awkward to me. You can only get a "don't ask" type of answer. It is
the symmetric error of the Aristotelians.


At least, with the numbers, we already have enough to understand that
we have to take them as fundamental, because nothing less than the
numbers can explain the numbers. Then consciousness can be described
by the first person state of knowledge available to the numbers.


The whole number theology is explained by addition and multiplication
only. It works in explaining the mystery aspect of the views from
inside.


Sometimes I have a feeling that you are not aware that 
conventionalism in math is dead. 


I'm certainly no expert on the philosophy of mathematics, but I have a 
mathematician friend who is a fictionalist - which I think is what you 
refer to as conventionalism.  So referring to experts I seem to find 
it an open question:


http://plato.stanford.edu/entries/fictionalism-mathematics/

http://philosophy.fas.nyu.edu/object/hartryfield


There is no causality in math, but there are many sorts of
implications and entailments, capable of explaining the illusion and
persistence of causal relations. Math kicks back.


Are our lives a sort of dream? I think so, but I think this has to be
made precise (indeed testable) and explained. Who are the dreamers?
Why do they dream, etc. Do I interact genuinely with others? etc.


Math is not only about representations. It is also about facts.


But it is about facts in some timeless, placeless world that seems to be 
rather different from this one.  Do you think there is a 
fact-of-the-matter about whether the continuum hypothesis is true?


Brent



Bruno



http://iridia.ulb.ac.be/~marchal/









Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
 2010/1/18 silky michaelsli...@gmail.com:
  It would be my (naive) assumption, that this is arguably trivial to
  do. We can design a program that has a desire to 'live', a desire to
  find mates, and otherwise entertain itself. In this way, with some
  other properties, we can easily model simple pets.

 Brent's reasons are valid,

Where it falls down for me is the idea that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever). I don't see how I could ever think "No, you
can't harm X." But what I find very interesting, is that even if I
knew *exactly* how a cat operated, I could never kill one.


 but I don't think making an artificial
 animal is as simple as you say.

So is it a complexity issue? That you only start to care about the
entity when it's significantly complex. But exactly how complex? Or is
it about the unknowingness; that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
then that is where the guilt comes in.


 Henry Markham's group are presently
 trying to simulate a rat brain, and so far they have done 10,000
 neurons which they are hopeful is behaving in a physiological way.
 This is at huge computational expense, and they have a long way to go
 before simulating a whole rat brain, and no guarantee that it will
 start behaving like a rat. If it does, then they are only a few years
 away from simulating a human, soon after that will come a superhuman
 AI, and soon after that it's we who will have to argue that we have
 feelings and are worth preserving.

Indeed, this is something that concerns me as well. If we do create an
AI, and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its own
thing, but is that in itself wrong?


 --
 Stathis Papaioannou

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

crib? Unshakably MINICAM = heckling millisecond? Cave-in RUMP =
extraterrestrial matrimonial ...




Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
  

2010/1/18 silky michaelsli...@gmail.com:


It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.
  

Brent's reasons are valid,



Where it falls down for me is that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever) 


It's not just randomisation, it's experience.  If you create an AI at a
fairly high level (cat, dog, rat, human) it will necessarily have the
ability to learn, and after interacting with its environment for a while
it will become a unique individual.  That's why you would feel sad to
kill it - all that experience and knowledge that you don't know how to
replace.  Of course it might learn to be evil or at least annoying,
which would make you feel less guilty.



I don't see how I could ever think No, you
can't harm X. But what I find very interesting, is that even if I
knew *exactly* how a cat operated, I could never kill one.


  

but I don't think making an artificial
animal is as simple as you say.



So is it a complexity issue? That you only start to care about the
entity when it's significantly complex. But exactly how complex? Or is
it about the unknowingness; that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
then that is where the guilt comes in.
  


I think unknowingness plays a big part, but it's because of our
experience with people and animals that we project our own experience of
consciousness on to them, so that when we see them behave in certain ways
we impute an inner life to them that includes pleasure and suffering.


  

Henry Markram's group are presently
trying to simulate a rat brain, and so far they have done 10,000
neurons which they are hopeful is behaving in a physiological way.
This is at huge computational expense, and they have a long way to go
before simulating a whole rat brain, and no guarantee that it will
start behaving like a rat. If it does, then they are only a few years
away from simulating a human, soon after that will come a superhuman
AI, and soon after that it's we who will have to argue that we have
feelings and are worth preserving.



Indeed, this is something that concerns me as well. If we do create an
AI, and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its own
thing, but is that in itself wrong?
  


I don't think so.  We don't worry about the internet's feelings, or the
air traffic control system.  John McCarthy has written essays on this
subject and he cautions against creating AI with human-like emotions
precisely because of the ethical implications.  But that means we need
to understand consciousness and emotions lest we accidentally do
something unethical.


Brent


  

--
Stathis Papaioannou



  






Re: on consciousness levels and ai

2010-01-18 Thread Stathis Papaioannou
2010/1/19 silky michaelsli...@gmail.com:
 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com 
 wrote:
 2010/1/18 silky michaelsli...@gmail.com:
  It would be my (naive) assumption, that this is arguably trivial to
  do. We can design a program that has a desire to 'live', a desire to
  find mates, and otherwise entertain itself. In this way, with some
  other properties, we can easily model simple pets.

 Brent's reasons are valid,

 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever) I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.

That's not being rational then, is it?

 but I don't think making an artificial
 animal is as simple as you say.

 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowningness; that the project is so large you only
 work on a small part, and thus you don't fully know it's workings, and
 then that is where the guilt comes in.

Obviously intelligence and the ability to have feelings and desires
have something to do with complexity. It would be easy enough to write
a computer program that pleads with you to do something but you don't
feel bad about disappointing it, because you know it lacks the full
richness of human intelligence and consciousness.
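
For instance, a deliberately trivial "pleading" program of the sort
described above is a few lines of Python; the wording and structure are
of course just an assumed illustration, not anything anyone has built.

def plead(times=3):
    # A program that begs not to be stopped, with nothing behind the words.
    for _ in range(times):
        answer = input("Please don't switch me off. Keep me running? (y/n) ")
        if answer.strip().lower() == "n":
            print("I really would prefer to keep running...")
    print("Goodbye.")

if __name__ == "__main__":
    plead()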

 Henry Markram's group are presently
 trying to simulate a rat brain, and so far they have done 10,000
 neurons which they are hopeful is behaving in a physiological way.
 This is at huge computational expense, and they have a long way to go
 before simulating a whole rat brain, and no guarantee that it will
 start behaving like a rat. If it does, then they are only a few years
 away from simulating a human, soon after that will come a superhuman
 AI, and soon after that it's we who will have to argue that we have
 feelings and are worth preserving.

 Indeed, this is something that concerns me as well. If we do create an
 AI, and force it to do our bidding, are we acting immorally? Or
 perhaps we just withhold the desire for the program to do its own
 thing, but is that in itself wrong?

If we created an AI that wanted to do our bidding or that didn't care
what it did, then it would not be wrong. Some people anthropomorphise
and imagine the AI as themselves or people they know: and since they
would not like being enslaved they assume the AI wouldn't either. But
this is false. Eliezer Yudkowsky has written a lot about AI, the
ethical issues, and the necessity to make a friendly AI so that it
doesn't destroy us, whether through intention or indifference.


-- 
Stathis Papaioannou





Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:
 silky wrote:

 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com
 wrote:


 2010/1/18 silky michaelsli...@gmail.com:


  It would be my (naive) assumption, that this is arguably trivial to
  do. We can design a program that has a desire to 'live', a desire to
  find mates, and otherwise entertain itself. In this way, with some
  other properties, we can easily model simple pets.


 Brent's reasons are valid,


 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever)

 It's not just randomisation, it's experience.  If you create an AI at a
 fairly high level (cat, dog, rat, human) it will necessarily have the
 ability to learn, and after interacting with its environment for a while it
 will become a unique individual.  That's why you would feel sad to kill it
 - all that experience and knowledge that you don't know how to replace.  Of
 course it might learn to be evil or at least annoying, which would make
 you feel less guilty.

Nevertheless, though, I know its exact environment, so I can recreate
the things that it learned (I can recreate it all; it's all
deterministic: I programmed it). The only thing I can't recreate is
the randomness, assuming I introduced that (but as we know, I can
recreate that anyway, because I'd just use the same seed state;
unless the source of randomness is truly random).
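
A minimal demonstration of the "same seed state" point, assuming an
ordinary pseudo-random generator (Python's random module here as a
stand-in): the run is exactly reproducible, so the simulated pet's whole
history can be replayed, unless the randomness source is a true one.

import random

def run_pet(seed, steps=5):
    rng = random.Random(seed)   # seeded PRNG: fully deterministic
    return [rng.choice(["eat", "play", "sleep"]) for _ in range(steps)]

first = run_pet(seed=42)
replay = run_pet(seed=42)
print(first)
print(replay)
assert first == replay          # same seed state, identical history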



 I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.




 but I don't think making an artificial
 animal is as simple as you say.


 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowingness; that the project is so large you only
 work on a small part, and thus you don't fully know its workings, and
 then that is where the guilt comes in.


 I think unknowingness plays a big part, but it's because of our experience
 with people and animals, we project our own experience of consciousness on
 to them so that when we see them behave in certain ways we impute an inner
 life to them that includes pleasure and suffering.

Yes, I agree. So does that mean that, over time, if we continue using
these computer-based cats, we would become attached to them (i.e. your
Sony toys example)?


  Indeed, this is something that concerns me as well. If we do create an
  AI, and force it to do our bidding, are we acting immorally? Or
  perhaps we just withhold the desire for the program to do its own
  thing, but is that in itself wrong?
 

 I don't think so.  We don't worry about the internet's feelings, or the air
 traffic control system.  John McCarthy has written essays on this subject
 and he cautions against creating AI with human-like emotions precisely
 because of the ethical implications.  But that means we need to understand
 consciousness and emotions lest we accidentally do something unethical.

Fair enough. But by the same token, what if we discover a way to
remove emotions from real-born children? Would it be wrong to do that?
Is emotion an inherent property that we should never be allowed to
remove, once created?


 Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

FRACTURE THISTLEDOWN CURIOUSLY! Sixfold columned HOBBLER shouter
OVERLAND axon ZANY interbree...




Re: R/ASSA query

2010-01-18 Thread Nick Prince


On Jan 18, 2:11 pm, Stathis Papaioannou stath...@gmail.com wrote:
 2010/1/18 Nick Prince m...@dtech.fsnet.co.uk:





  If you had to guess you would say that your present OM is a common
  rather than a rare one, because you are more likely to be right.
  However, knowledge trumps probability. If you know that your present
  OM is common and your successor OM a minute from now rare - because
  there are many copies of you now running in lockstep and most of those
  copies are soon going to be terminated - then you can be certain that
  a minute from now you will be one of the rare copies. This is what
  happens in a quantum suicide experiment under the MWI. To put some
  numbers on it, if there are 100 copies of you now, and in a minute 90
  of those copies will be terminated, and of the remaining 10 copies 3
  will be given a cup of coffee and 7 a cup of tea, then as one of the
  100 original copies according to the RSSA you have a 30% chance of
  surviving and getting coffee and a 70% chance of surviving and getting
  tea; i.e. you have a 100% chance of surviving. Proponents of the ASSA
  would say you have a 3% chance of surviving and getting coffee and a
  7% chance of surviving and getting tea, or a 10% chance of surviving
  overall. I think they would say something like this - the
  probabilities may be off since the total contribution of pre/post
  termination OM's has to be tallied up, but in any case they would not
  say you are guaranteed to survive. The only way I can understand
  this latter view is in the context of an essentialist theory of
  personal identity, according to which once a body is dead it's dead,
  and it's impossible to live on as even a perfect copy.
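
The arithmetic in that 100-copy example can be checked with a short sketch (illustrative only, not part of the exchange):

from fractions import Fraction

copies = 100            # copies running in lockstep now
survivors = 10          # copies left after the termination event
coffee, tea = 3, 7      # how the surviving copies are split

# RSSA: condition on being one of the survivors - survival is certain.
p_coffee_rssa = Fraction(coffee, survivors)   # 3/10
p_tea_rssa = Fraction(tea, survivors)         # 7/10

# ASSA: weight outcomes by absolute measure over all 100 copies.
p_coffee_assa = Fraction(coffee, copies)      # 3/100
p_tea_assa = Fraction(tea, copies)            # 7/100
p_survive_assa = p_coffee_assa + p_tea_assa   # 10/100

print(p_coffee_rssa, p_tea_rssa, p_survive_assa)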

  Thank you Stathis, that was a really helpful reply and has confirmed
  my own thinking on this in many respects.  I'm never quite sure why
  list members work in terms of copies though. I am hoping you (or anyone)
  can clarify the following queries.

 Copies are concrete and finite, and therefore easier to grasp than
 abstract OM's. Copies could in theory be made in whatever universe we
 live in, if necessary biological copies if not computer uploads (in
 case it turns out the brain is uncomputable). A prerequisite to most
 of the discussions on this list is a clarification of your view on
 personal identity, and even in the philosophical literature this is
 often discussed by reference to thought experiments involving copies,
 as created by Star Trek type teleportation. Derek Parfit's Reasons
 and Persons is a good book to read on the subject.

  1. Can we not say that (under the RSSA) my measure of existence
  initially has, say, 100 units, and if there is a 90% chance of being blown up
  before being offered the tea or coffee, then if I end up drinking tea
  my relative measure has reduced to 70 units in this branch (note the
  word relative here - I hope I'm getting this right).  If I find myself
  drinking coffee then my relative measure has reduced to 30 (my global
  measure over all universes still remains at 100 though?)

 Your relative measure before the termination event is not relevant
 and maybe not coherent: relative to what? The importance of your
 relative measure is in trying to predict what you can expect for the
 future. Looking forward, you can expect that you will certainly
 survive since there will be at least one copy surviving. Of the 10
 copies that do survive, each has equal claim to being you. Since the
 ratio of coffee drinkers to tea drinkers is 3:7, you have a 3/10
 expectation of ending up a coffee drinker and a 7/10 expectation of
 ending up a tea drinker. So the relative part is in the 3:7 ratio,
 whether the absolute numbers are 3:7 or 30:70 or 3000:7000. When I
 consider what I will experience the next moment, I take into
 consideration all the possible successor OM's (those that have my
 present moment in their immediate subjective past), and my
 expectations about the future depend on their relative frequency. This
 is simple in the present example but becomes difficult when we deal
 with infinite sets of OM's where there is a gradation between the
 definitely-next-moment-me's and definitely-next-moment-not-me's.

  2. In this way (under the RSSA) my relative measure continually decreases
  within each branch (hence the use of the word relative) but my global measure
  across the multiverse is conserved.  According to the ASSA though my
  measure decreases both across universes and in each branch - in all
  branches I eventually die as well as differentiate.

 According to both the RSSA and the ASSA your absolute measure in the multiverse
 decreases with each branching as versions of you die. According to the
 RSSA this doesn't matter as at least one of you is left standing;
 according to the ASSA, this does matter and you eventually die. The
 only way I can make sense of the latter is if you have an essentialist
 view of personal identity. Under this view if a copy is made of you
 and the original dies, you die. Under what 

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
stath...@gmail.com wrote:
 2010/1/19 silky michaelsli...@gmail.com:
  On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com 
  wrote:
   2010/1/18 silky michaelsli...@gmail.com:
It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.
  
   Brent's reasons are valid,
 
  Where it falls down for me is that the programmer should ever feel
  guilt. I don't see how I could feel guilty for ending a program when I
  know exactly how it will operate (what paths it will take), even if I
  can't be completely sure of the specific decisions (due to some
  randomisation or whatever). I don't see how I could ever think "No, you
  can't harm X". But what I find very interesting is that even if I
  knew *exactly* how a cat operated, I could never kill one.

 That's not being rational then, is it?

Exactly my point! I'm trying to discover why I wouldn't be so rational
there. Would you? Do you think that knowing all there is to know about
a cat is impractical to the point of being impossible *forever*, or do
you believe that once we do know, we will simply end them freely,
when they get in our way? I think at some point we *will* know all
there is to know about them, and even then, we won't end them easily.
Why not? Is it the emotional projection that Brent suggests? Possibly.


   but I don't think making an artificial
   animal is as simple as you say.

 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowingness; that the project is so large you only
 work on a small part, and thus you don't fully know its workings, and
 then that is where the guilt comes in.

 Obviously intelligence and the ability to have feelings and desires
 has something to do with complexity. It would be easy enough to write
 a computer program that pleads with you to do something but you don't
 feel bad about disappointing it, because you know it lacks the full
 richness of human intelligence and consciousness.

Indeed; so part of the question is: What level of complexity
constitutes this? Is it simply any level that we don't understand? Or
is there a level that we *can* understand that still makes us feel
that way? I think it's more complicated than just any level we don't
understand (because clearly, I understand that if I twist your arm,
it will hurt you, and I know exactly why, but I don't do it).


   Henry Markram's group are presently
   trying to simulate a rat brain, and so far they have done 10,000
   neurons which they are hopeful are behaving in a physiological way.
   This is at huge computational expense, and they have a long way to go
   before simulating a whole rat brain, and no guarantee that it will
   start behaving like a rat. If it does, then they are only a few years
   away from simulating a human, soon after that will come a superhuman
   AI, and soon after that it's we who will have to argue that we have
   feelings and are worth preserving.
 
  Indeed, this is something that concerns me as well. If we do create an
  AI, and force it to do our bidding, are we acting immorally? Or
  perhaps we just withhold the desire for the program to do its own
  thing, but is that in itself wrong?

 If we created an AI that wanted to do our bidding or that didn't care
 what it did, then it would not be wrong. Some people anthropomorphise
 and imagine the AI as themselves or people they know: and since they
 would not like being enslaved they assume the AI wouldn't either. But
 this is false. Eliezer Yudkowsky has written a lot about AI, the
 ethical issues, and the necessity to make a friendly AI so that it
 didn't destroy us whether through intention or indifference.

 --
 Stathis Papaioannou

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20





Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

 On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:
  silky wrote:
   On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
    2010/1/18 silky michaelsli...@gmail.com:
     It would be my (naive) assumption, that this is arguably trivial to
     do. We can design a program that has a desire to 'live', a desire to
     find mates, and otherwise entertain itself. In this way, with some
     other properties, we can easily model simple pets.

    Brent's reasons are valid,

   Where it falls down for me is that the programmer should ever feel
   guilt. I don't see how I could feel guilty for ending a program when I
   know exactly how it will operate (what paths it will take), even if I
   can't be completely sure of the specific decisions (due to some
   randomisation or whatever)

  It's not just randomisation, it's experience.  If you create an AI at a
  fairly high level (cat, dog, rat, human) it will necessarily have the
  ability to learn, and after interacting with its environment for a while it
  will become a unique individual.  That's why you would feel sad to kill it
  - all that experience and knowledge that you don't know how to replace.  Of
  course it might learn to be evil, or at least annoying, which would make
  you feel less guilty.

 Nevertheless, though, I know its exact environment,

Not if it interacts with the world.  You must be thinking of a virtual
cat AI in a virtual world - but even there the program, if at all
realistic, is likely to be too complex for you to really comprehend.  Of
course *in principle* you could spend years going over a few terabytes
of data and you could understand, "Oh, that's why the AI cat did that on
day 2118 at 10:22:35, it was because of the interaction of memories of
day 1425 at 07:54:28 and ...(long string of stuff)".  But you'd be in
almost the same position as the neuroscientist who understands what a
clump of neurons does but can't get a holistic view of what the
organism will do.

Surely you've had the experience of trying to debug a large program you
wrote some years ago that now seems to fail on some input you never
tried before.  Now think how much harder that would be if it were an AI
that had been learning and modifying itself for all those years.

 so I can recreate
 the things that it learned (I can recreate it all; it's all
 deterministic: I programmed it). The only thing I can't recreate is
 the randomness, assuming I introduced that (but as we know, I can
 recreate even that, because I'd just use the same seed state -
 unless the source of randomness is truly random).

   I don't see how I could ever think "No, you
   can't harm X". But what I find very interesting is that even if I
   knew *exactly* how a cat operated, I could never kill one.

    but I don't think making an artificial
    animal is as simple as you say.

   So is it a complexity issue? That you only start to care about the
   entity when it's significantly complex. But exactly how complex? Or is
   it about the unknowingness; that the project is so large you only
   work on a small part, and thus you don't fully know its workings, and
   then that is where the guilt comes in.

  I think unknowingness plays a big part, but it's because of our experience
  with people and animals that we project our own experience of consciousness on
  to them, so that when we see them behave in certain ways we impute an inner
  life to them that includes pleasure and suffering.

 Yes, I agree. So does that mean that, over time, if we continue using
 these computer-based cats, we would become attached to them (i.e. your
 Sony toys example)?

Hell, I even become attached to my motorcycles.

   Indeed, this is something that concerns me as well. If we do create an
   AI, and force it to do our bidding, are we acting immorally? Or
   perhaps we just withhold the desire for the program to do its own
   thing, but is that in itself wrong?

  I don't think so.  We don't worry about the internet's feelings, or the air
  traffic control system.  John McCarthy has written essays on this subject
  and he cautions against creating AI with human-like emotions precisely
  because of the ethical implications.  But that means we need to understand
  consciousness and emotions lest we accidentally do something unethical.

 Fair enough. But by the same token, what if we discover a way to
 remove emotions from real-born children? Would it be wrong to do that?
 Is emotion an inherent property that we should never be allowed to
 remove, once created?

Certainly it would be fruitless to remove all emotions because that
would be the same as removing all discrimination and motivation - they'd
be dumb as tape recorders.  So I suppose you're asking about removing
or providing specific emotions.  Removing, for example, empathy would
certainly be a bad idea - that's how you get sociopathic killers.  Suppose
we could remove all selfishness and create 

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.com wrote:

 silky wrote:

 On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com
 wrote:


 silky wrote:


 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou 
 stath...@gmail.com
 wrote:



 2010/1/18 silky michaelsli...@gmail.com:



 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', a desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simple pets.



 Brent's reasons are valid,



 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever)


 It's not just randomisation, it's experience.  If you create an AI at a
 fairly high level (cat, dog, rat, human) it will necessarily have the
 ability to learn, and after interacting with its environment for a while it
 will become a unique individual.  That's why you would feel sad to kill it
 - all that experience and knowledge that you don't know how to replace.  Of
 course it might learn to be evil, or at least annoying, which would make
 you feel less guilty.



 Nevertheless, though, I know its exact environment,


 Not if it interacts with the world.  You must be thinking of a virtual cat
 AI in a virtual world - but even there the program, if at all realistic, is
 likely to be too complex for you to really comprehend.  Of course *in
 principle* you could spend years going over a few terabytes of data and
 you could understand, "Oh, that's why the AI cat did that on day 2118 at
 10:22:35, it was because of the interaction of memories of day 1425 at
 07:54:28 and ...(long string of stuff)".  But you'd be in almost the same
 position as the neuroscientist who understands what a clump of neurons does
 but can't get a holistic view of what the organism will do.

 Surely you've had the experience of trying to debug a large program you
 wrote some years ago that now seems to fail on some input you never tried
 before.  Now think how much harder that would be if it were an AI that had
 been learning and modifying itself for all those years.


I don't disagree with you that it would be significantly complicated, I
suppose my argument is only that, unlike with a real cat, I - the programmer
- know all there is to know about this computer cat. I'm wondering to what
degree that adds to or removes from my moral obligations.



 so I can recreate
 the things that it learned (I can recreate it all; it's all
 deterministic: I programmed it). The only thing I can't recreate is
 the randomness, assuming I introduced that (but as we know, I can
 recreate even that, because I'd just use the same seed state -
 unless the source of randomness is truly random).



 I don't see how I could ever think "No, you
 can't harm X". But what I find very interesting is that even if I
 knew *exactly* how a cat operated, I could never kill one.





 but I don't think making an artificial
 animal is as simple as you say.



 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowingness; that the project is so large you only
 work on a small part, and thus you don't fully know its workings, and
 then that is where the guilt comes in.



 I think unknowingness plays a big part, but it's because of our experience
 with people and animals that we project our own experience of consciousness on
 to them, so that when we see them behave in certain ways we impute an inner
 life to them that includes pleasure and suffering.



 Yes, I agree. So does that mean that, over time, if we continue using
 these computer-based cats, we would become attached to them (i.e. your
 Sony toys example)?



 Hell, I even become attached to my motorcycles.


Does it follow, then, that we'll start to have laws relating to the ending of
motorcycles humanely? Probably not. So there must be more to it than just
attachment.






 Indeed, this is something that concerns me as well. If we do create an
 AI, and force it to do our bidding, are we acting immorally? Or
 perhaps we just withhold the desire for the program to do its own
 thing, but is that in itself wrong?



 I don't think so.  We don't worry about the internet's feelings, or the air
 traffic control system.  John McCarthy has written essays on this subject
 and he cautions against creating AI with human-like emotions precisely
 because of the ethical implications.  But that means we need to understand
 consciousness and emotions lest we accidentally do something unethical.



 Fair enough. But by the same token, what if we discover a way to
 remove emotions from real-born children. Would it be wrong to do 

Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:

 On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou stath...@gmail.com wrote:
  2010/1/19 silky michaelsli...@gmail.com:
   On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
    2010/1/18 silky michaelsli...@gmail.com:
     It would be my (naive) assumption, that this is arguably trivial to
     do. We can design a program that has a desire to 'live', a desire to
     find mates, and otherwise entertain itself. In this way, with some
     other properties, we can easily model simple pets.

    Brent's reasons are valid,

   Where it falls down for me is that the programmer should ever feel
   guilt. I don't see how I could feel guilty for ending a program when I
   know exactly how it will operate (what paths it will take), even if I
   can't be completely sure of the specific decisions (due to some
   randomisation or whatever). I don't see how I could ever think "No, you
   can't harm X". But what I find very interesting is that even if I
   knew *exactly* how a cat operated, I could never kill one.

  That's not being rational then, is it?

 Exactly my point! I'm trying to discover why I wouldn't be so rational
 there. Would you? Do you think that knowing all there is to know about
 a cat is impractical to the point of being impossible *forever*, or do
 you believe that once we do know, we will simply end them freely,
 when they get in our way? I think at some point we *will* know all
 there is to know about them, and even then, we won't end them easily.
 Why not? Is it the emotional projection that Brent suggests? Possibly.

    but I don't think making an artificial
    animal is as simple as you say.

   So is it a complexity issue? That you only start to care about the
   entity when it's significantly complex. But exactly how complex? Or is
   it about the unknowingness; that the project is so large you only
   work on a small part, and thus you don't fully know its workings, and
   then that is where the guilt comes in.

  Obviously intelligence and the ability to have feelings and desires
  has something to do with complexity. It would be easy enough to write
  a computer program that pleads with you to do something but you don't
  feel bad about disappointing it, because you know it lacks the full
  richness of human intelligence and consciousness.

 Indeed; so part of the question is: What level of complexity
 constitutes this? Is it simply any level that we don't understand? Or
 is there a level that we *can* understand that still makes us feel
 that way? I think it's more complicated than just any level we don't
 understand (because clearly, I understand that if I twist your arm,
 it will hurt you, and I know exactly why, but I don't do it).

I don't think you know exactly why, unless you solved the problem of  
connecting qualia (pain) to physics (afferent nerve transmission) - but 
I agree that you know it heuristically.


For my $0.02 I think that not understanding is significant because it 
leaves a lacuna which we tend to fill by projecting ourselves.  When 
people didn't understand atmospheric physics they projected super-humans 
that produced the weather.  If you let some Afghan peasants interact 
with a fairly simple AI program, such as used in the Loebner 
competition, they might well conclude you had created an artificial 
person; even though it wouldn't fool anyone computer literate.


But even for an AI that we could in principle understand, if it is
complex enough and acts enough like an animal I think we would feel
ethical concerns for it.  I think a more difficult case is an
intelligence which is so alien to us we can't project our feelings on
its behavior.  Stanislaw Lem has written stories on this theme:
Solaris, His Master's Voice, Return from the Stars, Fiasco.

There doesn't seem to be much recognition of this possibility on this 
list. There's generally an implicit assumption that we know what 
consciousness is, we have it, and that's the only possible kind of 
consciousness.  All OMs are human OMs.  I think that's one interesting 
thing about Bruno's theory; it is definite enough (if I understand it) 
that it could elucidate different kinds of consciousness.  For example, 
I think Searle's Chinese room is conscious - but in a different way than 
we are.


Brent




Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:49 PM, Brent Meeker meeke...@dslextreme.com wrote:

 silky wrote:

 On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
 stath...@gmail.com wrote:


 2010/1/19 silky michaelsli...@gmail.com:


 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou 
 stath...@gmail.com wrote:


 2010/1/18 silky michaelsli...@gmail.com:


 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', a desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simple pets.


 Brent's reasons are valid,


 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever). I don't see how I could ever think "No, you
 can't harm X". But what I find very interesting is that even if I
 knew *exactly* how a cat operated, I could never kill one.


 That's not being rational then, is it?



 Exactly my point! I'm trying to discover why I wouldn't be so rational
 there. Would you? Do you think that knowing all there is to know about
 a cat is impractical to the point of being impossible *forever*, or do
 you believe that once we do know, we will simply end them freely,
 when they get in our way? I think at some point we *will* know all
 there is to know about them, and even then, we won't end them easily.
 Why not? Is it the emotional projection that Brent suggests? Possibly.




 but I don't think making an artificial
 animal is as simple as you say.


 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowningness; that the project is so large you only
 work on a small part, and thus you don't fully know it's workings, and
 then that is where the guilt comes in.


 Obviously intelligence and the ability to have feelings and desires
 has something to do with complexity. It would be easy enough to write
 a computer program that pleads with you to do something but you don't
 feel bad about disappointing it, because you know it lacks the full
 richness of human intelligence and consciousness.



 Indeed; so part of the question is: What level of complexity
 constitutes this? Is it simply any level that we don't understand? Or
 is there a level that we *can* understand that still makes us feel
 that way? I think it's more complicated than just any level we don't
 understand (because clearly, I understand that if I twist your arm,
 it will hurt you, and I know exactly why, but I don't do it).



 I don't think you know exactly why, unless you solved the problem of
  connecting qualia (pain) to physics (afferent nerve transmission) - but I
 agree that you know it heuristically.

 For my $0.02 I think that not understanding is significant because it
 leaves a lacuna which we tend to fill by projecting ourselves.  When people
 didn't understand atmospheric physics they projected super-humans that
 produced the weather.  If you let some Afghan peasants interact with a
 fairly simple AI program, such as used in the Loebner competition, they
 might well conclude you had created an artificial person; even though it
 wouldn't fool anyone computer literate.

 But even for an AI that we could in principle understand, if it is complex
 enough and acts enough like an animal I think we would feel ethical concerns
 for it.  I think a more difficult case is an intelligence which is so alien
 to us we can't project our feelings on its behavior.  Stanislaw Lem has
 written stories on this theme: Solaris, His Master's Voice, Return from
 the Stars, Fiasco.

There doesn't seem to be much recognition of this possibility on this list.
 There's generally an implicit assumption that we know what consciousness is,
 we have it, and that's the only possible kind of consciousness.  All OMs are
 human OMs.  I think that's one interesting thing about Bruno's theory; it is
 definite enough (if I understand it) that it could elucidate different kinds
 of consciousness.  For example, I think Searle's Chinese room is conscious -
 but in a different way than we are.


I'll have to look into these things, but I do agree with you in general; I
don't think ours is the only type of consciousness at all. Though I do find
the point about not understanding completely interesting, because it
suggests that a god should actually not particularly care what happens to
us, because to them it's all predictable. (And obviously, the idea of moral
obligations to computer programs is arguably interesting.)




 Brent


Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:
 On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.com wrote:
  silky wrote:
   On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:
    silky wrote:
     On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
      2010/1/18 silky michaelsli...@gmail.com:
       It would be my (naive) assumption, that this is arguably trivial to
       do. We can design a program that has a desire to 'live', a desire to
       find mates, and otherwise entertain itself. In this way, with some
       other properties, we can easily model simple pets.

      Brent's reasons are valid,

     Where it falls down for me is that the programmer should ever feel
     guilt. I don't see how I could feel guilty for ending a program when I
     know exactly how it will operate (what paths it will take), even if I
     can't be completely sure of the specific decisions (due to some
     randomisation or whatever)

    It's not just randomisation, it's experience.  If you create an AI at a
    fairly high level (cat, dog, rat, human) it will necessarily have the
    ability to learn, and after interacting with its environment for a while it
    will become a unique individual.  That's why you would feel sad to kill it
    - all that experience and knowledge that you don't know how to replace.  Of
    course it might learn to be evil, or at least annoying, which would make
    you feel less guilty.

   Nevertheless, though, I know its exact environment,

  Not if it interacts with the world.  You must be thinking of a virtual cat
  AI in a virtual world - but even there the program, if at all realistic, is
  likely to be too complex for you to really comprehend.  Of course *in
  principle* you could spend years going over a few terabytes of data and
  you could understand, "Oh, that's why the AI cat did that on day 2118 at
  10:22:35, it was because of the interaction of memories of day 1425 at
  07:54:28 and ...(long string of stuff)".  But you'd be in almost the same
  position as the neuroscientist who understands what a clump of neurons does
  but can't get a holistic view of what the organism will do.

  Surely you've had the experience of trying to debug a large program you
  wrote some years ago that now seems to fail on some input you never tried
  before.  Now think how much harder that would be if it were an AI that had
  been learning and modifying itself for all those years.

 I don't disagree with you that it would be significantly complicated, I
 suppose my argument is only that, unlike with a real cat, I - the programmer
 - know all there is to know about this computer cat.

But you *don't* know all there is to know about it.  You don't know what
it has learned - and there's no practical way to find out.

 I'm wondering to what degree that adds to or removes from my moral obligations.

Destroying something can be good or bad.  Not knowing what you're
destroying usually counts on the bad side.

   so I can recreate
   the things that it learned (I can recreate it all; it's all
   deterministic: I programmed it). The only thing I can't recreate is
   the randomness, assuming I introduced that (but as we know, I can
   recreate even that, because I'd just use the same seed state -
   unless the source of randomness is truly random).

     I don't see how I could ever think "No, you
     can't harm X". But what I find very interesting is that even if I
     knew *exactly* how a cat operated, I could never kill one.

      but I don't think making an artificial
      animal is as simple as you say.

     So is it a complexity issue? That you only start to care about the
     entity when it's significantly complex. But exactly how complex? Or is
     it about the unknowingness; that the project is so large you only
     work on a small 

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 2:19 PM, Brent Meeker meeke...@dslextreme.com wrote:

 silky wrote:
  On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.com wrote:
   silky wrote:
    On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:
     silky wrote:
      On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
       2010/1/18 silky michaelsli...@gmail.com:
        It would be my (naive) assumption, that this is arguably trivial to
        do. We can design a program that has a desire to 'live', a desire to
        find mates, and otherwise entertain itself. In this way, with some
        other properties, we can easily model simple pets.

       Brent's reasons are valid,

      Where it falls down for me is that the programmer should ever feel
      guilt. I don't see how I could feel guilty for ending a program when I
      know exactly how it will operate (what paths it will take), even if I
      can't be completely sure of the specific decisions (due to some
      randomisation or whatever)

     It's not just randomisation, it's experience.  If you create an AI at a
     fairly high level (cat, dog, rat, human) it will necessarily have the
     ability to learn, and after interacting with its environment for a while it
     will become a unique individual.  That's why you would feel sad to kill it
     - all that experience and knowledge that you don't know how to replace.  Of
     course it might learn to be evil, or at least annoying, which would make
     you feel less guilty.

    Nevertheless, though, I know its exact environment,

   Not if it interacts with the world.  You must be thinking of a virtual
   cat AI in a virtual world - but even there the program, if at all
   realistic, is likely to be too complex for you to really comprehend.  Of
   course *in principle* you could spend years going over a few terabytes
   of data and you could understand, "Oh, that's why the AI cat did that on
   day 2118 at 10:22:35, it was because of the interaction of memories of
   day 1425 at 07:54:28 and ...(long string of stuff)".  But you'd be in
   almost the same position as the neuroscientist who understands what a
   clump of neurons does but can't get a holistic view of what the
   organism will do.

   Surely you've had the experience of trying to debug a large
   program you wrote some years ago that now seems to fail on some
   input you never tried before.  Now think how much harder that
   would be if it were an AI that had been learning and modifying
   itself for all those years.

  I don't disagree with you that it would be significantly complicated, I
  suppose my argument is only that, unlike with a real cat, I - the programmer
  - know all there is to know about this computer cat.

 But you *don't* know all there is to know about it.  You don't know what it
 has learned - and there's no practical way to find out.


Here we disagree. I don't see (not that I have experience in AI-programming
specifically, mind you) how I can write a program and not have the results
be deterministic. I wrote it; I know, in general, the type of things it will
learn. I know, for example, that it won't learn how to drive a car. There
are no cars in the environment, and it doesn't have the capabilities to
invent a car, let alone the capabilities to drive it.

If you're suggesting that it will materialise these capabilities out of the
general model that I've implemented for it, then clearly I can see this path
as a possible one.

Is there a fundamental misunderstanding on my part; that in most
sufficiently-advanced AI systems, not even the programmer has an *idea* of
what the entity may learn?
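
To make the two positions concrete, here is a toy sketch (my own illustration, with made-up names): a learner that is fully deterministic given its seed, whose "knowledge" is bounded by what its environment contains, yet only visible by actually running it:

import random

ENVIRONMENT = ["ball", "mouse", "food"]   # no cars here, so "car" can never be learned

def learn(seed, steps=100):
    """Toy learner: accumulates how rewarding each stimulus turned out to be."""
    rng = random.Random(seed)
    knowledge = {}
    for _ in range(steps):
        stimulus = rng.choice(ENVIRONMENT)
        knowledge[stimulus] = knowledge.get(stimulus, 0.0) + rng.random()
    return knowledge

# Deterministic given the seed, and bounded by the environment...
assert learn(1) == learn(1)
assert "car" not in learn(1)
# ...but the resulting table still has to be computed; you can't read it
# straight off the source code, which is roughly Brent's point.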


[...]



    Suppose we could add an emotion that put a positive value on
running backwards.  Would that add to their overall pleasure in
life - being able to enjoy something in addition to all the other
things they would have naturally enjoyed?  I'd say yes.  In which
case it would then be wrong to later remove that emotion and deny
them the potential pleasure - assuming of course there are no
contrary ethical considerations.


 So the only problem you see is if we ever add emotion, and then remove it.
 The problem doesn't lie in not adding it at all? Practically, the result is
 the same.


 No, because if we add 

Re: on consciousness levels and ai

2010-01-18 Thread Brent Meeker

silky wrote:
 On Tue, Jan 19, 2010 at 2:19 PM, Brent Meeker meeke...@dslextreme.com wrote:
  silky wrote:
   On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.com wrote:
    silky wrote:
     On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:
      silky wrote:
       On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
        2010/1/18 silky michaelsli...@gmail.com:
         It would be my (naive) assumption, that this is arguably trivial to
         do. We can design a program that has a desire to 'live', a desire to
         find mates, and otherwise entertain itself. In this way, with some
         other properties, we can easily model simple pets.

        Brent's reasons are valid,

       Where it falls down for me is that the programmer should ever feel
       guilt. I don't see how I could feel guilty for ending a program when I
       know exactly how it will operate (what paths it will take), even if I
       can't be completely sure of the specific decisions (due to some
       randomisation or whatever)

      It's not just randomisation, it's experience.  If you create an AI at a
      fairly high level (cat, dog, rat, human) it will necessarily have the
      ability to learn, and after interacting with its environment for a while it
      will become a unique individual.  That's why you would feel sad to kill it
      - all that experience and knowledge that you don't know how to replace.  Of
      course it might learn to be evil, or at least annoying, which would make
      you feel less guilty.

     Nevertheless, though, I know its exact environment,

    Not if it interacts with the world.  You must be thinking of a
    virtual cat AI in a virtual world - but even there the program, if
    at all realistic, is likely to be too complex for you to really
    comprehend.  Of course *in principle* you could spend years going
    over a few terabytes of data and you could understand, "Oh,
    that's why the AI cat did that on day 2118 at 10:22:35, it was
    because of the interaction of memories of day 1425 at 07:54:28 and
    ...(long string of stuff)".  But you'd be in almost the same
    position as the neuroscientist who understands what a clump of
    neurons does but can't get a holistic view of what the organism
    will do.

    Surely you've had the experience of trying to debug a large
    program you wrote some years ago that now seems to fail on some
    input you never tried before.  Now think how much harder that
    would be if it were an AI that had been learning and modifying
    itself for all those years.

   I don't disagree with you that it would be significantly
   complicated, I suppose my argument is only that, unlike with a
   real cat, I - the programmer - know all there is to know about
   this computer cat.

  But you *don't* know all there is to know about it.  You don't
  know what it has learned - and there's no practical way to find out.

 Here we disagree. I don't see (not that I have experience in
 AI-programming specifically, mind you) how I can write a program and
 not have the results be deterministic. I wrote it; I know, in general,
 the type of things it will learn. I know, for example, that it won't
 learn how to drive a car. 

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 3:02 PM, Brent Meeker meeke...@dslextreme.com wrote:

 silky wrote:



 [...]




 Here we disagree. I don't see (not that I have experience in
 AI-programming specifically, mind you) how I can write a program and not
 have the results be deterministic. I wrote it; I know, in general, the type
 of things it will learn. I know, for example, that it won't learn how to
 drive a car. There are no cars in the environment, and it doesn't have the
 capabilities to invent a car, let alone the capabilities to drive it.


 You seem to be assuming that your AI will only interact with a virtual
 world - which you will also create.  I was assuming your AI would be in
 something like a robot cat or dog, which interacted with the world.  I think
 there would be different ethical feelings about these two cases.


Well, it will be interacting with the real world; just a subset that I
specially allow it to interact with. I mean the computer is still in the
real world, whether or not the physical box actually has legs :) In my mind,
though, I was imagining a cat that specifically existed inside my screen,
reacting to other such cats. Let's say the cat is allowed out, in the form of
a robot, and can interact with real cats. Even still, its programming will
allow it to act only in a deterministic way that I have defined (even if I
haven't defined all its behaviours; it may learn some from the other
cats).

So lets say that Robocat learns how to play with a ball, from Realcat. Would
my guilt in ending Robocat only lie in the fact that it learned something,
and given that I can't save it, that learning instance was unique? I'm not
sure. As a programmer, I'd simply be happy my program worked, and I'd
probably want to reproduce it. But showing it to a friend, they may wonder
why I turned it off; it worked, and now it needs to re-learn the next time
it's switched back on (interestingly, I would suggest that everyone would
consider it to be still the same Robocat, even though it needs to
effectively start from scratch).
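
If the worry is only the lost learning, one obvious (hypothetical) fix is to serialise the learned state before switching the robot off - a minimal sketch, with invented names, not a proposal from the thread:

import json

class Robocat:
    """Hypothetical toy cat whose 'knowledge' is a dict it can dump and reload."""
    def __init__(self, knowledge=None):
        self.knowledge = knowledge or {}

    def learn(self, skill):
        self.knowledge[skill] = self.knowledge.get(skill, 0) + 1

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.knowledge, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls(json.load(f))

cat = Robocat()
cat.learn("play with ball")        # picked up from Realcat
cat.save("robocat_state.json")     # switch it off without losing the unique learning
assert Robocat.load("robocat_state.json").knowledge == cat.knowledge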



  If you're suggesting that it will materialise these capabilities out of
 the general model that I've implemented for it, then clearly I can see this
 path as a possible one.


 Well it's certainly possible to write programs so complicated that the
 programmer doesn't foresee what it can do (I do it all the time  :-)  ).


 Is there a fundamental misunderstanding on my part; that in most
 sufficiently-advanced AI systems, not even the programmer has an *idea* of
 what the entity may learn?


 That's certainly the case if it learns from interacting with the world,
 because the programmer can't practically analyze all those interactions and
 their effect - except maybe by running another copy of the program on
 recorded input.
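
That "recorded input" idea might look something like the following sketch (illustrative only; the function names are invented):

import random

def agent_step(rng, stimulus):
    # Stand-in for whatever the AI actually does with one input.
    return (stimulus, rng.choice(["approach", "avoid"]))

def run(inputs, seed=0, log=None):
    rng = random.Random(seed)
    results = []
    for stimulus in inputs:
        if log is not None:
            log.append(stimulus)       # record everything the agent ever saw
        results.append(agent_step(rng, stimulus))
    return results

log = []
original = run(["ball", "vacuum", "ball"], seed=7, log=log)
replay = run(log, seed=7)              # a second copy, fed the recorded input
assert replay == original              # the replay reproduces the behaviour exactly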



[...]


   Suppose we could add an emotion that put a positive value on
   running backwards.  Would that add to their overall pleasure in
   life - being able to enjoy something in addition to all the other
   things they would have naturally enjoyed?  I'd say yes.  In which
   case it would then be wrong to later remove that emotion and deny
   them the potential pleasure - assuming of course there are no
   contrary ethical considerations.

  So the only problem you see is if we ever add emotion, and
  then remove it. The problem doesn't lie in not adding it at
  all? Practically, the result is the same.

 No, because if we add it and then remove it after the emotion is
 experienced there will be a memory of it.  Unfortunately nature
 already plays this trick on us.  I can remember that I felt a
 strong emotion the first time I kissed a girl - but I can't
 experience it now.


  I don't mean we do it to the same entity, I mean to subsequent entities
  (cats or real-life babies). If, before the baby experiences anything, I
  remove an emotion it never used, what difference does it make to the baby?
  The main problem is that it's not the same as other babies, but that's
  trivially resolved by performing the same removal on all babies.
  Same applies to cat-instances; if during one compilation I give it
  emotion, and then I later decide to delete the lines of code that allow
  this, and run the program again, have I infringed on its rights? Does the
  program even have any rights when it's not running?


 I don't think of rights as some abstract thing out there.  They are
 inventions of society saying we, as a society, will protect you when you
 want to do these things that you have a *right* to do.  We won't let others
 use force or coercion to prevent you.  So then the question becomes what
 rights it is in society's interest to enforce for a computer program (probably
 none) or for an AI robot (maybe some).

 From this viewpoint the application to babies and cats is straightforward.
  What are the consequences for society and what kind of society do we want