Re: on consciousness levels and ai

2010-01-19 Thread Stathis Papaioannou
2010/1/19 silky michaelsli...@gmail.com:

 Exactly my point! I'm trying to discover why I wouldn't be so rational
 there. Would you? Do you think that knowing all there is to know about
 a cat is impractical to the point of being impossible *forever*, or do
 you believe that once we do know, we will simply end them freely,
 when they get in our way? I think at some point we *will* know all
 there is to know about them, and even then, we won't end them easily.
 Why not? Is it the emotional projection that Brent suggests? Possibly.

Why should understanding something, even well enough to have actually
made it, make a difference?

 Obviously intelligence and the ability to have feelings and desires
 has something to do with complexity. It would be easy enough to write
 a computer program that pleads with you to do something but you don't
 feel bad about disappointing it, because you know it lacks the full
 richness of human intelligence and consciousness.

 Indeed; so part of the question is: What level of complexity
 constitutes this? Is it simply any level that we don't understand? Or
 is there a level that we *can* understand that still makes us feel
 that way? I think it's more complicated than just any level we don't
 understand (because clearly, I understand that if I twist your arm,
 it will hurt you, and I know exactly why, but I don't do it).

I don't think our understanding of it has anything to do with it. It
is more that a certain level of complexity is needed for the entity in
question to have a level of consciousness which means we are able to
hurt it.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.




Re: R/ASSA query

2010-01-19 Thread Stathis Papaioannou
2010/1/19 Nick Prince m...@dtech.fsnet.co.uk:

 Perhaps you misunderstood my reference to the use of copies.  What I
 meant was why they are considered as an indication of measure at the
 beginning of thought experiments such as the one you discussed (tea/
 coffee).  Jacques Mallah uses them too (I'd like to discuss one of these
 on the list at a later time).  I am not sure why we cannot consider
 the experiment as just happening to a single copy.  That way there
 would be no confusion regarding whether "differentiation" is playing
 an important role.  Otherwise I have no difficulty in realising the
 value of using the copy idea.

If we did the experiment with a single copy that would completely
change it. The copy would have a 90% chance of dying, a 3% chance of
surviving and getting coffee and a 7% chance of surviving and getting tea.
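The single-copy reading can be checked with a toy simulation (the outcome labels and function name are mine; the 90/3/7 split is taken from the numbers above):

```python
import random

def single_copy_trial(rng):
    """One run of the thought experiment for a single copy."""
    x = rng.random()
    if x < 0.90:
        return "dies"
    elif x < 0.93:
        return "coffee"
    else:
        return "tea"

rng = random.Random(42)          # fixed seed so the run is reproducible
trials = 100_000
counts = {"dies": 0, "coffee": 0, "tea": 0}
for _ in range(trials):
    counts[single_copy_trial(rng)] += 1

for outcome in ("dies", "coffee", "tea"):
    print(outcome, counts[outcome] / trials)
```

The empirical frequencies come out close to 0.90 / 0.03 / 0.07, which is all the single-copy version of the experiment says.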

 In particular, my views on personal
 identity have been shaped by these, and I especially can relate to
 Bruno's ideas (the eight steps of his SANE paper) at least up to the
 stage just before he discusses platonic realism as a source of a UD
 which actually exists platonically rather than concretely. I do need
 to think more about this part though.  In short the idea that a copy
 of me can/could be made, to such a level of detail so that it is
 essentially me, I feel intuitively is correct in principle.  However I
 am concerned that the no clone theorem might be a problem for the
 continuity of personhood.

If the no clone theorem were a problem then you could not survive more
than a moment, since your brain is constantly undergoing classical
level changes.

 From what I can gather Bruno seems to think
 not - or at least not important for what he wants to convey - but I
 would want to explore this at some stage.  Otherwise I can feel that
 there should be no reason why copies should not have continuity of
 personhood over spatio-temporal intervals and feel themselves to be
 identical (I think of identity as continuity of personhood) - or at
 least consistent extensions of the original person.  Moreover I also
 believe that if a suitable computer simulation can be built to the
 right level of detail, which contained the copy as a software
 construct,  then this copy could be a virtual implementation within a
 rendered environment that would indeed similarly believe himself/
 herself to be a consistent extension of the original.  I suppose I am
 essentially a computationalist,  although I am not clear as to the
 difference between it and functionalism yet apart from Turing
 emulability. I am also comfortable with the idea of differentiation so
 that if copies can be placed in lock step, as they presumably are
 across worlds, then 10, 20 or 2000 copies will be felt to be the same
 conscious entity.  You will see that I accept the many worlds theory
 too.  These beliefs are based on either my own prejudice or my
 intuition but are really more like working hypotheses rather than
 fixed beliefs and are certainly open to revision or modification.  I
 find the QTI difficult to swallow which is why I want to understand
 the definitions and concepts associated with it.  I want to be able to
 understand the heated debate about it and QS between Jack and Russell.

What do you think could happen if there were 100 copies of you running
in parallel and 90 were terminated? If you think you would definitely
continue living as one of the 10 remaining copies then to be
consistent you have to accept QTI. If you think there is a chance that
you might die I find it difficult to understand how this could be
reconciled with any consistent theory of personal identity.
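The two positions can be reduced to how each counts survivors. A sketch (the function names and labels are mine, chosen to match the views discussed in the thread, not standard terminology):

```python
def survival_chance_materialist(copies, terminated):
    # You are one particular body, each equally likely to be yours;
    # you survive only if that body is not terminated.
    return (copies - terminated) / copies

def survival_chance_qti(copies, terminated):
    # You continue as long as at least one copy remains running.
    return 1.0 if terminated < copies else 0.0

print(survival_chance_materialist(100, 90))  # 0.1
print(survival_chance_qti(100, 90))          # 1.0
```

The disagreement in the thread is exactly the gap between these two functions: 0.1 versus certainty.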


-- 
Stathis Papaioannou





Re: R/ASSA query

2010-01-19 Thread Bruno Marchal


On 18 Jan 2010, at 19:40, Brent Meeker wrote:


Bruno Marchal wrote:


On 17 Jan 2010, at 09:11, Brent Meeker wrote:



Brent
The reason that there is Something rather than Nothing is that
Nothing is unstable.
  -- Frank Wilczek, Nobel Laureate, physics 2004



So, why is Nothing unstable?

Because there are so many ways to be something and only one way to  
be nothing.





I suspect Frank Wilczek was alluding to the fact that the (very  
weird) quantum vacuum is fluctuating at low scales.
Indeed in classical physics to get universality you need at least  
three bodies. But in quantum physics the vacuum is already Turing  
universal (even /quantum/ Turing universal). The quantum-nothing is  
already a quantum computer, although to use it is another matter,  
except that we are using it just by being, most plausibly right  
here and now.



Nothing is more theory-related than the notion of nothing. In
arithmetic it is the number zero. In set theory, it is the empty
set. In group theory, we could say that there is no nothing, no
empty group; you need at least a neutral element. Likewise with the
combinators: nothing may be tackled by the forest with only one
bird, etc.



Maybe you're a brain in a vat, or a computation in arithmetic.
I'm happy to contemplate such hypotheses, but I don't find
anything testable or useful that follows from them.  So why should
I accept them even provisionally?



We may accept them because they offer an explanation of the origin
of mind and matter. To the arithmetical relations correspond
unboundedly rich and deep histories, and we can prove (to
ourselves) that arithmetic, as seen from inside, leads to a sort of
coupling of consciousness and realities. (Eventually precisely described
at the propositional level by the eight hypostases, and divided precisely
into the communicable and sharable, and the non-communicable one.)


This can please those unsatisfied by the current physicalist
conception, which has seemed unable to solve the mind-body problem
for a long time.
It took over three hundred years from the birth of Newton and the
death of Galileo to solve the problem of life.  The theory of
computation is less than a century old.  Neurophysiology is
similarly in its infancy.



But I do think that the computationalist hypothesis leads indeed to a  
conceptual solution of the mind body problem. The self-reference  
logics explain the gap quanta-qualia, free-will in a deterministic  
frame, etc. Yet, I insist that for such a solution to really work, we
have to derive the physical laws from it. In that sense, the solution
is in its infancy.








Why shouldn't we ask the question "where and how does the physical
realm come from?". Comp explains: from the numbers, and in this
precise way. Why not take a look?
I have taken a look, and it looks very interesting.  But I'm not  
enough of a logician and number theorist to judge whether you can  
really recover anything about human consciousness from the theory.   
My impression is that it is somewhat like other "everything"
theories.  Because some "everything" is assumed it is relatively
easy to believe that what you want to explain is in there somewhere
and the problem is to explain why all the other stuff isn't
observed.  I consider this a fatal flaw in Tegmark's "everything
mathematical exists" theory.  Not with yours though, because you have
limited it to a definite domain (digital computation) where I  
suppose definite conclusions can be reached and predictions made.


OK. The main point is that the comp everything, which is very robust
thanks to Church's thesis, leads to the idea that matter is a sum on
the everything. This is arguably the case in empirical physics as
exemplified by Feynman's quantum sum. If this were not the case, David
Deutsch's critique of Tegmark would apply to comp. The explanation would
be trivial.


Again, the big advantage here is that we get the whole physical realm with a
clear explanation of why some of it is sharable (quanta) and some of it
is personal and unsharable (qualia).






To take the physical realm for granted is the same philosophical
mistake as to take god for granted. It is an abandonment of the
spirit of research, a retreat from the spirit of inquiry.


Physicalism is really like believing that a universal machine (the
quantum machine) has to be privileged, because observation says
so. I show that if I am Turing emulable, then in fine all
universal machines play their role, and that the emergence of the
quantum one has to be explained (the background goal being the
mind-body problem).


But if you follow the UDA, you know (or should know, or ask
questions) that if we assume computationalism, then /we have just no
choice in the matter/.


Unless we assume matter is fundamental, as Peter Jones is fond of  
arguing, and some things happen and some don't.


Well,
- either that fundamental matter is Turing emulable, and then 

Re: on consciousness levels and ai

2010-01-19 Thread m.a.
People seem to be predisposed to accept AI programs as human(oid) if one can 
judge by reactions to Hal, Colossus, Robby, Marvin etc.  m.a.




- Original Message - 
From: Brent Meeker meeke...@dslextreme.com

To: everything-list@googlegroups.com
Sent: Monday, January 18, 2010 6:09 PM
Subject: Re: on consciousness levels and ai



silky wrote:
On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com 
wrote:



2010/1/18 silky michaelsli...@gmail.com:


It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.


Brent's reasons are valid,



Where it falls down for me is the idea that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever).


It's not just randomisation, it's experience.  If you create an AI at a
fairly high level (cat, dog, rat, human) it will necessarily have the
ability to learn, and after interacting with its environment for a while
it will become a unique individual.  That's why you would feel sad to
kill it - all that experience and knowledge that you don't know how to
replace.  Of course it might learn to be evil or at least annoying,
which would make you feel less guilty.


I don't see how I could ever think "No, you
can't harm X". But what I find very interesting, is that even if I
knew *exactly* how a cat operated, I could never kill one.




but I don't think making an artificial
animal is as simple as you say.



So is it a complexity issue? That you only start to care about the
entity when it's significantly complex. But exactly how complex? Or is
it about the unknowingness; that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
then that is where the guilt comes in.



I think unknowingness plays a big part, but it's because of our
experience with people and animals, we project our own experience of
consciousness on to them so that when we see them behave in certain ways
we impute an inner life to them that includes pleasure and suffering.




Henry Markram's group are presently
trying to simulate a rat brain, and so far they have done 10,000
neurons which they are hopeful is behaving in a physiological way.
This is at huge computational expense, and they have a long way to go
before simulating a whole rat brain, and no guarantee that it will
start behaving like a rat. If it does, then they are only a few years
away from simulating a human, soon after that will come a superhuman
AI, and soon after that it's we who will have to argue that we have
feelings and are worth preserving.



Indeed, this is something that concerns me as well. If we do create an
AI, and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its own
thing, but is that in itself wrong?



I don't think so.  We don't worry about the internet's feelings, or the
air traffic control system.  John McCarthy has written essays on this
subject and he cautions against creating AI with human like emotions
precisely because of the ethical implications.  But that means we need
to understand consciousness and emotions lest we accidentally do
something unethical.

Brent




--
Stathis Papaioannou
























Re: on consciousness levels and ai

2010-01-19 Thread Bruno Marchal


On 19 Jan 2010, at 03:28, silky wrote:

I don't disagree with you that it would be significantly  
complicated, I suppose my argument is only that, unlike with a real  
cat, I - the programmer - know all there is to know about this  
computer cat. I'm wondering to what degree that adds to or removes
from my moral obligations.



I think there is a confusion of level. It seems related to the problem  
of free-will. Some people believe that free will is impossible in the  
deterministic frame.


But no machine can predict its own behavior in advance. If it could, it
could contradict the prediction.
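This is the usual diagonal argument: any prediction a machine makes about itself can be fed back and contradicted. A minimal sketch (all names illustrative):

```python
def contrarian(predict):
    """A machine that asks for a prediction of its own output,
    then does the opposite."""
    predicted = predict(contrarian)
    return not predicted

def always_true(machine):
    # A would-be predictor: whatever fixed answer it gives about
    # contrarian's output...
    return True

# ...contrarian's actual output contradicts it:
print(contrarian(always_true))  # prints False, the opposite of the prediction
```

Any predictor you substitute for always_true is refuted the same way, as long as the machine can consult it before acting.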


If my friend who knows me well can predict my action, it will not
change the fact that I can do those actions by free will, at my own
level, where I live.


If not, determinism would eliminate all forms of responsibility. You
could say to the judge: "All right, I am a murderer, but I am not guilty
because I am just obeying the physical laws."
This is an empty defense. The judge can answer: "No problem. I still
condemn you to fifty years in jail, but don't worry, I am just obeying
the physical laws myself."


That is also why a real explanation of consciousness doesn't have to
explain consciousness away. (Eventually it is the status of matter
which appears less solid.)


An explanation has to correspond to its correct level of relevance.

Why did Obama win the election? Because Obama is made of particles
obeying the Schrödinger equation? That is true, but wrong as an
explanation.  Because Obama promised to legalize pot? That is false,
but could have worked as a possible explanation. It is closer to the
relevant level.


When we reduce a domain to another ontologically, this does not need
to eliminate the explanatory power of the first domain. This is made
palpable in computer science. You will never explain how a chess
program works by referring to a low level.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: on consciousness levels and ai

2010-01-19 Thread John Mikes
Dear Bruno, you picked my 'just added' small after-remark from my post, and I
thought for a second that it was Brent's reply. Then your signature
explained that it was YOUR stance on life (almost) (I search for even more
proper distinctions for that term). Maybe we should scrap the
term altogether and use it only as 'folklore', applicable as in
conventional 'bio'.
Considering 'conscious' (+ness?) as responsiveness (reflexively?) to
*any* relations, it is hard to separate from the general idea we usually
carry as *'life'*.

I like your bon mot on 'artificial' putting me into my place in 'folklore'
vocabulary. Indeed, in my naive meanings, whatever occurs by a
mechanism - entailed by relations - is considerable as artificial
(or: *naturally occurring change*), be it by humans or by a hurricane.
What I may object to is your ONLY in the last paragraph: it presumes
omniscience. Even your 'lived experience' is identified in anthropomorphic
ways (*they have our experiences*).

Thanks for the reply

JohnM



On Mon, Jan 18, 2010 at 12:57 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 18 Jan 2010, at 16:35, John Mikes wrote:

 Is a universal machine 'alive'?



 I would say yes, although the concrete artificial one still needs humans in
 its reproduction cycle. But we need plants and bacteria.
 I think that all machines, including houses and gardens, are alive in that
 sense. Cigarettes are alive. They have a way to reproduce. Universal machines
 are alive and can be conscious.

 If we define "artificial" as "introduced by humans", we can see that the
 difference between artificial and natural is ... artificial (and thus
 natural!). Jacques Lafitte wrote in 1911 (published in 1931) a book where he
 describes the rise of machines and technology as a collateral living
 process.

 Only for Löbian machines, like you and me, but also Peano Arithmetic and ZF,
 would I say I am pretty sure that they are reflexively conscious like us,
 despite having no lived experiences at all. (Well, they have our
 experiences, in a sense. We are their experiences.)

 Bruno

 http://iridia.ulb.ac.be/~marchal/











Re: on consciousness levels and ai

2010-01-19 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/19 silky michaelsli...@gmail.com:

  

Exactly my point! I'm trying to discover why I wouldn't be so rational
there. Would you? Do you think that knowing all there is to know about
a cat is impractical to the point of being impossible *forever*, or do
you believe that once we do know, we will simply end them freely,
when they get in our way? I think at some point we *will* know all
there is to know about them, and even then, we won't end them easily.
Why not? Is it the emotional projection that Brent suggests? Possibly.



Why should understanding something, even well enough to have actually
made it, make a difference?

  

Obviously intelligence and the ability to have feelings and desires
has something to do with complexity. It would be easy enough to write
a computer program that pleads with you to do something but you don't
feel bad about disappointing it, because you know it lacks the full
richness of human intelligence and consciousness.
  

Indeed; so part of the question is: What level of complexity
constitutes this? Is it simply any level that we don't understand? Or
is there a level that we *can* understand that still makes us feel
that way? I think it's more complicated than just any level we don't
understand (because clearly, I understand that if I twist your arm,
it will hurt you, and I know exactly why, but I don't do it).



I don't think our understanding of it has anything to do with it. It
is more that a certain level of complexity is needed for the entity in
question to have a level of consciousness which means we are able to
hurt it.


But it also needs to be similar enough to us that we can intuit what 
hurts and what doesn't, to empathize that it may feel pain.  If my 
car runs out of oil does it feel pain?  I'm sure my 1966 Saab doesn't, 
but what about my 1999 Passat - it has a computer with an 
auto-diagnostic function?


Brent




Re: R/ASSA query

2010-01-19 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/19 Nick Prince m...@dtech.fsnet.co.uk:

  

Perhaps you misunderstood my reference to the use of copies.  What I
meant was why they are considered as an indication of measure at the
beginning of thought experiments such as the one you discussed (tea/
coffee).  Jacques Mallah uses them too (I'd like to discuss one of these
on the list at a later time).  I am not sure why we cannot consider
the experiment as just happening to a single copy.  That way there
would be no confusion regarding whether "differentiation" is playing
an important role.  Otherwise I have no difficulty in realising the
value of using the copy idea.



If we did the experiment with a single copy that would completely
change it. The copy would have a 90% chance of dying, a 3% chance of
surviving and getting coffee and a 7% chance of surviving and getting tea.

  

In particular, my views on personal
identity have been shaped by these, and I especially can relate to
Bruno's ideas (the eight steps of his SANE paper) at least up to the
stage just before he discusses platonic realism as a source of a UD
which actually exists platonically rather than concretely. I do need
to think more about this part though.  In short the idea that a copy
of me can/could be made, to such a level of detail so that it is
essentially me, I feel intuitively is correct in principle.  However I
am concerned that the no clone theorem might be a problem for the
continuity of personhood.



If the no clone theorem were a problem then you could not survive more
than a moment, since your brain is constantly undergoing classical
level changes.

  

From what I can gather Bruno seems to think
not - or at least not important for what he wants to convey - but I
would want to explore this at some stage.  Otherwise I can feel that
there should be no reason why copies should not have continuity of
personhood over spatio-temporal intervals and feel themselves to be
identical (I think of identity as continuity of personhood) - or at
least consistent extensions of the original person.  Moreover I also
believe that if a suitable computer simulation can be built to the
right level of detail, which contained the copy as a software
construct,  then this copy could be a virtual implementation within a
rendered environment that would indeed similarly believe himself/
herself to be a consistent extension of the original.  I suppose I am
essentially a computationalist,  although I am not clear as to the
difference between it and functionalism yet apart from Turing
emulability. I am also comfortable with the idea of differentiation so
that if copies can be placed in lock step, as they presumably are
across worlds, then 10, 20 or 2000 copies will be felt to be the same
conscious entity.  You will see that I accept the many worlds theory
too.  These beliefs are based on either my own prejudice or my
intuition but are really more like working hypotheses rather than
fixed beliefs and are certainly open to revision or modification.  I
find the QTI difficult to swallow which is why I want to understand
the definitions and concepts associated with it.  I want to be able to
understand the heated debate about it and QS between Jack and Russell.



What do you think could happen if there were 100 copies of you running
in parallel and 90 were terminated? If you think you would definitely
continue living as one of the 10 remaining copies then to be
consistent you have to accept QTI. If you think there is a chance that
you might die I find it difficult to understand how this could be
reconciled with any consistent theory of personal identity.
  


It's a straightforward consequence of a materialist theory of personal 
identity.  Whether you survive or not depends on which body you are and 
whether it died.


Brent




Re: R/ASSA query

2010-01-19 Thread Nick Prince




 If the no clone theorem were a problem then you could not survive more
 than a moment, since your brain is constantly undergoing classical
 level changes.

How interesting!!  I had forgotten that most people believe that
consciousness is a classical rather than quantum process (Penrose
excepted).  Thank you for bringing this to my attention.  So the no
clone theorem should not pose a problem for copy builders after all.



 What do you think could happen if there were 100 copies of you running
 in parallel and 90 were terminated? If you think you would definitely
 continue living as one of the 10 remaining copies then to be
 consistent you have to accept QTI. If you think there is a chance that
 you might die I find it difficult to understand how this could be
 reconciled with any consistent theory of personal identity.

I know.  To be consistent with my other assumptions I would have to
believe in QTI but it is just so difficult to swallow.  I think the
hardest bit comes when we think of what we would experience.  Suppose
I lived in 200BC or before.  It's hard to think of ways you could keep
on surviving apart from alien visitations with copying machines etc.
This is one reason I have looked in some detail into Tipler's omega
point theory.  I don't think this should be written off as too
whacky just because others have got onto the Tipler-bashing
bandwagon.  It has not been refuted yet in terms of the accelerated
expansion of the universe, or for other reasons which I can elaborate
on - but that is beside the point.  If Tipler's final simulation is a
Universal Dovetailer then anyone who has ever lived in the past could
in principle find themselves as a consistent extension in that
simulation.  This is one explanation of how people could avoid ending up
in a cul-de-sac branch.

 According to both the RSSA and the ASSA your absolute measure in the multiverse
 decreases with each branching as versions of you die. According to the
 RSSA this doesn't matter as at least one of you is left standing;
 according to the ASSA, this does matter and you eventually die. The
 only way I can make sense of the latter is if you have an essentialist
 view of personal identity. Under this view if a copy is made of you
 and the original dies, you die. Under what Parfit calls the
 reductionist view of personal identity, you live.

Hmm..  I think that what I am calling absolute measure you think of as
relative measure or something like it.  I thought absolute measure was
the total measure of my existence across the whole multiverse.  If I
cannot die then RSSA implies this would be conserved. As you traverse
down a particular branch though, your measure would indeed decrease
for both RSSA and ASSA but it would eventually decrease to zero for
ASSA when you died!  With RSSA it could only decrease asymptotically
to zero, but never completely disappear.
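A toy calculation makes the contrast concrete (the halving-per-branching fraction and the function name are arbitrary illustrations, not anything from the thread):

```python
def measure(n_branchings, survival_fraction=0.5, dies_at=None):
    """Total measure left after n branchings.

    dies_at=None models the RSSA reading: measure shrinks toward zero
    but never reaches it.  dies_at=k models the ASSA reading: measure
    drops to exactly zero once death occurs at branching k.
    """
    if dies_at is not None and n_branchings >= dies_at:
        return 0.0
    return survival_fraction ** n_branchings

print(measure(50))               # tiny (about 8.9e-16) but still positive
print(measure(50, dies_at=30))   # 0.0
```

Under the RSSA reading the value is positive for every n; under the ASSA reading it is exactly zero past the death event, which is the whole disagreement in one line.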


Best wishes

Nick Prince





Re: R/ASSA query

2010-01-19 Thread Nick Prince


On Jan 19, 6:43 pm, Brent Meeker meeke...@dslextreme.com wrote:
 Stathis Papaioannou wrote:
  2010/1/19 Nick Prince m...@dtech.fsnet.co.uk:

  Perhaps you misunderstood my reference to the use of copies.  What I
  meant was why they are considered as an indication of measure at the
  beginning of thought experiments such as the one you discussed (tea/
  coffee).  Jacques Mallah uses them too (I'd like to discuss one of these
  on the list at a later time).  I am not sure why we cannot consider
  the experiment as just happening to a single copy.  That way there
  would be no confusion regarding whether differentiation is playing
  an important role.  Otherwise I have no difficulty in realising the
  value of using the copy idea.

  If we did the experiment with a single copy that would completely
  change it. The copy would have a 90% chance of dying, a 3% chance of
  surviving and getting coffee and a 7% chance of surviving and getting tea.

  In particular, my views on personal
  identity have been shaped by these, and I especially can relate to
  Bruno's ideas (the eight steps of his SANE paper) at least up to the
  stage just before he discusses platonic realism as a source of a UD
  which actually exists platonically rather than concretely. I do need
  to think more about this part though.  In short the idea that a copy
  of me can/could be made, to such a level of detail so that it is
  essentially me, I feel intuitively is correct in principle.  However I
  am concerned that the no clone theorem might be a problem for the
  continuity of personhood.

  If the no clone theorem were a problem then you could not survive more
  than a moment, since your brain is constantly undergoing classical
  level changes.

  From what I can gather Bruno seems to think
  not - or at least not important for what he wants to convey - but I
  would want to explore this at some stage.  Otherwise I can feel that
  there should be no reason why copies should not have continuity of
  personhood over spatio-temporal intervals and feel themselves to be
  identical (I think of identity as continuity of personhood) - or at
  least consistent extensions of the original person.  Moreover I also
  believe that if a suitable computer simulation can be built to the
  right level of detail, which contained the copy as a software
  construct,  then this copy could be a virtual implementation within a
  rendered environment that would indeed similarly believe himself/
  herself to be a consistent extension of the original.  I suppose I am
  essentially a computationalist,  although I am not clear as to the
  difference between it and functionalism yet apart from Turing
  emulability. I am also comfortable with the idea of differentiation so
  that if copies can be placed in lock step, as they presumably are
  across worlds, then 10, 20 or 2000 copies will be felt to be the same
  conscious entity.  You will see that I accept the many worlds theory
  too.  These beliefs are based on either my own prejudice or my
  intuition but are really more like working hypotheses rather than
  fixed beliefs and are certainly open to revision or modification.  I
  find the QTI difficult to swallow which is why I want to understand
  the definitions and concepts associated with it.  I want to be able to
  understand the heated debate about it and QS between Jack and Russell.

  What do you think could happen if there were 100 copies of you running
  in parallel and 90 were terminated? If you think you would definitely
  continue living as one of the 10 remaining copies then to be
  consistent you have to accept QTI. If you think there is a chance that
  you might die I find it difficult to understand how this could be
  reconciled with any consistent theory of personal identity.

 It's a straightforward consequence of a materialist theory of personal
 identity.  Whether you survive or not depends on which body you are and
 whether it died.

 Brent




Are you saying that you do not subscribe to differentiation?

Nick Prince




Re: on consciousness levels and ai

2010-01-19 Thread Brent Meeker

Bruno Marchal wrote:


On 19 Jan 2010, at 03:28, silky wrote:

I don't disagree with you that it would be significantly complicated, 
I suppose my argument is only that, unlike with a real cat, I - the 
programmer - know all there is to know about this computer cat. I'm 
wondering to what degree that adds or removes to my moral obligations.



I think there is a confusion of level. It seems related to the problem 
of free-will. Some people believe that free will is impossible in the 
deterministic frame. 

But no machine can predict its own behavior in advance. If it could, it
could contradict the prediction.
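That impossibility has a familiar diagonal form; here is a toy Python sketch of it (the names are illustrative, not anything from the thread): a machine that consults any predictor of itself and then does the opposite.

```python
def contrary(predict):
    """A 'machine' that asks a predictor what it will do,
    then does the opposite -- so no predictor it consults
    can be right about it."""
    return not predict(contrary)

def predictor(machine):
    # Any fixed prediction whatsoever, say "it will return True"...
    return True

# ...is refuted as soon as the machine consults it:
# the predictor says True, but contrary returns False.
print(contrary(predictor))
```

This is the same diagonal move that underlies the halting problem: the prediction is defeated not by complexity but by self-reference.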


If my friend who knows me well can predict my action, it will not 
change the fact that I do those actions by free will, at my own 
level, where I live.


If not, determinism would eliminate all forms of responsibility. You 
can say to the judge: all right, I am a murderer, but I am not guilty 
because I am just obeying the physical laws. 
This is an empty defense. The judge can answer: no problem. I still 
condemn you to fifty years in jail, but don't worry, I am just obeying 
the physical laws myself.


That is also why a real explanation of consciousness doesn't have to 
explain consciousness /away/. (Eventually it is the status of matter 
which appears less solid.)


An explanation has to correspond to its correct level of relevance.


And to the level of understanding of the person to whom you are explaining.


Why did Obama win the election? Because Obama is made of particles 
obeying the Schrödinger equation? That is true, but wrong as an 
explanation. Because Obama promised to legalize pot? That is false, 
but could have worked as a possible explanation. It is closer to the 
relevance level.


When we reduce a domain to another ontologically, this does not need 
to eliminate the explanation power of the first domain. This is made 
palpable in computer science. You will never explain how a chess 
program works by referring to a low level.


You may certainly explain it at a level lower than the rules, or lower 
than the level at which you might explain strategy to a human player.  
For example you could describe alpha-beta tree search or a look-up 
table for openings.
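As a sketch of what such a lower-level explanation looks like, here is a minimal alpha-beta search in Python. The toy tree, `moves`, and `value` functions are made up for the example; a real chess program would supply genuine move generation and position evaluation.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, value):
    """Minimal alpha-beta pruning. `moves(state)` yields successor
    states; `value(state)` scores a leaf. This shows the pruning
    mechanism only, not a full game engine."""
    succ = list(moves(state))
    if depth == 0 or not succ:
        return value(state)
    if maximizing:
        best = float("-inf")
        for s in succ:
            best = max(best, alphabeta(s, depth - 1, alpha, beta,
                                       False, moves, value))
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent will never allow this line,
                break           # so remaining siblings can be pruned
        return best
    else:
        best = float("inf")
        for s in succ:
            best = min(best, alphabeta(s, depth - 1, alpha, beta,
                                       True, moves, value))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Toy game tree: a state is either a list of children or a leaf score.
tree = [[3, 5], [2, 9]]
moves = lambda s: s if isinstance(s, list) else []
value = lambda s: s if not isinstance(s, list) else 0

# Maximizer picks the child whose minimum leaf is largest: min(3,5)=3
# beats min(2,9)=2, and the 9 leaf is pruned without being examined.
print(alphabeta(tree, 2, float("-inf"), float("inf"), True, moves, value))
```

Notice that nothing in this code mentions chess strategy at all, which is exactly the point about levels of explanation.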


Brent



Bruno

http://iridia.ulb.ac.be/~marchal/










Re: R/ASSA query

2010-01-19 Thread John Mikes
Something vs Nothing?

I played with this a decade or so ago and found that by simply invoking the
term *NOTHING* we achieve *something*, so the *nothing* is gone. While,
however, going from *something* to the (elusive?) *nothing*, we have to
cut out *EVERYTHING* that may interfere with 'nothing', a task immensely
difficult and unlimited. No matter how one tries to define nothing, ANY
point makes it into a something (even the negative).

At that time I still abided in believing in 'ontology', and my 'something'
started easily from nothing. (Since then I have refused 'ontology', which is a
STATIC imaging of nature, nonexistent in the continually changing complexity
of 'everything' and all its relatedness.) Conventional sciences - and the
philosophy on their teats - consider such 'snapshots' in the continuous change,
and such snapshots represent the statically existent (so believed!) status,
called the ontology of the world. Such a snapshot-view led to Darwin's
evolution and to physical laws.
Two subsequent snapshots show change - omitting the steps in between, hence
the term 'random mutation'. Only the timeless continuity includes the
deterministic entailment that represents the dynamics of the world (nature?
totality, wholeness, everything).

Sorry for a partially obsolete rambling.

John M




On Mon, Jan 18, 2010 at 1:40 PM, Brent Meeker meeke...@dslextreme.com wrote:

 Bruno Marchal wrote:


 On 17 Jan 2010, at 09:11, Brent Meeker wrote:


 Brent
 The reason that there is Something rather than Nothing is that
 Nothing is unstable.
   -- Frank Wilczek, Nobel Laureate, physics 2004



 So, why is Nothing unstable?


 Because there are so many ways to be something and only one way to be
 nothing.





 I suspect Frank Wilczek was alluding to the fact that the (very weird)
 quantum vacuum is fluctuating at low scales.
 Indeed in classical physics to get universality you need at least three
 bodies. But in quantum physics the vacuum is already Turing universal (even
 /quantum/ Turing universal). The quantum-nothing is already a quantum
 computer, although to use it is another matter, except that we are using it
 just by being, most plausibly right here and now.


 Nothing is more theory-related than the notion of nothing. In
 arithmetic it is the number zero. In set theory, it is the empty set. In
 group theory, we could say that there is no nothing, no empty group; you
 need at least a neutral element. Likewise with the combinators: nothing may
 be tackled by the forest with only one bird, etc.


 Maybe you're a brain in a vat, or a computation in arithmetic.  I'm happy
 to contemplate such hypotheses, but I don't find anything testable or useful
 that follows from them.  So why should I accept them even provisionally?



 We may accept them because it offers an explanation of the origin of mind
 and matter. To the arithmetical relations correspond unboundedly rich and
 deep histories, and we can prove (to ourselves) that arithmetic, as seen
 from inside, leads to a sort of coupling consciousness/realities. (Eventually
 precisely described at the propositional level by the eight hypostases, and
 divided precisely into the communicable and sharable, and the
 non-communicable ones.)

 This can please those unsatisfied by the current physicalist conception,
 which has seemed unable to solve the mind-body problem for a long time.

 It took over three hundred years from the birth of Newton and the death of
 Galileo to solve the problem of life.  The theory of computation is less
 than a century old.  Neurophysiology is similarly in its infancy.



 Why shouldn't we ask the question "where does the physical realm come
 from, and how?" Comp explains: from the numbers, and in this precise way.
 Why not take a look?

 I have taken a look, and it looks very interesting.  But I'm not enough of
 a logician and number theorist to judge whether you can really recover
 anything about human consciousness from the theory.  My impression is that
 it is somewhat like other everything theories.  Because some everything
 is assumed it is relatively easy to believe that what you want to explain is
 in there somewhere and the problem is to explain why all the other stuff
 isn't observed.  I consider this a fatal flaw in Tegmark's everything
 mathematical exists theory.  Not with yours though because you have limited
 it to a definite domain (digital computation) where I suppose definite
 conclusions can be reached and predictions made.


 To take the physical realm for granted is the same philosophical mistake
 as to take God for granted. It is an abandonment of the spirit of research.
 It is an abstraction from the spirit of inquiry.

 Physicalism is really like believing that a universal machine (the quantum
 machine) has to be privileged, because observation says so. I show that if
 I am Turing emulable, then in fine all universal machines play their role,
 and that the emergence of the quantum one has to be explained (the background
 goal 

Re: R/ASSA query

2010-01-19 Thread Brent Meeker

Nick Prince wrote:

On Jan 19, 6:43 pm, Brent Meeker meeke...@dslextreme.com wrote:
  

Stathis Papaioannou wrote:


2010/1/19 Nick Prince m...@dtech.fsnet.co.uk:
  

Perhaps you misunderstood my reference to the use of copies.  What I
meant was why they are considered as an indication of measure at the
beginning of thought experiments such as the one you discussed (tea/
coffee).  Jacques Mallah uses them too (I'd like to discuss one of these
on the list at a later time).  I am not sure why we cannot consider
the experiment as just happening to a single copy.  That way there
would be no confusion regarding whether differentiation is playing
an important role.  Otherwise I have no difficulty in realising the
value of using the copy idea.


If we did the experiment with a single copy that would completely
change it. The copy would have a 90% chance of dying, a 3% chance of
surviving and getting coffee and a 7% chance of surviving and getting tea.
  

In particular, my views on personal
identity have been shaped by these, and I especially can relate to
Bruno's ideas the (eight steps of his SANE paper) at least up to the
stage just before he discusses platonic realism as a source of a UD
which actually exists platonically rather than concretely. I do need
to think more about this part though.  In short the idea that a copy
of me can/could be made, to such a level of detail so that it is
essentially me, I feel intuitively is correct in principle.  However I
am concerned that the no clone theorem might be a problem for the
continuity of personhood.


If the no clone theorem were a problem then you could not survive more
than a moment, since your brain is constantly undergoing classical
level changes.
  

From what I can gather Bruno seems to think
not - or at least not important for what he wants to convey - but I
would want to explore this at some stage.  Otherwise I can feel that
there should be no reason why copies should not have continuity of
personhood over spatio-temporal intervals and feel themselves to be
identical (I think of identity as continuity of personhood) - or at
least consistent extensions of the original person.  Moreover I also
believe that if a suitable computer simulation can be built to the
right level of detail, which contained the copy as a software
construct,  then this copy could be a virtual implementation within a
rendered environment that would indeed similarly believe himself/
herself to be a consistent extension of the original.  I suppose I am
essentially a computationalist,  although I am not clear as to the
difference between it and functionalism yet apart from Turing
emulability. I am also comfortable with the idea of differentiation so
that if copies can be placed in lock step, as they presumably are
across worlds, then 10, 20 or 2000 copies will be felt to be the same
conscious entity.  You will see that I accept the many worlds theory
too.  These beliefs are based on either my own prejudice or my
intuition but are really more like working hypotheses rather than
fixed beliefs and are certainly open to revision or modification.  I
find the QTI difficult to swallow which is why I want to understand
the definitions and concepts associated with it.  I want to be able to
understand the heated debate about it and QS between Jack and Russell.


What do you think could happen if there were 100 copies of you running
in parallel and 90 were terminated? If you think you would definitely
continue living as one of the 10 remaining copies then to be
consistent you have to accept QTI. If you think there is a chance that
you might die I find it difficult to understand how this could be
reconciled with any consistent theory of personal identity.
  

It's a straightforward consequence of a materialist theory of personal
identity.  Whether you survive or not depends on which body you are and
whether it died.

Brent






Are you saying that you do not subscribe to differentiation?

Nick Prince
  
I'm not sure what you mean by differentiation, but I don't subscribe 
to one theory or another - I just consider them.  Above I was only 
pointing out that there are theories (in fact the most common theory) in 
which there is no QTI and in fact QTI might be taken as a reductio ad 
absurdum against the MWI.


Brent




Re: R/ASSA query

2010-01-19 Thread Nick Prince


  Are you saying that you do not subscribe to differentiation?

  Nick Prince

 I'm not sure what you mean by differentiation, but I don't subscribe
 to one theory or another - I just consider them.  Above I was only
 pointing out that there are theories (in fact the most common theory) in
 which there is no QTI and in fact QTI might be taken as a reductio ad
 absurdum against the MWI.

 Brent

Point taken! I should have said: are you considering differentiation to be
an implausible hypothesis? By differentiation I mean that instead of
supervening on a single world line, the same consciousness supervenes
on all identical world lines until they split, as in many worlds. When
they split, the 1st person experience is then indeterminate. (Russell's
book, p. 144.)

Best wishes

Nick




Re: on consciousness levels and ai

2010-01-19 Thread silky
On Tue, Jan 19, 2010 at 8:43 PM, Stathis Papaioannou stath...@gmail.com wrote:

 2010/1/19 silky michaelsli...@gmail.com:

  Exactly my point! I'm trying to discover why I wouldn't be so rational
  there. Would you? Do you think that knowing all there is to know about
  a cat is unpractical to the point of being impossible *forever*, or do
  you believe that once we do know, we will simply end them freely,
  when they get in our way? I think at some point we *will* know all
  there is to know about them, and even then, we won't end them easily.
  Why not? Is it the emotional projection that Brent suggests? Possibly.

 Why should understanding something, even well enough to have actually
 made it, make a difference?


I don't know, that's what I'm trying to determine.



   Obviously intelligence and the ability to have feelings and desires
  has something to do with complexity. It would be easy enough to write
  a computer program that pleads with you to do something but you don't
  feel bad about disappointing it, because you know it lacks the full
  richness of human intelligence and consciousness.
 
  Indeed; so part of the question is: What level of complexity
  constitutes this? Is it simply any level that we don't understand? Or
  is there a level that we *can* understand that still makes us feel
  that way? I think it's more complicated than just any level we don't
  understand (because clearly, I understand that if I twist your arm,
  it will hurt you, and I know exactly why, but I don't do it).

 I don't think our understanding of it has anything to do with it. It
 is more that a certain level of complexity is needed for the entity in
 question to have a level of consciousness which means we are able to
 hurt it.


But the basic question is: can you create this entity from scratch, using a
computer? And if so, do you owe it any obligations?


--
 Stathis Papaioannou






-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20




Re: on consciousness levels and ai

2010-01-19 Thread silky
On Wed, Jan 20, 2010 at 2:50 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 19 Jan 2010, at 03:28, silky wrote:

 I don't disagree with you that it would be significantly complicated, I
 suppose my argument is only that, unlike with a real cat, I - the programmer
 - know all there is to know about this computer cat. I'm wondering to what
 degree that adds or removes to my moral obligations.



 I think there is a confusion of level. It seems related to the problem of
 free-will. Some people believe that free will is impossible in the
 deterministic frame.



My opinion is that we don't have free will, and my definition of "free will"
in this context is being able to do something that our programming doesn't
allow us to do.

For example, people explain free will as the ability to decide whether or
not you pick up a pen. Sure, you can do either thing, and no matter which
you do, you are exercising a choice. But I don't consider this "free".
It's just as pre-determined as a program looking at some internal state and
deciding which branch to take:

if (needToWrite && notHoldingPen) { grabPen(); }

It goes without saying that it's significantly more complicated, but the
underlying concept remains.

I define free will as the concept of breaking out of a branch completely,
stepping outside the program. And clearly, from within the program (of
human consciousness) it's impossible. Thus, I consider free will as a
completely impossible concept.

If we re-define free will to mean the ability to choose between two actions,
based on state (as I showed above), then clearly, it's a fact of life, and
every single object in the universe has this type of free will.
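Made runnable, the pen example looks something like the following Python sketch (a toy restatement only; the class and state names are illustrative). The point it demonstrates is determinism: the same internal state always yields the same "choice".

```python
class Agent:
    """A toy deterministic 'chooser' in the sense described above:
    it selects between actions purely from its internal state."""

    def __init__(self, need_to_write, holding_pen):
        self.need_to_write = need_to_write
        self.holding_pen = holding_pen

    def act(self):
        # The 'choice' is a pure function of state -- no step
        # outside the program is possible from within it.
        if self.need_to_write and not self.holding_pen:
            return "grab pen"
        return "do nothing"

# Identical state, identical "choice", every time:
print(Agent(True, False).act())   # grab pen
print(Agent(True, True).act())    # do nothing
print(Agent(False, False).act())  # do nothing
```

On this state-based reading, "choosing" is just branching, which is why the definition makes free will ubiquitous and trivial rather than the stronger notion of stepping outside the program.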



But no machine can predict its own behavior in advance. If it could it could
 contradict the prediction.

 If my friend who knows me well can predict my action, it will not change
 the fact that I do those actions by free will, at my own level, where I
 live.

 If not, determinism would eliminate all forms of responsibility. You can say
 to the judge: all right, I am a murderer, but I am not guilty because I am
 just obeying the physical laws.
 This is an empty defense. The judge can answer: no problem. I still
 condemn you to fifty years in jail, but don't worry, I am just obeying
 the physical laws myself.

 That is also why a real explanation of consciousness doesn't have to explain
 consciousness *away*. (Eventually it is the status of matter which appears
 less solid.)

 An explanation has to correspond to its correct level of relevance.

 Why did Obama win the election? Because Obama is made of particles obeying
 the Schrödinger equation? That is true, but wrong as an explanation.
 Because Obama promised to legalize pot? That is false, but could have worked
 as a possible explanation. It is closer to the relevance level.

 When we reduce a domain to another ontologically, this does not need to
 eliminate the explanation power of the first domain. This is made palpable
 in computer science. You will never explain how a chess program works by
 referring to a low level.

 Bruno

 http://iridia.ulb.ac.be/~marchal/








-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20
