Re: Block Universes

2014-02-23 Thread Stathis Papaioannou
On 24 February 2014 07:57, Edgar L. Owen edgaro...@att.net wrote:
 Stathis,

 If we assume time flows, as everyone in the universe other than block time
 devotees does, the answers to all your questions are obvious.

 First of all my universe is NOT a presentist universe. Don't use
 misleading incorrect labels to describe it.

 If time flows, as it clearly does, then all movement follows automatically.
 The flow of time is a fundamental assumption in my theory. Doesn't matter if
 it flows continuously or in minute increments. The way my theory says it
 actually flows is in minute processor cycles in which the current state of
 the universe is continually recomputed. This also corresponds to the
 continual extension of the radial dimension of a hyperspherical universe.

 The current present moment is simply the current surface of that
 hypersphere, and the current processor cycle of p-time. It is not the SAME
 present moment all the time because the present moment is just the current
 moment of p-time. It does continually move along the radial p-time axis of
 the universe. That's how the past continually transitions to the present as
 the universe continually recomputes its current state.

 This is a simple elegant theory that is consistent with all of science, and
 reflects the basic idea of science that time flows from the big bang to the
 present moment of time. Everyone believes this with very few exceptions, and
 everyone WITHOUT exception lives according to it. Even block universe
 believers live their entire lives as if time flows because that is the only
 way they can possibly function. That's overwhelming evidence that time does
 flow.

 Now, how does that work in a block universe? You didn't answer my questions,
 you just asked the same questions back to me and I gave you the answers. So
 now what are your answers please?

It can be shown that motion and the appearance of the flow of time can
survive a discontinuity. Imagine there is a computer simulation with an
observer watching a moving object, such as a ball thrown across his
field of vision. The computer goes through machine states M1,M2,M3,M4
corresponding (roughly) with subjective states in the observer
S1,S2,S3,S4. Now suppose at M2 the data is saved to disk, the program
stopped and the computer shut down. After a period, the computer is
rebooted, the program restarted and the saved data loaded. The
computer then goes through M3 and M4. Do you agree that the observer
cannot tell if the computer was shut down, or how long it was shut
down for? Do you agree that he has the same uninterrupted visual
experience S1,S2,S3,S4 of the ball flying through the air?
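
To make the thought experiment concrete, here is a minimal Python sketch (a toy of my own; the state and function names are made up and nothing hangs on them). A computation that is checkpointed, halted and later resumed produces exactly the same sequence of states as one that runs uninterrupted, and nothing inside the computation records the interruption.

import pickle
import time

def step(state):
    # Advance the simulated observer by one machine state (M1 -> M2 -> ...).
    return {"tick": state["tick"] + 1,
            "percept": "ball at position %d" % (state["tick"] + 1)}

def run_uninterrupted(initial, n):
    states, s = [], initial
    for _ in range(n):
        s = step(s)
        states.append(s)
    return states

def run_with_shutdown(initial, n, pause_after):
    states, s = [], initial
    for i in range(n):
        s = step(s)
        states.append(s)
        if i + 1 == pause_after:
            saved = pickle.dumps(s)   # save to disk and shut down...
            time.sleep(0.01)          # ...any amount of real time passes...
            s = pickle.loads(saved)   # ...reboot and reload the saved data
    return states

start = {"tick": 0, "percept": "ball at position 0"}
assert run_uninterrupted(start, 4) == run_with_shutdown(start, 4, pause_after=2)
# The computed history S1..S4 is identical; nothing in it records whether,
# or for how long, the machine was stopped between M2 and M3.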

 And another question. What is the basic reason you think we need a block
 universe? What does it explain that the normal view of time flowing from the
 big bang to the present doesn't explain?

 The block universe theory explains nothing that the ordinary scientific view
 of the universe doesn't explain better and just adds all sorts of
 complications and convoluted explanations. So why come up with it in the
 first place?

I find the idea of a multiverse elegant and simple, and despite what
you say I think it is consistent with observation.


-- 
Stathis Papaioannou



Re: Block Universes

2014-02-23 Thread Stathis Papaioannou
On 24 February 2014 08:09, Edgar L. Owen edgaro...@att.net wrote:
 Stathis,

 This is just sophistry that avoids the real question. Every one of the
 Stathis instantiations may well feel it is the real one, but why is the one
 you are right now the one I am talking to?

 It could be any one of them, right? So why is it the one you think you are
 right now?

 The only logical answer is because it is the one that coincides with the
 present moment in which we are talking. Right?

 But the only way that can be true is if there is a real present moment that
 selects the current Stathis. There is no logical way around that. There
 absolutely has to be a selection mechanism that selects which Stathis you
 experience yourself as, and that can only be the one in the current present
 moment.

 Ten minutes ago you were that Stathis. Now you are this Stathis. Why the
 change in which one you are? The only possible mechanism is a current
 present moment, and that conclusively falsifies the block universe theory.

 There is simply no logical way around this...

 Why are you not the Stathis you were 10 minutes ago? Answer is because it is
 NOT 10 minutes ago now. It is now now, and that now is what selects the
 Stathis you are now

It's not sophistry. I maintain that the reason I feel myself to be me,
now, and not one of the other versions of me who may exist elsewhere
in the multiverse is trivially obvious, in the same way as it is
trivially obvious why I don't feel myself to be any of the other
billions of people in the world. The inhabitants of China are not me,
now even though they look a bit like me, now and their mental states
are a bit like mine, now. The me, yesterday is not me, now even though
he looks a bit like me, now and his mental state is a bit like mine,
now.


-- 
Stathis Papaioannou



Re: Block Universes

2014-02-24 Thread Stathis Papaioannou
On 25 February 2014 00:26, Edgar L. Owen edgaro...@att.net wrote:
 Stathis,

 1. This disproves what it sets out to prove. It assumes a RUNNING computer
 which assumes a flowing time. This example can't be taken seriously. If
 anything it's a proof that time has to flow to give the appearance of time
 flowing, which is the correct understanding...

No, what it shows is that the running time is not relevant to the
appearance of continuity. The computer can be restarted after a second
or after a billion years in the Andromeda galaxy, and it makes no
subjective difference. This is how the separate frames in a block
universe join up.

 2. I assume in this context you don't mean 'multiverse' but 'many worlds'
 and that your use of 'multiverse' was a typo?

 If so I have some questions I like to ask to clarify how you understand MWI,
 particularly in the block universe context you previously mentioned.

I meant multiverse, not specifically the MWI of QM.


-- 
Stathis Papaioannou



Re: Block Universes

2014-02-24 Thread Stathis Papaioannou
On 25 February 2014 00:35, Edgar L. Owen edgaro...@att.net wrote:
 Stathis,

 You've of course hit on the crux in your explanation, though perhaps
 unknowingly so.

 You state The me, yesterday is not me, now

 Yes, I agree completely. You, yourself have just stated the selection
 mechanism is the 'NOW' which you mention. It is the now that you are in that
 selects which version of Stathis you are on the basis of what time it is in
 that now. The Stathis that corresponds to that time is the Stathis that you
 are right now at that time.

 That is what I've been telling you, that you are the Stathis version of
 yourself that you are because that is the only one that exists in this NOW
 in which you exist.

 That in itself demonstrates there is a now, a present moment, which selects
 the actual version of yourself that you are at this particular time. And if
 there is a particular now, then time MUST flow...

 You, yourself demonstrate my point...

The point was that I, now am no more privileged in time compared to
other versions of myself than I am privileged in space compared to
other people.


-- 
Stathis Papaioannou



Re: Block Universes

2014-02-25 Thread Stathis Papaioannou
On 26 February 2014 04:50, Edgar L. Owen edgaro...@att.net wrote:
 Stathis,

 I understand your point but you don't understand my point.

 My point is that you try to prove time doesn't flow by giving me an example
 is which time DOES flow (the running projector). The projector has to run in
 time to give the motion of the frames.

 That kind of proof obviously doesn't work. Please give me a proof that time
 DOES NOT flow without using something running in time. I say this is
 impossible. There is no way you can prove time does not flow without using
 some FLOW of time, something running in time, to try to prove it.

 Therefore the notion that time doesn't flow cannot be proved.

 Do you see my point now?

The computation occurs in two parts, separated across time and space.
They could even be done simultaneously, in reverse order, or in
different universes. The effect of continuous motion would be
maintained for the observer in the computation. If running time were
needed to connect them, how could mangling it in this way have no
effect?
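
A sketch of the same point, again in Python and again a toy of my own (the segment boundaries stand in for the save/restore above). The two halves of the computation can be produced in either order, or on different machines, from a saved checkpoint, and the assembled history is exactly the same.

import pickle

def step(s):
    return s + 1

def segment(start, n):
    # Compute n successive states starting from start.
    states = []
    for _ in range(n):
        start = step(start)
        states.append(start)
    return states

checkpoint = pickle.dumps(segment(0, 2)[-1])   # state after M2, saved earlier

# Later, the two segments can be computed in either order, on any machine:
second_half = segment(pickle.loads(checkpoint), 2)   # M3, M4 computed first
first_half = segment(0, 2)                           # M1, M2 computed second

assert first_half + second_half == segment(0, 4)
# Where or when the two segments are physically computed leaves no trace
# in the history they jointly define.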


-- 
Stathis Papaioannou



Re: Block Universes

2014-02-25 Thread Stathis Papaioannou
On 26 February 2014 08:14, Edgar L. Owen edgaro...@att.net wrote:
 Stathis,

 PS: You claim you are not, but you ARE privileged in SPACE compared to other
 people because your consciousness and your biological being are located
 where you are, not where anyone else is. That's a stupid claim on your
 part

 So your example proves MY point, not yours..

Your claim is that running time is needed to make the present moment
special but it isn't: it is only special to me because I am me, here
and now. All the other people in the world feel special to themselves
in the same way, and all the other versions of me in a block universe
feel special to themselves in the same way. No spotlight from the
universe in the form of the present moment or the present location
is needed to create this effect.


-- 
Stathis Papaioannou



Re: Block Universes

2014-02-25 Thread Stathis Papaioannou
On 26 February 2014 08:07, Edgar L. Owen edgaro...@att.net wrote:
 Stathis,

 I know that's your point. You are just restating it once again, but you are
 completely UNABLE TO DEMONSTRATE IT without using some example in which time
 is already FLOWING.

 Since you can't demonstrate it, there is no reason to believe it. Belief in
 a block universe becomes a matter of blind faith, rather than a logical
 consequence of anything, and it is certainly NOT based on any empirical
 evidence whatsoever.

I'm not arguing that there is empirical evidence for a block universe,
just that a block universe is consistent with our experience.


-- 
Stathis Papaioannou



Re: Digital Neurology

2014-02-26 Thread Stathis Papaioannou
On 26 February 2014 04:51, Bruno Marchal marc...@ulb.ac.be wrote:

 The point of this is that if the brain is responsible for
 consciousness it is absurd to suppose that the brain's behaviour could
 be replaced with a functional analogue while leaving out any
 associated qualia. This constitutes a proof of functionalism, and of
 its subset computationalism if it is further established that physics
 is computable.


 ?

 On the contrary if computationalism is correct the physics cannot be
 entirely computable, some observable cannot be computed (but it might be no
 more than the frequency-operator, like in Graham Preskill. But still, we
 must explain why physics seems computable, despite it result of FMP on non
 computable domains).

If you start with the assumption that the physics relevant to brain
function is not computable then computationalism is false: it would be
impossible to make a machine that behaves like a human, either zombie
or conscious.

 Also, you are not using functionalism in its standard sense, which is
 Putnam's name for comp (at a non specified level assumed to be close to
 neurons).

 What do you mean by function? If you take all functions (like in set
 theory), then it seems to me that functionalism is trivial, and the relation
 between consciousness and a process, even natural, become ambiguous.

 But if you take all functions computable in some topos or category, of
 computability on a ring, or that type of structure, then you *might* get
 genuine generalization of comp.

What I mean by functionalism is that the way the brain processes
information, its I/O behaviour, is what generates mind. This implies
multiple realisability of mental states, insofar as the same
information processing could be done by another machine. If the
machine is a digital computer then functionalism reduces to
computationalism. If the brain utilises non-computable physics then
you won't be able to reproduce its function (and the mind thus
generated) with a digital computer, so computationalism is false.
However, that does not necessarily mean that functionalism is false,
since you may be able to implement the appropriate brain function
through some other means. For example, if it turns out that a digital
implementation of the brain fails because real numbers and not
approximations are necessary, it may still be possible to implement a
brain using analogue devices.

 I don't think we have to settle for Bruno's modest
 assertion that comp is a matter of faith.


 It has to be, from a theoretical point of view. Assuming you are correct
 when betting on comp, you cannot prove, even to yourself (but your 1p does
 not need that!) that you did survive a teleportation.

 Of course I take proof in a rather strong literal sense. Non comp might be
 consistent with comp, just as 'PA is inconsistent' is consistent with PA.

What can be proved is that if consciousness is due to the brain then
replicating brain function in some other substrate will also replicate
its consciousness.


-- 
Stathis Papaioannou



Re: Alien Hand/Limb Syndrome

2014-02-27 Thread Stathis Papaioannou
On 26 February 2014 23:58, Craig Weinberg whatsons...@gmail.com wrote:

 The alien hand syndrome, as originally defined, was used to describe
 cases involving anterior corpus callosal lesions producing involuntary
 movement and a concomitant inability to distinguish the affected hand from
 an examiner's hand when these were placed in the patient's unaffected hand.
 In recent years, acceptable usage of the term has broadened considerably,
 and has been defined as involuntary movement occurring in the context of
 feelings of estrangement from or personification of the affected limb or its
 movements. Three varieties of alien hand syndrome have been reported,
 involving lesions of the corpus callosum alone, the corpus callosum plus
 dominant medial frontal cortex, and posterior cortical/subcortical areas. A
 patient with posterior alien hand syndrome of vascular aetiology is reported
 and the findings are discussed in the light of a conceptualisation of
 posterior alien hand syndrome as a disorder which may be less associated
 with specific focal neuropathology than are its callosal and
 callosal-frontal counterparts. - http://jnnp.bmj.com/content/68/1/83.full


 This kind of alienation from the function of a limb would seem to contradict
 functionalism. If functionalism identifies consciousness with function, then
 it would seem problematic that a functioning limb could be seen as estranged
 from the personal awareness, as it is really no different from a zombie in
 which the substitution level is set at the body level. There is no damage to
 the arm, no difference between one arm and another, and yet, it is felt to
 be outside of one's control and its sensations are felt not to be your
 sensations.

 This would be precisely the kind of estrangement that I would expect to
 encounter during a gradual replacement of the brain with any inorganic
 substitute. At the level at which food becomes non-food, so too would the
 brain become non-brain, and any animation of the nervous system would fail
 to be incorporated into personal awareness. The living brain could still
 learn to use the prosthetic, and ultimately imbue it with its own
 articulation and familiarity to a surprising extent, but it is a one way
 street and the prosthetic has no capacity to find the personal awareness and
 merge with it.

This example shows that if there is a lesion in the neural circuitry
it affects consciousness. If you could fix the lesion so that the
circuitry worked properly but the consciousness remained affected (keeping
the environmental input constant), that would imply that consciousness
is generated by something other than the brain.


-- 
Stathis Papaioannou



Re: Tegmark and UDA step 3

2014-02-27 Thread Stathis Papaioannou
On 27 February 2014 00:49, Jason Resch jasonre...@gmail.com wrote:
 I came upon an interesting passage in Our Mathematical Universe, starting
 on page 194, which I think members of this list might appreciate:

 It gradually hit me that this illusion of randomness business really wasn't
 specific to quantum mechanics at all. Suppose that some future technology
 allows you to be cloned while you're sleeping, and that your two copies are
 placed in rooms numbered 0 and 1 (Figure 8.3). When they wake up, they'll
 both feel that the room number they read is completely unpredictable and
 random. If in the future, it becomes possible for you to upload your mind to
 a computer, then what I'm saying here will feel totally obvious and
 intuitive to you, since cloning yourself will be as easy as making a copy of
 your software. If you repeated the cloning experiment from Figure 8.3 many
 times and wrote down your room number each time, you'd in almost all cases
 find that the sequence of zeros and ones you'd written looked random, with
 zeros occurring about 50% of the time. In other words, causal physics will
 produce the illusion of randomness from your subjective viewpoint in any
 circumstance where you're being cloned. The fundamental reason that quantum
 mechanics appears random even though the wave function evolves
 deterministically is that the Schrodinger equation can evolve a wavefunction
 with a single you into one with clones of you in parallel universes. So how
 does it feel when you get cloned? It feels random! And every time something
 fundamentally random appears to happen to you, which couldn't have been
 predicted even in principle, it's a sign that you've been cloned.

 Jason

I remember this point being made on this list in the late '90s, when
quantum immortality was a new and mind-blowing idea for me, James Higgo
was still alive, and Jacques Mallah was calling everyone a crackpot.
Fond memories!


-- 
Stathis Papaioannou



Re: Alien Hand/Limb Syndrome

2014-02-27 Thread Stathis Papaioannou
On 28 February 2014 01:05, Craig Weinberg whatsons...@gmail.com wrote:


 On Thursday, February 27, 2014 4:13:22 AM UTC-5, stathisp wrote:

 On 26 February 2014 23:58, Craig Weinberg whats...@gmail.com wrote:
 
  The alien hand syndrome, as originally defined, was used to describe
  cases involving anterior corpus callosal lesions producing involuntary
  movement and a concomitant inability to distinguish the affected hand
  from
  an examiner's hand when these were placed in the patient's unaffected
  hand.
  In recent years, acceptable usage of the term has broadened
  considerably,
  and has been defined as involuntary movement occurring in the context
  of
  feelings of estrangement from or personification of the affected limb
  or its
  movements. Three varieties of alien hand syndrome have been reported,
  involving lesions of the corpus callosum alone, the corpus callosum
  plus
  dominant medial frontal cortex, and posterior cortical/subcortical
  areas. A
  patient with posterior alien hand syndrome of vascular aetiology is
  reported
  and the findings are discussed in the light of a conceptualisation of
  posterior alien hand syndrome as a disorder which may be less
  associated
  with specific focal neuropathology than are its callosal and
  callosal-frontal counterparts. -
  http://jnnp.bmj.com/content/68/1/83.full
 
 
  This kind of alienation from the function of a limb would seem to
  contradict
  functionalism. If functionalism identifies consciousness with function,
  then
  it would seem problematic that a functioning limb could be seen as
  estranged
  from the personal awareness, is it is really no different from a zombie
  in
  which the substitution level is set at the body level. There is no
  damage to
  the arm, no difference between one arm and another, and yet, its is felt
  to
  be outside of one's control and its sensations are felt not to be your
  sensations.
 
  This would be precisely the kind of estrangement that I would expect to
  encounter during a gradual replacement of the brain with any inorganic
  substitute. At the level at which food becomes non-food, so too would
  the
  brain become non-brain, and any animation of the nervous system would
  fail
  to be incorporated into personal awareness. The living brain could still
  learn to use the prosthetic, and ultimately imbue it with its own
  articulation and familiarity to a surprising extent, but it is a one way
  street and the prosthetic has no capacity to find the personal awareness
  and
  merge with it.

 This example shows that if there is a lesion in the neural circuitry
 it affects consciousness. If you could fix the lesion so that the
 circuitry worked properly but the consciousness remained affected (keeping
 the environmental input constant), that would imply that consciousness
 is generated by something other than the brain.


 Paying attention to the circuitry is a red herring. What I'm bringing up is
 how dissociation of functions identified with the self does not make sense
 for the functionalist view of consciousness. How do you give a program
 'alien subroutine syndrome'? Why does the program make a distinction between
 the pure function of the subroutine and some feeling of belonging that is
 generated by something other than the program?

I don't know why you distinguish between a function such as moving the
hand and identifying the hand as your own. Both of these depend on
correctly working brain circuitry, which is why a brain lesion can
cause paralysis but can also cause alien hand syndrome.

-- 
Stathis Papaioannou



Re: Alien Hand/Limb Syndrome

2014-02-28 Thread Stathis Papaioannou
 but the brain
did not.


-- 
Stathis Papaioannou



Re: Alien Hand/Limb Syndrome

2014-03-01 Thread Stathis Papaioannou
On Saturday, March 1, 2014, Craig Weinberg whatsons...@gmail.com wrote:



 On Saturday, March 1, 2014 1:57:45 AM UTC-5, Liz R wrote:

 On 28 February 2014 15:22, Craig Weinberg whats...@gmail.com wrote:

 On Thursday, February 27, 2014 8:03:15 PM UTC-5, Liz R wrote:

 On 28 February 2014 03:02, Craig Weinberg whats...@gmail.com wrote:


 In other words, why, in a functionalist/materialist world would we
 need a breakable program to keep telling us that our hand is not Alien?

 Or contrariwise, why do you need a breakable programme to tell you
 that it's your hand?


 Sure, that too. It doesn't make sense functionally. What difference does
 it make 'who' the hand 'belongs' to, as long as it performs as a hand.


 It's important for an animal to be able to distinguish self from
 non-self, as can be seen if two animals are locked in combat - one that
 can't tell its own limb from its opponent's is just as likely to bite
 itself as its prey. Repeat that often enough and you have a strong
 evolutionary pressure to distinguish self from non-self. I would imagine
 alien hand syndrome is a breakdown of this system.


 Sure, but I don't see that functionalism provides a basis to distinguish
 self from non-self other than function. As long as the functionality of the
 hand is there, and other people cannot tell any difference in what the hand
 can do, there should be no basis for any particular distress. We could make
 up a different evolutionary story too - that being physically close to your
 family or social group is important to survival and reproduction, so that
 there is a strong evolutionary pressure to suppress the difference between
 self and not-self. If it were the case that AHS were a breakdown in a
 global system like that, I would expect that victims might identify their
 family as strangers, etc.

 The particulars aren't the important thing though. I use AHS to add to
 blindsight and synesthesia as examples where the function-feeling
 equivalence which functionalism depends on appears to be violated.


You have too simplistic a view of what function means in the context of
an intelligent being. That is actually your whole problem: you look at a
machine, imagine that you can see how it works, then look at a human, can't
figure out how it works, so conclude there must be something non-machine-like
in the human. Yet the very examples you use demonstrate that even
mysterious-seeming behaviours such as those displayed in AHS are generated
by neural circuitry which can be easily disrupted.


-- 
Stathis Papaioannou



Re: Alien Hand/Limb Syndrome

2014-03-01 Thread Stathis Papaioannou
On Saturday, March 1, 2014, Craig Weinberg whatsons...@gmail.com wrote:



 On Friday, February 28, 2014 3:31:25 PM UTC-5, stathisp wrote:



 On Friday, February 28, 2014, Craig Weinberg whats...@gmail.com wrote:



 On Thursday, February 27, 2014 7:54:53 PM UTC-5, stathisp wrote:

 On 28 February 2014 01:05, Craig Weinberg whats...@gmail.com wrote:
 
 
  On Thursday, February 27, 2014 4:13:22 AM UTC-5, stathisp wrote:
 
  On 26 February 2014 23:58, Craig Weinberg whats...@gmail.com wrote:
  
   The alien hand syndrome, as originally defined, was used to
 describe
   cases involving anterior corpus callosal lesions producing
 involuntary
   movement and a concomitant inability to distinguish the affected
 hand
   from
   an examiner's hand when these were placed in the patient's
 unaffected
   hand.
   In recent years, acceptable usage of the term has broadened
   considerably,
   and has been defined as involuntary movement occurring in the
 context
   of
   feelings of estrangement from or personification of the affected
 limb
   or its
   movements. Three varieties of alien hand syndrome have been
 reported,
   involving lesions of the corpus callosum alone, the corpus callosum
   plus
   dominant medial frontal cortex, and posterior cortical/subcortical
   areas. A
   patient with posterior alien hand syndrome of vascular aetiology is
   reported
   and the findings are discussed in the light of a conceptualisation
 of
   posterior alien hand syndrome as a disorder which may be less
   associated
   with specific focal neuropathology than are its callosal and
   callosal-frontal counterparts. -
   http://jnnp.bmj.com/content/68/1/83.full
  
  
   This kind of alienation from the function of a limb would seem to
   contradict
   functionalism. If functionalism identifies consciousness with
 function,
   then
   it would seem problematic that a functioning limb could be seen as
   estranged
   from the personal awareness, is it is really no different from a
 zombie
   in
   which the substitution level is set at the body level. There is no
   damage to
   the arm, no difference between one arm and another, and yet, its is
 felt
   to
   be outside of one's control and its sensations are felt not to be
 your
   sensations.
  
   This would be precisely the kind of estrangement that I would expect
 to
   encounter during a gradual replacement of the brain with any
 inorganic
   substitute. At the level at which food becomes non-food, so too would
   the
   brain become non-brain, and any animation of the nervous system would
   fail
   to be incorporated into personal awareness. The living brain could
 still
   learn to use the prosthetic, and ultimately imbue it with its own
   articulation and familiarity to a surprising extent, but it is a one
 way
   street and the prosthetic has no capacity to find the personal
 awareness
   and
   merge with it.
 
  This example shows that if there is a lesion in the neural circuitry
  it affects consciousness. If you fix the lesion such that the
  circuitry works properly but the consciousness is affected (keeping
  the environmental input constant) then that implies that consciousness
  is generated by something other than the brain.
 
 
  Paying attention to the circuitry is a red herring. What I'm bringing up
 is
  how dissociation of functions identified with the self does not make
 sense
  for the functionalist view of consciousness. How do you give a program
  'alien subroutine syndrome'? Why does the program make a distinction
 between
  the pure function of the subroutine and some feeling of belonging that
 is
  generated by something other than the progr


 I'm sure there is some difference, but it doesn't affect the functionality
 of the hand. Under functionalism, since we can observe no difference
 between the function of the body with or without AHS, we should assume no
 such thing as AHS. If consciousness is like AHS, and the hand is like the
 brain or body, then we should not be able to see a difference between a
 conscious brain and simulation of brain activity that is unconscious.


There is an observable difference in the body with AHS: the subject says
that it doesn't feel like his hand. This happens because the neural
circuits between the hand and the language centres are disrupted. If they
were not disrupted the language centres would get normal input and the
subject would say everything was normal.


-- 
Stathis Papaioannou



Digital Neurology

2014-03-01 Thread Stathis Papaioannou
On 1 March 2014 01:40, Bruno Marchal marc...@ulb.ac.be wrote:

 If you start with the assumption that the physics relevant to brain
 function is not computable then computationalism is false: it would be
 impossible to make a machine that behaves like a human, either zombie
 or conscious.


 I agree with you, the physics *relevant* to brain function has to be
 computable, for comp to be true. But the point is that below the
 substitution level, the physical details are not relevant. Then by the
FPI,
 they must be undetermined, and this on an infinite non computable domain,
 and so, our computable brain must rely on a non computable physics, or a
 non necessarily computable physics, with some non computable aspect. This
is
 what comp predicts, and of course this is confirmed by QM. Again,
 eventually, QM might be too computable for comp to be true. That is what
 remains to be seen.


 What I mean by functionalism is that the way the brain processes
 information, its I/O behaviour, is what generates mind. This implies
 multiple realisability of mental states, insofar as the same
 information processing could be done by another machine. If the
 machine is a digital computer then functionalism reduces to
 computationalism. If the brain utilises non-computable physics then
 you won't be able to reproduce its function (and the mind thus
 generated) with a digital computer, so computationalism is false.
 However, that does not necessarily mean that functionalism is false,
 since you may be able to implement the appropriate brain function
 through some other means. For example, if it turns out that a digital
 implementation of the brain fails because real numbers and not
 approximations are necessary, it may still be possible to implement a
 brain using analogue devices.


 OK, but that functionalism seems to me trivially true. How could such
 functionalism be refuted, if you can invoke arbitrary functions?
 (Also, functionalism is used for a stronger (less general) version of
 computationalism, by Putnam, so this use of functionalism is non-standard
 and can be confusing.)
 Last remark, I am not sure that the notion of information processing can
 make sense in a non digital framework. In both quantum and classical
 information theory, information is digital (words like bits and qubits
come
 from there).

I think functionalism is true, but it's not obviously true, at least to
most people. It could be that the observable behaviour of the brain is
reproduced perfectly but the resulting creature has no consciousness or a
different conscious. That would be the case if consciousness were
substrate-dependent. It could also be that the behaviour cannot be
reproduced by a computer because the substitution level requires
non-computable physics (true randomness, real numbers, non-computable
functions), but it could be reproduced by a non-computational device. So
there are these possibilities with brain replacement:

(a) the behaviour is not reproduced and neither is the consciousness;
(b) the behaviour is reproduced but the consciousness is not reproduced;
(c) the behaviour is reproduced and so is the consciousness;
(d) the behaviour is not reproduced but the consciousness is.

 What can be proved is that if consciousness is due to the brain then
 replicating brain function in some other substrate will also replicate
 its consciousness.


 OK. What I meant is that we cannot prove that consciousness is due to the
 brain.

Yes, a dualist, for example, could consistently deny functionalism, but
someone who believes that consciousness is due to the brain could not.


--
Stathis Papaioannou



Re: Alien Hand/Limb Syndrome

2014-03-01 Thread Stathis Papaioannou
On 2 March 2014 16:55, Craig Weinberg whatsons...@gmail.com wrote:

  There is an observable difference in the body with AHS: the subject says
 that it doesn't feel like his hand.


 They don't have to say anything. They can keep their symptoms to themselves
 if they want.

So can a blind person pretending to be able to see. The point is there
is a *functional* difference because the behaviour is different. If
there is no behavioural difference whatsoever then there is no
disorder.

 This happens because the neural circuits between the hand and the language
 centres are disrupted. If they were not disrupted the language centres would
 get normal input and the subject would say everything was normal.


 It doesn't matter why it happens, it matters that it cannot happen under
 functionalism in the first place. By definition, consciousness is deflated
 to the sum of a set of functions. The quality of inclusion or exclusion from
 that set is simply a matter of fact, not a separate consideration. A program
 can't decide that part of itself has a quality of not being itself. That has
 no meaning to the function of the program. If the code works, then it is
 part of the program, period. If it doesn't work, then it doesn't work, but
 there is no language under functionalism to make that dysfunction related to
 what amounts to the loss of soul.

You can write a program that considers the right hand self and the
left hand non-self, with the consequence that the right hand will be
favoured if both hands are at risk of being lost, or whatever else you
want to make non-self mean.
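
A toy illustration of this in Python (my own construction, with made-up names; nothing deeper intended): a single "self" tag on each hand is enough to change the program's decisions when both hands are threatened.

hands = {
    "right": {"self": True},
    "left": {"self": False},   # treated as non-self, i.e. the "alien" hand
}

def hand_to_protect(threatened):
    # Prefer saving a hand tagged as self over one tagged as non-self.
    return max(threatened, key=lambda h: hands[h]["self"])

print(hand_to_protect(["left", "right"]))   # -> right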


-- 
Stathis Papaioannou



Re: Alien Hand/Limb Syndrome

2014-03-02 Thread Stathis Papaioannou
On 2 March 2014 16:49, Craig Weinberg whatsons...@gmail.com wrote:

 You have too simplistic a view of what function means in the context of
 an intelligent being.


 I think that you have too naive a view of what function means.


 That is actually your whole problem: you look at a machine, imagine that you
 can see how it works, then look at a human, can't figure out how it works,
 so conclude there must be something non-machine-like in the human.


 It has nothing to do with not being able to figure out how humans work.
 Nothing to do with human consciousness or biology at all. I'm *always* only
 talking about the bare metal basics of awareness itself. Sensation.
 Detection. Signal. You take them for granted, but I don't. If you take them
 for granted, then it is no great surprise that you can imagine consciousness
 coming from function.


 Yet the very examples you use demonstrate that even mysterious-seeming
 behaviours such as those displayed in AHS are generated by neural circuitry
 which can be easily disrupted.


 It doesn't matter where they are generated, all that matters is whether
 possession of one's own function can be defined as a computable object under
 functionalism. I think that it is a clear double standard to say that the
 'mine-ness' of a hand can of course be detected, but the 'mine-ness' of a
 human experience would require zombies to justify. You're looking at the
 wrong thing. I don't care about the details of any particular machine or
 organism, I care about the properties of awareness being incompatible in
 every way to the properties of function unless awareness comes first.

The mine-ness of a hand cannot be directly detected but the
behaviour can be detected. The behaviour is generated by the
underlying processes, as is the consciousness. Although not
immediately obvious, it turns out that if you can replicate the
function you will also replicate the consciousness, even if you do it
using a different mechanism.

The use of the words behaviour and function can be confusing.
Essentially, replicating the function of an entity involves
replicating its behaviour under all circumstances, or, to put it
differently, ensuring the outputs are the same for all inputs.
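
As a rough sketch of that sense of "function" (a toy Python example of my own, checked over a finite sample of inputs rather than all of them): two implementations count as functionally equivalent if they give the same output for every input, even though their internal mechanisms differ.

def original(x):
    return x * x                       # stands in for the original system

def replacement(x):
    # A different mechanism producing the same input-output behaviour.
    return sum(abs(x) for _ in range(abs(x)))

def functionally_equivalent(f, g, inputs):
    return all(f(i) == g(i) for i in inputs)

print(functionally_equivalent(original, replacement, range(-20, 21)))   # True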


-- 
Stathis Papaioannou



Digital Neurology

2014-03-02 Thread Stathis Papaioannou
On 2 March 2014 22:18, Bruno Marchal marc...@ulb.ac.be javascript:;
wrote:

 On 02 Mar 2014, at 08:09, Stathis Papaioannou wrote:



 On 1 March 2014 01:40, Bruno Marchal marc...@ulb.ac.be javascript:;
wrote:

 If you start with the assumption that the physics relevant to brain
 function is not computable then computationalism is false: it would be
 impossible to make a machine that behaves like a human, either zombie
 or conscious.


 I agree with you, the physics *relevant* to brain function has to be
 computable, for comp to be true. But the point is that below the
 substitution level, the physical details are not relevant. Then by the
 FPI,
 they must be undetermined, and this on an infinite non computable domain,
 and so, our computable brain must rely on a non computable physics, or
a
 non necessarily computable physics, with some non computable aspect. This
 is
 what comp predicts, and of course this is confirmed by QM. Again,
 eventually, QM might to much computable for comp to be true. That is what
 remain to be seen.


 What I mean by functionalism is that the way the brain processes
 information, its I/O behaviour, is what generates mind. This implies
 multiple realisability of mental states, insofar as the same
 information processing could be done by another machine. If the
 machine is a digital computer then functionalism reduces to
 computationalism. If the brain utilises non-computable physics then
 you won't be able to reproduce its function (and the mind thus
 generated) with a digital computer, so computationalism is false.
 However, that does not necessarily mean that functionalism is false,
 since you may be able to implement the appropriate brain function
 through some other means. For example, if it turns out that a digital
 implementation of the brain fails because real numbers and not
 approximations are necessary, it may still be possible to implement a
 brain using analogue devices.


 OK, but that functionalism seems to me trivially true. How could such
 functionalism be refuted, if you can invoke arbitrary functions?
 (Also, functionalism is used for a stringer (less general) version of
 computationalism, by Putnam, so this use of functionalism is non standard
 and can be confusing.
 Last remark, I am not sure that the notion of information processing can
 make sense in a non digital framework. In both quantum and classical
 information theory, information is digital (words like bits and qubits
 come
 from there).

 I think functionalism is true, but it's not obviously true, at least to
most
 people. It could be that the observable behaviour of the brain is
reproduced
 perfectly but the resulting creature has no consciousness or a different
 conscious.


 What if someone says that the function of the brain is to provide
 consciousness. Is that functionalism?
 What if someone says that the function of the brain is to link a divine
 soul to a person through a body?
 What is a function?

No, a function is an observable pattern of behaviour. Functionalism says
that if you replicate this, you also replicate the mind. You need to
replicate not only a special behaviour (which could be quite easy) but all
outputs for all inputs.

 That would be the case if consciousness were substrate-dependent.


 But you can put the substrate in the function. A brain would have the
 function to associate to that substrate the experience. How could I say no
 to the doctor who guaranties that all the function of the brain are
 preserved.
 The term function, like set, is too general, too powerful.

Then I'm using it in a somewhat precise sense as above.

 It could also be that the behaviour cannot be reproduced by a computer
 because the substitution level requires non-computable physics (true
 randomness, real numbers, non-computable functions), but it could be
 reproduced by a non-computational device. So there are these possibilities
 with brain replacement:

 (a) the behaviour is not reproduced and neither is the consciousness;


 = ~ BEH-MEC


 (b) The behaviour is reproduced but the consciousness is not reproduced;


 ~ comp.



 (c) The behaviour is reproduced and so is the consciousness;


 = comp, unless it is the consciousness is the one by an impostor.



 (d) The behaviour is not reproduced but the consciousness is


 = bad substitution.




 What can be proved is that if consciousness is due to the brain then
 replicating brain function in some other substrate will also replicate
 its consciousness.


 OK. What I meant is that we cannot prove that consciousness is due to the
 brain.

 Yes, a dualist, for example, could consistently deny fuctionalism,


 Not sure. It depends on how you define function.



 but someone who believes that consciousness is due to the brain could not.


 Most dualists believe that consciousness is due to the brain. They will
 usually deny that the functional relation can be obtained with this or
that
 type of functions, but to throw out all functions, makes

Re: Digital Neurology

2014-03-03 Thread Stathis Papaioannou
,
 transparent, white, rare crystal is a diamond—the most infamous alternative
 being cubic zirconia. Diamonds are carbon crystals with specific molecular
 lattice structures. Being a diamond is a matter of being a certain kind of
 physical stuff. (That cubic zirconia is not quite as clear or hard as
 diamonds explains something about why it is not equally valued. But even if
 it were equally hard and equally clear, a CZ crystal would not thereby be a
 diamond.)

 These examples can be used to explain the core idea of functionalism.
 Functionalism is the theory that mental states are more like mouse traps
 than they are like diamonds.


 Hmm This is quite fuzzy and level dependent.



 That is, what makes something a mental state is more a matter of what it
 does, not what it is made of.


 OK. But then functionalism is just mechanism.




 This distinguishes functionalism from traditional mind-body dualism, such as
 that of René Descartes, according to which minds are made of a special kind
 of substance, the res cogitans (the thinking substance.)


 And here I think that Descartes abandoned that idea, but that's a bit beside
 the topic.



 It also distinguishes functionalism from contemporary monisms such as J. J.
 C. Smart’s mind-brain identity theory. The identity theory says that mental
 states are particular kinds of biological states—namely, states of
 brains—and so presumably have to be made of certain kinds of stuff, namely,
 brain stuff. Mental states, according to the identity theory, are more like
 diamonds than like mouse traps. Functionalism is also distinguished from B.
 F. Skinner’s behaviorism because it accepts the reality of internal mental
 states, rather than simply attributing psychological states to the whole
 organism. According to behaviorism, which mental states a creature has
 depends just on how it behaves (or is disposed to behave) in response to
 stimuli. In contrast functionalists typically believe that internal and
 psychological states can be distinguished with a “finer grain” than
 behavior—that is, distinct internal or psychological states could result in
 the same behaviors. So functionalists think that it is what the internal
 states do that makes them mental states, not just what is done by the
 creature of which they are parts.
 end quote


 This is too fuzzy. What is a state when we allow any functions? A
 physical state? Physical states are all known to be Turing emulable. I think
 that this quote defends comp implicitly, as it uses terms like state as
 that was a simple notion, which it is, but only with comp. Your
 functionalism is just mechanism. I think, with the option of being perhaps
 non digital.

 Bruno





 --
 Stathis Papaioannou





 http://iridia.ulb.ac.be/~marchal/






-- 
Stathis Papaioannou



Re: Vehiculus automobilius

2014-03-10 Thread Stathis Papaioannou
On 7 March 2014 15:46, Craig Weinberg whatsons...@gmail.com wrote:

 If the doctor became more ambitious, and decided to replace a species with
 a simulation, we have a ready example of what it might be like. Cars have
 replaced the functionality of horses in human society. They reproduce in a
 different, more centralized way, but otherwise they move around like
 horses, carry people and their possessions like horses, they even evolve
 into new styles over time.

 Notice, however, that despite our occasional use of a name like Pinto or
 Mustang, no horse-like properties have emerged from cars. They do not
 whinny or swat flies. They do not get spooked and send their drivers
 careening off of the road. They did not develop DNA. Certainly a car does
 not perform as many complex computations as a horse, but neither does it
 need to. The function of a horse really doesn't need to be very
 complicated. A Google self-driving car is a better horse for almost all
 practical purposes than a horse.

 Maybe the doctor can replace all species with a functional equivalent? We
 could even do without all of the moving around and just keep the cars in
 the factory in which they are built and include a simulation screen on each
 windshield that interacts with Google Maps. With a powerful enough
 artificial intelligence, why not replace function altogether?

I don't think you understand the essential idea of functionalism, which is
multiple realisability. You try to think of analogies to show that it's not
obvious, but we know it's not obvious. However, it's true. You don't
address the arguments showing it to be true. It's like focussing on how we
would fall off the earth if it were round but failing to explain the photos
from space.


-- 
Stathis Papaioannou



Re: Video of VCR

2014-03-15 Thread Stathis Papaioannou
On 16 March 2014 09:09, Craig Weinberg whatsons...@gmail.com wrote:


 https://31.media.tumblr.com/935c9f6ad77f94164442956d8929da19/tumblr_mncj8t2OCc1qz63ydo10_250.gif
 http://www.jesseengland.net/index.php?/project/vide-uhhh/

 Have a look at this quick video (or get the idea from this_)

 Since the VCR can get video feedback of itself, is there any computational
 reason why this doesn't count as a degree of self awareness? Would VCRs
 which have 'seen themselves' in this way have a greater chance of
 developing that awareness than those which have not? If not, what initial
 conditions would be necessary for such an awareness to develop in some
 machines and how would those initial conditions appear?


Perhaps seeing itself is not enough: it may have to be able to adjust its
behaviour incorporating its own image in a feedback loop, or something. In
any case, it makes more sense that self-awareness should develop as a
result of some such complex behaviour than because the VCR is made out of
meat.
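
A rough sketch of what I mean by a feedback loop (a toy Python example of my own; nothing VCR-specific about it): the system's next action depends on its observation of its own previous output, so the self-image actually enters into the behaviour rather than just being displayed on a screen.

def act(observation_of_self):
    # The next output depends on what the system observes of its own output.
    return (2 * observation_of_self + 1) % 100

output = 1
history = []
for _ in range(5):
    observation = output       # the system "sees itself"...
    output = act(observation)  # ...and adjusts its behaviour accordingly
    history.append(output)

print(history)   # each state is a function of the previous self-observation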


-- 
Stathis Papaioannou



Re: Quick video about materialism

2014-03-15 Thread Stathis Papaioannou
On 14 March 2014 13:12, Craig Weinberg whatsons...@gmail.com wrote:

 http://www.youtube.com/watch?v=ZH2QXQu-HGE

 A brief, handy rebuttal to materialistic views of consciousness. I would
 go further, and say that information, even though it is immaterial in its
 conception, is still derived from the principles of object interaction.
 Even when forms and functions are divorced from any particular physical
 substance, they are still tethered to the third person omniscient view -
 artifacts of communication *about* rather than *appreciation of*. Real
 experiences are not valued just because they inform us about something or
 other; they are valued because of their intrinsic aesthetic and semantic
 content. It’s not even content; it is the experience itself. Information
 must be made evident through sensory participation, or it is nothing at all.

 The thing is, this happens in the brain as well. You look at something,
neurons fire, and there is no obvious reason why that should result in a
particular sensation rather than another, or any sensation at all. And yet
it does. If it's magic, then why can't the same magic happen with the
computer? If it isn't magic, but a natural effect, then why can't the same
natural effect happen with the computer?


-- 
Stathis Papaioannou



Re: Quick video about materialism

2014-03-16 Thread Stathis Papaioannou
On Monday, March 17, 2014, Craig Weinberg whatsons...@gmail.com wrote:



 On Sunday, March 16, 2014 1:02:08 AM UTC-4, stathisp wrote:




 On 14 March 2014 13:12, Craig Weinberg whats...@gmail.com wrote:

 http://www.youtube.com/watch?v=ZH2QXQu-HGE

 A brief, handy rebuttal to materialistic views of consciousness. I would
 go further, and say that information, even though it is immaterial in its
 conception, is still derived from the principles of object interaction.
 Even when forms and functions are divorced from any particular physical
 substance, they are still tethered to the third person omniscient view -
 artifacts of communication *about* rather *appreciation of*. Real
 experiences are not valued just because they inform us about something or
 other, they are valued because of their intrinsic aesthetic and semantic
 content. It’s not even content, it is the experience itself. Information
 must be made evident through sensory participation, or it is nothing at all.

 The thing is, this happens in the brain as well. You look at something,
 neurons fire, and there is no obvious reason why that should result in a
 particular sensation rather than another, or any sensation at all. And yet
 it does. If it's magic, then why can't the same magic happen with the
 computer? If it isn't magic, but a natural effect, then why can't the same
 natural effect happen with the computer?


 For the same reason that words on a page can't write a book. Computers
 host meaningless patterns which we use to represent ideas that we find
 significant. The patterns are not even patterns on their own, just the
 presence of billions of disconnected micro-phenomenal states.


But if you look at a brain the patterns in it are no more meaningful than
the patterns in a computer, and the matter in it is no more meaningful than
the matter in a computer.


-- 
Stathis Papaioannou



Re: Quick video about materialism

2014-03-16 Thread Stathis Papaioannou
On Monday, March 17, 2014, Craig Weinberg whatsons...@gmail.com wrote:



 On Sunday, March 16, 2014 10:02:04 AM UTC-4, stathisp wrote:



 On Monday, March 17, 2014, Craig Weinberg whats...@gmail.com wrote:



 On Sunday, March 16, 2014 1:02:08 AM UTC-4, stathisp wrote:




 On 14 March 2014 13:12, Craig Weinberg whats...@gmail.com wrote:

 http://www.youtube.com/watch?v=ZH2QXQu-HGE

 A brief, handy rebuttal to materialistic views of consciousness. I
 would go further, and say that information, even though it is immaterial 
 in
 its conception, is still derived from the principles of object 
 interaction.
 Even when forms and functions are divorced from any particular physical
 substance, they are still tethered to the third person omniscient view -
 artifacts of communication *about* rather *appreciation of*. Real
 experiences are not valued just because they inform us about something or
 other, they are valued because of their intrinsic aesthetic and semantic
 content. It’s not even content, it is the experience itself. Information
 must be made evident through sensory participation, or it is nothing at 
 all.

 The thing is, this happens in the brain as well. You look at
 something, neurons fire, and there is no obvious reason why that should
 result in a particular sensation rather than another, or any sensation at
 all. And yet it does. If it's magic, then why can't the same magic happen
 with the computer? If it isn't magic, but a natural effect, then why can't
 the same natural effect happen with the computer?


 For the same reason that words on a page can't write a book. Computers
 host meaningless patterns which we use to represent ideas that we find
 significant. The patterns are not even patterns on their own, just the
 presence of billions of disconnected micro-phenomenal states.


 But if you look at a brain the patterns in it are no more meaningful than
 the patterns in a computer, and the matter in it is no more meaningful than
 the matter in a computer.


 Right. That's why we can't assume that the patterns that we see of other
 bodies through our body are the relevant picture. It is the patterns which
 we feel directly that are important as far as consciousness is concerned.


So why do you think the meaningless patterns and matter in a brain but not
in a computer can be associated with consciousness?


-- 
Stathis Papaioannou



Re: Quick video about materialism

2014-03-16 Thread Stathis Papaioannou
On 17 March 2014 10:43, Craig Weinberg whatsons...@gmail.com wrote:



 On Sunday, March 16, 2014 6:21:18 PM UTC-4, stathisp wrote:



 On Monday, March 17, 2014, Craig Weinberg whats...@gmail.com wrote:



 On Sunday, March 16, 2014 10:02:04 AM UTC-4, stathisp wrote:



 On Monday, March 17, 2014, Craig Weinberg whats...@gmail.com wrote:



 On Sunday, March 16, 2014 1:02:08 AM UTC-4, stathisp wrote:




 On 14 March 2014 13:12, Craig Weinberg whats...@gmail.com wrote:

 http://www.youtube.com/watch?v=ZH2QXQu-HGE

 A brief, handy rebuttal to materialistic views of consciousness. I
 would go further, and say that information, even though it is 
 immaterial in
 its conception, is still derived from the principles of object 
 interaction.
 Even when forms and functions are divorced from any particular physical
 substance, they are still tethered to the third person omniscient view -
 artifacts of communication *about* rather *appreciation of*. Real
 experiences are not valued just because they inform us about something 
 or
 other, they are valued because of their intrinsic aesthetic and semantic
 content. It’s not even content, it is the experience itself. Information
 must be made evident through sensory participation, or it is nothing at 
 all.

 The thing is, this happens in the brain as well. You look at
 something, neurons fire, and there is no obvious reason why that should
 result in a particular sensation rather than another, or any sensation at
 all. And yet it does. If it's magic, then why can't the same magic happen
 with the computer? If it isn't magic, but a natural effect, then why 
 can't
 the same natural effect happen with the computer?


 For the same reason that words on a page can't write a book. Computers
 host meaningless patterns which we use to represent ideas that we find
 significant. The patterns are not even patterns on their own, just the
 presence of billions of disconnected micro-phenomenal states.


 But if you look at a brain the patterns in it are no more meaningful
 than the patterns in a computer, and the matter in it is no more meaningful
 than the matter in a computer.


 Right. That's why we can't assume that the patterns that we see of other
 bodies through our body is the relevant picture. It is the patterns which
 we feel directly which are important as far as consciousness is concerned.


 So why do you think the meaningless patterns and matter in a brain but
 not in a computer can be associated with consciousness?


 The brain is a public record of what we know to be a private human life,
 the ultimate definition of which is unknown. A computer is a public record
 of a known manufacturing process within a shared human experience. If you
 only look at the public side, there are no private phenomena anyway, so it
 is not surprising that we would assume that the public side should be
 sufficient.


How do you know that a private computer life is not possible?


-- 
Stathis Papaioannou



Re: Nova Spivack on 'Consciousness is More Fundamental Than Computation'

2014-03-24 Thread Stathis Papaioannou
On 25 March 2014 07:36, Craig Weinberg whatsons...@gmail.com wrote:

http://www.novaspivack.com/uncategorized/consciousness-is-not-a-computation-2



He could make similar arguments claiming consciousness is not chemistry.


-- 
Stathis Papaioannou



Re: Nova Spivack on 'Consciousness is More Fundamental Than Computation'

2014-03-25 Thread Stathis Papaioannou
On 26 March 2014 10:59, Craig Weinberg whatsons...@gmail.com wrote:



 On Monday, March 24, 2014 5:13:26 PM UTC-4, stathisp wrote:




 On 25 March 2014 07:36, Craig Weinberg whats...@gmail.com wrote:

 http://www.novaspivack.com/uncategorized/consciousness-
 is-not-a-computation-2



 He could make similar arguments claiming consciousness is not chemistry.


 In that case, he would still be correct.



He would be correct, but the argument is not that consciousness is
chemistry or that consciousness is electronic circuits; it is that
consciousness can be associated with electronic systems such as computers
in the same way that it is associated with chemical systems such as brains.
This is even consistent with theories claiming that consciousness is
primary or that consciousness exists as a separate non-physical entity.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou
On 26 March 2014 11:29, LizR lizj...@gmail.com wrote:

 On 26 March 2014 12:12, Stathis Papaioannou stath...@gmail.com wrote:


 An infinite universe (Tegmark type 1) implies that our consciousness
 flits about from one copy of us to another and that as a consequence we are
 immortal, so it does affect us even if there is no physical communication
 between its distant parts.

 Only if one assumes comp, I think, or something akin to Frank Tipler's
 Physics of Immortality view which basically says that identical quantum
 states are good enough to be mapped onto one another, and we experience all
 the states together in an infinite BEC type thing until differentiation
 occurs. (Cosmic, man!)


You don't have to assume comp. If the theory is that consciousness is
secreted by the brain like bile is secreted by the liver, so that a
simulation can't be conscious, there will be other brains in the universe
similar enough to yours that they will have a similar consciousness. This
is a concrete, no nonsense, no consciousness-flitting-about type of theory
- but your consciousness will still effectively flit about because you
can't be sure which copy you are.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou
On 26 March 2014 11:16, chris peck chris_peck...@hotmail.com wrote:

  An infinite universe (Tegmark type 1) implies that our consciousness
 flits about from one copy of us to another and that as a consequence we are
 immortal, so it does affect us even if there is no physical communication
 between its distant parts.

 I don't think it implies that at all. We don't know what consciousness
 really is but if it turns out to emerge from or supervene on some localized
 lump of stuff then there would be lots of independent consciousnesses that
 experienced similar things to me, rather than one consciousness per
 person-set that flits about faster than light over the set of infinite
 universes; somehow making time to get back to me per time iteration.


The consciousness doesn't actually go anywhere; it's just that if there are
multiple copies producing multiple similar consciousnesses (through
whatever mechanism) then you can't know which copy your current
consciousness is supervening on.


 But even if your implication stood, it would open up a huge can of
 philosophical worms. What exactly constitutes a 'me' 10^10^29 meters away
 from here? In the infinite space there are a fair few mes, all of whom have
 some differences, differences in history, differences in location,
 differences in body, differences in vocations, beliefs even wives etc. An
 infinite spectrum of me. A happy thought for women everywhere but at what
 point does it become ridiculous to say this or that copy is still me? This
 is the problem Lewis faces with modal realism and why he gets wishy washy
 about whether these copies are me or are not me but are just similar to me
 in so many regards.


It's a problem but you can't avoid it altogether. It's not as if God is
going to say, OK mate, it's too difficult to keep track of who you are with
all these different copies and near-copies around, so you can just stay
this one here.


 More importantly, when we are talking about cause and effect we are
 talking about something other than dodgy metaphysical consequences such as
 'immortality'. We want something that can be measured.


It's a pretty significant dodgy metaphysical consequence if you actually
live forever.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou
On 26 March 2014 12:15, meekerdb meeke...@verizon.net wrote:

  An infinite universe (Tegmark type 1) implies that our consciousness
 flits about from one copy of us to another and that as a consequence we are
 immortal, so it does affect us even if there is no physical communication
 between its distant parts.


 That seems to imply that one's consciousness is unique and moves around
 like a soul.


There's no dodgy metaphysical mechanism involved. If there are multiple
physical copies of you, and each copy has a similar consciousness to you,
then you can't know which copy is currently generating your consciousness.


 I think the idea is that the stream of consciousness is unified so long
 as all the copies are being realized identically, in fact they are not
 multiple per Leibniz's identity of indiscernibles.  When there is some
 quantum event amplified enough to make a difference in the stream of
 consciousness then the stream divides and there are two (or more) streams.


An implication of this is that if one of the streams terminates, your
consciousness will continue in the other.

-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou
On 26 March 2014 12:40, LizR lizj...@gmail.com wrote:

 On 26 March 2014 13:37, Stathis Papaioannou stath...@gmail.com wrote:

 On 26 March 2014 11:29, LizR lizj...@gmail.com wrote:

 On 26 March 2014 12:12, Stathis Papaioannou stath...@gmail.com wrote:


 An infinite universe (Tegmark type 1) implies that our consciousness
 flits about from one copy of us to another and that as a consequence we are
 immortal, so it does affect us even if there is no physical communication
 between its distant parts.

 Only if one assumes comp, I think, or something akin to Frank Tipler's
 Physics of Immortality view which basically says that identical quantum
 states are good enough to be mapped onto one another, and we experience all
 the states together in an infinite BEC type thing until differentiation
 occurs. (Cosmic, man!)


 You don't have to assume comp.


 I said if you assume comp OR if you assume Frank Tipler's theory of
 immortality. I added comp because that has the same implications, but the
 rest of what I said was assuming Tipler-esque continuity of consciousness
 through duplication of quantum states. Admittedly I dashed the post off and
 may not have made myself very clear :)


 If the theory is that consciousness is secreted by the brain like bile is
 secreted by the liver, so that a simulation can't be conscious, there will
 be other brains in the universe similar enough to yours that they will have
 a similar consciousness. This is a concrete, no nonsense, no
 consciousness-flitting-about type of theory - but your consciousness will
 still effectively flit about because you can't be sure which copy you are.


 Yes, that's what I was trying to get at. Assuming that consciousness
 arises somehow from the quantum state of your brain, and assuming that
 identical quantum states are sufficiently identical that consciousness
 continues when your quantum state is duplicated, regardless of where that
 happens (as Frank Tipler assumes when he says you can die and wake up in a
 simulated version of yourself at the end of time) - then you effectively
 exist in all the (infinite number of) places your brain's quantum state
 does. I've heard from my good friend the internet that the number of
 possible quantum states a brain can be in is around 10 ^ 10 ^ 70, which
 probably makes the nearest exact copy of my brain quite a long way away
 (assuming an infinite universe with the same laws of physics throughout,
 and similar initial conditions, and ergodicity whatever that is, etc, etc).
 But given worlds enough and time, as we are in eternal inflation for
 example, I'm virtually guaranteed to be peppered around the place, a
 monstrous regiment which you will be pleased to know is ridiculously far
 away, well beyond our cosmic horizon for a googolplex years to come.

 However, this assumes these copies are all me, or maybe I should start
 using the Royal we from now on (if my name hasn't given that away
 already). So I am she as she is me as you are me and we are all together,
 except for you. To not assume this - to assume these are all different
 people who happen to think they are me - is I think the same as assuming
 that identical quantum states can nevertheless be distinguished, somehow -
 but I believe the observed properties of BECs argue against this?


What is the difference between the copies being you and only thinking they
are you?

I'll put it differently. I propose that, since the matter in your synapses
turns over every few minutes, you are not really you a few minutes from
now, but merely a copy who thinks it is you. Can you prove whether this claim
is true or false, and even if you can, does it matter?


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou
On 26 March 2014 12:45, meekerdb meeke...@verizon.net wrote:

  On 3/25/2014 6:34 PM, Stathis Papaioannou wrote:




 On 26 March 2014 12:15, meekerdb meeke...@verizon.net wrote:

  An infinite universe (Tegmark type 1) implies that our
 consciousness flits about from one copy of us to another and that as a
 consequence we are immortal, so it does affect us even if there is no
 physical communication between its distant parts.


  That seems to imply that one's consciousness is unique and moves around
 like a soul.


  There's no dodgy metaphysical mechanism involved. If there are multiple
 physical copies of you, and each copy has a similar consciousness to you,
 then you can't know which copy is currently generating your consciousness.


 I think the idea is that the stream of consciousness is unified so long
 as all the copies are being realized identically, in fact they are not
 multiple per Leibniz's identity of indiscernibles.  When there is some
 quantum event amplified enough to make a difference in the stream of
 consciousness then the stream divides and there are two (or more) streams.


  An implication of this is that if one of the streams terminates your
 consciousness will continue in the other.


 But it will, at best, be *similar* to the deceased you, just as I am
 quite different from Brent Meeker of 50 years ago. And there is no guarantee
 that some stream will continue.


Similar is good enough. There is a guarantee that some branch will continue
if everything that can happen does happen.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou
On 26 March 2014 12:55, LizR lizj...@gmail.com wrote:

 On 26 March 2014 14:50, Stathis Papaioannou stath...@gmail.com wrote:

 On 26 March 2014 12:45, meekerdb meeke...@verizon.net wrote:

  On 3/25/2014 6:34 PM, Stathis Papaioannou wrote:


 On 26 March 2014 12:15, meekerdb meeke...@verizon.net wrote:

  An infinite universe (Tegmark type 1) implies that our
 consciousness flits about from one copy of us to another and that as a
 consequence we are immortal, so it does affect us even if there is no
 physical communication between its distant parts.


  That seems to imply that one's consciousness is unique and moves
 around like a soul.


  There's no dodgy metaphysical mechanism involved. If there are
 multiple physical copies of you, and each copy has a similar consciousness
 to you, then you can't know which copy is currently generating your
 consciousness.


 I think the idea is that the stream of consciousness is unified so
 long as all the copies are being realized identically, in fact they are not
 multiple per Leibniz's identity of indiscernibles.  When there is some
 quantum event amplified enough to make a difference in the stream of
 consciousness then the stream divides and there are two (or more) streams.


  An implication of this is that if one of the streams terminates your
 consciousness will continue in the other.


 But it will, at best be *similar* to the deceased you, just as I am
 quite different from Brent Meeker of 50yrs ago.  And there is no quarantee
 that some stream will continue.


 Similar is good enough. There is a guarantee that some branch will
 continue if everything that can happen does happen.

 Surely in an infinite universe, and assuming the identity of quantum
 states, you don't need similarity - you will get a quantum state that is a
 follow-on from your previous one, but in which you continue to be alive...

 Of course this depends on what it means for quantum states to follow on
 from other ones. But our brains already seem to know what that means, in
 that we feel we're the same person we were this morning, and so we feel
 continuity of similar enough quantum states. Unless QM is wrong about the
 nature of quantum states, we will feel continuity if the follow-on state
 is actually 10 ^ 10 ^ 100 light years away (or 10 ^ 10 ^ 100 years away)
 from the preceding state.


I agree but I don't think you need to refer to QM at all. The conclusion
would still follow in a classical infinite universe.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou


 On 26 Mar 2014, at 1:46 pm, meekerdb meeke...@verizon.net wrote:
 
 On 3/25/2014 6:50 PM, Stathis Papaioannou wrote:
 
 
 
 On 26 March 2014 12:45, meekerdb meeke...@verizon.net wrote:
 On 3/25/2014 6:34 PM, Stathis Papaioannou wrote:
 
 
 
 On 26 March 2014 12:15, meekerdb meeke...@verizon.net wrote:
 
 An infinite universe (Tegmark type 1) implies that our consciousness 
 flits about from one copy of us to another and that as a consequence we 
 are immortal, so it does affect us even if there is no physical 
 communication between its distant parts.
 
 That seems to imply that one's consciousness is unique and moves around 
 like a soul. 
 
 There's no dodgy metaphysical mechanism involved. If there are multiple 
 physical copies of you, and each copy has a similar consciousness to you, 
 then you can't know which copy is currently generating your consciousness.
  
 I think the idea is that the stream of consciousness is unified so long 
 as all the copies are being realized identically, in fact they are not
  multiple per Leibniz's identity of 
 indiscernibles.  When there is some quantum event amplified enough to 
 make a difference in the stream of consciousness then the stream divides 
 and there are two (or more) streams.
 
 An implication of this is that if one of the streams terminates your 
 consciousness will continue in the other.
 
 But it will, at best be *similar* to the deceased you, just as I am quite 
 different from Brent Meeker of 50yrs ago.  And there is no quarantee that 
 some stream will continue.
 
 Similar is good enough. There is a guarantee that some branch will continue 
 if everything that can happen does happen.
 
 That's a too casual reading of 'can happen'; there are many things in quantum
 mechanics that can't happen.  Just because we can imagine something
 happening, it doesn't follow that it is nomologically possible.

What sorts of things that might conceivably save your life do you think are not 
nomologically possible?



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou


 On 26 Mar 2014, at 1:56 pm, meekerdb meeke...@verizon.net wrote:
 
 On 3/25/2014 6:57 PM, Stathis Papaioannou wrote:
 
 
 
 On 26 March 2014 12:55, LizR lizj...@gmail.com wrote:
 On 26 March 2014 14:50, Stathis Papaioannou stath...@gmail.com wrote:
 On 26 March 2014 12:45, meekerdb meeke...@verizon.net wrote:
 On 3/25/2014 6:34 PM, Stathis Papaioannou wrote:
 
 On 26 March 2014 12:15, meekerdb meeke...@verizon.net wrote:
 
 An infinite universe (Tegmark type 1) implies that our consciousness 
 flits about from one copy of us to another and that as a consequence 
 we are immortal, so it does   
 affect us even if there is no physical communication 
 between its distant parts.
 
 That seems to imply that one's consciousness is unique and moves around 
 like a soul. 
 
 There's no dodgy metaphysical mechanism involved. If there are multiple 
 physical copies of you, and each copy has a similar consciousness to 
 you, then you can't know which copy is currently generating your 
 consciousness.
  
 I think the idea is that the stream of consciousness  
  is unified so long as all the 
 copies are being realized identically, in fact they are not multiple 
 per Leibniz's identity of indiscernibles.  When there is some quantum 
 event amplified enough to make a difference in the stream of 
 consciousness then the stream divides and there are two (or more) 
 streams.
 
 An implication of this is that if one of the streams terminates your 
 consciousness will continue in the other.
 
 But it will, at best be *similar* to the deceased you, just as I am 
 quite different from Brent Meeker of 50yrs ago.  And there is no 
 quarantee that some stream will continue.
 
 Similar is good enough. There is a guarantee that some branch will 
 continue if everything that can happen does happen. 
 Surely in an infinite universe, and assuming the identity of quantum 
 states, you don't need similarity - you will get a quantum state that is a 
 follow-on   from your previous one, but in which you 
 continue to be alive...
 
 Of course this depends on what it means for quantum states to follow on 
 from other ones. But our brains already seem to know what that means, in 
 that we feel we're the same person we were this morning, and so we feel 
 continuity of similar enough quantum states. Unless QM is wrong about the 
 nature of quantum states, we will feel continuity if the follow on state 
 is actually 10 ^ 10 ^ 100 light years away (or 10 ^ 10 ^ 100 years away) 
 from the preceeding state.
 
 I agree but I don't think you need to refer to QM at all. The conclusion 
 would still follow in a classical infinite universe.
 
 Probably not since classical physics is based on real numbers (and so is 
 quantum mechanics for that matter).  Of course you could still fall back on 
 similar enough. But in that case you will, as you are dying, pass into a 
 state of consciousness (i.e. none) that is similar enough to a fetus (of 
 some animal) or maybe a cabbage.

You don't need an *exact* copy, just a good enough copy. If an exact copy were 
needed, either at the quantum level or to an infinite number of decimal places, 
then we could not survive from one moment to the next, since in a very small 
period there are quite gross physical changes in our bodies.



Re: Scott Aaronson vs. Max Tegmark

2014-03-25 Thread Stathis Papaioannou


 On 26 Mar 2014, at 2:22 pm, chris peck chris_peck...@hotmail.com wrote:
 
 It's a pretty significant dodgy metaphysical consequence if you actually 
 live forever.
 
 It's many things. Interesting, strange, wonderful and so on, but the one thing
 it isn't is significant.
 
 The continuation of an experiential history on some other earth, a history 
 common to the one that just ended here on this earth, is not an effect on 
 this earth. It's as insignificant to this earth as things can be.

It's not insignificant if you and your experiments are not on this earth but on 
any number of separate, similar earths.



Re: Scott Aaronson vs. Max Tegmark

2014-03-26 Thread Stathis Papaioannou




 On 26 Mar 2014, at 2:23 pm, LizR lizj...@gmail.com wrote:
 
 On 26 March 2014 14:57, Stathis Papaioannou stath...@gmail.com wrote:
 
 I agree but I don't think you need to refer to QM at all. The conclusion 
 would still follow in a classical infinite universe. 
 I don't see that, because you can subdivide classical states indefinitely 
 (hence the space-time continuum) while with QM you only have a certain number 
 of allowed states for some things at least (electrons and suchlike), and it's 
 hypothesised this might also apply to space-time (I think it has to for this 
 argument to work.)

The engineering tolerance of the brain must be finite (and far higher than the 
Planck level) if we are to survive from moment to moment, and that implies 
there are only a finite number of possible brains and hence mental states.
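
As a rough illustration of that counting argument (the two numbers below are
made-up assumptions, not measurements), any finite tolerance per component
gives a finite, if astronomical, total:

import math

# Illustrative sketch only: both constants are assumptions chosen for the
# sake of the counting argument, not neuroscience.
N_COMPONENTS = 10**11          # assumed number of relevant components (roughly, neurons)
STATES_PER_COMPONENT = 10**4   # assumed functionally distinguishable states per component

# The number of distinct configurations is STATES_PER_COMPONENT ** N_COMPONENTS.
# That integer is far too large to construct directly, so report its log10.
log10_brains = N_COMPONENTS * math.log10(STATES_PER_COMPONENT)
print("distinct brains ~ 10^%.3g" % log10_brains)
# prints: distinct brains ~ 10^4e+11 -- vast, but finite.

However generous you make those two constants, the count stays finite; only
infinite precision or unbounded size would give infinitely many possible
brains.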



Re: Scott Aaronson vs. Max Tegmark

2014-03-26 Thread Stathis Papaioannou
On 26 March 2014 17:13, meekerdb meeke...@verizon.net wrote:

  On 3/25/2014 9:57 PM, Stathis Papaioannou wrote:

 You don't need an *exact* copy, just a good enough copy. If an exact copy
 were needed, either at the quantum level or to an infinite number of
 decimal places, then we could not survive from one moment to the next,
 since in a very small period there are quite gross physical changes in our
 bodies.



 My point exactly - We DON'T survive moment to moment except in rough
 approximation and so as we deteriorate in old age we may come to
 approximate topsoil.  The question is, why should conscious continuity
 preserve us while physical continuity doesn't count?  Is it just our ego
 that says consciousness should be preserved - no matter how much it changes?


Physical continuity is important only insofar as it leads to psychological
continuity. Psychological continuity is important because we are programmed
to think it is; it has no intrinsic importance.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-26 Thread Stathis Papaioannou
On Wednesday, March 26, 2014, Bruno Marchal marc...@ulb.ac.be wrote:


 On 26 Mar 2014, at 01:37, Stathis Papaioannou wrote:




 On 26 March 2014 11:29, LizR lizj...@gmail.com wrote:

 On 26 March 2014 12:12, Stathis Papaioannou stath...@gmail.com wrote:


 An infinite universe (Tegmark type 1) implies that our consciousness
 flits about from one copy of us to another and that as a consequence we are
 immortal, so it does affect us even if there is no physical communication
 between its distant parts.

 Only if one assumes comp, I think, or something akin to Frank Tipler's
 Physics of Immortality view which basically says that identical quantum
 states are good enough to be mapped onto one another, and we experience all
 the states together in an infinite BEC type thing until differentiation
 occurs. (Cosmic, man!)


 You don't have to assume comp. If the theory is that consciousness is
 secreted by the brain like bile is secreted by the liver, so that a
 simulation can't be conscious, there will be other brains in the universe
 similar enough to yours that they will have a similar consciousness.


 Assuming comp!
 If my consciousness is really needing the exact material bile in my liver,
 the other brain will just not be similar enough, and it is conceivable that
 although conscious like me, the copy might be another person. This makes no
 sense, if you use some form of comp.



 This is a concrete, no nonsense, no consciousness-flitting-about type of
 theory - but your consciousness will still effectively flit about because
 you can't be sure which copy you are.


 Assuming comp. If the exact infinite state of the bile is required, then
 by definition, the other person is a different person. I agree this seems
 absurd, but that is a comp prejudice. After all, I *can* conceive that the
 other might be an impostor, an authentically other person.


If consciousness is secreted by the brain, then if you make a similar brain
you will make a similar consciousness. The actual theory of consciousness
doesn't make any difference here. The claim that the copy isn't really the
same person is equivalent to, and as absurd as,  the claim that I'm not the
same person after a night's sleep.

-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-26 Thread Stathis Papaioannou
On Thursday, March 27, 2014, Russell Standish li...@hpcoders.com.au wrote:

 On Wed, Mar 26, 2014 at 05:06:46PM +1100, Stathis Papaioannou wrote:
 
  The engineering tolerance of the brain must be finite (and far higher
 than the Planck level) if we are to survive from moment to moment, and that
 implies there are only a finite number of possible brains and hence mental
 states.
 

 Steady on, I don't think it does that at all, unless you constrain the
 physical world to be bounded somehow in both space and time.

 I think you were just trying to say that the space of brains (and
 mental states) is discrete, something I could agree with.


Unless you allow brains to grow infinitely big, there are only a finite
number of possible brains even in an infinite universe.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-27 Thread Stathis Papaioannou
On 27 March 2014 18:48, Bruno Marchal marc...@ulb.ac.be wrote:


 On 26 Mar 2014, at 13:47, Stathis Papaioannou wrote:



 On Wednesday, March 26, 2014, Bruno Marchal marc...@ulb.ac.be wrote:


 On 26 Mar 2014, at 01:37, Stathis Papaioannou wrote:




 On 26 March 2014 11:29, LizR lizj...@gmail.com wrote:

 On 26 March 2014 12:12, Stathis Papaioannou stath...@gmail.com wrote:


 An infinite universe (Tegmark type 1) implies that our consciousness
 flits about from one copy of us to another and that as a consequence we are
 immortal, so it does affect us even if there is no physical communication
 between its distant parts.

 Only if one assumes comp, I think, or something akin to Frank Tipler's
 Physics of Immortality view which basically says that identical quantum
 states are good enough to be mapped onto one another, and we experience all
 the states together in an infinite BEC type thing until differentiation
 occurs. (Cosmic, man!)


 You don't have to assume comp. If the theory is that consciousness is
 secreted by the brain like bile is secreted by the liver, so that a
 simulation can't be conscious, there will be other brains in the universe
 similar enough to yours that they will have a similar consciousness.


 Assuming comp!
 If y consciousness is really needing the exact material bile in my liver,
 the other brain will just not be similar enough, and it is conceivable that
 although conscious like me, the copy might be another person. This makes no
 sense, if you use some form of comp.



 This is a concrete, no nonsense, no consciousness-flitting-about type of
 theory - but your consciousness will still effectively flit about because
 you can't be sure which copy you are.


 Assuming comp. If the exact infinite state of the bile is required,
 then by definition, the other person is a different person. I agree this
 seems absurd, but that is a comp prejudice. After all, I *can* conceive
 that the other might be an impostor an authentically other person.


 If consciousness is secreted by the brain, then if you make a similar
 brain you will make a similar consciousness.


 yes, but if the brain secretes consciousness, and if my identity is in the
 identity of the matter involved, the consciousness is conceivably similar,
 but not mine. I agree this makes not a lot of sense, but this is because
 we put the identity (and consciousness) in the relational information, and
 this uses comp.




 The actual theory of consciousness doesn't make any difference here.

 The claim that the copy isn't really the same person is equivalent to, and
 as absurd as,  the claim that I'm not the same person after a night's
 sleep.



 I agree, but I think you are using some functionalism here. Someone who
 associates consciousness to its actual matter might say that he is the same
 person after one night, but not after seven years (assuming the whole
 material body constitution has been changed). That is a difficulty for his
 theory, but it is logically conceivable if we abandon
 comp/functionalism/CTM. Comp has not that problem, but then eventually we
 must explain matter from information handled through number
 relations/computations.

 Bruno


It doesn't follow that if consciousness is substrate specific it can't be
duplicated; it can in fact be duplicated in a straightforward way, by
making a biological brain. Even if consciousness is due to an immaterial
soul one could say that it could be duplicated if God performs a miracle.
The claim that the duplicated consciousness isn't really me is a claim
about the nature of personal identity, and is independent of any theory of
how consciousness is generated.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-27 Thread Stathis Papaioannou
On 27 March 2014 19:11, Bruno Marchal marc...@ulb.ac.be wrote:


 On 26 Mar 2014, at 22:30, Stathis Papaioannou wrote:



 On Thursday, March 27, 2014, Russell Standish li...@hpcoders.com.au
 wrote:

 On Wed, Mar 26, 2014 at 05:06:46PM +1100, Stathis Papaioannou wrote:
 
  The engineering tolerance of the brain must be finite (and far higher
 than the Planck level) if we are to survive from moment to moment, and that
 implies there are only a finite number of possible brains and hence mental
 states.
 

 Steady on, I don't think it does that at all, unless you constrain the
 physical world to be bounded somehow in both space and time.

 I think you were just trying to say that the space of brains (and
 mental states) is discrete, something I could agree with.


 Unless you allow brains to grow infinitely big, there are only a finite
 number of possible brains even in an infinite universe.




 Assuming comp. If the brain is defined by its material quantum state,
 and assuming electron position is a continuous observable, then we can have
 an infinity of brains, even when limiting their size.


Is electron position a continuous observable? Even if it is and there are
an infinity of brains, why should that result in an infinity of minds? It
would seem unlikely that brains would evolve so that an arbitrarily small
change in the position of an electron would cause a change in
consciousness, and we know that even gross changes in the brain, as occur
in stroke or head injury, sometimes have remarkably little effect.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-27 Thread Stathis Papaioannou




 On 28 Mar 2014, at 1:47 am, Bruno Marchal marc...@ulb.ac.be wrote:
 
 
 On 27 Mar 2014, at 11:35, Stathis Papaioannou wrote:
 
 
 
 
 On 27 March 2014 18:48, Bruno Marchal marc...@ulb.ac.be wrote:
 
 On 26 Mar 2014, at 13:47, Stathis Papaioannou wrote:
 
 
 
 On Wednesday, March 26, 2014, Bruno Marchal marc...@ulb.ac.be wrote:
 
 On 26 Mar 2014, at 01:37, Stathis Papaioannou wrote:
 
 
 
 
 On 26 March 2014 11:29, LizR lizj...@gmail.com wrote:
 On 26 March 2014 12:12, Stathis Papaioannou stath...@gmail.com wrote:
 
 An infinite universe (Tegmark type 1) implies that our consciousness 
 flits about from one copy of us to another and that as a consequence 
 we are immortal, so it does affect us even if there is no physical 
 communication between its distant parts.
 
 Only if one assumes comp, I think, or something akin to Frank Tipler's 
 Physics of Immortality view which basically says that identical 
 quantum states are good enough to be mapped onto one another, and we 
 experience all the states together in an infinite BEC type thing until 
 differentiation occurs. (Cosmic, man!)
 
 
 You don't have to assume comp. If the theory is that consciousness is 
 secreted by the brain like bile is secreted by the liver, so that a 
 simulation can't be conscious, there will be other brains in the 
 universe similar enough to yours that they will have a similar 
 consciousness.
 
 Assuming comp!
 If y consciousness is really needing the exact material bile in my liver, 
 the other brain will just not be similar enough, and it is conceivable 
 that although conscious like me, the copy might be another person. This 
 makes no sense, if you use some form of comp. 
 
 
 
 This is a concrete, no nonsense, no consciousness-flitting-about type of 
 theory - but your consciousness will still effectively flit about 
 because you can't be sure which copy you are.
 
 Assuming comp. If the exact infinite state of the bile is required, 
 then by definition, the other person is a different person. I agree this 
 seems absurd, but that is a comp prejudice. After all, I *can* conceive 
 that the other might be an impostor an authentically other person.
 
 If consciousness is secreted by the brain, then if you make a similar 
 brain you will make a similar consciousness.
 
 yes, but if the brain secrets consciousness, and if my identity is in the 
 identity of the matter involved, the consciousness is conceivably similar, 
 but not mine. I agree this makes not a lot of sense, but this is because 
 we put the identity (and consciousness) in the relational information, and 
 this uses comp.
 
 
 
 
 The actual theory of consciousness doesn't make any difference here.
 The claim that the copy isn't really the same person is equivalent to, and 
 as absurd as,  the claim that I'm not the same person after a night's 
 sleep.
 
 
 I agree, but I think you are using some functionalism here. Someone who 
 associates consciousness to its actual matter might say that he is the same 
 person after one night, but not after seven years (assuming the whole 
 material body constitution has been changed). That is a difficulty for his 
 theory, but it is logically conceivable if we abandon 
 comp/functionalism/CTM. Comp has not that problem, but then eventually we 
 must explain matter from information handled through number 
 relations/computations.
 
 Bruno
 
 It doesn't follow that if consciousness is substrate specific it can't be 
 duplicated;
 
 OK. But the point is that it might, and that would be the case if my 
 consciousness is attached to both the exact quantum state of my brain and 
 substrate specific (which is a vague thing, yet incompatible with 
 computationalism).
 
 
 
 it can in fact be duplicated in a straightforward way, by making a 
 biological brain.
 
 But we do have evidence that biological copying is at some rather high
 level, and that it does not copy any piece of matter. It replaces all 
 molecules and atoms with new atoms extracted from food.
 
 Here I am just playing the role of devil's advocate and I assume non comp to 
 make a logical point.
 
 
 
 
 Even if consciousness is due to an immaterial soul one could say that it 
 could be duplicated if God performs a miracle.
 
 Right again, but here too, it might not be the case. God could decide to NOT 
 do a miracle, given that It is so powerful.
 
 
 
 The claim that the duplicated consciousness isn't really me is a claim 
 about the nature of personal identity, and is independent of any theory of 
 how consciousness is generated.
 
 Not if the theory of consciousness is based on personal identity. Your claim 
 makes sense again for a functionalist, but not necessarily to all 
 non-functionalists.

A functionalist could agree that a computer can replicate his consciousness but 
it would not really be him. There is no explicit or implicit position on 
personal identity in functionalism.


Re: Max and FPI

2014-03-27 Thread Stathis Papaioannou
On 28 March 2014 07:49, Richard Ruquist yann...@gmail.com wrote:

 Brent,

 If, as you say, in the multiverse everything happens, and infinitely many
 times, then there can be only one multiverse, which negates a number of cosmology
 theories like Linde's Chaotic Inflation Cosmology. But then the potential
 he used provides the best fit to BICEP2 gravitational-wave data. Perhaps it
 is the multiverse that is falsified?


2 x multiverse = multiverse
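
To spell the quip out as cardinal arithmetic (a sketch assuming only that a
multiverse contains an infinitely large collection U of universes): merging
U with a disjoint copy U' gives

\[ |U \cup U'| \;=\; 2\cdot|U| \;=\; |U| , \]

a multiverse of exactly the same size, so counting "how many multiverses"
beyond one does no work.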


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-27 Thread Stathis Papaioannou
On 28 March 2014 09:37, LizR lizj...@gmail.com wrote:

 On 27 March 2014 23:42, Stathis Papaioannou stath...@gmail.com wrote:

 On 27 March 2014 19:11, Bruno Marchal marc...@ulb.ac.be wrote:

 On 26 Mar 2014, at 22:30, Stathis Papaioannou wrote:

 On Thursday, March 27, 2014, Russell Standish li...@hpcoders.com.au
 wrote:

 On Wed, Mar 26, 2014 at 05:06:46PM +1100, Stathis Papaioannou wrote:
 
  The engineering tolerance of the brain must be finite (and far higher
 than the Planck level) if we are to survive from moment to moment, and that
 implies there are only a finite number of possible brains and hence mental
 states.
 

 Steady on, I don't think it does that at all, unless you constrain the
 physical world to be bounded somehow in both space and time.

 I think you were just trying to say that the space of brains (and
 mental states) is discrete, something I could agree with.


 Unless you allow brains to grow infinitely big, there are only a finite
 number of possible brains even in an infinite universe.

 Assuming comp. If the brain is defined by its material quantum state,
 and assuming electron position is a continuous observable, then we can have
 an infinity of brains, even when limiting their size.


 Is electron position a continuous observable? Even if it is and there are
 an infinity of brains, why should that result in an infinity of minds? It
 would seem unlikely that brains would evolve so that an arbitrarily small
 change in the position of an electron would cause a change in
 consciousness, and we know that even gross changes in the brain, as occur
 in stroke or head injury, sometimes have remarkably little effect.


 I think Bruno must have a materialist hat on here?! In comp the
 substitution level isn't necessarily at the level of individual electrons,
 surely...

 But that raises another question, for me at least - in comp are there only
 finitely many possible states of mind? So one would literally be able to
 travel full circle through all possible minds - eventually?

 I would say there is only a finite number of possible biological human
minds, but an infinite number of possible minds if you are running them on
the Turing machine in Platonia.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-27 Thread Stathis Papaioannou
On 28 March 2014 09:51, LizR lizj...@gmail.com wrote:

 On 28 March 2014 11:46, Stathis Papaioannou stath...@gmail.com wrote:

 I would say there is only a finite number of possible biological human
 minds,


 Because the number is limited by the Bekenstein bound if we assume
 physical supervenience?


  but an infinite number of possible minds if you are running them on the
 Turing machine in Platonia.


 (Or an infinite number of Turing machines, according to comp ;-)

 Does comp suggest that consciousness corresponds to an infinite number of
 different possible mental states (rather than a very large, but finite,
 number of them) ?

 (If so, should I assume we're talking about a countable infinity?)

 I think you have to specify whether comp means merely that a computer
simulation of a brain can be conscious or go the whole way with Bruno's
conclusion that there is no actual physical computer and all possible
computations are necessarily implemented by virtue of their status as
platonic objects.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-27 Thread Stathis Papaioannou
On 28 March 2014 10:16, LizR lizj...@gmail.com wrote:

 On 28 March 2014 12:00, Stathis Papaioannou stath...@gmail.com wrote:

 On 28 March 2014 09:51, LizR lizj...@gmail.com wrote:

 On 28 March 2014 11:46, Stathis Papaioannou stath...@gmail.com wrote:

 I would say there is only a finite number of possible biological human
 minds,


 Because the number is limited by the Bekenstein bound if we assume
 physical supervenience?


  but an infinite number of possible minds if you are running them on
 the Turing machine in Platonia.


 (Or an infinite number of Turing machines, according to comp ;-)

 Does comp suggest that consciousness corresponds to an infinite number
 of different possible mental states (rather than a very large, but finite,
 number of them) ?

 (If so, should I assume we're talking about a countable infinity?)

 I think you have to specify whether comp means merely that a computer
 simulation of a brain can be conscious or go the whole way with Bruno's
 conclusion that there is no actual physical computer and all possible
 computations are necessarily implemented by virtue of their status as
 platonic objects.


 So what's the answer in either case?


Even in the first case it could be infinite if the physical universe is
infinite and we allow for post-human brains that can increase without bound.
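
(As a rough aside on why a bounded biological brain can only have finitely many
distinguishable states: the Bekenstein bound limits the entropy, and hence the
number of distinguishable states, of any system of bounded size and energy. This
is only a back-of-envelope sketch with textbook values plugged in for
illustration:

  \[ S \le \frac{2 \pi k R E}{\hbar c}, \qquad N \le 2^{S/(k \ln 2)} \]

With R of order 0.1 m and E = mc^2 for a roughly 1.5 kg brain, S/(k ln 2) comes
out at a few times 10^42 bits, so N is at most around 2^(10^42): unimaginably
large, but finite.)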

The comment about comp was a general comment. On my understanding it just
means that a mind can be simulated on a computer.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-28 Thread Stathis Papaioannou
On 29 March 2014 03:24, Bruno Marchal marc...@ulb.ac.be wrote:


 On 27 Mar 2014, at 18:21, Stathis Papaioannou wrote:

 A functionalist could agree that a computer can replicate his
 consciousness but it would not really be him. There is no explicit or
 implicit position on personal identity in functionalism.


 This is weird. I guess you mean your notion of functionalism, which is too
 much general I think, but I was still thinking it could have a relation
 with functionalism in the math sense, where an object is defined by its
 functional relations with other objects, and the identity *is* in the
 functionality.

 Then function is always used in two very different sense, especially in
 computer science, as it can be extensional function (defined by the
 functionality), or its intension (the code, the description, the body).

 Could your functionalist say yes to a doctor, which build the right
 computer (to replicate his consciousness), and add enough original atoms
 to preserve the identity? Is someone saying yes to that doctor, but only if
 a priest blesses the artificial brain with holy water a functionalist?

 Can you describe an experience refuting functionalism (in your sense)?
 Just to help me to understand. Thanks.


A person could conceivably say the following: it is impossible for a
computer to be conscious because consciousness is a magical substance that
comes from God. Therefore, if you make an artificial brain it may behave
like a real brain, but it will be a zombie. God could by a miracle grant
the artificial brain consciousness, and he could even grant it a similar
consciousness to my own, so that it will think it is me. However, it won't
*really* be me, because it could only be me if we were numerically
identical, and not even God can make two distinct things numerically
identical.

I don't accept this position, but it is the position many people have on
personal identity, and it is independent of their position on the
possibility of computer consciousness.


-- 
Stathis Papaioannou



Re: Scott Aaronson vs. Max Tegmark

2014-03-28 Thread Stathis Papaioannou
On 29 March 2014 05:15, Bruno Marchal marc...@ulb.ac.be wrote:


 On 28 Mar 2014, at 00:00, Stathis Papaioannou wrote:




 On 28 March 2014 09:51, LizR lizj...@gmail.com wrote:

 On 28 March 2014 11:46, Stathis Papaioannou stath...@gmail.com wrote:

 I would say there is only a finite number of possible biological human
 minds,


 Because the number is limited by the Bekenstein bound if we assume
 physical supervenience?


  but an infinite number of possible minds if you are running them on the
 Turing machine in Platonia.


 (Or an infinite number of Turing machines, according to comp ;-)

 Does comp suggest that consciousness corresponds to an infinite number of
 different possible mental states (rather than a very large, but finite,
 number of them) ?

 (If so, should I assume we're talking about a countable infinity?)

 I think you have to specify whether comp means merely that a computer
 simulation of a brain can be conscious or go the whole way with Bruno's
 conclusion that there is no actual physical computer and all possible
 computations are necessarily implemented by virtue of their status as
 platonic objects.



 It is not so much in virtue of their status as platonic object (which
 seems to imply some metaphysical hypothesis), but in virtue of being true
 independently of my will, or even of the notion of universe, god, etc.


But there is the further notion of implementation. The obvious objection is
that computations might be true but they cannot give rise to
consciousness unless implemented on a physical computer. Step 8 of the UDA
says the physical computer is not necessary, which is a metaphysical
position if anything is.


 You need just to assume, or accept as true, relations like x + 0 = x, for
 all x, etc. It is a very weak form of realism, and basically, this is
 assumed by all scientists.

 *After* UDA, the assumptions are no more than classical logic and, for
 all x and y:

 0 ≠ (x + 1)
 ((x + 1) = (y + 1)) → x = y
 x + 0 = x
 x + (y + 1) = (x + y) + 1
 x * 0 = 0
 x * (y + 1) = (x * y) + x

 The boxes and diamond are defined in that theory, the theology and
 physics is derived in the extensions of that theory (the observers)
 simulated by that theory.

 There are many other equivalent theories.

 There are some metaphysical or theological consequences, clear with comp,
 but except for the yes doctor, there is no special ontological commitment
 done, not even on the numbers, that is no more than in Euclid proofs of the
 infinity of the prime numbers.

 The computations are implemented in virtue of the consequences of the
 axioms above.

 Bruno




 --
 Stathis Papaioannou



 http://iridia.ulb.ac.be/~marchal/







-- 
Stathis Papaioannou



Fwd: Scott Aaronson vs. Max Tegmark

2014-03-29 Thread Stathis Papaioannou
On 29 March 2014 19:27, Bruno Marchal marc...@ulb.ac.be wrote:


 On 28 Mar 2014, at 23:41, Stathis Papaioannou wrote:




 On 29 March 2014 03:24, Bruno Marchal marc...@ulb.ac.be wrote:


 On 27 Mar 2014, at 18:21, Stathis Papaioannou wrote:

 A functionalist could agree that a computer can replicate his
 consciousness but it would not really be him. There is no explicit or
 implicit position on personal identity in functionalism.


 This is weird. I guess you mean your notion of functionalism, which is
 too much general I think, but I was still thinking it could have a relation
 with functionalism in the math sense, where an object is defined by its
 functional relations with other objects, and the identity *is* in the
 functionality.

 Then function is always used in two very different sense, especially in
 computer science, as it can be extensional function (defined by the
 functionality), or its intension (the code, the description, the body).

 Could your functionalist say yes to a doctor, which build the right
 computer (to replicate his consciousness), and add enough original atoms
 to preserve the identity? Is someone saying yes to that doctor, but only if
 a priest blesses the artificial brain with holy water a functionalist?

 Can you describe an experience refuting functionalism (in your sense)?
 Just to help me to understand. Thanks.


 A person could conceivably say the following: it is impossible for a
 computer to be conscious because consciousness is a magical substance that
 comes from God. Therefore, if you make an artificial brain it may behave
 like a real brain, but it will be a zombie. God could by a miracle grant
 the artificial brain consciousness, and he could even grant it a similar
 consciousness to my own, so that it will think it is me.


 Hmm... OK, but usually comp is not just that a computer can be conscious,
 but that it can be conscious (c= can support consciousness) in virtue of
 doing computation. That is why I add sometime qua computatio to remind
 this. If functionalism accept a role for a magical substance, it is
 obviously non computationalism.


Of course, the computer or computing device must be doing the computations;
if not, it is unconscious or only potentially conscious.


 However, it won't *really* be me, because it could only be me if we were
 numerically identical, and not even God can make two distinct things
 numerically identical.


 Even with God. This makes the argument weird. Even if God cannot do that.
 But it can make sense, with magic matter, many things can make sense.


It's not so weird, since even God or magic can't do something logically
impossible like make 1 = 2, and under one theory of personal identity
(which by the way I think is completely wrong) that is what would have to
happen for a person to survive teleportation.



 I don't accept this position, but it is the position many people have on
 personal identity, and it is independent of their position on the
 possibility of computer consciousness.


 OK.

 I think you have to specify whether comp means merely that a computer
 simulation of a brain can be conscious or go the whole way with Bruno's
 conclusion that there is no actual physical computer and all possible
 computations are necessarily implemented by virtue of their status as
 platonic objects.



 It is not so much in virtue of their status as platonic object (which
 seems to imply some metaphysical hypothesis), but in virtue of being true
 independently of my will, or even of the notion of universe, god, etc.


 But there is the further notion of implementation. The obvious objection
 is that computations might be true but they cannot give rise to
 consciousness unless implemented on a physical computer.


 Only IF you assume that one universal machine (the physical universe or
 some part of it) has a special (metaphysical) status, and that it plays a
 special role. Implementation in computer science is defined purely by a
 relation between a universal machine/number and a machine/number (which can
 be universal or not).
 u implements machine x if phi_u(x,y) = phi_x(y) for all y, and that can be
 defined in the theory quoted below.

 A physicalist, somehow, just pick out one universal being and asserts
 that it is more fundamental. The computationalist know better, and know
 that the special physical universal machine has to win some competition
 below our substitution level.


But most computationalists are probably physicalists who believe that
consciousness can only occur if an actual physical computer is using energy
and heating up in the process of implementing computations. They don't
believe that the abstract computation on its own is enough. They may be
wrong, but that's what they think, and they call themselves
computationalists.


 Step 8 of the UDA says the physical computer is not necessary; which

Re: Max and FPI

2014-03-31 Thread Stathis Papaioannou
On 1 April 2014 12:24, meekerdb meeke...@verizon.net wrote:

  On 3/31/2014 5:53 PM, Stathis Papaioannou wrote:




 On 1 April 2014 04:04, meekerdb meeke...@verizon.net wrote:

  On 3/31/2014 12:30 AM, Bruno Marchal wrote:


OK...you see an elegant explanation should the empirically observed
 fact actually not be.

 But would even that alone have been remotely near the ballpark of things
 taken seriously, had there not been extreme quantum strangeness
 irreconcilable at that time, with the most core, most
 fundamental accomplishments of science to date?


  MWI evacuates all weirdness from QM. It restores fully
 - determinacy
 - locality
 - physical realism

  The price is not that big, as nature is used to multiplied things, like
 the water molecules in the ocean, the stars in the sky, the galaxies, etc.
 Each time, the humans are shocked by this, and Giordano Bruno got burned
 for saying that stars are other suns, and that they might have planets,
 with other living being.
 It is humbling, but not conceptually new, especially for a
 computationalist, which explains the MW from simple arithmetic, where you
 need only to believe in the consequence of addition and multiplication of
 integers.



  The price is not having a unified 'self' - which many people would
 consider a big price since all observation and record keeping which is used
 to empirically test theories assumes this unity.  If you observe X and you
 want to use that as empircal test of a theory it isn't helpful if your
 theory of the instruments says they also recorded not-X.


  Are you saying that the fact that we don't see many worlds is evidence
 against many worlds?


 No, the fact that whatever our instrument reads our *theory* says there
 are infinitely many other readings.


Is that just a psychological problem or do you think it implies the theory
is wrong? If the theory were right, what should we expect to see?


-- 
Stathis Papaioannou



Re: Max and FPI

2014-03-31 Thread Stathis Papaioannou
On 1 April 2014 13:56, meekerdb meeke...@verizon.net wrote:

  On 3/31/2014 6:41 PM, Stathis Papaioannou wrote:

  Are you saying that the fact that we don't see many worlds is
 evidence against many worlds?


  No, the fact that whatever our instrument reads our *theory* says there
 are infinitely many other readings.


  Is that just a psychological problem or do you think it implies the
 theory is wrong? If the theory were right, what should we expect to see?



 No, I think it implies the theory is incomplete.  It needs to explain why
 our instrument readings seem to obey the laws of probability.


Yes, it has been said many times that there is a problem with probability
in an infinite universe but I assume this is not enough to conclude that an
infinite universe is impossible a priori, so what *should* we observe in
such a universe?


-- 
Stathis Papaioannou



Re: Daphne du Maurier was right!

2014-04-03 Thread Stathis Papaioannou
On 4 April 2014 15:59, Samiya Illias samiyaill...@gmail.com wrote:

 I suggest we study and evaluate it for its literal merit, rather than
 'what it might mean' thus removing all constructs and myths surrounding it.
 Dr. Maurice Bucaille did something similar when he examined the scriptures
 in the light of scientific knowledge. Online translation:

 https://ia700504.us.archive.org/18/items/TheBibletheQuranScienceByDr.mauriceBucaille/TheBibletheQuranScienceByDr.mauriceBucaille.pdf



To be fair, you have to allow that if there is a scientific inaccuracy in a
holy book which is considered the word of God then, unless God got the
science wrong, that would be evidence against the holy book being the word
of God. The problem is that even if a believer says they are open-minded in
this way they don't really mean it because that would be an admission that
they are willing to test God, which is contrary to faith and therefore bad.


-- 
Stathis Papaioannou



Re: Daphne du Maurier was right!

2014-04-04 Thread Stathis Papaioannou
On 4 April 2014 20:33, Richard Ruquist yann...@gmail.com wrote:




 On Fri, Apr 4, 2014 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:




 On 4 April 2014 15:59, Samiya Illias samiyaill...@gmail.com wrote:

 I suggest we study and evaluate it for its literal merit, rather than
 'what it might mean' thus removing all constructs and myths surrounding it.
 Dr. Maurice Bucaille did something similar when he examined the scriptures
 in the light of scientific knowledge. Online translation:

 https://ia700504.us.archive.org/18/items/TheBibletheQuranScienceByDr.mauriceBucaille/TheBibletheQuranScienceByDr.mauriceBucaille.pdf



 To be fair, you have to allow that if there is a scientific inaccuracy in
 a holy book which is considered the word of God then, unless God got the
 science wrong, that would be evidence against the holy book being the word
 of God. The problem is that even if a believer says they are open-minded in
 this way they don't really mean it because that would be an admission that
 they are willing to test God, which is contrary to faith and therefore bad.


 What are you called if you are willing to test god?
 A believer?


Rational.


-- 
Stathis Papaioannou



Re: Daphne du Maurier was right!

2014-04-04 Thread Stathis Papaioannou
On 4 April 2014 16:41, Samiya Illias samiyaill...@gmail.com wrote:


 What is more important? Faith or Honest Faith? How can we honestly believe
 in God when we think God doesn't know what He created? I think it's a
 disservice to God, to religion and to ourselves when we choose not to
 question Faith, and not to examine it. It's not 'to test God', rather it's to
 test what we accept as from God.
 If we believe in Life After Death, then the quality of our life in the
 Hereafter is dependent on the version of scripture that we took on faith.
 If Judgement is inevitable, then it is of utmost importance that we base
 our beliefs and actions upon critical inquiry and honest understanding.


So are you saying that if a scientific error is pointed out to you in the
Bible or the Quran you will accept that they are not the word of God?


-- 
Stathis Papaioannou



Re: The Yes-Doctor Experiment for real

2013-12-10 Thread Stathis Papaioannou
On 11 December 2013 06:20, George gl...@quantics.net wrote:
 Hi List

 I haven't contributed to this list for a while but I thought you might be
 interested in this article from the Science Daily on line magazine

 Neural Prosthesis Restores Behavior After Brain Injury

 George Levy

The rat has the same behaviour, but does it have the same experience?


-- 
Stathis Papaioannou



Re: The Yes-Doctor Experiment for real

2013-12-11 Thread Stathis Papaioannou
On 12 December 2013 11:53, LizR lizj...@gmail.com wrote:
 On 12 December 2013 11:25, meekerdb meeke...@verizon.net wrote:

 On 12/11/2013 1:18 PM, LizR wrote:

 ISTM that Yes Doctor sums up comp. If a digital brain made below my
 substitution level can substitute for my organic one, then I literally have
 a 50% chance of waking up as the digital version.

 However if the Subst Level is quantum, no cloning stops it being actually
 possible.


 But I don't think substitution level is sharply defined. Your brain must
 be mostly classical (otherwise it would be evolutionarily useless) and so
 one might well say yes to the doctor, while realizing that the immediate
 state of your brain at the micro-level would not be duplicated.  But this
 would be no worse than losing the state under anesthetic - which I hope the
 doctor was going to use anyway.

 It depends what is the important  level for maintaining selfhood. It seems
 reasonable to assume that the self remains the same when the brain is
 duplicated at the quantum level (if one believes the MWI this is happening
 all the time). It's possible that the self is retained during duplication at
 higher levels, but it isn't guaranteed. If my brain was duplicated at, say,
 the cellular level, I might simply die, and someone who thinks she's me
 would be created. (Or then again, that might be happening all the time
 anyway.)

 These are the sorts of considerations that make me think that if you say yes
 to the Doctor, you've already effectively swallowed all the implications of
 comp.

The required substitution level cannot be the quantum level since we
know that people can survive with their cognitive faculties intact
even with gross brain changes, such as after a stroke or head injury.


-- 
Stathis Papaioannou



Re: Beware of the bitcoin

2013-12-14 Thread Stathis Papaioannou
On 15 December 2013 09:33, LizR lizj...@gmail.com wrote:
 Bitcoins are apparently based on long and complex calculations rather than
 just nothing. I work with a guy who manufactures bitcoins and heats his
 house with the computer power required - or so he says. It sounds a
 ridiculous waste of power, time and effort, but he's probably autistic or
 something similar, so admittedly behaviour that seems weird to me isn't
 unusual for him.

It's not just the guy you work with - bitcoin mining is a major business
now that the coins sell for around $1000 each.

http://www.businessinsider.com.au/bitcoin-mining-is-booming-chart-2013-12

The nature of the bitcoin protocol is such that the cost of mining is
close to the bitcoin price. It may be wasteful, but perhaps no more
wasteful than the resources spent mining gold, let alone the vast sums
wasted the current financial system, which some proponents claim
cryptocurrencies may eventually partly supplant.


-- 
Stathis Papaioannou



Re: Minds, Machines and Gödel

2013-12-18 Thread Stathis Papaioannou
On 19 December 2013 08:32, LizR lizj...@gmail.com wrote:
 If this is a proof of the falsity of mechanism, is there any chance of a
 precis? :-)

The argument has been restated with elaboration by Penrose, and has
been extensively criticised.

http://www.iep.utm.edu/lp-argue/


-- 
Stathis Papaioannou



Re: The difficulties of executing simple algorithms: why brains make mistakes computers don't.

2013-12-24 Thread Stathis Papaioannou
 by the chemistry. You can have an
absolutely rigid underlying process that can lead to strange and
unpredictable effects, accounting for most natural phenomena.


-- 
Stathis Papaioannou



Re: Tegmark and consciousness

2014-01-11 Thread Stathis Papaioannou
On 12 January 2014 15:12, Colin Geoffrey Hales cgha...@unimelb.edu.au wrote:
 RE: arXiv: 1401.1219v1 [quant-ph] 6 Jan 2014

 Consciousness as a State of Matter

 Max Tegmark, January 8, 2014



 Hi Folk,

 Grrr!

 I confess that after 12 years of deep immersion in science’s grapplings with
 consciousness, the blindspot I see operating is so obvious and so pervasive
 and so incredibly unseen it beggars belief. I know it’s a long way from
 physics to neuroscience (discipline-wise). But surely in 2014 we can see it
 for what it is. Can’t they (Tegmark and ilk)  see that the so-called
 “science of consciousness” is

 · the “the science of the scientific observer”

 · trying to explain observing with observations

 · trying to explain experience with experiences

 · trying to explain how scientists do science.

 · a science of scientific behaviour.

 · Descriptive and never explanatory.

 · Assuming that the use of consciousness to confirm ‘laws of nature’
 contacts the actual underlying reality...

 · Assuming there’s only 1 scientific behaviour and never ever ever
 questioning that.

 · Assuming scientists are not scientific evidence of anything.

 · Assuming that objectivity, in objectifying something out of
 subjectivity, doesn’t evidence the subjectivity at the heart of it.

 · Confusing scientific evidence as being an identity with
 objectified phenomena.



 2500 years of blinkered paradigmatic tacit presupposition now gives us
 exactly what happened for phlogiston during the 1600s. A new ‘state of
 matter’?  Bah! Phlogiston!!! Of course not! All we have to do is admit we
 are actually inside the universe, made of whatever it is made of, getting a
 view from the point of view of being a bit of it... The big
 mistake is thinking that physics has ever, in the history of science,
 ever ever ever dealt with what the universe is actually made of, as opposed
 to merely describing what a presupposed observer ‘sees it looking like’. The
 next biggest mistake is assuming that we can’t deal with what the universe
 is actually made of, when that very stuff is delivering an ability to
 scientifically observe in the first place.



 These sorts of expositions have failed before the authors have even lifted a
 finger over the keyboard. Those involved don’t even know what the problem
 is. The problem is not one _for_ science. The problem is _science itself_
 ... _us_.



 Sorry. I just get very very frustrated at times. I have written a book on
 this and hopefully it’ll be out within 6 months. That’ll sort them out.



 Happy new year!

I'm a lump of dumb matter arranged in a special way and I am
conscious, so I don't see why another lump of dumb matter arranged in
a special way might not also be conscious. What is it about that idea
that you see as not only wrong, but ridiculous?


-- 
Stathis Papaioannou



Re: Edge.org: 2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? The Computational Metaphor

2014-01-16 Thread Stathis Papaioannou
On 16 January 2014 16:26, Jason Resch jasonre...@gmail.com wrote:
 The computational metaphor in the sense of the brain works like the Intel
 CPU inside the box on your desk is clearly misleading, but the sense that a
 computer can in theory do everything your brain can do is almost certainly
 correct. It is not that the brain is like a computer, but rather, that a
 computer can be like almost anything, including your brain or body, or
 entire planet and all the people on it.

 Jason

I think neuroscientists have, over decades, used the computational
metaphor in too literal a way. It is obviously not true that the brain
is a digital computer, just as it is not true that the weather is a
digital computer. But a digital computer can simulate the behaviour of
any physical process in the universe (if physics is computable),
including the behaviour of weather or the human brain. That means
that, at least, it would be possible to make a philosophical zombie
using a computer. The only way to avoid this conclusion would be if
physics, and specifically the physics in the brain, is not computable.
Pointing out where the non-computable physics is in the brain rarely
figures on the agenda of the anti-computationalists. And even if there
is non-computational physics in the brain, that invalidates
computationalism, but not its superset, functionalism.
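
To be clear about what "simulate the behaviour" means here, the standard move is
just to discretise the dynamics and step them forward in time. Below is a
deliberately simple sketch: the leaky integrate-and-fire model and the parameter
values are textbook toys chosen for illustration, not a claim about the level of
detail a real brain simulation would need.

    def simulate_lif_neuron(input_current, dt=0.001, tau=0.02, v_rest=-0.065,
                            v_threshold=-0.050, v_reset=-0.065, resistance=1e7):
        """Toy leaky integrate-and-fire neuron, advanced in discrete time steps.

        Returns the times (in seconds) at which the model neuron spikes."""
        v = v_rest
        spike_times = []
        for step, current in enumerate(input_current):
            # Euler step of tau * dV/dt = (v_rest - v) + R * I
            v += dt * ((v_rest - v) + resistance * current) / tau
            if v >= v_threshold:
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    # One second of constant 2 nA input, in 1 ms steps.
    print(simulate_lif_neuron([2e-9] * 1000))

The same recipe, with finer steps, more state variables and more accurate
equations, applies to any computable physics; whether such a simulation would be
conscious rather than a zombie is the separate question discussed above.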


-- 
Stathis Papaioannou



Re: Edge.org: 2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? The Computational Metaphor

2014-01-16 Thread Stathis Papaioannou
On 16 January 2014 23:08, Bruno Marchal marc...@ulb.ac.be wrote:

 On 16 Jan 2014, at 09:11, Stathis Papaioannou wrote:

 On 16 January 2014 16:26, Jason Resch jasonre...@gmail.com wrote:

 The computational metaphor in the sense of the brain works like the Intel
 CPU inside the box on your desk is clearly misleading, but the sense that
 a
 computer can in theory do everything your brain can do is almost
 certainly
 correct. It is not that the brain is like a computer, but rather, that a
 computer can be like almost anything, including your brain or body, or
 entire planet and all the people on it.

 Jason


 I think neuroscientists have, over decades, used the computational
 metaphor in too literal a way. It is obviously not true that the brain
 is a digital computer, just as it is not true that the weather is a
 digital computer. But a digital computer can simulate the behaviour of
 any physical process in the universe (if physics is computable),
 including the behaviour of weather or the human brain. That means
 that, at least, it would be possible to make a philosophical zombie
 using a computer. The only way to avoid this conclusion would be if
 physics, and specifically the physics in the brain, is not computable.
 Pointing out where the non-computable physics is in the brain rarely
 figures on the agenda of the anti-computationalists. And even if there
 is non-computational physics in the brain, that invalidates
 computationalism, but not its superset, functionalism.


 OK. But in a non standard sense of functionalism, as in the philosophy of
 mind, functionalism is used for a subset of computationalism. Functionalism
 is computationalism with some (unclear) susbtitution level in mind (usually
 the neurons).

 Now, I would like to see a precise definition of your functionalism. If
 you take *all* functions, it becomes trivially true, I think. But any
 restriction on the accepted functions, can perhaps lead to some interesting
 thesis. For example, the functions computable with this or that oracles, the
 continuous functions, etc.

Briefly, computationalism is the idea that you could replace the brain
with a Turing machine and you would preserve the mind. This would not
be possible if there is non-computable physics in the brain, as for
example Penrose proposes. But in that case, you could replace the
brain with whatever other type of device is needed, such as a
hypercomputer, and still preserve the mind. I would say that is
consistent with functionalism but not computationalism. The idea that
replicating the function of the brain by whatever means would not
preserve the mind, i.e. would result in a philosophical zombie, is
inconsistent with functionalism.


-- 
Stathis Papaioannou



Re: Edge.org: 2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? The Computational Metaphor

2014-01-16 Thread Stathis Papaioannou
On 17 January 2014 01:17, Jason Resch jasonre...@gmail.com wrote:


 On Jan 16, 2014, at 2:11 AM, Stathis Papaioannou stath...@gmail.com wrote:

 On 16 January 2014 16:26, Jason Resch jasonre...@gmail.com wrote:

 The computational metaphor in the sense of the brain works like the Intel
 CPU inside the box on your desk is clearly misleading, but the sense that
 a
 computer can in theory do everything your brain can do is almost
 certainly
 correct. It is not that the brain is like a computer, but rather, that a
 computer can be like almost anything, including your brain or body, or
 entire planet and all the people on it.

 Jason


 I think neuroscientists have, over decades, used the computational
 metaphor in too literal a way. It is obviously not true that the brain
 is a digital computer, just as it is not true that the weather is a
 digital computer. But a digital computer can simulate the behaviour of
 any physical process in the universe (if physics is computable),
 including the behaviour of weather or the human brain. That means
 that, at least, it would be possible to make a philosophical zombie
 using a computer.


 How does this follow? Personally I don't find the notion that philosophical
 zombies make logical sense at all.

I meant that if the physics of the brain is computable it follows as a
straightforward deduction that it would *at least* be possible to make
a philosophical zombie. It is then a further argument to show that it
would not be a zombie but a conscious being.


-- 
Stathis Papaioannou



Re: Tegmark and consciousness

2014-01-16 Thread Stathis Papaioannou
On 13 January 2014 00:00, Craig Weinberg whatsons...@gmail.com wrote:


 On Sunday, January 12, 2014 12:21:48 AM UTC-5, stathisp wrote:



 I'm a lump of dumb matter arranged in a special way and I am
 conscious, so I don't see why another lump of dumb matter arranged in
 a special way might not also be conscious. What is it about that idea
 that you see as not only wrong, but ridiculous?


 Water is just dumb matter arranged in a special way. Why not just drink
 chlorine instead? Liquid is liquid.

You could turn chlorine into water by rearranging the subatomic
particles. You have argued that it is not possible to create a living
cell by arranging atoms.


-- 
Stathis Papaioannou



Re: Tegmark and consciousness

2014-01-16 Thread Stathis Papaioannou
On 13 January 2014 02:23, Bruno Marchal marc...@ulb.ac.be wrote:

 On 12 Jan 2014, at 06:21, Stathis Papaioannou wrote:


 I'm a lump of dumb matter arranged in a special way and I am
 conscious,


 I think this is misleading. Are you really a lump of dumb matter? I think that
 your body can be a lump of dumb matter, but that *you* are a person, using
 that lump of dumb matter as a vehicle and means to manifest yourself. In principle
 (assuming comp of course), you can change your body every morning (and as
 you have often explained yourself, we do change our lump of dumb matter
 every n number of years).

Perhaps it is misleading to say that I am the dumb matter if my
consciousness is not necessarily attached to any particular matter.

 so I don't see why another lump of dumb matter arranged in
 a special way might not also be conscious.


 But here I agree with your point, although it is less misleading to consider
 the person as some immaterial entity (like a game, a program, memories,
 personality traits, ... no need of magical soul with wings) owning your
 body.
 If the human would born directly fixed inside a car, they would also believe
 that their car is part of their body. Nature provides us with a body at
 birth, and that might be the reason why we tend to identify ourselves with
 our bodies, but comp, which I think you accept, shows the limit of this
 identification, imo.
 Eventually, the UDA shows that at a very fundamental level, bodies are only
 statistical machine's percepts, or statistical relative numbers percepts.




 What is it about that idea
 that you see as not only wrong, but ridiculous?


 It is not what I am saying here, to be sure.

 Bruno





 http://iridia.ulb.ac.be/~marchal/







-- 
Stathis Papaioannou



Re: Tegmark and consciousness

2014-01-16 Thread Stathis Papaioannou
On 13 January 2014 04:42, Telmo Menezes te...@telmomenezes.com wrote:

 I'm a lump of dumb matter arranged in a special way and I am
 conscious, so I don't see why another lump of dumb matter arranged in
 a special way might not also be conscious. What is it about that idea
 that you see as not only wrong, but ridiculous?

 I'm sorry I repeat this answer so many times, but this claim is also
 made so many times. The main problem I see with this idea is that no
 progress has been made so far in explaining how a lump of matter
 becomes conscious, as opposed to just being a zombie mechanically
 performing complex behaviors. Insisting that such an explanation must
 exist instead of entertaining other models of reality strikes me as a
 form of mysticism.

It may be a problem that I'm not producing a theory of consciousness
to your satisfaction, but which part of the claim I made do you
actually disagree with?

-- 
Stathis Papaioannou



Re: Edge.org: 2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? The Computational Metaphor

2014-01-16 Thread Stathis Papaioannou
On 17 January 2014 11:43, Jason Resch jasonre...@gmail.com wrote:



 On Thu, Jan 16, 2014 at 6:42 PM, LizR lizj...@gmail.com wrote:

 On 17 January 2014 13:34, Stathis Papaioannou stath...@gmail.com wrote:


 I meant that if the physics of the brain is computable it follows as a
 straighforward deduction that it would *at least* be possible to make
 a philosophical zombie. It is then a further argument to show that it
 would not be a zombie but a conscious being.

 I don't see this. Why would it at least be possible to make a p-zombie?
 (And if you can show by a further argument that it's a conscious being, then
 clearly it wasn't a zombie...)


 I think he means that strong AI would be possible, and then strong AI + comp
 → conscious programs.

At least *weak* AI would be possible. Weak AI means computers could do
everything we do but without necessarily being conscious. Strong AI
means they would also be conscious.


-- 
Stathis Papaioannou



Re: Scientists Claim That Quantum Theory Proves Consciousness Moves To Another Universe At Death

2014-01-20 Thread Stathis Papaioannou
 Laura Mersini-Houghton from the North Carolina
 University with her colleagues argue: the anomalies of the microwave
 background exist due to the fact that our universe is influenced by
 other universes existing nearby. And holes and gaps are a direct
 result of attacks on us by neighboring universes.

 Soul
 So, there is abundance of places or other universes where our soul
 could migrate after death, according to the theory of neo-biocentrism.
 But does the soul exist?  Is there any scientific theory of
 consciousness that could accommodate such a claim?  According to Dr.
 Stuart Hameroff, a near-death experience happens when the quantum
 information that inhabits the nervous system leaves the body and
 dissipates into the universe.  Contrary to materialistic accounts of
 consciousness, Dr. Hameroff offers an alternative explanation of
 consciousness that can perhaps appeal to both the rational scientific
 mind and personal intuitions.

 Consciousness resides, according to Stuart and British physicist Sir
 Roger Penrose, in the microtubules of the brain cells, which are the
 primary sites of quantum processing.  Upon death, this information is
 released from your body, meaning that your consciousness goes with it.
 They have argued that our experience of consciousness is the result of
 quantum gravity effects in these microtubules, a theory which they
 dubbed orchestrated objective reduction (Orch-OR).

 Consciousness, or at least proto-consciousness is theorized by them to
 be a fundamental property of the universe, present even at the first
 moment of the universe during the Big Bang. “In one such scheme
 proto-conscious experience is a basic property of physical reality
 accessible to a quantum process associated with brain activity.”

 Our souls are in fact constructed from the very fabric of the universe
 – and may have existed since the beginning of time.  Our brains are
 just receivers and amplifiers for the proto-consciousness that is
 intrinsic to the fabric of space-time. So is there really a part of
 your consciousness that is non-material and will live on after the
 death of your physical body?

 Dr Hameroff told the Science Channel’s Through the Wormhole
 documentary: “Let’s say the heart stops beating, the blood stops
 flowing, the microtubules lose their quantum state. The quantum
 information within the microtubules is not destroyed, it can’t be
 destroyed, it just distributes and dissipates to the universe at
 large”.  Robert Lanza would add here that not only does it exist in
 the universe, it exists perhaps in another universe.

 If the patient is resuscitated, revived, this quantum information can
 go back into the microtubules and the patient says “I had a near death
 experience”‘

 He adds: “If they’re not revived, and the patient dies, it’s possible
 that this quantum information can exist outside the body, perhaps
 indefinitely, as a soul.”

 This account of quantum consciousness explains things like near-death
 experiences, astral projection, out of body experiences, and even
 reincarnation without needing to appeal to religious ideology.  The
 energy of your consciousness potentially gets recycled back into a
 different body at some point, and in the mean time it exists outside
 of the physical body on some other level of reality, and possibly in
 another universe.

Lanza's theory bears only a superficial resemblance to multiverse
theories. At its simplest, a multiverse theory says that there are
multiple copies of you having your current thought, and if one of
these copies suddenly stops, you live on in the others. This does not
require the existence of a soul that flies from one body to the other,
as Lanza implies. It also doesn't require any explicit theory of
consciousness. It is just a consequence of the fact that you, now,
consider yourself a continuation of you, yesterday, even though the
matter in your body is different and in a different configuration.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-23 Thread Stathis Papaioannou
On 13 January 2014 00:40, Craig Weinberg whatsons...@gmail.com wrote:
 Here then is a simpler and more familiar example of how computation can differ
 from natural understanding which is not susceptible to any mereological
 Systems argument.

 If any of you have used passwords which are based on a pattern of keystrokes
 rather than the letters on the keys, you know that you can enter your
 password every day without ever knowing what it is you are typing (something
 with a #r5f^ in it…?).

 I think this is a good analogy for machine intelligence. By storing and
 copying procedures, a pseudo-semantic analysis can be performed, but it is
 an instrumental logic that has no way to access the letters of the ‘human
 keyboard’. The universal machine’s keyboard is blank and consists only of
 theoretical x,y coordinates where keys would be. No matter how good or
 sophisticated the machine is, it will still have no way to understand what
 the particular keystrokes mean to a person, only how they fit in with
 whatever set of fixed possibilities has been defined.

 Taking the analogy further, the human keyboard only applies to public
 communication. Privately, we have no keys to strike, and entire paragraphs
 or books can be represented by a single thought. Unlike computers, we do not
 have to build our ideas up from syntactic digits. Instead the public-facing
 computation follows from the experienced sense of what is to be communicated
 in general, from the top down, and the inside out.

I think you have a problem with the idea that a system could display
properties that are not obvious from examining its parts. There's no
way to argue around this, you just believe it and that's that.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-23 Thread Stathis Papaioannou
On 24 January 2014 01:15, Craig Weinberg whatsons...@gmail.com wrote:


 On Thursday, January 23, 2014 5:39:08 AM UTC-5, stathisp wrote:

 On 13 January 2014 00:40, Craig Weinberg whats...@gmail.com wrote:
  Here then is a simpler and more familiar example of how computation can
  differ
  from natural understanding which is not susceptible to any mereological
  Systems argument.
 
  If any of you have used passwords which are based on a pattern of
  keystrokes
  rather than the letters on the keys, you know that you can enter your
  password every day without ever knowing what it is you are typing
  (something
  with a #r5f^ in it…?).
 
  I think this is a good analogy for machine intelligence. By storing and
  copying procedures, a pseudo-semantic analysis can be performed, but it
  is
  an instrumental logic that has no way to access the letters of the
  ‘human
  keyboard’. The universal machine’s keyboard is blank and consists only
  of
  theoretical x,y coordinates where keys would be. No matter how good or
  sophisticated the machine is, it will still have no way to understand
  what
  the particular keystrokes mean to a person, only how they fit in with
  whatever set of fixed possibilities has been defined.
 
  Taking the analogy further, the human keyboard only applies to public
  communication. Privately, we have no keys to strike, and entire
  paragraphs
  or books can be represented by a single thought. Unlike computers, we do
  not
  have to build our ideas up from syntactic digits. Instead the
  public-facing
  computation follows from the experienced sense of what is to be
  communicated
  in general, from the top down, and the inside out.

 I think you have a problem with the idea that a system could display
 properties that are not obvious from examining its parts. There's no
 way to argue around this, you just believe it and that's that.


 I don't have a problem with the idea that a system could DISPLAY
 properties that are not obvious from EXAMINING its parts, but you overlook
 that DISPLAYING and EXAMINING are functions of consciousness only. If they
 were not, then consciousness would be superfluous. If my brain could examine
 the display of the body's environment, then it would, and the presence or
 absence of perceptual experience would not make any difference.

 Systems and parts are defined by level of description - scales and scopes of
 perception and abstracted potential perception. They aren't primitively
 real. A machine is not a machine in its own eyes, but our body is an
 expression of a single event which spans a human lifetime. A person is
 another expression of that event. The system of a person does not emerge
 from the activity of the body parts, as the entire coherence of the body is
 as a character within relativistically scoped perceptual experiences.

 I don't think that I believe, I think that I understand. I think that you do
 not understand what I mean, but are projecting that onto me, and therefore
 have assigned a straw man to take my place. It is your straw man projection
 who must believe.

 Craig

Tell me what you believe so we can be clear:

My understanding is that you believe that if the parts of the Chinese
Room don't understand Chinese, then the Chinese Room can't understand
Chinese. Have I got this wrong?


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-24 Thread Stathis Papaioannou
On 25 January 2014 00:26, Craig Weinberg whatsons...@gmail.com wrote:

 Tell me what you believe so we can be clear:

 My understanding is that you believe that if the parts of the Chinese
 Room don't understand Chinese, then the Chinese Room can't understand
 Chinese. Have I got this wrong?


 The fact that the Chinese Room can't understand Chinese is not related to
 its parts, but to the category error of the root assumption that forms and
 functions can understand things.  I see forms and functions as one of the
 effects of experience, not as a cause of them.

But that doesn't answer the question: do you think (or understand, or
whatever you think the appropriate term is) that the Chinese Room
COULD POSSIBLY be conscious or do you think that it COULD NOT POSSIBLY
be conscious? Or do you claim that the question is meaningless, a
category error (which ironically is a term beloved of positivists)? If
the latter, how is it that the question can be meaningfully asked
about humans but not the Chinese Room?

 I like my examples better than the Chinese Room, because they are simpler:

 1. I can type a password based on the keystrokes instead of the letters on
 the keys. This way no part of the system needs to know the letters,
 indeed, they could be removed altogether, thereby showing that data
 processing does not require all of the qualia that can be associated with
 it, and therefore it follows that data processing does not necessarily
 produce any or all qualia.

 2. The functional aspects of playing cards are unrelated to the suits, their
 colors, the pictures of the royal cards, and the participation of the
 players. No digital simulation of playing card games requires any aesthetic
 qualities to simulate any card game.

 3. The difference between a game like chess and a sport like basketball is
 that in chess, the game has only to do with the difficulty for the human
 intellect to compute all of the possibilities and prioritize them logically.
 Sports have strategy as well, but they differ fundamentally in that the real
 challenge of the game is the physical execution of the moves. A machine has
 no feeling so it can never participate meaningfully in a sport. It doesn't
 get tired or feel pain, it need not attempt to accomplish something that it
 cannot accomplish, etc. If chess were a sport, completing each move would be
 subject to the possibility of failure and surprise, and the end can never
 result in checkmate, since there is always the chance of weaker pieces
 getting lucky and overpowering the strong. There is no Cinderella Story in
 real chess, the winning strategy always wins because there can be no
 difference between theory and reality in an information-theoretic universe.

How can you start a sentence with "a machine has no feeling so..." and
purport to discuss the question of whether a machine can have feeling?

 So no, I do not believe this, I understand it. I do not think that the
 Chinese Room is valid because wholes must be identical to their parts. The
 Chinese Room is valid because it can (if you let it) illustrate that the
 difference between understanding and processing is a difference in kind
 rather than a difference in degree. Technically, it is a difference in kind
 going one way (from the quantitative to the qualitative) and a difference in
 degree going the other way. You can reduce a sport to a game (as in computer
 basketball) but you can't turn a video game into a sport unless you bring in
 hardware that is physical/aesthetic rather than programmatic. Which leads me
 to:

The Chinese Room argument is valid if it follows that if the parts of
the system have no understanding then the system can have no
understanding. It is pointed out (correctly) by Searle that the person
in the room does not understand Chinese, from which he CONCLUDES that
the room does not understand Chinese, and uses this conclusion to
support the idea that the difference between understanding and
processing is a difference in kind, so no matter how clever the
computer or how convincing its behaviour it will never have
understanding.

I don't think your example with the typing is as good as the Chinese
Room, because by changing the keys around a bit it would be obvious
that there is no real understanding, while the Chinese Room would
be able to pass any test that a Chinese speaker could pass.


-- 
Stathis Papaioannou



Better Than the Chinese Room

2014-01-25 Thread Stathis Papaioannou
 to me - which may not be your fault. Your
psychological
 specialization may not permit you to see any other possibility than the
 mereological argument that you keep turning to. Of course the whole can
have
 properties that the parts do not have, that is not what I am denying at
all.
 I am saying that there is no explanation of the Chinese Room which
requires
 that it understands anything except one in which understanding itself is
 smuggled in from the real world and attached to it arbitrarily on blind
 faith.

Then you don't consider the Chinese Room argument valid. You agree with the
conclusion and premises but you don't agree that the conclusion follows
from the premises in the way Searle claims.

 It is pointed out (correctly) by Searle that the person
 in the room does not understand Chinese, from which he CONCLUDES that
 the room does not understand Chinese,


 Rooms don't understand anything. Rooms are walls with a roof. Walls and
 roofs are planed matter. Matter is bonded molecules. Molecules are sensory
 experiences frozen in some externalized perceptual gap.

The claim is that the consciousness of the room stands in relation to the
physical room as the consciousness of a person stands in relation to the
physical person.

 and uses this conclusion to
 support the idea that the difference between understanding and
 processing is a difference in kind, so no matter how clever the
 computer or how convincing its behaviour it will never have
 understanding.


 The conclusion is just the same if you use the room as a whole instead of
 the person. You could have the book be a simulation of John Wayne talking
 instead. No matter how great the collection of John Wayne quotes, and how
 great a job the book does at imitating what John Wayne would say, the
 room/computer/simulation cannot ever become John Wayne.

It could not become John Wayne physically, and it could not become John
Wayne mentally if the actual matter in John Wayne is required to reproduce
John Wayne's mind, but you have not proved that the latter is the case.

 I don't think your example with the typing is as good as the Chinese
 Room, because by changing the keys around a bit it would be obvious
 that there is no real understanding, while the Chinese Room would
 be able to pass any test that a Chinese speaker could pass.


 Tests are irrelevant, since the pass/fail standard can only be subjective.
 There can never be a Turing test or a Voight-Kampff test which is
objective,
 but there will always be tests which designers of AI can use to identify
the
 signature of their design.

That's what Searle claims, which is why he makes the Room pass a Turing
test in Chinese and then purports to prove (invalidly, according to what
you've said) that despite passing the test it isn't conscious.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-28 Thread Stathis Papaioannou
 of the people can't be fooled some of the time, only that all of the
 people cannot be fooled all of the time.

You still haven't come up with any reason better than a vague
prejudice why, for example, the AI in the movie Her could not be
conscious.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-28 Thread Stathis Papaioannou
 that she was based on the personalities of
the programmers, but then develops her own personality; much as
children may be shaped by the genes and personalities of their parents
but then use this as a basis to develop their own unique
personalities.

Samantha is fictional, but do you think that if she existed and you
interacted with her over a long period you could maintain your
conviction that she could not possibly be conscious?


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-29 Thread Stathis Papaioannou
On 30 January 2014 09:39, Craig Weinberg whatsons...@gmail.com wrote:


 On Wednesday, January 29, 2014 5:38:04 PM UTC-5, Liz R wrote:

 On 30 January 2014 11:24, Craig Weinberg whats...@gmail.com wrote:

 On Wednesday, January 29, 2014 1:34:48 PM UTC-5, John Clark wrote:

 On Sat, Jan 25, 2014 at 9:35 AM, Craig Weinberg whats...@gmail.com
 wrote:

  NO ROOM CAN BE CONSCIOUS.


 And we know that because we can say it in all capital letters, or
 possibly from the teachings of two of your favorite subjects, astrology and
 numerology.


 The all caps were in response to Bruno's all caps, and no, you don't need
 astrology and numerology to understand that rooms are not haunted by the
 spirits of system-hood.


 Imagine a small, roughly spherical room made out of a fairly hard material,
 something like limestone. Make a few holes in it, fill it with some goop
 with the consistency of blancmange, decorate with sense organs and throw in
 a body.

 Et voila!


 Voila, a cadaver.

Unless it's all set up to function properly.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-29 Thread Stathis Papaioannou
On 30 January 2014 10:00, Craig Weinberg whatsons...@gmail.com wrote:


 On Wednesday, January 29, 2014 5:46:25 PM UTC-5, stathisp wrote:

 On 30 January 2014 09:39, Craig Weinberg whats...@gmail.com wrote:
 
 
  On Wednesday, January 29, 2014 5:38:04 PM UTC-5, Liz R wrote:
 
  On 30 January 2014 11:24, Craig Weinberg whats...@gmail.com wrote:
 
  On Wednesday, January 29, 2014 1:34:48 PM UTC-5, John Clark wrote:
 
  On Sat, Jan 25, 2014 at 9:35 AM, Craig Weinberg whats...@gmail.com
  wrote:
 
   NO ROOM CAN BE CONSCIOUS.
 
 
  And we know that because we can say it in all capital letters, or
  possibly from the teachings of two of your favorite subjects,
  astrology and
  numerology.
 
 
  The all caps were in response to Bruno's all caps, and no, you don't
  need
  astrology and numerology to understand that rooms are not haunted by
  the
  spirits of system-hood.
 
 
  Imagine a small, roughly spherical room made out of a fairly hard
  material
  something like limestone. Make a few holes in it, fill it with some
  goop
  with the consistency of blancmange, decorate with sense organs and
  throw in
  a body.
 
  Et voila!
 
 
  Voila, a cadaver.

 Unless it's all set up to function properly.


 What's wrong with the way a cadaver functions?

Many changes occur after death, the end result of which is that in a
cadaver, the parts are in the wrong configuration and therefore don't
work together as they do in a living person. Death is said to occur
when the changes are irreversible, but people who have themselves
cryonically preserved hope that future technology will allow what is
currently thought to be irreversible to become reversible.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-29 Thread Stathis Papaioannou
On 30 January 2014 13:30, Craig Weinberg whatsons...@gmail.com wrote:

  What's wrong with the way a cadaver functions?

 Many changes occur after death, the end result of which is that in a
 cadaver, the parts are in the wrong configuration and therefore don't
 work together as they do in a living person.


 Wrong for whom? They are in a better configuration for certain microorganisms
 to thrive. There's probably more complexity in the computation of a
 decomposing body than a healthy one.


 Death is said to occur
 when the changes are irreversible, but people who have themselves
 cryonically preserved hope that future technology will allow what is
 currently thought to be irreversible to become reversible.


 Had we not already discovered the impossibility of resurrecting a dead
 person with raw electricity, would your position offer any insight into why
 that strategy would fail 100% of the time?

Actually, we can sometimes resurrect a dead person with raw
electricity in cases of cardiac arrest, which would previously have
been defined as death. It's a case of the definition of death changing
with technology. In future, there will probably be patients who would
currently be considered brain dead who will be able to be revived.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-29 Thread Stathis Papaioannou
On 30 January 2014 16:00, meekerdb meeke...@verizon.net wrote:
 On 1/29/2014 5:06 PM, David Nyman wrote:

 On 29 January 2014 22:15, Craig Weinberg whatsons...@gmail.com wrote:

 The problem that concerns me about this way of looking at things is that
 any and all behaviour associated with consciousness - including, crucially,
 the articulation of our very thoughts and beliefs about conscious phenomena
 - can at least in principle be exhausted by an extrinsic account. But if
 this be so, it is very difficult indeed to understand how such extrinsic
 behaviours could possibly make reference to any intrinsic remainder, even
 were its existence granted. It isn't merely that any postulated remainder
 would be redundant in the explanation of such behaviour, but that it is
 hardly possible to see how an inner dual could even be accessible in
 principle to a complete (i.e. causally closed) extrinsic system of reference
 in the first place.


 Right, because the extrinsic perspective is blind to the limits of causal
 closure.


 But I'm afraid the problem is precisely that it behaves as if it is NOT in
 fact blind to such limits. As Bruno points out in a recent response to John
 Clark, if we rely on the causal closure of the extrinsic account (and which
 of us does not?) then we commit ourselves to the view that there must be
 such an account, at some level, of any behaviour to which we might otherwise
 wish to impute a conscious origin. However, my point above is that the
 problem is in fact even worse than this. In fact, it amounts to a paradox.

 The existence of a causally closed extrinsic account forces us to the view
 that the very thoughts and utterances - even our own - that purport to refer
 to irreducibly conscious phenomena must also be fully explicable
 extrinsically. But how then could any such sequence of extrinsic events
 possibly be linked to anything outside its causally-closed circle of
 explanation? To put this baldly, even whilst asserting with absolute
 certainty the fact that I am conscious I am forced nonetheless to accept
 that this very assertion need have nothing to do (and, more strongly, cannot
 have anything to do) with the fact that I am conscious!

 I take no credit for being the originator of this insight,


 But you have explained it well.  And it's not at all clear to me that
 Bruno's computational theory avoids this paradox.  It seems there will
 still, in the UD computation, be a closed account of the physical processes.
 No doubt it will be computationally linked with some provable sentences,
 which Bruno wants to then identify with beliefs.  But this still leaves
 beliefs as epiphenomena of the physical processes; even if comp explains
 them both.

I don't think there is a problem if consciousness is an epiphenomenon.
If you start looking for consciousness being an extra thing with
(perhaps) its own separate causal efficacy, that's where problems
arise.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-30 Thread Stathis Papaioannou
On 31 January 2014 02:29, Craig Weinberg whatsons...@gmail.com wrote:


 On Thursday, January 30, 2014 12:19:56 AM UTC-5, stathisp wrote:

 On 30 January 2014 16:00, meekerdb meek...@verizon.net wrote:
  On 1/29/2014 5:06 PM, David Nyman wrote:
 
  On 29 January 2014 22:15, Craig Weinberg whats...@gmail.com wrote:
 
  The problem that concerns me about this way of looking at things is
  that
  any and all behaviour associated with consciousness - including,
  crucially,
  the articulation of our very thoughts and beliefs about conscious
  phenomena
  - can at least in principle be exhausted by an extrinsic account. But
  if
  this be so, it is very difficult indeed to understand how such
  extrinsic
  behaviours could possibly make reference to any intrinsic remainder,
  even
  were its existence granted. It isn't merely that any postulated
  remainder
  would be redundant in the explanation of such behaviour, but that it
  is
  hardly possible to see how an inner dual could even be accessible in
  principle to a complete (i.e. causally closed) extrinsic system of
  reference
  in the first place.
 
 
  Right, because the extrinsic perspective is blind to the limits of
  causal
  closure.
 
 
  But I'm afraid the problem is precisely that it behaves as if it is NOT
  in
  fact blind to such limits. As Bruno points out in a recent response to
  John
  Clark, if we rely on the causal closure of the extrinsic account (and
  which
  of us does not?) then we commit ourselves to the view that there must be
  such an account, at some level, of any behaviour to which we might
  otherwise
  wish to impute a conscious origin. However, my point above is that the
  problem is in fact even worse than this. In fact, it amounts to a
  paradox.
 
  The existence of a causally closed extrinsic account forces us to the
  view
  that the very thoughts and utterances - even our own - that purport to
  refer
  to irreducibly conscious phenomena must also be fully explicable
  extrinsically. But how then could any such sequence of extrinsic events
  possibly be linked to anything outside its causally-closed circle of
  explanation? To put this baldly, even whilst asserting with absolute
  certainty the fact that I am conscious I am forced nonetheless to
  accept
  that this very assertion need have nothing to do (and, more strongly,
  cannot
  have anything to do) with the fact that I am conscious!
 
  I take no credit for being the originator of this insight,
 
 
  But you have explained it well.  And it's not at all clear to me that
  Bruno's computational theory avoids this paradox.  It seems there will
  still, in the UD computation, be a closed account of the physical
  processes.
  No doubt it will be computationally linked with some provable sentences,
  which Bruno wants to then identify with beliefs.  But this still leaves
  beliefs as epiphenomena of the physical processes; even if comp explains
  them both.

 I don't think there is a problem if consciousness is an epiphenomenon.
 If you start looking for consciousness being an extra thing with
 (perhaps) its own separate causal efficacy, that's where problems
 arise.


 Then you would still have the problem of why there are epiphenomena. They
 are already an extra thing with no functional explanation.

That statement assumes the possibility of zombies. If consciousness is
epiphenomenal, zombies are impossible.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-30 Thread Stathis Papaioannou
On 31 January 2014 02:51, Craig Weinberg whatsons...@gmail.com wrote:

  Had we not already discovered the impossibility of resurrecting a dead
  person with raw electricity, would your position offer any insight into
  why
  that strategy would fail 100% of the time?

 Actually, we can sometimes resurrect a dead person with raw
 electricity in cases of cardiac arrest, which would previously have
 been defined as death. It's a case of the definition of death changing
 with technology. In future, there will probably be patients who would
 currently be considered brain dead who will be able to be revived.


 That does not resurrect a dead person, it just helps restart a still-living
 person's heart. True, cardiac arrest will eventually kill a person, but
 sending electricity through the body of someone who has died of cholera or a
 stroke is not going to revive them. My point though is that there is nothing
 within functionalism which predicts the finality or complexity of death. If
 we are just a machine halting, why wouldn't fixing the machine restart it in
 theory? We can smuggle in our understanding of the irreversibility of death,
 and rationalize it after the fact, but can you honestly say that
 functionalism predicts the pervasiveness of it?

Death used to be defined as the cessation of heartbeat and breathing,
so according to this definition you *could* resurrect a dead person
with fairly simple techniques which fix the machine. In the future,
this may be possible with what is currently defined as brain death.


-- 
Stathis Papaioannou



Re: Better Than the Chinese Room

2014-01-30 Thread Stathis Papaioannou
On 31 January 2014 04:19, Bruno Marchal marc...@ulb.ac.be wrote:

 I don't think there is a problem if consciousness is an epiphenomenon.


 Is it not that very idea which leads to the notion of zombie?
 If consciousness is an epiphenomenon, eliminating it would change nothing in
 the 3p.

There can be no zombies if consciousness is epiphenomenal.
Equivalently, if consciousness is epiphenomenal we could say it does
not really exist and we are all zombies; but I think that's just
semantics, and misleading.

 If you start looking for consciousness being an extra thing with
 (perhaps) its own separate causal efficacy, that's where problems
 arise.


 Dualism is a problem. Making consciousness epiphenomenal is not satisfying,
 and basically contradicted in the everyday life. It is because pain is
 unpleasant that we take anesthetic medicine.

 The brain is obliged to lie at some (unknown, encrypted) level, not for
 consciousness (that it filters), but for pain and joy. That's normal. If you
 run toward the lion mouth, you lower the probability of surviving.

 Epiphenomenalism does not eliminate consciousness, but it still eliminates
 conscience and persons.

I don't think it diminishes the significance of consciousness, but
maybe I just look at it differently.

 With comp I think we avoid it, even if the solution will appear to be very
 Platonist, as truth, beauty, and universal values (mostly unknown) will be
 more real than their local terrestrial approximations through primitively
 physical brains and other interacting molecules like galaxies foam.

 Bruno



 --
 Stathis Papaioannou




 http://iridia.ulb.ac.be/~marchal/







-- 
Stathis Papaioannou



Re: HOW YOU CAN BECOME A LIBERAL THEOLOGIAN IN JUST 4 STEPS.

2013-01-17 Thread Stathis Papaioannou


On 17/01/2013, at 8:17 AM, Craig Weinberg whatsons...@gmail.com wrote:

 I agree. Even Hameroff would agree, despite the low and quantum level. Only
 Penrose, but probably also Searle, would disagree, I guess. Perhaps Craig,
 and most believers in non-comp.
 
 We could ask one of the people who are made of a different kind of matter 
 than human beings. While we are at it, we could ask them which arithmetic 
 incantation will allow us to drink brine from the sea instead of fresh water. 
 Shouldn't be a big deal... ;)

There are those who believe that the very atoms are necessary in order to 
preserve a consciousness: making an arbitrarily close copy won't do. From what 
you have said before, this is what you think, but it goes against any widely 
accepted biological or physical scientific theory. 


-- Stathis Papaioannou




Re: HOW YOU CAN BECOME A LIBERAL THEOLOGIAN IN JUST 4 STEPS.

2013-01-19 Thread Stathis Papaioannou
On Fri, Jan 18, 2013 at 8:23 AM, Craig Weinberg whatsons...@gmail.com wrote:

 There are those who believe that the very atoms are necessary in order to
 preserve a consciousness: making an arbitrarily close copy won't do. From
 what you have said before, this is what you think, but it goes against any
 widely accepted biological or physical scientific theory.


 Since there is no widely accepted biological or physical scientific theory
 of what consciousness is, that doesn't bother me very much.

The assumption by scientists is that consciousness is caused by the
brain, and if brain function doesn't change, consciousness doesn't
change either. So swapping out atoms in the brain for different atoms
of the same kind leaves brain function unchanged and therefore leaves
consciousness unchanged also. Also, swapping out atoms in the brain
for different atoms of a different but related type, such as a
different isotope, leaves brain function unchanged and leaves
consciousness unchanged. This is because the brain works using
chemical rather than nuclear reactions. It is an assumption but it is
consistent with every observation ever made.


-- 
Stathis Papaioannou




Re: HOW YOU CAN BECOME A LIBERAL THEOLOGIAN IN JUST 4 STEPS.

2013-01-21 Thread Stathis Papaioannou
On Mon, Jan 21, 2013 at 5:59 AM, Craig Weinberg whatsons...@gmail.com wrote:

 The assumption by scientists is that consciousness is caused by the
 brain,


 We could also assume that ground beef is caused by the grocery store, but
 that doesn't tell us about ground beef.

Do you disagree that it is assumed by scientists that consciousness is
caused by the brain?

 and if brain function doesn't change, consciousness doesn't
 change either. So swapping out atoms in the brain for different atoms
 of the same kind leaves brain function unchanged and therefore leaves
 consciousness unchanged also.


 An idea can change the function of the brain as much as a chemical change -
 maybe more so, especially if we are talking about a life altering idea. To
 me, the fact that physics seems more generic to us than chemistry which
 seems more generic than biology is a function of the ontology of matter
 rather than a mechanism for consciousness. The whole idea of brain function
 or consciousness being 'unchanged' is broken concept to begin with. It
 assumes a normative baseline at an arbitrary level of description. In
 reality, of course brain function and consciousness are constantly changing,
 sometimes because of chemistry, sometimes in spite of it.

Do you disagree that swapping a carbon atom for another carbon atom in
the brain will leave brain function and consciousness unchanged?

 Also, swapping out atoms in the brain
 for different atoms of a different but related type, such as a
 different isotope, leaves brain function unchanged and leaves
 consciousness unchanged. This is because the brain works using
 chemical rather than nuclear reactions.


 That's because on the level of nuclear reactions there is no brain. That
 doesn't mean that changing atoms has no effect on some non-human level of
 experience, only that our native experience is distant enough that we don't
 notice a difference. Some people might notice a difference, who knows? I
 wouldn't think that people could tell the difference between different kinds
 of light of the same spectrum, but they can, even down to a geographic
 specificity in some cases.

The field of nuclear medicine involves injecting radiolabeled
chemicals into subjects and then scanning for them with radiosensitive
equipment. This is how PET scanners work, for example. The idea is
that if the injected chemical is similar enough to normal biological
matter it will replace this matter without affecting function,
including brain function and consciousness. You could say this is a
practical application of the theory that consciousness is
substrate-independent, verified thousands of times every day in
clinical situations.

 It is an assumption but it is
 consistent with every observation ever made.


 The consistency doesn't surprise me, it's the interpretation which I see as
 an unscientific assumption.

So how do you explain the replacement of brain matter with different
but functionally equivalent matter leaving consciousness unchanged?


-- 
Stathis Papaioannou




Brain as Machine (was: HOW YOU CAN BECOME A LIBERAL THEOLOGIAN IN JUST 4 STEPS.)

2013-01-22 Thread Stathis Papaioannou
On Tue, Jan 22, 2013 at 1:09 AM, Craig Weinberg whatsons...@gmail.com wrote:

 Do you disagree that swapping a carbon atom for another carbon atom in
 the brain will leave brain function and consciousness unchanged?


 I don't believe that we will necessarily know that our consciousness is
 changed. Even LSD takes a few micrograms to have an effect that we notice.
 Changing one person in the city of New York with another may not change the
 city in any appreciably obvious way, but it's a matter of scale and
 proportion, not functional sequestering.

 The field of nuclear medicine involves injecting radiolabeled
 chemicals into subjects and then scanning for them with radiosensitive
 equipment. This is how PET scanners work, for example. The idea is
 that if the injected chemical is similar enough to normal biological
 matter it will replace this matter without affecting function,
 including brain function and consciousness. You could say this is a
 practical application of the theory that consciousness is
 substrate-independent, verified thousands of times every day in
 clinical situations.


 That's because the radioactivity is mild. Heavy doses of gamma radiation are
 not without their effects on consciousness. Anything that you do on the
 nuclear level can potentially affect the chemical level, which can affect
 the biological level, etc. These levels have different qualities as well as
 quantitative scales so it is simplistic to approach it from a
 quantitative-only view. Awareness is qualities, not just quantities.

Obviously, if the change you make to the brain changes its function it
could also change consciousness. This is the functionalist position.
You have claimed that this is wrong, and that no matter how closely a
replacement brain part duplicates the function of the original there
will be a change in consciousness, simply because it isn't the
original. If this were so, you would expect a change in consciousness
when atoms in the brain are replaced with different isotopes, even if
the isotopes are not radioactive. And yet this is not what happens.
The scientific explanation is that chemistry is for the most part
unaffected by the number of neutrons in the nucleus, and that since
the brain works by means of chemical reactions, brain function and
hence consciousness are also unaffected. It's not that there is
anything magically consciousness-preserving about switching isotopes,
it's just that switching isotopes is an example of part replacement
that makes no functional difference, like replacing a part in your car
with a new part that is 0.001 mm bigger.
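
As a toy sketch of "part replacement that makes no functional difference"
(the adder components below are invented for illustration, assuming nothing
beyond the functionalist reading above): two parts with different internals
but identical input/output behaviour can be swapped without changing what the
surrounding system does.

    # Two hypothetical 'parts' with different internals but identical behaviour.
    def adder_lookup(a, b):
        # One 'isotope': addition via a precomputed table for small inputs.
        table = {(x, y): x + y for x in range(10) for y in range(10)}
        return table[(a, b)]

    def adder_direct(a, b):
        # The other 'isotope': addition done directly.
        return a + b

    def system(adder):
        # The surrounding system only ever sees the part's input/output behaviour.
        return [adder(a, b) for a, b in [(1, 2), (3, 4), (5, 6)]]

    # Swapping one part for the functionally equivalent other leaves the
    # system's output unchanged.
    assert system(adder_lookup) == system(adder_direct)
    print(system(adder_direct))  # [3, 7, 11]

The sketch only illustrates functional equivalence at one level of
description; it takes no position on the further question being argued here.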


-- 
Stathis Papaioannou




Re: Sensing the presence of God

2013-01-24 Thread Stathis Papaioannou
On Fri, Jan 25, 2013 at 4:55 AM, meekerdb meeke...@verizon.net wrote:

 It's probably a lot simpler than that.  In the U.S. if you're an atheist it
 may be hard to find a sympathetic ear.  Depending a lot on where you live,
 you may be isolated and reviled.

Is that really true? I was in the US recently for the first time,
Scottsdale Arizona and NYC, and other than Christmas decorations I
can't recall seeing much evidence of religion at all. This is perhaps
a superficial impression but I was a bit surprised nevertheless.


-- 
Stathis Papaioannou





Re: [Metadiscussion] Off topic posting on the everything-list

2013-01-31 Thread Stathis Papaioannou
On Thu, Jan 31, 2013 at 12:46 PM, Kim Jones kimjo...@ozemail.com.au wrote:
 I'm getting a bit jack of this term metadiscussion because it only ever gets 
 applied to what other people are choosing to discuss. People talk about what 
 people want to talk about. It's about taste, perception, preference and 
 prejudice. Even WITH rigidly adhered-to rules and conventions, this still 
 applies. The challenge is to take WHATEVER is spoken about and MAKE that 
 relevant somehow (to whatever you want to make it relevant to). That's 
 harder, more interesting and dare I say it - more relevant a process than 
 simply corralling all thinking under one topic or heading. As soon as you 
 start to set up rules, conventions and expectations the population divides 
 into those who feel that it is to their advantage to play by the rules and 
 those who believe that this is a constraint. This list is remarkably 
 troll-free. For that very reason I see no need to restrict what is spoken of. 
 The ensemble theories of everything probably won't come from the brains of 
 those who are exclusively obsessed by these things anyway since by now their 
 perception is circular and their belief supports their belief. You need 
 random thinkers, people who will break the local equilibrium and who will 
 introduce the creative concept of idea movement from time to time.

I like the idea of a moderator-free list, but nonetheless I agree with
Russell. The list was set up with a particular purpose in mind but in
the last few months the range of discussion topics has changed
radically. The Internet is large and there are plenty of other forums
in which to discuss politics and religion. Could we return to the old
list please?


-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-05 Thread Stathis Papaioannou
On Wed, Feb 6, 2013 at 11:52 AM, Craig Weinberg whatsons...@gmail.com wrote:

 I question whether it is possible to ask whether your fellow human beings
 have minds without resorting to sophistry. I say that not because I am
 incapable of questioning naive reasoning, but because it does not accurately
 represent the reality of the situation. Just as our 'belief' in our own mind
 is an a prori ontological condition which cannot be questioned without
 incurring a paradox (whatever disbelieves in its own mind is by definition a
 mind), the belief that our fellow human beings have minds does not
 necessarily require a logical analysis to arrive at. We know that we have
 access to information beyond what we can consciously understand, and part of
 that may very well include a capacity to sense, on some level, the
 authenticity of another mind, barring any prejudices which might interfere.

So you're saying that we can somehow sense the reality of other minds,
beyond any reasoning? Would you agree then that if someone sensed that
a computer had a mind it would have a mind?


-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-05 Thread Stathis Papaioannou
On Wed, Feb 6, 2013 at 1:08 PM, Craig Weinberg whatsons...@gmail.com wrote:

 Maybe if there was a computer which was not specifically designed to deceive
 our senses... which would mean that it was one which occurred naturally and
 did not include anything which was ever designed or programmed by a human
 being.

Which contradicts your original claim that we can just sense that
other people are conscious without any logical analysis.


-- 
Stathis Papaioannou





How can intelligence be physical ?

2013-02-05 Thread Stathis Papaioannou
On Wednesday, February 6, 2013, Craig Weinberg wrote:



 On Tuesday, February 5, 2013 9:13:40 PM UTC-5, stathisp wrote:

 On Wed, Feb 6, 2013 at 1:08 PM, Craig Weinberg whats...@gmail.com
 wrote:

  Maybe if there was a computer which was not specifically designed to
 deceive
  our senses... which would mean that it was one which occurred naturally
 and
  did not include anything which was ever designed or programmed by a
 human
  being.

 Which contradicts your original claim that we can just sense that
 other people are conscious without any logical analysis.


 No because we also realize intuitively that computers are unconscious
 without any logical analysis. That's why behaving 'like a robot' or a
 machine is synonymous with mindless repetitive action. Just because we can
 make an optical illusion which fools our eye into seeing three dimensional
 perspective in a 2D painting doesn't mean that we can't authentically tell
 when something natural is 3D.


You're saying that a robot behaving like a human may fool you, but how do
you know that your apparent fellow humans are not robots? You're going by
their behaviour.




-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-06 Thread Stathis Papaioannou
On Wed, Feb 6, 2013 at 3:22 PM, Craig Weinberg whatsons...@gmail.com wrote:

 You're saying that a robot behaving like a human may fool you, but how do
  you know that your apparent fellow humans are not robots?


 Because I live in 2013 AD, where I now need to reboot my office telephone if
 I want the headset to work. It's pretty easy to tell when something is a
 piece of digital technology built by human beings, because it is constantly
 breaking. Besides that though, you can tell because of the uncanny valley
 feeling. Even when a simulation of a person is good enough to elicit a
 positive response beyond the uncanny valley, it doesn't mean that we are
 completely fooled by it, even if we report that we are.

That's just because the simulation of a person isn't good enough. The
question is what if the simulation *is* good enough to completely fool
you.

 If we consider that the Libet experiments show that we are making decisions
 without knowing it, and Blindsight shows that we are able to see without
 being conscious of it, then there is no reason why we should suddenly trust
 our own reporting of what we think that we know about the sense of
 interacting with a living person. A true Turing test would require a
 face-to-face interaction, so that none of our natural sensory capabilities
 would be blocked as they would with just a text or video interaction.

That's the situation that is assumed in the idea of a philosophical
zombie: you interact with the being face to face. If at the end of
several days' interaction (or however long you think you need) you are
completely convinced that it is conscious, does that mean it is
conscious?

 I think that it is important to remember that in theory, logically,
 consciousness cannot exist. It is only through our own undeniable experience
 of consciousness that we feel the need to justify it with logic - but so far
 we have only projected the religious miracles of the past into a science
 fiction future. If it was up to logic alone, there could not, and would not
 ever be such a thing as experience.

You could as well say that logically there's no reason for anything to
exist, but it does.


-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-07 Thread Stathis Papaioannou
On Thu, Feb 7, 2013 at 1:02 AM, Stephen P. King stephe...@charter.net wrote:

 Hi Stathis,

 The simulation of our 'self' that our brain generates *is* good enough
 to fool oneself! I speculate that schizophrenia and autism are caused by
 failures of the self-simulation system... The former is a failure where
 multiple self-simulations are generated and no stability in their convergence
 occurs, and the latter is where the self-simulation fails altogether. Mild
 versions of autism, such as Asperger's syndrome, are where bad simulations
 occur and/or the self-simulation fails to update properly.

That's an interesting idea, but schizophrenia is where the
connections between functional subsystems in the brain are disrupted,
so that you get perceptions, beliefs, emotions occurring without the
normal chain of causation, while autism is where the concept of other
minds is disrupted. I think the self-image is present but distorted.

 If we consider that the Libet experiments show that we are making
 decisions
 without knowing it, and Blindsight shows that we are able to see without
 being conscious of it, then there is no reason why we should suddenly
 trust
 our own reporting of what we think that we know about the sense of
 interacting with a living person. A true Turing test would require a
 face-to-face interaction, so that none of our natural sensory
 capabilities
 would be blocked as they would with just a text or video interaction.

 That's the situation that is assumed in the idea of a philosophical
 zombie: you interact with the being face to face. If at the end of
 several days' interaction (or however long you think you need) you are
 completely convinced that it is conscious, does that mean it is
 conscious?


 As I see things, the only coherent concept of a zombie is what we see in
 the autistic case. Such is 'conscious' with no self-image/self-awareness,
 thus it has no ability to report on its 1p content.

I think of autistic people as differently conscious, not unconscious.
Incidentally, there is a movement among higher functioning autistic
people whereby they resent being labelled as disabled, but assert that
their way of thinking is just as valid and intrinsically worthwhile as
that of the neurotypicals.

 I think that it is important to remember that in theory, logically,
 consciousness cannot exist. It is only through our own undeniable
 experience
 of consciousness that we feel the need to justify it with logic - but so
 far
 we have only projected the religious miracles of the past into a science
 fiction future. If it was up to logic alone, there could not, and would
 not
 ever be such a thing as experience.

 You could as well say that logically there's no reason for anything to
 exist, but it does.



 How about that! Does this not tell us that we must start, in our musing
 about existence with the postulate that something exists?

Perhaps, but there are other ways to look at it. A primary
mathematical/Platonic universe necessarily rather than contingently
exists.


-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-07 Thread Stathis Papaioannou
On Thu, Feb 7, 2013 at 4:01 AM, Craig Weinberg whatsons...@gmail.com wrote:

 That's just because the simulation of a person isn't good enough. The
 question is what if the simulation *is* good enough to completely fool
 you.


 Fooling me is meaningless. I think that "you think therefore you are" fails
 to account for the subjective thinker in the first place. If someone kills
 you, but they then find a nifty way to use your cadaver as a ventriloquist's
 dummy, does it matter if it fools someone into thinking that you are still
 alive?

You have said that you can just sense the consciousness of other
minds but you have contradicted that, or at least admitted that the
sensing faculty can be fooled. If you have no sure test for
consciousness that means you might see it where it isn't present or
miss it where it is present. So your friend might be unconscious
despite your feeling that he is, and your computer might be conscious
despite your feeling that it is not.


-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-07 Thread Stathis Papaioannou
On Fri, Feb 8, 2013 at 9:20 AM, Craig Weinberg whatsons...@gmail.com wrote:


 On Thursday, February 7, 2013 7:12:08 AM UTC-5, stathisp wrote:

 On Thu, Feb 7, 2013 at 4:01 AM, Craig Weinberg whats...@gmail.com wrote:

  That's just because the simulation of a person isn't good enough. The
  question is what if the simulation *is* good enough to completely fool
  you.
 
 
  Fooling me is meaningless. I think that you think therefore you are
  fails
  to account for the subjective thinker in the first place. If someone
  kills
  you, but they then find a nifty way to use your cadaver as a
  ventriloquist's
  dummy, does it matter if it fools someone into thinking that you are
  still
  alive?

 You have said that you can just sense the consciousness of other
 minds but you have contradicted that, or at least admitted that the
 sensing faculty can be fooled.


 An individual's sense can be fooled, but not necessarily fooled forever, and
 not everyone can be fooled. That doesn't mean that when we look at a beercan
 in the trash we can't tell that it doesn't literally feel crushed and
 abandoned.


 If you have no sure test for
 consciousness that means you might see it where it isn't present or
 miss it where it is present. So your friend might be unconscious
 despite your feeling that he is,


 Of course. People have been buried alive because the undertaker was fooled.


 and your computer might be conscious
 despite your feeling that it is not.


 Except my feeling is backed up with my knowledge of what it is - a human
 artifact designed to mimic certain mental functions. That knowledge should
 augment my personal intuition, as well as social and cultural reinforcements
 that indeed there is no reason to suspect that this map of mind is sentient
 territory.

You're avoiding the question. What is your definitive test for
consciousness? If you don't have one, then you have to admit that your
friend (who talks to you and behaves like people do, not in a coma,
not on a video recording, not dead in the morgue) may not be conscious
and your computer may be conscious. You talk with authority on what
can and can't have consciousness but it seems you don't have even an
operational definition of the word. I am not asking for an explanation
or theory of consciousness, just for a test to indicate its presence,
which is a much weaker requirement.


-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-07 Thread Stathis Papaioannou
On Fri, Feb 8, 2013 at 11:52 AM, Craig Weinberg whatsons...@gmail.com wrote:

 You're avoiding the question. What is your definitive test for
 consciousness? If you don't have one, then you have to admit that your
 friend (who talks to you and behaves like people do, not in a coma,
 not on a video recording, not dead in the morgue) may not be conscious
 and your computer may be conscious.


 No, you are avoiding my answer. What is your definitive test for your own
 consciousness?

The test for my own consciousness is that I feel I am conscious. That
is not at issue. At issue is the test for *other* entities'
consciousness. You are convinced that computers and other machines
don't have consciousness, but you can't say what test you will apply
to them and see them fail.

 My point is that sense is broader, deeper, and more primitive than our
 cognitive ability to examine it, since cognitive qualities are only the tip
 of the iceberg of sense. To test is to circumvent direct sense in favor of
 indirect sense - which is a good thing, but it is by definition not
 applicable to consciousness itself in any way. There is no test to tell if
 you are conscious, because none is required. If you need to ask if you are
 conscious, then you are probably having a lucid dream or in some phase of
 shock. In those cases, no test will help you as you can dream a test result
 as easily as you can experience one while awake.

 The only test for consciousness is the test of time. If you are fooled by
 some inanimate object, eventually you will probably see through it or
 outgrow the fantasy.

So if, in future, robots live among us for years and are accepted by
most people as conscious, does that mean they are conscious? This is
essentially a form of the Turing test.
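
(For concreteness, here is a minimal sketch in Python of what such an
extended, behavioural test might look like. It is only an illustration of
the idea, not anything proposed in this thread: all the names
(Respondent, run_trial, acceptance_rate, naive_judge) are invented for the
example. A blinded judge questions two respondents over a text-only
channel and tries to pick out the machine; if judges do no better than
chance over many trials, the machine "passes".)

import random

class Respondent:
    """Anything that answers questions over a text-only channel."""
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply = reply_fn   # callable: question string -> answer string

def run_trial(judge_fn, human, machine):
    """One blinded trial: the judge questions respondents A and B (assigned
    at random) and names the one it thinks is the machine ('A' or 'B').
    Returns True if the judge was fooled (guessed wrongly)."""
    a, b = (human, machine) if random.random() < 0.5 else (machine, human)
    guess = judge_fn(a.reply, b.reply)
    machine_label = 'A' if a is machine else 'B'
    return guess != machine_label

def acceptance_rate(judge_fn, human, machine, trials=1000):
    """Fraction of trials in which the judge fails to pick out the machine.
    A value near 0.5 means the judge does no better than chance."""
    fooled = sum(run_trial(judge_fn, human, machine) for _ in range(trials))
    return fooled / trials

if __name__ == "__main__":
    # Toy respondents that happen to behave identically.
    human = Respondent("human", lambda q: "I felt a bit nervous answering that.")
    machine = Respondent("machine", lambda q: "I felt a bit nervous answering that.")

    def naive_judge(ask_a, ask_b):
        ask_a("Did that question make you feel anything?")
        ask_b("Did that question make you feel anything?")
        # With identical behaviour there is nothing to go on but a guess.
        return random.choice(['A', 'B'])

    print(acceptance_rate(naive_judge, human, machine))  # ~0.5: indistinguishable

Of course, this only operationalises "indistinguishable from a person over
many interactions"; whether passing such a test settles anything about
consciousness is exactly what is in dispute here.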

 You talk with authority on what
 can and can't have consciousness but it seems you don't have even an
 operational definition of the word.


 Consciousness is what defines, not what can be defined.

 I am not asking for an explanation
 or theory of consciousness, just for a test to indicate its presence,
 which is a much weaker requirement.


 That is too much to ask, since all tests supervene upon the consciousness
 that evaluates their results.

That is true of any test: you will use your consciousness to evaluate
the results.


-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-09 Thread Stathis Papaioannou
On Fri, Feb 8, 2013 at 1:42 PM, Craig Weinberg whatsons...@gmail.com wrote:

 You are convinced that computers and other machines
 don't have consciousness, but you can't say what test you will apply
 to them and see them fail.


 I'm convinced of that because I understand why there is no reason why they
 would have consciousness... there is no 'they' there. Computers are not born
 in a single moment through cell fertilization; they are assembled by people.
 Computers have to be programmed to do absolutely everything; they have no
 capacity to make sense of anything which is not explicitly defined. This is
 the polar opposite of living organisms, which are general-purpose entities
 who explore and adapt when they can, on their own, for their own internally
 generated motives. Computers lack that completely. We use objects to compute
 for us, but those objects are not actually computing themselves, just as
 these letters don't actually mean anything for themselves.

Why would being generated in a single moment through cell
fertilization have any bearing on consciousness? Why would something
created by someone else not have consciousness? Why would something
lacking internally generated motives (which does not apply to
computers any more than to people) lack consciousness? To make these
claims you would have to either show that they are necessarily true or
present empirical evidence in their support, and you have done
neither.

 So if, in future, robots live among us for years and are accepted by
 most people as conscious, does that mean they are conscious? This is
 essentially a form of the Turing test.


 I don't think that will happen unless they aren't robots. The whole point is
 that the degree to which an organism is conscious is inversely proportional
 to the degree to which the organism is 100% controllable. That's the purpose
 of intelligence - to advance your own agenda rather than to be overpowered by
 your environment. So if something is a robot, it will never be accepted by
 anyone as conscious, and if something is conscious it will never be useful
 to anyone as a robot - it would in fact be a slave.

You don't think it would happen, but would you be prepared to say that
if a robot did pass the test, as tough as you want to make it, it
would be conscious?


-- 
Stathis Papaioannou





Re: How can intelligence be physical ?

2013-02-10 Thread Stathis Papaioannou
On Mon, Feb 11, 2013 at 7:06 AM, Craig Weinberg whatsons...@gmail.com wrote:

 Why would being generated in a single moment through cell
 fertilization have any bearing on consciousness?


 Because consciousness is a singularity of perspective through time, or
 rather through which time is created.

That's not an explanation.

 Why would something
 created by someone else not have consciousness?


 Because it is assembled rather than created. It's like asking why wood
 doesn't catch on fire by itself just by stacking it in a pile.

That's not an explanation.

 Why would something
 lacking internally generated motives (which does not apply to
 computers any more than to people) lack consciousness?


 Why would computers have an internally generated motive? A computer doesn't
 care whether it functions or not. We know that people have personal motives
 because it isn't possible for us to doubt it without doubting our ability to
 doubt.

You're saying a computer can't be conscious because it would need to
be conscious in order to be conscious.

 To make these
 claims you would have to show either that they are necessarily true or
 present empirical evidence in their support, and you have done
 neither.


 You would have to show that these criteria are relevant for consciousness,
 which you have not, and you cannot.

You make claims such as that a conscious being has to arise at a
moment of fertilization, which is completely without basis. You need
to present some explanation for such claims. "Consciousness is a
singularity of perspective through time" is not an explanation.

 As long as you fail to recognize
 consciousness as the ground of being, you will continue to justify it
 against one of its own products - rationality, logic, empirical examples,
 all of which are 100% sensory-motor. Consciousness can only be explained to
 consciousness, in the terms of consciousness, to satisfy consciousness. All
 other possibilities are subordinate. How could it be otherwise without
 ending up with a sterile ontology which prohibits our own participation?

Again, you've just made up "consciousness is the ground of being".
It's like saying "consciousness is the light; light is not black; so
black people are not conscious".

 You don't think it would happen, but would you be prepared to say that
 if a robot did pass the test, as tough as you want to make it, it
 would be conscious?


 It's like asking me whether, if there were a test for dehydrated water, I
 would be prepared to say that it was wet if it passed the test. No robot can
 ever be conscious. Nothing conscious can ever be a robot. Heads cannot be
 Tails, even if we move our heads to where the tails side used to be and
 blink a lot.

So you accept the possibility of zombies, beings which could live
among us and consistently fool everyone into thinking they were
conscious?


-- 
Stathis Papaioannou





Re: The duplicators and the restorers

2013-02-12 Thread Stathis Papaioannou
On Wed, Feb 13, 2013 at 12:24 PM, Craig Weinberg whatsons...@gmail.com wrote:
 1. Do you consider yourself to have experienced the torture in the case of
 the Restorers, even though you no longer remember it?  If not, why not.

 Yes


 2. If yes, do you consider yourself to have experienced the torture in the
 case of the Duplicators?  If yes, please explain, if not, please explain.

 The idea that atoms can be duplicated is an assumption. If we only looked at
 the part of a plant that we can see and tried to duplicate that, it would
 not have any roots and it would die. I think of the roots of atoms as being
 experiences through time. Just having a person who seems to be shaped like
 you according to an electron microscope does not make them you.

 3. Both scenarios, I think, are based on misconceptions. Nothing in the
 universe can be duplicated absolutely and nothing can be erased absolutely,
 because what we see of time is, again, missing the roots that extend out to
 eternity. I find it bizarre that we find it so easy to doubt our naive
 realism when it comes to physics but not when it comes to consciousness.
 Somehow we think that this moment of 'now' is mandated by physics to be
 universal and uniform.

What is to stop duplication of, say, the simplest possible conscious
being, made up of only a few atoms? Sometimes the objection is raised
that an exact quantum state cannot be measured (although it can be
reproduced elsewhere via quantum teleportation, with destruction of
the original), but this is probably spurious. If duplication down to
the quantum level were needed to maintain continuity of consciousness,
then it would be impossible to maintain continuity of consciousness
from moment to moment in ordinary life, since the state of your body
changes in a relatively gross way and you remain you.

So what you have to explain, Craig, is what you think would happen if
you tried to duplicate a person using very advanced science, and why
you don't think that happens when a person lives his life from day to
day, having his brain replaced completely (and imprecisely) over the
course of months with the matter in the food he eats.


-- 
Stathis Papaioannou





Re: The duplicators and the restorers

2013-02-12 Thread Stathis Papaioannou
On Wed, Feb 13, 2013 at 11:58 AM, Jason Resch jasonre...@gmail.com wrote:
 Consider the following thought experiment, called The Duplicators:

 At 1:00 PM tomorrow, you will be abducted by aliens. The aliens will tell
 you not to worry, that you won't be harmed, but they wish to conduct some
 experiments on the subject of pain, which is unknown to them. These aliens
 possess technology far in advance of our own. They have the ability to scan
 and replicate objects down to the atomic level, and the aliens use this
 technology to create an atom-for-atom duplicate of yourself, which they call
 you2. The aliens thank you for your assistance and return you unharmed back
 to your home by 5:00 PM. You ask them "What about the pain experiments?" and
 they hand you an informational pamphlet and quickly fly off. You read the
 pamphlet, which explains that a duplicate of you (you2) was created and
 subjected to some rather terrible pain experiments, akin to what humans call
 torture, and at the end of the experiment you2 was euthanized. You consider
 this awful, but are nonetheless glad that they tortured your duplicate
 rather than you.

 Now consider the slightly different thought experiment, called The
 Restorers:

 At 1:00 PM tomorrow, you will be abducted by aliens. Unlike the aliens with
 the duplication technology (the Duplicators), these aliens possess a
 restorative technology. They can perfectly erase memories and all other
 physical traces to perfectly restore you to a previous state. The aliens
 will tell you not to worry, that you won't be harmed, but they wish to
 conduct some experiments on the subject of pain, which is unknown to them.
 They then proceed to brutally torture you for many hours, conducting test
 after test on pain. Afterwards, they erase your memory of the torture and
 all traces of injury and stress from your body. When they are finished, you
 are atom-for-atom identical to how you were before the torture began. The
 aliens thank you for your assistance and return you unharmed back to your
 home by 5:00 PM. You ask them "What about the pain experiments?" and they
 hand you an informational pamphlet and quickly fly off. You read the
 pamphlet, which explains that a duplicate of you (you2) was created and
 subjected to some rather terrible pain experiments, akin to what humans call
 torture, and at the end of the experiment you2 was euthanized. You consider
 this awful, but are nonetheless glad that they tortured your duplicate
 rather than you.

 My questions for the list:

 1. Do you consider yourself to have experienced the torture in the case of
 the Restorers, even though you no longer remember it?  If not, why not.

 2. If yes, do you consider yourself to have experienced the torture in the
 case of the Duplicators?  If yes, please explain, if not, please explain.

 3. If you could choose which aliens would abduct you, is there one you would
 prefer?  If you have a preference, please provide some justification.

The two experiments are equivalent. Rationally, you should not have a
preference for either - though both are bad in that you experience
pain but then forget it.

-- 
Stathis Papaioannou




