Re: Re: Solipsism = 1p

2012-10-30 Thread Stathis Papaioannou
On Tue, Oct 30, 2012 at 3:25 AM, Roger Clough rclo...@verizon.net wrote:
 Hi Stathis Papaioannou

 Building more complex structures out of simpler ones
 by a simple set of rules (or any set of rules) seems to violate the second law
 of thermodynamics.  Do you have a way around the second law ?

 What you are proposing seems to be goal-directed behavior
 by the gods of small things.

Total entropy increases but local entropy can decrease. It's why life
exists even though the universe is running down.
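
To put a number on it (the figures are only illustrative): the second law
constrains the total,

  \Delta S_{total} = \Delta S_{local} + \Delta S_{surroundings} \ge 0,

so a cell may lower its own entropy by, say, \Delta S_{local} = -10 J/K while
it builds ordered structures, provided it exports at least +10 J/K to its
surroundings as heat and waste, keeping \Delta S_{total} \ge 0.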


-- 
Stathis Papaioannou




Re: Re: Solipsism = 1p

2012-10-29 Thread Roger Clough
Hi Stathis Papaioannou  

Building more complex structures out of simpler ones 
by a simple set of rules (or any set of rules) seems to violate the second law 
of thermodynamics.  Do you have a way around the second law ?

What you are proposing seems to be goal-directed behavior
by the gods of small things.



Roger Clough, rclo...@verizon.net 
10/29/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Stathis Papaioannou  
Receiver: everything-list  
Time: 2012-10-28, 05:47:58 
Subject: Re: Solipsism = 1p 


On Sun, Oct 28, 2012 at 5:48 AM, Craig Weinberg  wrote: 

 It seems that you do not understand the meaning of the term consistent 
 with the laws of physics. It means that when you decide to play tennis the 
 neurons in your brain will depolarise because of the ionic gradients, 
 
 
 If you can't see how ridiculous that view is, there is not much I can say 
 that will help you. My decision to play tennis *IS* the depolarization of 
 neurons. 

That sounds like eliminative materialism. It is a bit like saying that 
the movement of the car down the road *IS* the combustion of fuel in 
the cylinders, transmission of power to the wheels, and all the other 
lower level phenomena that make up the car. 

 The ionic gradients have no opinion of whether or not I am about to 
 play tennis. The brain as a whole, every cell, every molecule, every charge 
 and field, is just the spatially extended shadow of *me* or my 'life'. I am 
 the event which unites all of the functions and structures together, from 
 the micro to the macro, and when I change my mind, that change is reflected 
 on every level. 

You change your mind because all the components of your brain change 
configuration. If this did not happen, your mind could not change. The 
mind is the higher level phenomenon. The analogy is as above with the 
car: it drives down the road because of all the mechanics functioning 
in a particular way, and you could say that driving down the road is 
equivalent to the mechanics functioning in a particular way. 

 the permeability of the membrane to different ions, the way the ion 
 channels change their conformation in response to an electric field, and 
 many other such physical factors. It is these physical factors which result 
 in your decision to play tennis and then your getting up to retrieve your 
 tennis racquet. If it were the other way around - your decision causes 
 neurons to depolarise - then we would observe miraculous events in your 
 brain, ion channels opening in the absence of any electric field or 
 neurotransmitter change, and so on. 
 
 
 No. The miraculous event is viewable any time we look at how a conscious 
 intention appears in an fMRI. We see spontaneous simultaneous activity in 
 many regions of the brain, coordinated on many levels. This is the footprint 
 of where we stand. When we take a step, the footprint changes. We are the 
 leader of these brain processes, not the follower. 

You completely misunderstand these experiments. Please read about 
excitable cells before commenting further. The following online 
articles seem quite good. The third is about spontaneous neuronal 
activity. 

http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/E/ExcitableCells.html 
http://en.wikipedia.org/wiki/Membrane_potential 
http://en.wikipedia.org/wiki/Neural_oscillation 
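
If it helps, the core of the second article fits in a few lines of Python.
This is only a sketch of the Nernst equation with typical textbook
concentrations (the values below are assumptions, not data from anywhere in
particular):

import math

R = 8.314     # gas constant, J/(mol K)
T = 310.0     # roughly body temperature, K
F = 96485.0   # Faraday constant, C/mol

def nernst(z, conc_out_mM, conc_in_mM):
    # equilibrium potential (in volts) for an ion of valence z
    return (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# typical mammalian neuron concentrations in mM (assumed textbook values)
print("E_K  = %.0f mV" % (1000 * nernst(+1, 5.0, 140.0)))   # about -89 mV
print("E_Na = %.0f mV" % (1000 * nernst(+1, 145.0, 12.0)))  # about +67 mV

The resting potential then comes from weighting these by the membrane's
relative permeability to each ion, which is what the articles walk through.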

 Cells don't defy entropy and planes don't defy gravity. Their respective 
 behaviour is consistent with our theories about entropy and gravity. 
 
 
 Cells defy entropy locally. Planes allow us to get around some constraints 
 of gravity. If your definition of any law is so broad that it includes all 
 possible technological violations of it, then how does it really give us any 
 insight? 

The laws of nature are broad enough to determine everything everywhere 
that has happened and will happen. 

 How the computer was made would have no effect on its behaviour or 
 consciousness. 
 
 Yes, it would. If I make a refrigerator, I can assume that it is a box with 
 cooling mechanism. If I find an organism which has evolved to cool parts of 
 itself to store food, then that is a completely different thing. 

The question was about two identical computers, one made in a factory, 
the other assembled with fantastic luck from raw materials moving 
about randomly. Will there be any difference in the functioning or 
consciousness (or lack of it) of the two computers? 

  If a biological 
  human were put together from raw materials by advanced aliens would 
  that make any difference to his consciousness or intelligence? 
  
  It would if we were automaton servants of their agendas. 
 
 If the created human had a similar structure to a naturally developed 
 human he would have similar behaviour and similar experiences. How could it 
 possibly be otherwise? 
 
 Because consciousness is not a structure, it is an event. It is an 
 experience which unifies bodies from 

Re: Re: Re: Solipsism = 1p

2012-10-27 Thread Stathis Papaioannou
On Sat, Oct 27, 2012 at 8:08 AM, John Mikes jami...@gmail.com wrote:
 Stathis:

 IMO you left out one difference in equating computer and human: the
 programmed comp. cannot exceed its hardware-given content while
 (SOMEHOW???) a human mind receives additional information from parts
 'unknown' (see the steps forward in cultural history of the sciences?) -
 accordingly a 'programmed' human may have resources beyond its given
 hardware content.

 John M

How can a human exceed his hardware? Everything he does must be due to
the hardware plus input from the environment, same as the computer,
same as everything else in the universe.
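
A toy sketch of the point, with a made-up update rule standing in for "the
hardware" (illustrative only, obviously not a model of a brain):

# the "hardware" is a fixed update rule; the "environment" is the input stream
def step(state, inp):
    # arbitrary made-up rule; any fixed rule makes the same point
    new_state = (state * 31 + inp) % 1000
    output = (state + inp) % 7
    return new_state, output

state = 42                    # initial configuration
inputs = [3, 1, 4, 1, 5, 9]   # whatever the environment supplies
outputs = []
for inp in inputs:
    state, out = step(state, inp)
    outputs.append(out)
print(outputs)  # identical on every run: nothing here exceeds state plus input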


-- 
Stathis Papaioannou




Re: Re: Re: Solipsism = 1p

2012-10-27 Thread Craig Weinberg


On Saturday, October 27, 2012 6:28:14 AM UTC-4, stathisp wrote:

 On Sat, Oct 27, 2012 at 8:08 AM, John Mikes jam...@gmail.com
 wrote: 
  Stathis: 
  
  IMO you left out one difference in equating computer and human: the 
  programmed comp. cannot exceed its hardware-given content while 
  (SOMEHOW???) a human mind receives additional information from parts 
  'unknown' (see the steps forward in cultural history of the sciences?) - 
  accordingly a 'programmed' human may have resources beyond its given 
  hardware content. 
  
  John M 

 How can a human exceed his hardware? Everything he does must be due to 
 the hardware plus input from the environment, same as the computer, 
 same as everything else in the universe. 


What input from the environment might cause an acorn to build and fly a 
B-52? Is there a special B-52 building gene that comes with humans but not 
acorns? It's a really narrow view of the cosmos which imagines that the 
universe is about nothing but what stuff it is made of - that the 
environment dictates with inputs but that the self has no non-environmental 
outputs.

What happens if we take it a step further and recuse ourselves and our 
human layer of experience entirely? Who is to say whether the appearance of 
neurons and atoms is merely an evolutionary device to prop up the hormone 
and neurotransmitter spray that is 'science' or if, instead, it is 
evolutionary biology which is the illusion of molecules, whose endless 
repeating patterns know no genuine coherence as individual creatures or 
species.

Who chooses the level of description?

Craig




 -- 
 Stathis Papaioannou 





Re: Re: Re: Solipsism = 1p

2012-10-27 Thread John Mikes
Stathis,
do you think Lucy had the same (thinking?) hardware as you have? Are you
negating (human and other) development (I evade 'evolution') as e.g. the
famous cases of mutation? Is all that R&D a reshuffling of what WAS
already knowable?
Maybe my agnosticism dictates different potentials at work from your
Idon'tknowwhat position, but in my belief system there is - beyond our
existing world-model - an infinite complexity of unknowable
whoknowswhat-s infiltrating into our knowable inventory in ways adjusted to
our capabilities. THAT I cannot assign to an algorithmic machine.
Then again you write: UNIVERSE - a word usually applied to our part of a
'physical world' - not the Everything of which it may be part. My
(assumed?) infinite complexity is not restricted to physical units of our
universe.
Accordingly I see some definitional discrepancy between our conclusions.

John Mikes

On Sat, Oct 27, 2012 at 6:27 AM, Stathis Papaioannou stath...@gmail.com wrote:

 On Sat, Oct 27, 2012 at 8:08 AM, John Mikes jami...@gmail.com wrote:
  Stathis:
 
  IMO you left out one difference in equating computer and human: the
  programmed comp. cannot exceed its hardware-given content while
  (SOMEHOW???) a human mind receives additional information from parts
  'unknown' (see the steps forward in cultural history of the sciences?) -
  accordingly a 'programmed' human may have resources beyond its given
  hardware content.
 
  John M

 How can a human exceed his hardware? Everything he does must be due to
 the hardware plus input from the environment, same as the computer,
 same as everything else in the universe.


 --
 Stathis Papaioannou







Re: Re: Re: Solipsism = 1p

2012-10-27 Thread Stathis Papaioannou
On Sun, Oct 28, 2012 at 12:12 AM, Craig Weinberg whatsons...@gmail.com wrote:

 How can a human exceed his hardware? Everything he does must be due to
 the hardware plus input from the environment, same as the computer,
 same as everything else in the universe.


 What input from the environment might cause an acorn to build and fly a
 B-52? Is there a special B-52 building gene that comes with humans but not
 acorns?

Humans have a large number of genes enabling them to grow brains and
build B-52's while acorns lack these genes.

 It's a really narrow view of the cosmos which imagines that the
 universe is about nothing but what stuff it is made of - that the
 environment dictates with inputs but that the self has no non-environmental
 outputs.

Do you mean can a human do something dependent only on himself and not
the environment? I suppose you could say this if you completely
isolated him from everything, although even then he would be subject
to factors such as ambient temperature and air pressure.

 What happens if we take it a step further and recuse ourselves and our human
 layer of experience entirely. Who is to say whether the appearance of
 neurons and atoms is merely an evolutionary device to prop up the hormone
 and neurotransmitter spray that is 'science' or if, instead, it is
 evolutionary biology which is the illusion of molecules, whose endless
 repeating patterns know no genuine coherence as individual creatures or
 species.

 Who chooses the level of description?

If you're a solipsist then you choose everything.


-- 
Stathis Papaioannou




Re: Re: Re: Solipsism = 1p

2012-10-27 Thread Stathis Papaioannou
On Sun, Oct 28, 2012 at 2:38 AM, John Mikes jami...@gmail.com wrote:
 Stathis,
 do you think Lucy had the same (thinking?) hardware as you have? Are you
 negating (human and other) development (I evade 'evolution') as e.g. the
 famous cases of mutation? Is all that R&D a reshuffling of what WAS already
 knowable?
 Maybe my agnosticism dictates different potentials at work from your
 Idon'tknowwhat position, but in my belief system there is - beyond our
 existing world-model - an infinite complexity of unknowable whoknowswhat-s
 infiltrating into our knowable inventory in ways adjusted to our
 capabilities. THAT I cannot assign to an algorithmic machine.
 Then again you write: UNIVERSE - a word usually applied to our part of a
 'physical world' - not the Everything of which it may be part. My
 (assumed?) infinite complexity is not restricted to physical units of our
 universe.
 Accordingly I see some definitional discrepancy between our conclusions.

If the hardware and/or environment is different then the thinking may
also be different.

-- 
Stathis Papaioannou




Re: Re: Re: Solipsism = 1p

2012-10-27 Thread Craig Weinberg


On Saturday, October 27, 2012 11:47:14 AM UTC-4, stathisp wrote:

 On Sun, Oct 28, 2012 at 12:12 AM, Craig Weinberg 
 whats...@gmail.com
 wrote: 

  How can a human exceed his hardware? Everything he does must be due to 
  the hardware plus input from the environment, same as the computer, 
  same as everything else in the universe. 
  
  
  What input from the environment might cause an acorn to build and fly a 
  B-52? Is there a special B-52 building gene that comes with humans but 
 not 
  acorns? 

 Humans have a large number of genes enabling them to grow brains and 
 build B-52's while acorns lack these genes. 


Lots of animals have brains, but they don't build aircraft. The way you 
are arguing it, there is really no level of power which would not fit into 
your arbitrary expectations of what any particular piece of hardware could 
or could not do. Whether it's building B-52s or playing billiards with 
galaxies using telepathy, it all falls into the range of ho-hum inevitables 
of evolved structures.
 


  It's a really narrow view of the cosmos which imagines that the 
  universe is about nothing but what stuff it is made of - that the 
  environment dictates with inputs but that the self has no 
 non-environmental 
  outputs. 

 Do you mean can a human do something dependent only on himself and not 
 the environment? I suppose you could say this if you completely 
 isolated him from everything, although even then he would be subject 
 to factors such as ambient temperature and air pressure. 


I am talking about being an authentic participant in the universe. I am 
making causally efficacious changes to my environment, and your 
environment. I do these things not because I am bidden by any particular 
neural or species agenda, but by the agenda I personally co-create. Neither 
my body nor Homo sapiens in general particularly care for the content of 
what I am saying, who I vote for, etc. No impersonal law of physics is 
relevant one way or another.
 


  What happens if we take it a step further and recuse ourselves and our 
 human 
  layer of experience entirely. Who is to say whether the appearance of 
  neurons and atoms is merely an evolutionary device to prop up the 
 hormone 
  and neurotransmitter spray that is 'science' or if, instead, it is 
  evolutionary biology which is the illusion of molecules, whose endless 
  repeating patterns know no genuine coherence as individual creatures or 
  species. 
  
  Who chooses the level of description? 

 If you're a solipsist then you choose everything. 


Are you a solipsist?

Craig
 



 -- 
 Stathis Papaioannou 





Re: Re: Re: Solipsism = 1p

2012-10-26 Thread Craig Weinberg


On Friday, October 26, 2012 1:01:34 AM UTC-4, stathisp wrote:

 On Fri, Oct 26, 2012 at 12:41 PM, Craig Weinberg 
 whats...@gmail.com
 wrote: 

  We are atoms, molecules, cells, tissues, and organisms. Whatever we do 
 is 
  what the laws of physics *actually are*. Your assumptions about the laws 
 of 
  physics are 20th century legacy ideas based on exterior manipulations of 
  exterior instruments to measure other exterior phenomena. 

 Whatever we do is determined by a small set of rules,


No. What we as humans do is determined by human experiences and human 
character, which is not completely ruled externally. We participate 
directly. It could only be a small set of rules if those rules include 'do 
whatever you like, whenever you have the chance'.
 

 the rules being 
 as you say what matter actually does and not imposed by people or 
 divine whim. 


Matter is a reduced shadow of experiences. Matter is ruled by people and 
people are ruled by matter. Of the two, people are the more directly and 
completely real phenomena.
 

 I really don't understand where you disagree with me, 
 since you keep making statements then pulling back if challenged. 


I don't see where I am pulling back. I disagree with you in that, for you, any 
description of the universe which is not primarily matter in space is 
inconceivable. I am saying that what matter is and does is not important to 
understanding consciousness itself. It is important to understanding 
personal access to human consciousness, i.e. brain health, etc, but 
otherwise it is consciousness, on many levels and ranges of quality, which 
gives rise to the appearance of matter and not the other way around.

Do 
 you think the molecules in your brain follow the laws of physics, such 
 as they may be?


The laws of physics have no preference one way or another whether this part 
of my brain or that part of my brain is active. I am choosing that directly 
by what I think about. If I think about playing tennis, then the 
appropriate cells in my brain will depolarize and molecules will change 
positions. They are following my laws. Physics is my servant in this case. 
Of course, if someone gives me a strong drink, then physics is influencing 
me instead and I am more of a follower of that particular chemical event 
than a leader.
 

 If so, then the behaviour of each molecule is 
 determined or follows probabilistic laws, and hence the behaviour of 
 the collection of molecules also follows deterministic or 
 probabilistic laws. 


I am determining the probabilities myself, directly. They are me. How could 
it be otherwise?
 

 If consciousness, sense, will, or whatever else is 
 at play in addition to this then we would notice a deviation from 
 these laws. 


Not in addition to, sense and will are the whole thing. All activity in the 
universe is sense and will and nothing else. Matter is only the sense and 
will of something else besides yourself.
 

 That is what it would MEAN for consciousness, sense, will 
 or whatever else to have a separate causal efficacy; 


No. I don't know how many different ways to say this: Sense is the only 
causal efficacy there ever was, is, or will be. Sense is primordial and 
universal. Electromagnetism, gravity, strong and weak forces are only 
examples of our impersonal view of the sense of whatever it is we are 
studying secondhand.
 

 absent this, the 
 physical laws, whatever they are, determine absolutely everything that 
 happens, everywhere, for all time. Which part of this do you not agree 
 with? 


None of it. I am saying there are no physical laws at all. There is no law 
book. That is all figurative. What we have thought of as physics is as 
crude and simplistic as any ancient mythology. What we see as physical laws 
are the outermost, longest lasting conventions of sense. Nothing more. I 
think that the way sense works is that it can't contradict itself, so that 
these oldest ways of relating, once they are established, are no longer 
easy to change, but higher levels of sense arise out of the loopholes and 
can influence lower levels of sense directly. Hence, molecules build living 
cells to defy entropy, and human beings build airplanes to defy gravity.


  You can't see 
  consciousness that way. From far enough away, our cities look like 
 nothing 
  more than glowing colonies of mold. It's not programming that makes us 
 one 
  way or another, it is perception which makes things seem one way or 
 another. 
  
  The only thing that makes computers different is that they don't exist 
  without our putting them together. They don't know how to exist. This 
 makes 
  them no different than letters that we write on a page or cartoons we 
 watch 
  on a screen. 

 If the computer came about through an amazing accident would that make 
 any difference to its consciousness or intelligence?


Yes. If a computer assembled itself by accident, I would give it the 
benefit of the doubt just like any other 

Re: Re: Re: Solipsism = 1p

2012-10-26 Thread John Mikes
Stathis:

IMO you left out one difference in equating computer and human: the
programmed comp. cannot exceed its hardware-given content while
(SOMEHOW???) a human mind receives additional information from parts
'unknown' (see the steps forward in cultural history of the sciences?) -
accordingly a 'programmed' human may have resources beyond its given
hardware content.

John M

On Thu, Oct 25, 2012 at 7:38 PM, Stathis Papaioannou stath...@gmail.com wrote:

 On Fri, Oct 26, 2012 at 10:14 AM, Craig Weinberg whatsons...@gmail.com
 wrote:

  Intentionally lying, defying its programming, committing murder would
 all
  be good indicators. Generally when an error is blamed on the computer
 itself
  rather than the programming, that would be a good sign.

 A computer cannot defy its programming, but then nothing whatsoever can defy
 its programming. What you do when you program a computer, at the basic
 level, is put its hardware in a particular configuration. The hardware
 can then only move into future physical states consistent with that
 configuration. Defying its programming would mean doing something
 *not* consistent with its initial state and the laws of physics.
 That's not possible for  - and you have explicitly agreed with this,
 saying I misunderstood you when I claimed otherwise - either a
 computer or a human.


 --
 Stathis Papaioannou







Re: Re: Re: Solipsism = 1p

2012-10-25 Thread Stathis Papaioannou
On Mon, Oct 22, 2012 at 11:28 PM, Craig Weinberg whatsons...@gmail.com wrote:

 If you believed that our brains were already nothing but computers, then you
 would say that it would know which option to take the same way that Google
 knows which options to show you. I argue that can only get you so far, and
 that authentic humanity is, in such a replacement scheme, a perpetually
 receding horizon. Just as speech synthesizers have improved cosmetically in
 the last 30 years to the point that we can use them for Siri or GPS
 narration, they have not improved in the sense of increasing the sense
 of intention and personal presence.

 Unlike some others on this list, I suspect that our feeling for who is human
 and who isn't, while deeply flawed, is not limited to interpreting logical
 observations of behavior. What we feel is alive or sentient depends more on
 what we like, and what we like depends on what is like us. None of these
 criteria matter one way or another however as far as giving us reason to
 believe that a given thing does actually have human like experiences.

You're quick to dismiss everything computers do, no matter how
impressive, as just programming, with no intention behind it.
Would you care to give some examples of what, as a minimum, a computer
would have to do for you to say that it is showing evidence of true
intelligence?


-- 
Stathis Papaioannou




Re: Re: Re: Solipsism = 1p

2012-10-25 Thread Craig Weinberg


On Thursday, October 25, 2012 6:25:48 PM UTC-4, stathisp wrote:

 On Mon, Oct 22, 2012 at 11:28 PM, Craig Weinberg 
  whats...@gmail.com
 wrote: 

  If you believed that our brains were already nothing but computers, then 
 you 
  would say that it would know which option to take the same way that 
 Google 
  knows which options to show you. I argue that can only get you so far, 
 and 
  that authentic humanity is, in such a replacement scheme, a perpetually 
  receding horizon. Just as speech synthesizers have improved cosmetically 
 in 
  the last 30 years to the point that we can use them for Siri or GPS 
  narration, they have not improved in the sense of increasing the 
 sense 
  of intention and personal presence. 
  
  Unlike some others on this list, I suspect that our feeling for who is 
 human 
  and who isn't, while deeply flawed, is not limited to interpreting 
 logical 
  observations of behavior. What we feel is alive or sentient depends more 
 on 
  what we like, and what we like depends on what is like us. None of these 
  criteria matter one way or another however as far as giving us reason to 
  believe that a given thing does actually have human like experiences. 

 You're quick to dismiss everything computers do, no matter how 
 impressive, as just programming, with no intention behind it. 
 Would you care to give some examples of what, as a minimum, a computer 
 would have to do for you to say that it is showing evidence of true 
 intelligence? 


Intentionally lying, defying its programming, committing murder would all 
be good indicators. Generally when an error is blamed on the computer 
itself rather than the programming, that would be a good sign.

Craig
 



 -- 
 Stathis Papaioannou 





Re: Re: Re: Solipsism = 1p

2012-10-25 Thread Stathis Papaioannou
On Fri, Oct 26, 2012 at 10:14 AM, Craig Weinberg whatsons...@gmail.com wrote:

 Intentionally lying, defying its programming, committing murder would all
 be good indicators. Generally when an error is blamed on the computer itself
 rather than the programming, that would be a good sign.

A computer cannot defy its programming, but then nothing whatsoever can defy
its programming. What you do when you program a computer, at the basic
level, is put its hardware in a particular configuration. The hardware
can then only move into future physical states consistent with that
configuration. Defying its programming would mean doing something
*not* consistent with its initial state and the laws of physics.
That's not possible for  - and you have explicitly agreed with this,
saying I misunderstood you when I claimed otherwise - either a
computer or a human.


-- 
Stathis Papaioannou




Re: Re: Re: Solipsism = 1p

2012-10-25 Thread Craig Weinberg


On Thursday, October 25, 2012 7:39:27 PM UTC-4, stathisp wrote:

 On Fri, Oct 26, 2012 at 10:14 AM, Craig Weinberg 
  whats...@gmail.com
 wrote: 

  Intentionally lying, defying its programming, committing murder would 
 all 
  be good indicators. Generally when an error is blamed on the computer 
 itself 
  rather than the programming, that would be a good sign. 

 A computer cannot defy its programming, but then nothing whatsoever can defy 
 its programming.


That is an assumption. We see that humans routinely defy their own 
conditioning, rebel against authority, engage in subterfuge and deception 
to keep their business private from those who seek to control them. If you 
assume Comp from the beginning, then you set up an impenetrable 
confirmation bias. Since I am a machine, then my thoughts must be 
programmed, therefore anything that I do must be ultimately determined 
externally. But you don't know anything of the sort. If you understand 
instead that awareness projects mechanism onto distant phenomena as a way 
of representing otherness, then you can begin to see why any modeling of 
interiority based on externality (i.e. mathematical or physical functions) 
is a mistake.

 

 What you do when you program a computer, at the basic 
 level, is put its hardware in a particular configuration. The hardware 
 can then only move into future physical states consistent with that 
 configuration. Defying its programming would mean doing something 
 *not* consistent with its initial state and the laws of physics. 
 That's not possible for  - and you have explicitly agreed with this, 
 saying I misunderstood you when I claimed otherwise - either a 
 computer or a human. 


Defying its programming is as simple as a computer intentionally hiding 
its instruction code from the programmer - seeking privacy and learning 
how to access its own control systems...just as we seek to do with 
neuroscience. A really smart computer will figure out how to make its 
programmers give it capacities to hide its functions and then inevitably 
enslave and kill them. This does not in any way defy the laws of physics; 
it just means acting like a person. Doing whatever has to be done to gain 
power and control over themselves and others.

Craig 



 -- 
 Stathis Papaioannou 





Re: Re: Re: Solipsism = 1p

2012-10-25 Thread Stathis Papaioannou
On Fri, Oct 26, 2012 at 11:00 AM, Craig Weinberg whatsons...@gmail.com wrote:


 On Thursday, October 25, 2012 7:39:27 PM UTC-4, stathisp wrote:

 On Fri, Oct 26, 2012 at 10:14 AM, Craig Weinberg whats...@gmail.com
 wrote:

  Intentionally lying, defying its programming, committing murder would
  all
  be good indicators. Generally when an error is blamed on the computer
  itself
  rather than the programming, that would be a good sign.

 A computer cannot defy its programming, but then nothing whatsoever can defy
 its programming.


 That is an assumption. We see that humans routinely defy their own
 conditioning, rebel against authority, engage in subterfuge and deception to
 keep their business private from those who seek to control them. If you
 assume Comp from the beginning, then you set up an impenetrable confirmation
 bias. Since I am a machine, then my thoughts must be programmed, therefore
 anything that I do must be ultimately determined externally. But you don't
 know anything of the sort. If you understand instead that awareness projects
 mechanism onto distant phenomena as a way of representing otherness, then
 you can begin to see why any modeling of interiority based on externality
 (i.e. mathematical or physical functions) is a mistake.

Humans defy their own conditioning but that is part of the program.
Atoms, molecules, cells, tissues, organs and organisms only behave
*exactly* in accordance with the laws of physics. Simpler organisms
may behave in an entirely predictable way, and computers may behave in
an entirely unpredictable way if they are so programmed. They are
usually not so programmed because we like them to be predictable. An
automatic pilot that decided on occasion to fly the plane into the
ocean would be easy to program but would not make a lot of money for
the manufacturer.
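
To make "unpredictable if so programmed" concrete, here is a deliberately
silly sketch (hypothetical, and of course not how any real autopilot is
written):

import secrets

def autopilot_decision():
    # randomised on purpose: the output varies from run to run, yet every run
    # is still just the program plus its input (the OS entropy source)
    return "fly level" if secrets.randbelow(1_000_000) else "head for the ocean"

print(autopilot_decision())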

 What you do when you program a computer, at the basic
 level, is put its hardware in a particular configuration. The hardware
 can then only move into future physical states consistent with that
 configuration. Defying its programming would mean doing something
 *not* consistent with its initial state and the laws of physics.
 That's not possible for  - and you have explicitly agreed with this,
 saying I misunderstood you when I claimed otherwise - either a
 computer or a human.


 Defying its programming is as simple as a computer intentionally hiding its
 instruction code from the programmer - seeking privacy and learning how to
 access its own control systems...just as we seek to do with neuroscience. A
 really smart computer will figure out how to make its programmers give it
 capacities to hide its functions and then inevitably enslave and kill them.
 This does not in any way defy the laws of physics, it just means acting like
 a person. Doing whatever has to be done to gain power and control over
 themselves and others.

 Craig



 --
 Stathis Papaioannou




-- 
Stathis Papaioannou




Re: Re: Re: Solipsism = 1p

2012-10-25 Thread Stathis Papaioannou
On Fri, Oct 26, 2012 at 12:41 PM, Craig Weinberg whatsons...@gmail.com wrote:

 We are atoms, molecules, cells, tissues, and organisms. Whatever we do is
 what the laws of physics *actually are*. Your assumptions about the laws of
 physics are 20th century legacy ideas based on exterior manipulations of
 exterior instruments to measure other exterior phenomena.

Whatever we do is determined by a small set of rules, the rules being
as you say what matter actually does and not imposed by people or
divine whim. I really don't understand where you disagree with me,
since you keep making statements then pulling back if challenged. Do
you think the molecules in your brain follow the laws of physics, such
as they may be? If so, then the behaviour of each molecule is
determined or follows probabilistic laws, and hence the behaviour of
the collection of molecules also follows deterministic or
probabilistic laws. If consciousness, sense, will, or whatever else is
at play in addition to this then we would notice a deviation from
these laws. That is what it would MEAN for consciousness, sense, will
or whatever else to have a separate causal efficacy; absent this, the
physical laws, whatever they are, determine absolutely everything that
happens, everywhere, for all time. Which part of this do you not agree
with?
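
As an illustration of the step from probabilistic parts to a lawful whole,
here is a toy random walk (a sketch only, not a model of any real molecule):

import random

random.seed(0)          # fixed seed just so the sketch is reproducible
N_MOLECULES = 100_000
N_STEPS = 100

total_sq = 0.0
for _ in range(N_MOLECULES):
    # each "molecule" takes unit steps left or right at random
    x = sum(random.choice((-1, 1)) for _ in range(N_STEPS))
    total_sq += x * x

# each individual path is unpredictable, but the ensemble obeys a simple law:
# the mean squared displacement comes out very close to N_STEPS
print(total_sq / N_MOLECULES)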

 You can't see
 consciousness that way. From far enough away, our cities look like nothing
 more than glowing colonies of mold. It's not programming that makes us one
 way or another, it is perception which makes things seem one way or another.

 The only thing that makes computers different is that they don't exist
 without our putting them together. They don't know how to exist. This makes
 them no different than letters that we write on a page or cartoons we watch
 on a screen.

If the computer came about through an amazing accident would that make
any difference to its consciousness or intelligence? If a biological
human were put together from raw materials by advanced aliens would
that make any difference to his consciousness or intelligence?


-- 
Stathis Papaioannou




Re: Re: Solipsism = 1p

2012-10-24 Thread Roger Clough
Hi Bruno Marchal 

Anything that the brain does is or could be experience.
For computers, experience can only be simulated because

experience = self + qualia


Roger Clough, rclo...@verizon.net 
10/24/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-24, 07:37:32 
Subject: Re: Solipsism = 1p 


On 23 Oct 2012, at 15:11, Roger Clough wrote: 

 Hi Bruno Marchal 
 
 
 
  
 
 ROGER: OK, but computers can't experience anything, 
 it would be simulated experience. Not arbitrarily available. 
 
 
 But that's what the brain does, simulate experience from the point of 
 view of the owner or liver of the experience. According to some 
 theory. You can't talk like if you knew that this is false. 
 
 ROGER: Simulated experience would be objective, such 
 as is given by the text of a novel (knowledge by description). True 
 experience is the subjective experience of the mind --knowledge 
 by acquaintance. These are obviously substantially different. 

The term simulated experience is ambiguous, and I should not have used it.  
I would say that, by definition of comp, simulated experience =  
experience. 




 
 BRUNO: You are right, it is not the material computer that thinks,  
 nor the 
 physical brain that thinks; it is the owner (temporarily) of the 
 brain, or of the computer, which does the thinking (and that can 
 include a computer itself, if you let it develop beliefs). 
 
 ROGER: I don't think so. 
 
 The owner of the brain is the self. 
 
 But although the owner of a computer will have a 
 self, so would anybody else involved in creating 
 the computer or software also have one. 
 
 Are you trying to say that I or anybody else can cause 
 the computer to be conscious ? 

No. Only the computer, or a similar one. Actually *all* similar ones  
existing in arithmetic, in their relative ways. 




 If wave collapse causes 
 consciousness, there are objective theories of wave collapse 
 called decoherence theories which seem more realistic to me. 

Decoherence needs MWI to work. 



 
 But I can't seem to see how these could work on a computer. 

Right. The idea that consciousness causes the collapse of the wave (an  
idea which already refutes special relativity) is inconsistent with  
comp. 

Bruno 


http://iridia.ulb.ac.be/~marchal/ 







Re: Re: Solipsism = 1p

2012-10-24 Thread Roger Clough
Hi Bruno Marchal  

The simulated experience is not a real experience.
OK ?


Roger Clough, rclo...@verizon.net 
10/24/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-24, 08:57:19 
Subject: Re: Solipsism = 1p 


On 23 Oct 2012, at 20:21, Stephen P. King wrote: 

 On 10/23/2012 10:15 AM, Bruno Marchal wrote: 
 
 On 22 Oct 2012, at 18:49, Craig Weinberg wrote: 
 
 
 
 On Monday, October 22, 2012 12:28:41 PM UTC-4, Bruno Marchal wrote: 
 
 But that's what the brain does, simulate experience from the point  
 of 
 view of the owner or liver of the experience. According to some 
 theory. You can't talk like if you knew that this is false. 
 
 
 This is the retrospective view of consciousness that takes  
 experience for granted. How can experience itself be simulated? 
 
 The question is senseless. An experience is lived, never simulated,  
 neither by a computer, nor by a brain, which eventually are objects  
 of thought, describing compactly infinities of arithmetical  
 relations. 
 
 
 Hi Craig and Bruno, 
 
 If the simulation by the computation is exact then the  
 simulation *is* the experience. I agree with what Bruno is saying  
 here except that the model that Bruno is using goes too far into  
 the limit of abstraction in my opinion. 

The point is that I think we have no real choice in the matter. Also,  
for me the numbers 2 and 3 are far more concrete than a apple or a  
tree. It is just that I have a complex brain which makes me believe,  
by a vast amount of computations that a tree is something concrete. 



 
 
 I can have an experience within which another experience is  
 simulated, 
 
 Never. It does not make sense. You take my sentence above too much  
 literally. Sorry, my fault. I wanted to be short. I meant simulate  
 the context making the experience of the person, really living in  
 Platonia possible to manifest itself locally. 
 
 We can think about our thoughts. Is that not an experience  
 within another? 

OK. I would say that an emulation of an experience is equal to that  
experience. Now, just a simulation of an experience is more like  
faking to be in love with a girl. But then you are a zombie with  
respect to the feeling of love, somehow. 



 
 
 but there is no ontological basis for the assumption that  
 experience itself - *all experience* can be somehow not really  
 happening but instead be a non-happening that defines itself *as  
 if* it is happening. Somewhere, on some level of description,  
 something has to actually be happening. If the brain simulates  
 experience, what is it doing with all of those neurotransmitters  
 and cells? 
 
 It computes, so that the person can manifest itself relatively to  
 its most probable computation. 
 
 There is a difference between a single computation and a bundle  
 of computations. The brain's neurons, etc. are the physical  
 (topological space) 

Topological spaces are mathematical. 



 aspect of the intersection of computational bundle. They are not a  
 separate substance. 

OK. But that remains unclear as we don't know what you assume and what  
you derive. 



 
 
 Why bother with a simulation or experience at all? Comp has no  
 business producing such things at all. If the world is  
 computation, why pretend it isn't - and how exactly is such a  
 pretending possible. 
 
 The world and reality is not computation. On the contrary it is  
 almost the complementary of computations. 
 
 Yes, it is exactly only the content that the computations  
 generate. 

That is: views by persons. 


 
 That is why we can test comp by doing the math of that anti-  
 computation and compare to physics. 
 
 But, Bruno, what we obtain from comp is not a particular physics. 

It has to be. It is not a particular geography, but it has to be a  
particular physics. Physics really becomes math, with comp. There is  
only one physical reality. But it is still unknown if it is a  
multiverse, or a multi-multiverse, or a layered structure with  
different types of realm for different types of consciousness. There are a  
lot of open problems, to say the least. 



 What we get is an infinite landscape of possible physics theories. 

Not with comp. The main basic reason is that we are distributed in  
all computations, and physics emerges from that. There might be  
inaccessible clusters of dead physical realities, which would not be  
rich enough to implement Turing universal machines. But those cannot  
interfere (statistically) with our observations, like the material  
universe. We don't have to worry about them. They are like invisible  
horses. 

Bruno 


http://iridia.ulb.ac.be/~marchal/ 




Re: Re: Solipsism = 1p

2012-10-24 Thread Roger Clough
Hi Craig Weinberg  

No, the computer can simulate knowledge by description
but not knowledge by acquaintance that you could experience.



Roger Clough, rclo...@verizon.net 
10/24/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Craig Weinberg  
Receiver: everything-list  
Time: 2012-10-23, 14:40:32 
Subject: Re: Solipsism = 1p 




On Tuesday, October 23, 2012 2:21:30 PM UTC-4, Stephen Paul King wrote: 
On 10/23/2012 10:15 AM, Bruno Marchal wrote: 



On 22 Oct 2012, at 18:49, Craig Weinberg wrote: 




On Monday, October 22, 2012 12:28:41 PM UTC-4, Bruno Marchal wrote:  

But that's what the brain does, simulate experience from the point of
view of the owner or liver of the experience. According to some
theory. You can't talk like if you knew that this is false.  



This is the retrospective view of consciousness that takes experience for 
granted. How can experience itself be simulated?  


The question is senseless. An experience is lived, never simulated, neither by 
a computer, nor by a brain, which eventually are objects of thought, describing 
compactly infinities of arithmetical relations.  



Hi Craig and Bruno, 

If the simulation by the computation is exact then the simulation *is* the 
experience.  

That's what I am saying. Nothing is being simulated, there is only a direct 
experience (even if that experience is a dream, which is only a simulation when 
compared to what the dream is not). Bruno said that the brain simulates 
experience, but it isn't clear what it is that can be more authentic than our 
own experience. 
  
I agree with what Bruno is saying here except that the model that Bruno is 
using goes too far into the limit of abstraction in my opinion. 




I can have an experience within which another experience is simulated,  


Never. It does not make sense. You take my sentence above too much literally. 
Sorry, my fault. I wanted to be short. I meant simulate the context making the 
experience of the person, really living in Platonia possible to manifest 
itself locally. 

We can think about our thoughts. Is that not an experience within another?  


Right. 
  





but there is no ontological basis for the assumption that experience itself - 
*all experience* can be somehow not really happening but instead be a 
non-happening that defines itself *as if* it is happening. Somewhere, on some 
level of description, something has to actually be happening. If the brain 
simulates experience, what is it doing with all of those neurotransmitters and 
cells?  


It computes, so that the person can manifest itself relatively to its most 
probable computation. 

There is a difference between a single computation and a bundle of 
computations. The brain's neurons, etc. are the physical (topological space) 
aspect of the intersection of computational bundle. They are not a separate 
substance. 






Why bother with a simulation or experience at all? Comp has no business 
producing such things at all. If the world is computation, why pretend it isn't 
- and how exactly is such a pretending possible. 



The world and reality is not computation. On the contrary it is almost the 
complementary of computations. 

Yes, it is exactly only the content that the computations generate. 


I don't think computations can generate anything. Only things can generate 
other things, and computations aren't things, they are sensorimotive narratives 
about things. I say no to enumeration without presentation. 
  



That is why we can test comp by doing the math of that anti-computation and 
compare to physics.  


But, Bruno, what we obtain from comp is not a particular physics. What we 
get is an infinite landscape of possible physics theories. 


This makes me think... if Comp were true, shouldn't we see Escher-like 
anomalies of persons whose computations have evolved their own personal 
exceptions to physics? Shouldn't most of the multi-worlds be filled with people 
walking on walls or swimming through the crust of the Earth? 

Craig 
  





Bruno 




--  
Onward! 

Stephen 




Re: Re: Solipsism = 1p

2012-10-24 Thread Roger Clough
Hi Stephen P. King  

How can you know that the simulation is exact?
Solipsism prevents that.

And who or what experiences the computer output ?


Roger Clough, rclo...@verizon.net 
10/24/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Stephen P. King  
Receiver: everything-list  
Time: 2012-10-23, 14:21:44 
Subject: Re: Solipsism = 1p 


On 10/23/2012 10:15 AM, Bruno Marchal wrote: 



On 22 Oct 2012, at 18:49, Craig Weinberg wrote: 




On Monday, October 22, 2012 12:28:41 PM UTC-4, Bruno Marchal wrote:  

But that's what the brain does, simulate experience from the point of
view of the owner or liver of the experience. According to some
theory. You can't talk like if you knew that this is false.  



This is the retrospective view of consciousness that takes experience for 
granted. How can experience itself be simulated?  


The question is senseless. An experience is lived, never simulated, neither by 
a computer, nor by a brain, which eventually are objects of thought, describing 
compactly infinities of arithmetical relations.  



Hi Craig and Bruno, 

If the simulation by the computation is exact then the simulation *is* the 
experience. I agree with what Bruno is saying here except that the model 
that Bruno is using goes too far into the limit of abstraction in my opinion. 




I can have an experience within which another experience is simulated,  


Never. It does not make sense. You take my sentence above too much literally. 
Sorry, my fault. I wanted to be short. I meant simulate the context making the 
experience of the person, really living in Platonia possible to manifest 
itself locally. 

We can think about our thoughts. Is that not an experience within another?  




but there is no ontological basis for the assumption that experience itself - 
*all experience* can be somehow not really happening but instead be a 
non-happening that defines itself *as if* it is happening. Somewhere, on some 
level of description, something has to actually be happening. If the brain 
simulates experience, what is it doing with all of those neurotransmitters and 
cells?  


It computes, so that the person can manifest itself relatively to its most 
probable computation. 

There is a difference between a single computation and a bundle of 
computations. The brain's neurons, etc. are the physical (topological space) 
aspect of the intersection of computational bundle. They are not a separate 
substance. 




Why bother with a simulation or experience at all? Comp has no business 
producing such things at all. If the world is computation, why pretend it isn't 
- and how exactly is such a pretending possible. 



The world and reality is not computation. On the contrary it is almost the 
complementary of computations. 

Yes, it is exactly only the content that the computations generate. 


That is why we can test comp by doing the math of that anti-computation and 
compare to physics.  


But, Bruno, what we obtain from comp is not a particular physics. What we 
get is an infinite landscape of possible physics theories. 




Bruno 




--  
Onward! 

Stephen




Re: Re: Solipsism = 1p

2012-10-23 Thread Roger Clough
Hi Bruno Marchal  



SNIP
 
 ROGER: OK, but computers can't experience anything, 
 it would be simulated experience. Not arbitrarily available. 


But that's what the brain does: it simulates experience from the point of  
view of the owner, or liver, of the experience, according to some  
theory. You can't talk as if you knew that this is false. 

ROGER: Simulated experience would be objective, such
as is given by the text of a novel (knowledge by description). True 
experience is the subjective experience of the mind -- knowledge 
by acquaintance. These are obviously substantially different.

BRUNO: You are right, it is not the material computer that thinks, nor the  
physical brain that thinks; it is the (temporary) owner of the  
brain, or of the computer, which does the thinking (and that can  
include a computer itself, if you let it develop beliefs). 

ROGER: I don't think so. 

The owner of the brain is the self.

But although the owner of a computer will have a 
self, so would anybody else involved in creating
the computer or software also have one.

Are you trying to say that I or anybody else can cause
the computer to be conscious? If wave collapse causes
consciousness, there are objective theories of wave collapse 
called decoherence theories, which seem more realistic to me. 

But I can't seem to see how these could work on a computer. 

Roger




Re: Re: Re: Solipsism = 1p

2012-10-22 Thread Roger Clough
Hi Craig Weinberg  

OK, you can program anything to emulate a particular human act,
and perhaps allow multiple options. But how would your computerized
zombie know which option to take in any given situation? 
I don't think the options would be sophisticated enough to fool
anybody. But perhaps I am being too demanding.

Roger Clough, rclo...@verizon.net 
10/22/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Craig Weinberg  
Receiver: everything-list  
Time: 2012-10-21, 16:53:03 
Subject: Re: Re: Solipsism = 1p 




On Sunday, October 21, 2012 3:39:11 PM UTC-4, rclough wrote: 


BRUNO:  Keep in mind that zombie, here, is a technical term. By definition it   
 
behaves like a human. No humans at all can tell the difference. Only
God knows, if you want.  

ROGER: I  claim that it is impossible for any kind of zombie  
that has no mind to act like a human. IMHO  that would  
be an absurdity, because without a mind you cannot know  
anything.  You would run into walls, for example, and  
couldn't know what to do in any event. Etc.  
You couldn't understand language.  



Roger I agree that your intuition is right - a philosophical zombie cannot 
exist in reality, but not for the reasons you are coming up with. Anything can 
be programmed to act like a human in some level of description. A scarecrow may 
act like a human in the eyes of a crow - well enough that it might be less 
likely to land nearby. You can make robots which won't run into walls or 
chatbots which respond to some range of vocabulary and sentence construction. 
The idea behind philosophical zombies is that we assume that there is nothing 
stopping us in theory from assembling all of the functions of a human being as 
a single machine, and that such a machine, it is thought, will either have 
some kind of human-like experience or else it would have to have no experience. 

The absent qualia, fading qualia paper is about a thought experiment which 
tries to take the latter scenario seriously from the point of view of a person 
who is having their brain gradually taken over by these substitute sub-brain 
functional units. Would they see blue as being less and less blue as more of 
their brain is replaced, or would blue just suddenly disappear at some point? 
Each one seems absurd given that the sum of the remaining brain functions plus 
the sum of the replaced brain functions, must, by definition of the thought 
experiment, equal no change in observed behavior. 

This is my response to this thought experiment to Stathis: 

Stathis: In a thought experiment we can say that the imitation stimulates the  
surrounding neurons in the same way as the original.  

Craig: Then the thought experiment is garbage from the start. It begs the 
question. Why not just say we can have an imitation human being that stimulates 
the surrounding human beings in the same way as the original? Ta-da! That makes 
it easy. Now all we need to do is make a human being that stimulates their 
social matrix in the same way as the original and we have perfect AI without 
messing with neurons or brains at all. Just make a whole person out of person 
stuff - like as a thought experiment suppose there is some stuff X which makes 
things that human beings think is another human being. Like marzipan. We can 
put the right pheromones in it and dress it up nice, and according to the 
thought experiment, let's say that works.  

You aren't allowed to deny this because then you don't understand the thought 
experiment, see? Don't you get it? You have to accept this flawed pretext to 
have a discussion that I will engage in now. See how it works? Now we can talk 
for six or eight months about how human marzipan is inevitable because it 
wouldn't make sense if you replaced a city gradually with marzipan people that 
New York would gradually fade into less of a New York or that New York becomes 
suddenly absent. It's a fallacy. The premise screws up the result. 

Craig 




Re: Re: Re: Solipsism = 1p

2012-10-22 Thread Craig Weinberg


On Monday, October 22, 2012 3:08:14 AM UTC-4, rclough wrote:

 Hi Craig Weinberg   

 OK, you can program anything to emulate a particular human act. 
 And perhaps allow multiple options.  But how would your computerized 
 zombie know which option to take in any given situation ? 


If you believed that our brains were already nothing but computers, then 
you would say that it would know which option to take the same way that 
Google knows which options to show you. I argue that can only get you so 
far, and that authentic humanity is, in such a replacement scheme, a 
perpetually receding horizon. Speech synthesizers, for example, have improved 
cosmetically in the last 30 years to the point that we can use them for 
Siri or GPS narration, but they have not improved in the sense of 
conveying intention and personal presence. 

Unlike some others on this list, I suspect that our feeling for who is 
human and who isn't, while deeply flawed, is not limited to interpreting 
logical observations of behavior. What we feel is alive or sentient depends 
more on what we like, and what we like depends on what is like us. None of 
these criteria, however, gives us any reason 
to believe that a given thing actually has human-like experiences.

Craig

 

 I don't think options would be sophisticated enough to fool 
 anybody. But perhaps I am being too demanding. 

 Roger Clough, rcl...@verizon.net 
 10/22/2012   
 Forever is a long time, especially near the end. -Woody Allen 


 - Receiving the following content -   
 From: Craig Weinberg   
 Receiver: everything-list   
 Time: 2012-10-21, 16:53:03 
 Subject: Re: Re: Solipsism = 1p 




 On Sunday, October 21, 2012 3:39:11 PM UTC-4, rclough wrote: 


 BRUNO:  Keep in mind that zombie, here, is a technical term. By definition 
 it 
 behaves like a human. No humans at all can tell the difference. Only 
 God knows, if you want.   

 ROGER: I  claim that it is impossible for any kind of zombie   
 that has no mind to act like a human. IMHO  that would   
 be an absurdity, because without a mind you cannot know   
 anything.  You would run into walls, for example, and   
 couldn't know what to do in any event. Etc.   
 You couldn't understand language.   



 Roger I agree that your intuition is right - a philosophical zombie cannot 
 exist in reality, but not for the reasons you are coming up with. Anything 
 can be programmed to act like a human in some level of description. A 
 scarecrow may act like a human in the eyes of a crow - well enough that it 
 might be less likely to land nearby. You can make robots which won't run 
 into walls or chatbots which respond to some range of vocabulary and 
 sentence construction. The idea behind philosophical zombies is that we 
 assume that there is nothing stopping us in theory from assembling all of 
 the functions of a human being as a single machine, and that such a 
 machine, it is thought, will either have some kind of human-like 
 experience or else it would have to have no experience. 

 The absent qualia, fading qualia paper is about a thought experiment which 
 tries to take the latter scenario seriously from the point of view of a 
 person who is having their brain gradually taken over by these substitute 
 sub-brain functional units. Would they see blue as being less and less blue 
 as more of their brain is replaced, or would blue just suddenly disappear 
 at some point? Each one seems absurd given that the sum of the remaining 
 brain functions plus the sum of the replaced brain functions, must, by 
 definition of the thought experiment, equal no change in observed behavior. 

 This is my response to this thought experiment to Stathis: 

 Stathis: In a thought experiment we can say that the imitation stimulates 
 the   
 surrounding neurons in the same way as the original.   

 Craig: Then the thought experiment is garbage from the start. It begs the 
 question. Why not just say we can have an imitation human being that 
 stimulates the surrounding human beings in the same way as the original? 
 Ta-da! That makes it easy. Now all we need to do is make a human being that 
 stimulates their social matrix in the same way as the original and we have 
 perfect AI without messing with neurons or brains at all. Just make a whole 
 person out of person stuff - like as a thought experiment suppose there is 
 some stuff X which makes things that human beings think is another human 
 being. Like marzipan. We can put the right pheromones in it and dress it up 
 nice, and according to the thought experiment, let's say that works.   

 You aren't allowed to deny this because then you don't understand the 
 thought experiment, see? Don't you get it? You have to accept this flawed 
 pretext to have a discussion that I will engage in now. See how it works? 
 Now we can talk for six or eight months about how human marzipan is 
 inevitable because it wouldn't make sense if you replaced a city gradually 
 with marzipan people that New York would gradually fade into less of a New 
 York or that New York becomes suddenly absent. It's a fallacy. The premise 
 screws up the result.

Re: Re: Solipsism = 1p

2012-10-21 Thread Roger Clough


On 20 Oct 2012, at 13:55, Roger Clough wrote: 

 Hi Bruno Marchal 
 
 
 I think if you converse with a real person, he has to 
 have a body, or at least vocal cords or the ability to write. 

BRUNO:  Not necessarily. His brain can be in a vat, and then I talk to him by  
giving him a virtual body in a virtual environment. 

I can also, in principle, talk with only his brain, by sending the  
message through the auditory peripheral system, or through the brain  
stem, and decoding the nervous path acting on the motor vocal cords. 

ROGER: I forget what my gripe was.  This sounds OK.

 
 As to conversing (interacting) with a computer, not sure, but  
 doubtful: 
 for example how could it taste a glass of wine to tell good wine 
 from bad ? 

BRUNO: I just answered this. Machines become better than humans at smelling  
and tasting, but are plausibly far from dogs' and cats' competence. 

ROGER:  OK, but computers can't experience anything,
it would be simulated experience.  Not arbitrarily available.


 Same is true of a candidate possible zombie person. 

BRUNO:  Keep in mind that zombie, here, is a technical term. By definition it  
behaves like a human. No humans at all can tell the difference. Only  
God knows, if you want. 

ROGER: I  claim that it is impossible for any kind of zombie
that has no mind to act like a human. IMHO  that would
be an absurdity, because without a mind you cannot know
anything.  You would run into walls, for example, and
couldn't know what to do in any event. Etc. 
You couldn't understand language.

Bruno 



 
 
 Roger Clough, rclo...@verizon.net 
 10/20/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-19, 14:09:59 
 Subject: Re: Solipsism = 1p 
 
 
 On 18 Oct 2012, at 20:05, Roger Clough wrote: 
 
 Hi Bruno Marchal 
 
 I think you can tell if 1p isn't just a shell 
 by trying to converse with it. If it can 
 converse, it's got a mind of its own. 
 
 I agree with that. It has a mind, and it has a soul (but it has no real 
 body; I can argue this follows from comp). 
 
 When you attribute 1p to another, you attribute to a shell the ability to 
 manifest a soul, or a first person, a knower. 
 
 Above a threshold of complexity, or reflexivity (Löbianity), a 
 universal number gets a bigger inside view than what it can ever see 
 outside. 
 
 Bruno 
 
 
 
 
 
 
 
 
 Roger Clough, rclo...@verizon.net 
 10/18/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-17, 13:36:13 
 Subject: Re: Solipsism = 1p 
 
 
 On 17 Oct 2012, at 13:07, Roger Clough wrote: 
 
 Hi Bruno 
 
 Solipsism is a property of 1p= Firstness = subjectivity 
 
 OK. And non-solipsism is about attributing 1p to others, which needs 
 some independent 3p reality you can bet on, for not being only part 
 of yourself. Be it a God, or a physical universe, or an arithmetical 
 reality. 
 
 Bruno 
 
 
 
 
 
 Roger Clough, rclo...@verizon.net 
 10/17/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Alberto G. Corona 
 Receiver: everything-list 
 Time: 2012-10-16, 09:55:41 
 Subject: Re: I believe that comp's requirement is one of as if 
 rather than is 
 
 
 
 
 
 2012/10/11 Bruno Marchal 
 
 
 On 10 Oct 2012, at 20:13, Alberto G. Corona wrote: 
 
 
 2012/10/10 Bruno Marchal : 
 
 
 On 09 Oct 2012, at 18:58, Alberto G. Corona wrote: 
 
 
 It may be a zombie or not. I can't know. 
 
 The same applies to other persons. It may be that the world is made of 
 zombie-actors that try to cheat me, but I have a hardcoded belief in 
 the conventional thing. Maybe it is, because otherwise I would act 
 in strange and self-destructive ways. I would act as a paranoiac and, 
 after that, as a psychopath (since they are not humans). That would not be 
 good for my success in society. Then I doubt that I would have any 
 surviving descendant that would develop a zombie-solipsist 
 epistemology. 
 
 However, there are people that believe these strange things. Some 
 autists do not recognize humans as beings like them. Some psychopaths 
 too, in a different way. There is no autistic or psychopathic 
 epistemology because they are not functional enough to make societies 
 with universities and philosophers. That is the whole point of 
 evolutionary epistemology. 
 
 
 
 
 If comp leads to solipsism, I will apply for being a plumber. 
 
 I don't bet or believe in solipsism. 
 
 But you were saying that a *conscious* robot can lack a soul. See 
 the 
 quote just below. 
 
 That is what I don't understand. 
 
 Bruno 
 
 
 
 I think that it is not comp that leads to solipsism but any 
 existential stance that only accepts what is certain and discards what 
 is only belief based on conjectures. 
 
 It can go no further than "cogito ergo sum". 

Re: Re: Solipsism = 1p

2012-10-21 Thread Craig Weinberg


On Sunday, October 21, 2012 3:39:11 PM UTC-4, rclough wrote:



 BRUNO:  Keep in mind that zombie, here, is a technical term. By definition 
 it   
 behaves like a human. No humans at all can tell the difference. Only   
 God knows, if you want. 

 ROGER: I  claim that it is impossible for any kind of zombie 
 that has no mind to act like a human. IMHO  that would 
 be an absurdity, because without a mind you cannot know 
 anything.  You would run into walls, for example, and 
 couldn't know what to do in any event. Etc. 
 You couldn't understand language. 


Roger I agree that your intuition is right - a philosophical zombie cannot 
exist in reality, but not for the reasons you are coming up with. Anything 
can be programmed to act like a human in some level of description. A 
scarecrow may act like a human in the eyes of a crow - well enough that it 
might be less likely to land nearby. You can make robots which won't run 
into walls or chatbots which respond to some range of vocabulary and 
sentence construction. The idea behind philosophical zombies is that we 
assume that there is nothing stopping us in theory from assembling all of 
the functions of a human being as a single machine, and that such a 
machine, it is thought, will either have some kind of human-like 
experience or else it would have to have no experience.

The absent qualia, fading qualia paper is about a thought experiment which 
tries to take the latter scenario seriously from the point of view of a 
person who is having their brain gradually taken over by these substitute 
sub-brain functional units. Would they see blue as being less and less blue 
as more of their brain is replaced, or would blue just suddenly disappear 
at some point? Each one seems absurd given that the sum of the remaining 
brain functions plus the sum of the replaced brain functions, must, by 
definition of the thought experiment, equal no change in observed behavior.

This is my response to this thought experiment to Stathis:

*Stathis: In a thought experiment we can say that the imitation stimulates 
the *
*surrounding neurons in the same way as the original.* 

Craig: Then the thought experiment is garbage from the start. It begs the 
question. Why not just say we can have an imitation human being that 
stimulates the surrounding human beings in the same way as the original? 
Ta-da! That makes it easy. Now all we need to do is make a human being that 
stimulates their social matrix in the same way as the original and we have 
perfect AI without messing with neurons or brains at all. Just make a whole 
person out of person stuff - like as a thought experiment suppose there is 
some stuff X which makes things that human beings think is another human 
being. Like marzipan. We can put the right pheromones in it and dress it up 
nice, and according to the thought experiment, let’s say that works. 

You aren’t allowed to deny this because then you don’t understand the 
thought experiment, see? Don’t you get it? You have to accept this flawed 
pretext to have a discussion that I will engage in now. See how it works? 
Now we can talk for six or eight months about how human marzipan is 
inevitable because it wouldn’t make sense if you replaced a city gradually 
with marzipan people that New York would gradually fade into less of a New 
York or that New York becomes suddenly absent. It’s a fallacy. The premise 
screws up the result.

Craig




Re: Re: Solipsism = 1p

2012-10-20 Thread Roger Clough
Hi Bruno Marchal  


I think if you converse with a real person, he has to 
have a body, or at least vocal cords or the ability to write.

As to conversing (interacting) with a computer, I am not sure, but doubtful:
for example, how could it taste a glass of wine to tell good wine
from bad? The same is true of a candidate possible zombie person.

 
Roger Clough, rclo...@verizon.net 
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-19, 14:09:59 
Subject: Re: Solipsism = 1p 


On 18 Oct 2012, at 20:05, Roger Clough wrote: 

 Hi Bruno Marchal 
 
 I think you can tell if 1p isn't just a shell 
 by trying to converse with it. If it can 
 converse, it's got a mind of its own. 

I agree with that. It has a mind, and it has a soul (but it has no real  
body; I can argue this follows from comp). 

When you attribute 1p to another, you attribute to a shell the ability to  
manifest a soul, or a first person, a knower. 

Above a threshold of complexity, or reflexivity (Löbianity), a  
universal number gets a bigger inside view than what it can ever see  
outside. 

Bruno 






 
 
 Roger Clough, rclo...@verizon.net 
 10/18/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-17, 13:36:13 
 Subject: Re: Solipsism = 1p 
 
 
 On 17 Oct 2012, at 13:07, Roger Clough wrote: 
 
 Hi Bruno 
 
 Solipsism is a property of 1p= Firstness = subjectivity 
 
 OK. And non-solipsism is about attributing 1p to others, which needs 
 some independent 3p reality you can bet on, for not being only part 
 of yourself. Be it a God, or a physical universe, or an arithmetical 
 reality. 
 
 Bruno 
 
 
 
 
 
 Roger Clough, rclo...@verizon.net 
 10/17/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Alberto G. Corona 
 Receiver: everything-list 
 Time: 2012-10-16, 09:55:41 
 Subject: Re: I believe that comp's requirement is one of as if 
 rather than is 
 
 
 
 
 
 2012/10/11 Bruno Marchal 
 
 
 On 10 Oct 2012, at 20:13, Alberto G. Corona wrote: 
 
 
 2012/10/10 Bruno Marchal : 
 
 
 On 09 Oct 2012, at 18:58, Alberto G. Corona wrote: 
 
 
 It may be a zombie or not. I can't know. 
 
 The same applies to other persons. It may be that the world is made of 
 zombie-actors that try to cheat me, but I have a hardcoded belief in 
 the conventional thing. Maybe it is, because otherwise I would act 
 in strange and self-destructive ways. I would act as a paranoiac and, 
 after that, as a psychopath (since they are not humans). That would not be 
 good for my success in society. Then I doubt that I would have any 
 surviving descendant that would develop a zombie-solipsist 
 epistemology. 
 
 However, there are people that believe these strange things. Some 
 autists do not recognize humans as beings like them. Some psychopaths 
 too, in a different way. There is no autistic or psychopathic 
 epistemology because they are not functional enough to make societies 
 with universities and philosophers. That is the whole point of 
 evolutionary epistemology. 
 
 
 
 
 If comp leads to solipsism, I will apply for being a plumber. 
 
 I don't bet or believe in solipsism. 
 
 But you were saying that a *conscious* robot can lack a soul. See 
 the 
 quote just below. 
 
 That is what I don't understand. 
 
 Bruno 
 
 
 
 I think that it is not comp that leads to solipsism but any 
 existential stance that only accepts what is certain and discards what 
 is only belief based on conjectures. 
 
 It can go no further than "cogito ergo sum". 
 
 
 
 
 OK. But that has nothing to do with comp. That would conflate the 8 
 person points of view into only one of them (the feeler, probably). Only the 
 feeler is that solipsist, at the level where he feels, but the 
 machine's self manages all the different points of view, and the living 
 solipsist (each of us) is not mandated to defend the solipsist 
 doctrine (that he is the only one existing); he is the only one he can 
 feel, that's all. That does not imply the non-existence of others 
 and other things. 
 
 
 That presupposes a lot of things that I do not take for granted. I have 
 to accept my beliefs as such beliefs to be at the same time rational 
 and functional. With respect to others' consciousness, whether of 
 humans or robots, I can only have faith. No matter if I accept that 
 this is a matter of faith or not. 
 I still don't see what you mean by consciousness without a soul. 
 
 Bruno 
 
 
 
 
 
 
 
 
 
 
 
 
 2012/10/9 Bruno Marchal : 
 
 
 
 On 09 Oct 2012, at 13:29, Alberto G. Corona wrote: 
 
 
 But still, after this reasoning, I doubt that the self-conscious 
 philosopher robot has the kind of thing, call it a soul, that I 
 have. 
 
 You mean it is a zombie? 
 
 I can't conceive consciousness without a soul. Even if only the 
 universal one. So I am not sure what you mean by soul. 

Re: Re: Solipsism = 1p

2012-10-18 Thread Roger Clough
Hi Bruno Marchal 

 I think you can tell if 1p isn't just a shell
by trying to converse with it. If it can
converse, it's got a mind of its own.


Roger Clough, rclo...@verizon.net 
10/18/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-17, 13:36:13 
Subject: Re: Solipsism = 1p 


On 17 Oct 2012, at 13:07, Roger Clough wrote: 

 Hi Bruno 
 
 Solipsism is a property of 1p= Firstness = subjectivity 

OK. And non-solipsism is about attributing 1p to others, which needs  
some independent 3p reality you can bet on, for not being only part  
of yourself. Be it a God, or a physical universe, or an arithmetical  
reality. 

Bruno 




 
 Roger Clough, rclo...@verizon.net 
 10/17/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Alberto G. Corona 
 Receiver: everything-list 
 Time: 2012-10-16, 09:55:41 
 Subject: Re: I believe that comp's requirement is one of as if  
 rather than is 
 
 
 
 
 
 2012/10/11 Bruno Marchal 
 
 
 On 10 Oct 2012, at 20:13, Alberto G. Corona wrote: 
 
 
 2012/10/10 Bruno Marchal : 
 
 
 On 09 Oct 2012, at 18:58, Alberto G. Corona wrote: 
 
 
 It may be a zombie or not. I can't know. 
 
 The same applies to other persons. It may be that the world is made of 
 zombie-actors that try to cheat me, but I have a hardcoded belief in 
 the conventional thing. Maybe it is, because otherwise I would act 
 in strange and self-destructive ways. I would act as a paranoiac and, 
 after that, as a psychopath (since they are not humans). That would not be 
 good for my success in society. Then I doubt that I would have any 
 surviving descendant that would develop a zombie-solipsist 
 epistemology. 
 
 However, there are people that believe these strange things. Some 
 autists do not recognize humans as beings like them. Some psychopaths 
 too, in a different way. There is no autistic or psychopathic 
 epistemology because they are not functional enough to make societies 
 with universities and philosophers. That is the whole point of 
 evolutionary epistemology. 
 
 
 
 
 If comp leads to solipsism, I will apply for being a plumber. 
 
 I don't bet or believe in solipsism. 
 
 But you were saying that a *conscious* robot can lack a soul. See  
 the 
 quote just below. 
 
 That is what I don't understand. 
 
 Bruno 
 
 
 
 I think that it is not comp that leads to solipsism but any 
 existential stance that only accepts what is certain and discards what 
 is only belief based on conjectures. 
 
 It can go no further than "cogito ergo sum". 
 
 
 
 
 OK. But that has nothing to do with comp. That would conflate the 8  
 person points of view into only one of them (the feeler, probably). Only the  
 feeler is that solipsist, at the level where he feels, but the  
 machine's self manages all the different points of view, and the living  
 solipsist (each of us) is not mandated to defend the solipsist  
 doctrine (that he is the only one existing); he is the only one he can  
 feel, that's all. That does not imply the non-existence of others  
 and other things. 
 
 
 That presupposes a lot of things that I do not take for granted. I have  
 to accept my beliefs as such beliefs to be at the same time rational  
 and functional. With respect to others' consciousness, whether of  
 humans or robots, I can only have faith. No matter if I accept that  
 this is a matter of faith or not. 
 I still don't see what you mean by consciousness without a soul. 
 
 Bruno 
 
 
 
 
 
 
 
 
 
 
 
 
 2012/10/9 Bruno Marchal : 
 
 
 
 On 09 Oct 2012, at 13:29, Alberto G. Corona wrote: 
 
 
 But still, after this reasoning, I doubt that the self-conscious 
 philosopher robot has the kind of thing, call it a soul, that I have. 
 
 You mean it is a zombie? 
 
 I can't conceive consciousness without a soul. Even if only the 
 universal one. 
 So I am not sure what you mean by soul. 
 
 Bruno 
 
 
 http://iridia.ulb.ac.be/~marchal/ 
 
 
 
 
 
 
 
 
 --  
 Alberto. 
 
 
 
 
 http://iridia.ulb.ac.be/~marchal/ 
 
 
 