Re: Re: A test for solipsism

2012-10-20 Thread Roger Clough
Hi Bruno Marchal 

In the definition of a p-zombie below, it says that 
a p-zombie cannot experience qualia, and qualia
are what the senses tell you. The mind then transforms
what is sensed into a sensation. The sense of red
is what the body gives you; the sensation of red 
is what the mind transforms that into. Our mind
can also recall past sensations of red to compare
the new one with and give it the name red, which a real
person can identify as, e.g., a red traffic light
and stop. A zombie would not stop (I am setting aside
the fact that red and green lights are in different
positions). 
That would be a test of zombieness.

Roger Clough, rclo...@verizon.net 
10/20/2012 
Forever is a long time, especially near the end. -Woody Allen 

- Receiving the following content - 
From: Bruno Marchal 
Receiver: everything-list 
Time: 2012-10-19, 03:47:51 
Subject: Re: A test for solipsism 

On 17 Oct 2012, at 19:12, Roger Clough wrote: 
 Hi Bruno Marchal 
 
 Sorry, I lost the thread on the doctor, and don't know what Craig 
 believes about the p-zombie. 
 
 http://en.wikipedia.org/wiki/Philosophical_zombie 
 
 A philosophical zombie or p-zombie in the philosophy of mind and 
 perception is a hypothetical being 
 that is indistinguishable from a normal human being except in that 
 it lacks conscious experience, qualia, or sentience.[1] When a 
 zombie is poked with a sharp object, for example, it does not feel 
 any pain though it behaves 
 exactly as if it does feel pain (it may say "ouch" and recoil from 
 the stimulus, or tell us that it is in intense pain). 
 
 My guess is that this is the solipsism issue, to which I would say 
 that if it has no mind, it cannot converse with you, 
 which would be a test for solipsism -- one I just now found in 
 typing the first part of this sentence. 
Solipsism makes everyone a zombie except you. 
But in some contexts people might conceive that zombies exist 
without making everyone a zombie. Craig believes that computers, even if 
they behaved like conscious individuals, would be zombies, but he is 
no solipsist. 
There is no test for solipsism, nor for zombieness. By definition, 
almost. A zombie behaves exactly like a human being. There are no 3p 
features that you could use at all to make a direct test. Now a theory 
which admits zombies can have other features which might be testable, 
and so some indirect tests are logically conceivable, relative to 
some theory. 
Bruno 



 
 
 Roger Clough, rclo...@verizon.net 
 10/17/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-17, 08:57:36 
 Subject: Re: Is consciousness just an emergent property of 
 overly complex computations? 
 
 
 
 
 On 16 Oct 2012, at 15:33, Stephen P. King wrote: 
 
 
 On 10/16/2012 9:20 AM, Roger Clough wrote: 
 
 Hi Stephen P. King 
 
 Thanks. My mistake was to say that P's position is that 
 consciousness arises at (or above?) 
 the level of noncomputability. He just seems to 
 say that intuition does. But that just seems 
 to be a conjecture of his. 
 
 
 Roger Clough, rclo...@verizon.net 
 10/16/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 Hi Roger, 
 
 IMHO, computability can only capture at most a simulation of the 
 content of consciousness, but we can deduce a lot from that ... 
 
 
 
 So you do say no to the doctor? And you do follow Craig on the 
 existence of p-zombie? 
 
 
 Bruno 
 
 
 
 
 http://iridia.ulb.ac.be/~marchal/ 
 
 -- 
 You received this message because you are subscribed to the Google 
 Groups Everything List group. 
 To post to this group, send email to everything-list@googlegroups.com. 
 To unsubscribe from this group, send email to 
 everything-list+unsubscr...@googlegroups.com 
 . 
 For more options, visit this group at 
 http://groups.google.com/group/everything-list?hl=en 
 . 
 
http://iridia.ulb.ac.be/~marchal/ 





Sense and sensation

2012-10-20 Thread Roger Clough
Hi Bruno Marchal 

Obviously, my statement wasn't very clear.

All living things can sense their environments. 
Plants sometimes turn toward the light
and know night from day. I don't know
if they have the sensation of light, which would be
a clear indication of something produced in the 
mind by consciousness. The degree to which 
a plant can do that would be the degree to which it is conscious.
I would say that a plant's consciousness
would be more like the consciousness we have
when we dream. But that's just a speculation.
 



Roger Clough, rclo...@verizon.net 
10/20/2012 
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content - 
From: Bruno Marchal 
Receiver: everything-list 
Time: 2012-10-18, 10:39:33 
Subject: Re: Continuous Game of Life 


On 17 Oct 2012, at 19:19, Roger Clough wrote: 

 Hi Bruno Marchal 
 
 IMHO all life must have some degree of consciousness 
 or it cannot perceive its environment. 

Are you sure? 

Would you say that plants are conscious? I do think so, but I am 
not sure they have self-consciousness. 

Self-consciousness accelerates information treatment, and might 
come from the need for this in self-moving living creatures having 
some important mass. 

"All life" is a very fuzzy notion. 

Bruno 






 
 
 Roger Clough, rclo...@verizon.net 
 10/17/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-17, 10:13:37 
 Subject: Re: Continuous Game of Life 
 
 
 
 
 On 16 Oct 2012, at 18:37, John Clark wrote: 
 
 
 On Mon, Oct 15, 2012 at 2:40 PM, meekerdb wrote: 
 
 
 If consciousness doesn't do anything then Evolution can't see it, 
 so how and why did Evolution produce it? The fact that you have no 
 answer to this means your ideas are fatally flawed. 
 
 I don't see this as a *fatal* flaw. Evolution, as you've noted, is 
 not a paradigm of efficient design. Consciousness might just be a 
 side-effect 
 
 But that's exactly what I've been saying for months: unless Darwin 
 was dead wrong, consciousness must be a side effect of intelligence, 
 so an intelligent computer must be a conscious computer. And I don't 
 think Darwin was dead wrong. 
 
 
 
 
 
 Darwin does not need to be wrong. The role of consciousness can be deeper, 
 in the evolution/selection of the laws of physics from the 
 coherent dreams (computations from the 1p view) in arithmetic. 
 
 
 Bruno 
 
 
 
 
 http://iridia.ulb.ac.be/~marchal/ 
 
 

http://iridia.ulb.ac.be/~marchal/ 







What's the difference between sense and sensation ?

2012-10-20 Thread Roger Clough

The dictionary makes little or no differentiation between sense and sensation,
but there is a difference in psychology.  Senses come from the body; 
sensations are what the mind makes of the sensual input. Psychology has 
this to say:

http://en.wikipedia.org/wiki/Sensation_%28psychology%29

 In psychology, sensation and perception are stages of processing of the 
senses in human and animal systems, 
such as vision, auditory, vestibular, and pain senses. These topics are 
considered part of psychology, and not anatomy or physiology, 
because processes in the brain so greatly affect the perception of a stimulus. 
Included in this topic is the study of illusions such as 
motion aftereffect, color constancy, auditory illusions, and depth perception. 

Sensation is the function of the low-level biochemical and neurological events 
that begin with the impinging of a 
stimulus upon the receptor cells of a sensory organ. It is the detection of the 
elementary properties of a stimulus.[1] 

Perception is the mental process or state that is reflected in statements like 
"I see a uniformly blue wall",
representing awareness or understanding of the real-world cause of the sensory 
input. The goal of sensation [I think they meant to say sense] is 
detection; the goal of perception is to create useful information about the 
surroundings.[2] 

In other words, sensations are the first stages in the functioning of senses to 
represent stimuli from the
 environment, and perception is a higher brain function concerned with interpreting 
events and objects in the world.[3] Stimuli from the environment are transformed 
into neural signals, which are then interpreted by the brain
through a process called transduction. Transduction can be likened to a bridge 
connecting sensation to perception. 

Gestalt theorists believe that with the two together a person experiences a 
personal reality that is greater than the parts. 



Roger Clough, rclo...@verizon.net 
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen




a criticism of comp

2012-10-20 Thread Roger Clough
Hi Bruno Marchal  

Comp cannot give subjective content; it can only provide an
objective simulation of the BEHAVIOR of a person (or his physical brain).
This behavioral information can be dealt with by the 
philosophy of mind called functionalism:

http://plato.stanford.edu/entries/functionalism/

Functionalism in the philosophy of mind is the doctrine that what makes 
something a mental 
state of a particular type does not depend on its internal constitution, but 
rather on the way 
it functions, or the role it plays, in the system of which it is a part. This 
doctrine is rooted in 
Aristotle's conception of the soul, and has antecedents in Hobbes's conception 
of the mind as 
a “calculating machine”, but it has become fully articulated (and popularly 
endorsed) only in 
the last third of the 20th century. Though the term ‘functionalism’ is used to 
designate a variety 
of positions in a variety of other disciplines, including psychology, 
sociology, economics, and architecture, this entry focuses exclusively on 
functionalism as a philosophical thesis about the nature of mental states.

A criticism of functionalism, and hence of comp, is that if one only
considers a person's physical behavior (and possibly, but not necessarily, his brain's 
behavior),  
the person can behave in a certain way but have a different mental content.




Roger Clough, rclo...@verizon.net 
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-19, 03:31:54 
Subject: Re: I believe that comp's requirement is one of "as if" rather than "is" 




On 17 Oct 2012, at 15:28, Stephen P. King wrote: 


On 10/17/2012 8:45 AM, Bruno Marchal wrote: 


On 16 Oct 2012, at 15:00, Stephen P. King wrote:  


On 10/16/2012 8:23 AM, Craig Weinberg wrote:  

On Tuesday, October 16, 2012 4:02:44 AM UTC-4, stathisp wrote:  



There is of course the idea that the universe is actually a simulation but that 
is more controversial.  

A tempting idea until we question what it is a simulation of?  


We can close this by considering when a simulation of a real thing is 
indistinguishable from the real thing!  


What law states that computations exist ab initio, but the capacity to 
experience and participate in a simulated world does not?  


Good point! Why not both existing ab initio?  


But they exist ab initio in the arithmetical truth. So with comp, we can 
postulate only the numbers, or the computations (they are ontologically 
equivalent); then consciousness is a semantical fixed point, existing for 
arithmetical reasons, yet not describable in direct arithmetical terms (like 
truth, by Tarski, or knowledge, by Scott-Montague). The Theaetetical "Bp & p" is 
very appealing in that setting, as it is not arithmetically definable, yet 
makes sense in purely arithmetical terms for each p in the language of the 
machine (arithmetic, say).  

So we don't have to postulate consciousness to explain why machines will 
correctly believe in, and develop discourse about, some truths that they can 
know, and that they can also know to be non-justifiable, non-sharable, and 
possibly invariant for digital self-transformation, etc.  

Bruno  


Hi Bruno, 

We seem to have a fundamental disagreement on what constitutes arithmetic 
truth. In my thinking, the truth value of a proposition is not separable from 
the ability to evaluate the proposition 


I agree for mundane truth, but not for the truth we can accept to build a 
fundamental theory. 


If you accept comp, you know that the ability to evaluate a proposition will be 
explained in terms of a functioning machine, and this is built on elementary 
arithmetical truth. So, with comp, your statement would make comp circular. 


Bruno 








(as Jaakko Hintikka considers) and thus is not some Platonic form that has some 
ontological weight in an eternal pre-established harmony way. I do not 
believe that our reality is merely some pre-defined program since I am claiming 
that the pre-definition is an NP-Hard problem that must be solved prior to 
its use.  
The best fit for me is an infinity of 1p, each of which is a bundle of infinite 
computations that eternally interact with each other (via bisimulation), and 
not some frozen and pre-existing Being. My philosophy is based on that of 
Heraclitus and not that of Parmenides. Being is defined in my thinking as the 
automorphisms within Becoming; thus what is stable and fixed is just those 
things that relatively do not change within an eternally evolving Universe. 


--  
Onward! 

Stephen 


Re: Re: Solipsism = 1p

2012-10-20 Thread Roger Clough
Hi Bruno Marchal  


I think if you converse with a real person, he has to 
have a body, or at least vocal cords or the ability to write.

As for conversing (interacting) with a computer, I'm not sure, but doubtful:
for example, how could it taste a glass of wine to tell good wine
from bad? The same is true of a candidate possible zombie person.

 
Roger Clough, rclo...@verizon.net 
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-19, 14:09:59 
Subject: Re: Solipsism = 1p 


On 18 Oct 2012, at 20:05, Roger Clough wrote: 

 Hi Bruno Marchal 
 
 I think you can tell if 1p isn't just a shell 
 by trying to converse with it. If it can 
 converse, it's got a mind of its own. 

I agree with that. It has a mind, and it has a soul (but it has no real  
body; I can argue this follows from comp). 

When you attribute 1p to another, you attribute to a shell the ability to  
manifest a soul, or a first person, a knower. 

Above a threshold of complexity, or reflexivity (Löbianity), a  
universal number gets a bigger inside view than what it can ever see  
outside. 

Bruno 






 
 
 Roger Clough, rclo...@verizon.net 
 10/18/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-17, 13:36:13 
 Subject: Re: Solipsism = 1p 
 
 
 On 17 Oct 2012, at 13:07, Roger Clough wrote: 
 
 Hi Bruno 
 
 Solipsism is a property of 1p= Firstness = subjectivity 
 
 OK. And non-solipsism is about attributing 1p to others, which needs 
 some independent 3p reality you can bet on, for not being only part 
 of yourself. Be it a God, or a physical universe, or an arithmetical 
 reality. 
 
 Bruno 
 
 
 
 
 
 Roger Clough, rclo...@verizon.net 
 10/17/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Alberto G. Corona 
 Receiver: everything-list 
 Time: 2012-10-16, 09:55:41 
 Subject: Re: I believe that comp's requirement is one of "as if" 
 rather than "is" 
 
 
 
 
 
 2012/10/11 Bruno Marchal 
 
 
 On 10 Oct 2012, at 20:13, Alberto G. Corona wrote: 
 
 
 2012/10/10 Bruno Marchal : 
 
 
 On 09 Oct 2012, at 18:58, Alberto G. Corona wrote: 
 
 
 It may be a zombie or not. I can't know. 
 
 The same applies to other persons. It may be that the world is made  
 of 
 zombie-actors that try to cheat me, but I have a hardcoded belief in 
 the conventional thing. Maybe it is because otherwise I would act 
 in strange and self-destructive ways. I would act as a paranoiac,  
 after 
 that, as a psychopath (since they are not humans). That would not be 
 good for my success in society. Then, I doubt that I would have any 
 surviving descendant that would develop a zombie-solipsist 
 epistemology. 
 
 However, there are people that believe these strange things. Some 
 autists do not recognize humans as beings like them. Some psychopaths 
 too, in a different way. There is no autistic or psychopathic 
 epistemology because they are not functional enough to make societies 
 with universities and philosophers. That is the whole point of 
 evolutionary epistemology. 
 
 
 
 
 If comp leads to solipsism, I will apply for being a plumber. 
 
 I don't bet or believe in solipsism. 
 
 But you were saying that a *conscious* robot can lack a soul. See 
 the 
 quote just below. 
 
 That is what I don't understand. 
 
 Bruno 
 
 
 
 I think that it is not comp that leads to solipsism but any 
 existential stance that only accepts what is certain and discards what 
 is only belief based on conjectures. 
 
 It can go no further than "cogito ergo sum". 
 
 
 
 
 OK. But that has nothing to do with comp. That would conflate the 8 
 person points of view into only one of them (the feeler, probably). Only the 
 feeler is that solipsist, at the level where he feels, but the 
 machine's self manages all the different points of view, and the living 
 solipsist (each of us) is not mandated to defend the solipsist 
 doctrine (that he is the only one existing); he is the only one he can 
 feel, that's all. That does not imply the non-existence of others 
 and other things. 
 
 
 That presupposes a lot of things that I do not take for granted. I have 
 to accept my beliefs as such, as beliefs, to be at the same time rational 
 and functional. With respect to the consciousness of others, be they 
 humans or robots, I can only have faith. No matter if I accept that 
 this is a matter of faith or not. 
 
 I still don't see what you mean by consciousness without a soul. 
 
 Bruno 
 
 
 
 
 
 
 
 
 
 
 
 
 2012/10/9 Bruno Marchal : 
 
 
 
 On 09 Oct 2012, at 13:29, Alberto G. Corona wrote: 
 
 
 But still, after this reasoning, I doubt that the self-conscious 
 philosopher robot has the kind of thing, call it a soul, that I  
 have. 
 
 You mean it is a zombie? 
 
 I can't conceive consciousness without a 

The circular logic of Dennett and other materialists

2012-10-20 Thread Roger Clough
Hi Bruno Marchal  

This is also where I run into trouble with the p-zombie
definition of what a zombie is.  It has no mind,
but it can still behave just as a real person would.

But that assumes, as the materialists do, that the mind
has no necessary function. Which is nonsense, at least
to a realist. 

Thus Dennett claims that a real candidate person
does not need to have a mind. But that's built into his
definition of what a real person is. That's circular logic.



Roger Clough, rclo...@verizon.net 
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-19, 14:30:47 
Subject: Re: A test for solipsism 


On 19 Oct 2012, at 11:41, Roger Clough wrote: 

 Hi Russell Standish 
 
 Not so. A zombie can't converse with you, a real person can. 


By definition a (philosophical) zombie can converse with you. A zombie  
is an entity assumed not to have consciousness, nor any private  
subjective life, and which behaves *exactly* like a human being. 

Bruno 



 
 
 Roger Clough, rclo...@verizon.net 
 10/19/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Russell Standish 
 Receiver: everything-list 
 Time: 2012-10-18, 17:48:57 
 Subject: Re: Re: A test for solipsism 
 
 
 On Thu, Oct 18, 2012 at 01:58:29PM -0400, Roger Clough wrote: 
 Hi Stathis Papaioannou 
 
 If a zombie really has a mind it could converse with you. 
 If not, not. 
 
 
 If true, then you have demonstrated the non-existence of zombies 
 (zombies, by definition, are indistinguishable from real people). 
 
 However, somehow I remain unconvinced by this line of reasoning... 
 
 --  
 
  
 Prof Russell Standish Phone 0425 253119 (mobile) 
 Principal, High Performance Coders 
 Visiting Professor of Mathematics hpco...@hpcoders.com.au 
 University of New South Wales http://www.hpcoders.com.au 
  
 
 
 

http://iridia.ulb.ac.be/~marchal/ 







Re: Re: Re: Re: Why self-organization programs cannot be alive

2012-10-20 Thread Roger Clough
Hi Russell Standish 

But the robot plants could not grow more robot structure
for free, nor produce seeds, nor produce beautiful sweet-smelling
flowers. If they could produce more robot structure,
we ought to use them to produce more manufacturing capability
(including producing more chips for free).

Roger Clough
 
Receiver: everything-list 
Time: 2012-10-19, 19:26:04 
Subject: Re: Re: Re: Why self-organization programs cannot be alive 


On Fri, Oct 19, 2012 at 05:39:58AM -0400, Roger Clough wrote: 
 Hi Russell Standish 
 
 Bénard cells are mechanical, not caused by a self as agent but by 
 the laws of physics. They may be self-organizing, but there's no self 
 to organize things. 
 
 Photosynthesis is a life process, not mechanical, because it does things no 
 computer 
 program can do, namely turn light into energy, and CO2 into O2. 

The former can be done with traditional photovoltaic cells made from silicon. 

As for the latter, there are a variety of ways of doing this 
mechanically (ie chemical, but not biological). See 
http://en.wikipedia.org/wiki/Artificial_photosynthesis for more 
details. Also suggested was the following: 

Alternatively, you could heat CO2 over a catalyst of iron doped 
zeolite and hydrogen to produce water and ethylene. A nonthermal 
plasma applied to ethylene will generate carbon soot and recover the 
hydrogen. Electrolysis of water gives back the extra hydrogen and 
produces oxygen. (Hey! I didn't say it was efficient.) It might be 
useful to someone on Mars who has endless power in the form of a 
nuclear reactor and plenty of CO2 but not so much oxygen. 

(see http://www.physicsforums.com/archive/index.php/t-154820.html) 
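As a sanity check, the cycle quoted above does close on hydrogen and carbon. The original gives no stoichiometry, so the following is a hypothetical mass balance assuming the simplest coefficients consistent with the description: 

```latex
% Hypothetical mass balance for the quoted CO2-recycling scheme.
% Step 1: CO2 plus hydrogen over an iron-doped zeolite -> ethylene and water
2\,\mathrm{CO_2} + 6\,\mathrm{H_2} \rightarrow \mathrm{C_2H_4} + 4\,\mathrm{H_2O}
% Step 2: nonthermal plasma cracks the ethylene, recovering hydrogen
\mathrm{C_2H_4} \rightarrow 2\,\mathrm{C} + 2\,\mathrm{H_2}
% Step 3: electrolysis returns the remaining hydrogen and yields oxygen
4\,\mathrm{H_2O} \rightarrow 4\,\mathrm{H_2} + 2\,\mathrm{O_2}
% Net: 2 CO2 -> 2 C + 2 O2, with all six H2 recycled
```

On these assumed coefficients, all six H2 come back (two from the plasma step, four from electrolysis), so hydrogen acts purely as a carrier and the net reaction is 2 CO2 -> 2 C + 2 O2, which is why the scheme needs only energy input, as the quoted poster notes. 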

I remember reading a New Scientist article on artificial 
photosynthesis. It is possible today, although not with the same 
efficiency as plants. The aim is ultimately to produce something far more 
efficient (plants aren't exactly optimal; as John Clark would say, 
they are good enough). 

 This requires intelligence, which can't be programmed, 

Why do you say that? Chloroplasts don't seem particularly 
intelligent. They produce oxygen in the presence of light and CO2, 
otherwise metabolise as a normal cell when one or other of these 
ingredients is missing. 

 since it must be free choice, even if just a wee bit. 

Even more bizarre - have you evidence of a chloroplast deciding not to 
produce oxygen when light and CO2 are present, just because it didn't 
feel like it? 

 Choice is needed because, like Maxwell's Demon, it goes against entropy. 
 

You mean the second law. No it doesn't, as the light provides plenty 
of free energy to drive the reaction. 

 Self-organization has neither a self nor intelligence, 
 since it is purely mechanical. Only life has intelligence and self. 
 

I can't object to that statement, per se:). Of course, distinguishing between 
life processes and mechanical processes is a bit dubious. Most 
scientists think that life _is_ mechanical. Someone who doesn't is 
the late Robert Rosen - but his arguments are rather difficult to 
follow, and I don't find myself in 100% agreement with them. 


-- 

 
Prof Russell Standish Phone 0425 253119 (mobile) 
Principal, High Performance Coders 
Visiting Professor of Mathematics hpco...@hpcoders.com.au 
University of New South Wales http://www.hpcoders.com.au 
 





Re: Re: Re: Re: The objective world of autopoesis

2012-10-20 Thread Roger Clough
Hi Terren Suydam 

Thanks, I have been confusing what the senses provide (which I call senses) 
with what the mind
converts them into (which I call sensations).  I don't know how Dennett could 
know "the ways things seem to us" without a mind. Apparently he thinks "seems" is a 
derogatory word.
But in the mind, everything seems.

http://en.wikipedia.org/wiki/Qualia

Qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) is a term used in philosophy to refer to individual 
instances of subjective, conscious experience. 
The term derives from a Latin word meaning "for what sort" or "what kind". 
Examples of qualia are the 
pain of a headache, the taste of wine, the experience of taking a recreational 
drug, or the perceived 
redness of an evening sky. Daniel Dennett writes that qualia is "an unfamiliar 
term for something 
that could not be more familiar to each of us: the ways things seem to us".[1] 
Erwin Schrödinger, the famous physicist, had this counter-materialist take: "The 
sensation of colour cannot be accounted for by the physicist's objective 
picture of light-waves. Could the physiologist account for it, if he had fuller 
knowledge than he has of the processes in the 
retina and the nervous processes set up by them in the optical nerve bundles 
and in the brain? I do not think so."[2] 

The importance of qualia in philosophy of mind comes largely from the fact that 
they are seen as 
posing a fundamental problem for materialist explanations of the mind-body 
problem. 
Much of the debate over their importance hinges on the definition of the term 
that is used, 
as various philosophers emphasize or deny the existence of certain features of 
qualia. 
As such, the nature and existence of qualia are controversial.


Roger Clough, rclo...@verizon.net 
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Terren Suydam  
Receiver: everything-list  
Time: 2012-10-19, 13:37:23 
Subject: Re: Re: Re: The objective world of autopoesis 


Hi Roger, 

I'm not sure your notion of quale is the one commonly held, for 
instance see http://en.wikipedia.org/wiki/Quale. 

Arguing and otherwise communicating about this stuff is hard enough 
even when everyone is using the same definition. It's impossible if 
words can mean what we want them to. 

Terren 

On Fri, Oct 19, 2012 at 5:59 AM, Roger Clough  wrote: 
 Hi Terren Suydam 
 
 IMHO a quale is the stuff of an as-of-yet-unperceived input sensory signal. 
 It is unprocessed Firstness, so I am not sure of its status. My less than certain 
 opinion is that, being unprocessed, it is not yet an experience. 
 
 IMHO nobody knows much about how that Firstness is turned into experience 
 in any detail, although Kant and Hume and Locke had some philosophical 
 remarks. Kant perhaps the most, as he added that the raw 
 signal is categorized. That might be Secondness. 
 
 However, Penrose theorized that neurons maintain a coherent 
 quantum field which, in a process he calls Orch-OR, or orchestrated 
 objective reduction, collapses 
 to produce a unit of conscious experience. 
 
 http://en.wikipedia.org/wiki/Orch-OR 
 
 Roger Clough, rclo...@verizon.net 
 10/19/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Terren Suydam 
 Receiver: everything-list 
 Time: 2012-10-18, 15:28:22 
 Subject: Re: Re: The objective world of autopoesis 
 
 
 Hi Roger, 
 
 A quale as I understand it is simply a unit of subjective 
 experience. It's a bit of an abstraction since experience does not 
 reduce to constituent units, but as a convention for talking about 
 subjective experience, I suppose it is sometimes useful to be able to 
 refer to a singular 'quale' rather than the plural qualia. Personally 
 I think we could do away with the word and not suffer much for it. 
 
 To go further and refer to qualia as raw unprocessed input signals 
 presupposes a theory, namely that it is possible to experience qualia 
 without any processing, or even that they correspond with input 
 signals. It is not necessary to imbue qualia with the baggage of a 
 particular theory to make it a useful construct for discussion. In the 
 present conversation, it would hinder our ability to understand one 
 another, as the autopoietic model cannot make sense of a phrase like 
 raw unprocessed input signals. 
 
 I would say that the autopoietic model I am considering would posit 
 that human subjective experience as we know it is the *result* of the 
 processing of the output signals produced by various neuroreceptors, 
 as they are perturbed (or not) by the environment outside the body. 
 IOW in this model it is not helpful to identify quales with the inputs 
 to the receptors, as we don't have access to whatever is perturbing 
 the receptors, due to the autopoietic closure. This is the same as 
 saying that our 

Re: RE: RE: A test for solipsism

2012-10-20 Thread Roger Clough
Hi William R. Buckley  

Thank you for reminding me that materialists 
do believe that there is a mind identical to, or
in some fashion related to, the brain.  Since I
see no possibility that one substance (mind)
can act on another substance (brain), I
don't take their concept of mind seriously,
but I have to remember that many (most) people
believe in the materialist view of mind. 


Roger Clough, rclo...@verizon.net 
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: William R. Buckley  
Receiver: everything-list  
Time: 2012-10-19, 08:42:36 
Subject: RE: RE: A test for solipsism 


 Hi William R. Buckley 
  
 You can speak to a potential test subject, 
 but it can only reply if it indeed has a mind. 

This is an assumption you make. 

 This is the Turing test, the results of which are not  
 certain. But it is the only test I can think of unless  
 you want to get into the Chinese room argument, etc. 
  
 If it does not reply, it's a zombie. 

Another assumption. In this case, you can talk to me and  
I will refuse to reply. Does that make me a zombie? 

 But just to be certain, 
 if it does, as a Turing test, I would ask a series of questions 
 a zombie (someone without a mind) would probably not know, 
 such as 
  
 1) what color are your eyes ? 
 2) What color are my eyes ? 
 3) What is your mother's name ? 
 4) How many fingers am I holding up ? 
 5) What color is a plenget ? 
 6) Who are you going to vote for in the upcoming election? 
 7) What is your birth date? 
 8) Where were you born? 
 9) How tall am I ? 
 10) Am I taller than you are ? 
 10) Do you prefer vanillaberries to Mukle pudding ? 

If one is able to fabricate (lie) with perfect recall (remembering  
all the lies), then one need not know anything in order to give you  
answers to all questions. 
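Buckley's point above can be made concrete with a small sketch (the class and its names are hypothetical, purely illustrative): an agent that fabricates an answer the first time a question is asked and memoizes it thereafter will pass any consistency-based interrogation without knowing anything at all.

```python
import random

class ConsistentFabricator:
    """Answers any question by inventing an answer once, then repeating it."""

    def __init__(self, seed=0):
        self._memory = {}                 # question -> fabricated answer
        self._rng = random.Random(seed)   # seeded so runs are reproducible

    def answer(self, question):
        # Fabricate on first contact; "perfect recall" afterwards.
        if question not in self._memory:
            self._memory[question] = "answer-%04d" % self._rng.randrange(10000)
        return self._memory[question]

z = ConsistentFabricator()
first = z.answer("What color are your eyes?")
# Re-asking yields the same lie, so repeated questioning cannot
# distinguish fabrication from genuine knowledge.
assert z.answer("What color are your eyes?") == first
```

The sketch shows only that consistency of answers is a weak test; it says nothing about whether the answerer has a mind, which is exactly Buckley's objection.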

Your thought process is muddled, Mr. Clough. 

wrb 

 etc. 
  
 Roger Clough, rclo...@verizon.net 
 10/19/2012 
 Forever is a long time, especially near the end. -Woody Allen 
  
  
 - Receiving the following content - 
 From: William R. Buckley 
 Receiver: everything-list 
 Time: 2012-10-18, 21:36:39 
 Subject: RE: A test for solipsism 
  
  
 Just because the individual holds the position that he/she is the 
 only living entity in all the universe does not imply that such a 
 person (the solipsist) is incapable of carrying on a conversation, 
 even if that conversation is with an illusion. 
  
 For instance, I have no logical reason to believe that you, Roger 
 Clough, exist. You may in fact exist, and you may in fact be a 
 figment of my imagination; logically, I cannot tell the difference. 
  
 Yet, I can exchange written dialog with you, in spite of any belief 
 I may hold regarding your existence in the physical universe. 
  
 wrb 
  
  
  -Original Message- 
  From: everything-list@googlegroups.com [mailto:everything- 
  l...@googlegroups.com] On Behalf Of Roger Clough 
  Sent: Wednesday, October 17, 2012 10:13 AM 
  To: everything-list 
  Subject: A test for solipsism 
  
  Hi Bruno Marchal 
  
  Sorry, I lost the thread on the doctor, and don't know what Craig 
  believes about the p-zombie. 
  
  http://en.wikipedia.org/wiki/Philosophical_zombie 
  
  A philosophical zombie or p-zombie in the philosophy of mind and 
  perception is a hypothetical being 
  that is indistinguishable from a normal human being except in that it 
  lacks conscious experience, qualia, or sentience.[1] When a zombie is 
  poked with a sharp object, for example, it does not feel any pain 
  though it behaves 
  exactly as if it does feel pain (it may say ouch and recoil from 
 the 
  stimulus, or tell us that it is in intense pain). 
  
  My guess is that this is the solipsism issue, to which I would say 
 that 
  if it has no mind, it cannot converse with you, 
  which would be a test for solipsism,-- which I just now found in 
 typing 
  the first part of this sentence. 
  
  
  Roger Clough, rclo...@verizon.net 
  10/17/2012 
  Forever is a long time, especially near the end. -Woody Allen 
  
  
  - Receiving the following content - 
  From: Bruno Marchal 
  Receiver: everything-list 
  Time: 2012-10-17, 08:57:36 
  Subject: Re: Is consciousness just an emergent property of 
  overlycomplexcomputations ? 
  
  
  
  
  On 16 Oct 2012, at 15:33, Stephen P. King wrote: 
  
  
  On 10/16/2012 9:20 AM, Roger Clough wrote: 
  
  Hi Stephen P. King 
  
  Thanks. My mistake was to say that P's position is that 
  consciousness, arises at (or above ?) 
  the level of noncomputability. He just seems to 
  say that intuiton does. But that just seems 
  to be a conjecture of his. 
  
  
  ugh, rclo...@verizon.net 
  10/16/2012 
  Forever is a long time, especially near the end. -Woody Allen 
  
  
  Hi Roger, 
  
  IMHO, computability can only capture at most a simulation of the 
  content of consciousness, but we can deduce a lot from that ... 
  
  
  
  So you do say no 

Re: Re: A test for solipsism

2012-10-20 Thread Alberto G. Corona
Roger
Different qualia are the result of different physical effects on the senses.
So a machine does not need qualia to distinguish between physical
effects. It only needs sensors that distinguish between them.

A sensor can detect a red light and the attached computer can stop a car.
With no problems.

http://www.gizmag.com/mercedes-benz-smart-stop-system/13122/
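Alberto's sensor-plus-controller point can be sketched in a few lines (the thresholds and function names are hypothetical, not taken from the Mercedes system he links): classifying a light by raw RGB intensity and braking on "red" is pure discrimination, with no qualia anywhere in the loop.

```python
def classify_light(rgb):
    """Crude classifier: label a light 'red', 'green', or 'unknown'
    from normalized (r, g, b) intensities in [0, 1]."""
    r, g, b = rgb
    if r > 0.6 and g < 0.4:
        return "red"
    if g > 0.6 and r < 0.4:
        return "green"
    return "unknown"

def controller(rgb, speed):
    """Return the new speed: brake to zero on red, otherwise keep going."""
    return 0.0 if classify_light(rgb) == "red" else speed

# A bright red light stops the car; a green light leaves speed unchanged.
assert controller((0.9, 0.1, 0.1), 50.0) == 0.0
assert controller((0.1, 0.9, 0.1), 50.0) == 50.0
```

Whether such a discriminator thereby *experiences* red is, of course, precisely the point in dispute in this thread.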


2012/10/20 Roger Clough rclo...@verizon.net

  Hi Bruno Marchal

 In that definition of a p-zombie below, it says that
 a p-zombie cannot experience qualia, and qualia
 are what the senses tell you. The mind then transforms
 what is sensed into a sensation. The sense of red
 is what the body gives you, the sensation of red
 is what the mind transforms that into. Our mind
 also can recall past sensations of red to compare
 it with and give it a name red, which a real
 person can identify as eg a red traffic light
 and stop. A zombie would not stop (I am not allowing
 the fact that red and green lights are in different
 positions).
 That would be a test of zombieness.

 Roger Clough, rclo...@verizon.net
 10/20/2012
 Forever is a long time, especially near the end. -Woody Allen

 - Receiving the following content -
 From: Bruno Marchal
 Receiver: everything-list
 Time: 2012-10-19, 03:47:51
 Subject: Re: A test for solipsism

 On 17 Oct 2012, at 19:12, Roger Clough wrote:
  Hi Bruno Marchal
 
  Sorry, I lost the thread on the doctor, and don't know what Craig
  believes about the p-zombie.
 
  http://en.wikipedia.org/wiki/Philosophical_zombie
 
  A philosophical zombie or p-zombie in the philosophy of mind and
  perception is a hypothetical being
  that is indistinguishable from a normal human being except in that
  it lacks conscious experience, qualia, or sentience.[1] When a
  zombie is poked with a sharp object, for example, it does not feel
  any pain though it behaves
  exactly as if it does feel pain (it may say ouch and recoil from
  the stimulus, or tell us that it is in intense pain).
 
  My guess is that this is the solipsism issue, to which I would say
  that if it has no mind, it cannot converse with you,
  which would be a test for solipsism,-- which I just now found in
  typing the first part of this sentence.
 Solipsism makes everyone zombie except you.
 But in some context some people might conceive that zombie exists,
 without making everyone zombie. Craig believes that computers, if they
 might behave like conscious individuals would be a zombie, but he is
 no solipsist.
 There is no test for solipsism, nor for zombieness. BY definition,
 almost. A zombie behaves exactly like a human being. There is no 3p
 features that you could use at all to make a direct test. Now a theory
 which admits zombie, can have other features which might be testable,
 and so some indirect test are logically conceivable, relatively to
 some theory.
 Bruno



 
 
  Roger Clough, rclo...@verizon.net
  10/17/2012
  Forever is a long time, especially near the end. -Woody Allen
 
 
  - Receiving the following content -
  From: Bruno Marchal
  Receiver: everything-list
  Time: 2012-10-17, 08:57:36
  Subject: Re: Is consciousness just an emergent property of
  overlycomplexcomputations ?
 
 
 
 
  On 16 Oct 2012, at 15:33, Stephen P. King wrote:
 
 
  On 10/16/2012 9:20 AM, Roger Clough wrote:
 
  Hi Stephen P. King
 
  Thanks. My mistake was to say that P's position is that
  consciousness, arises at (or above ?)
  the level of noncomputability. He just seems to
  say that intuiton does. But that just seems
  to be a conjecture of his.
 
 
  ugh, rclo...@verizon.net
  10/16/2012
  Forever is a long time, especially near the end. -Woody Allen
 
 
  Hi Roger,
 
  IMHO, computability can only capture at most a simulation of the
  content of consciousness, but we can deduce a lot from that ...
 
 
 
  So you do say no to the doctor? And you do follow Craig on the
  existence of p-zombie?
 
 
  Bruno
 
 
 
 
  http://iridia.ulb.ac.be/~marchal/
 
  --
  You received this message because you are subscribed to the Google
  Groups Everything List group.
  To post to this group, send email to everything-list@googlegroups.com.
  To unsubscribe from this group, send email to
 everything-list+unsubscr...@googlegroups.com
  .
  For more options, visit this group at
 http://groups.google.com/group/everything-list?hl=en
  .
 
 http://iridia.ulb.ac.be/~marchal/



Measurability is not a condition of reality.

2012-10-20 Thread Roger Clough
Hi Alberto G. Corona  

I have no problem with that; the problem I have
is that I believe that nonphysical things (things,
like Descartes' mind, not extended in space),
such as spirit, truly exist.  But to materialists
that's nonsense, because being unextended it 
can't be measured and so doesn't exist.
And life is just a unique form of matter, 
so it can be created.  And what is man but a 
bunch of atoms ?



Roger Clough, rclo...@verizon.net 
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Alberto G. Corona  
Receiver: everything-list  
Time: 2012-10-20, 08:48:39 
Subject: Re: Re: A test for solipsism 


Roger 
Different qualia are the result of different physical effects on the senses. So a 
machine does not need qualia to distinguish between physical effects. 
It only needs sensors that distinguish between them. 


A sensor can detect a red light and the attached computer can stop a car. With 
no problems. 


http://www.gizmag.com/mercedes-benz-smart-stop-system/13122/ 



2012/10/20 Roger Clough  

Hi Bruno Marchal  

In that definition of a p-zombie below, it says that  
a p-zombie cannot experience qualia, and qualia 
are what the senses tell you. The mind then transforms 
what is sensed into a sensation. The sense of red 
is what the body gives you, the sensation of red  
is what the mind transforms that into. Our mind 
also can recall past sensations of red to compare 
it with and give it a name red, which a real 
person can identify as eg a red traffic light 
and stop. A zombie would not stop (I am not allowing 
the fact that red and green lights are in different 
positions).  
That would be a test of zombieness. 
Roger Clough, rclo...@verizon.net  
10/20/2012  

Forever is a long time, especially near the end. -Woody Allen  

- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  

Time: 2012-10-19, 03:47:51  
Subject: Re: A test for solipsism  

On 17 Oct 2012, at 19:12, Roger Clough wrote:  
 Hi Bruno Marchal  
  
 Sorry, I lost the thread on the doctor, and don't know what Craig  
 believes about the p-zombie.  
  
 http://en.wikipedia.org/wiki/Philosophical_zombie  
  
 A philosophical zombie or p-zombie in the philosophy of mind and  
 perception is a hypothetical being  
 that is indistinguishable from a normal human being except in that  
 it lacks conscious experience, qualia, or sentience.[1] When a  
 zombie is poked with a sharp object, for example, it does not feel  
 any pain though it behaves  
 exactly as if it does feel pain (it may say ouch and recoil from  
 the stimulus, or tell us that it is in intense pain).  
  
 My guess is that this is the solipsism issue, to which I would say  
 that if it has no mind, it cannot converse with you,  
 which would be a test for solipsism,-- which I just now found in  
 typing the first part of this sentence.  
Solipsism makes everyone zombie except you.  
But in some context some people might conceive that zombie exists,  
without making everyone zombie. Craig believes that computers, if they  
might behave like conscious individuals would be a zombie, but he is  
no solipsist.  
There is no test for solipsism, nor for zombieness. BY definition,  
almost. A zombie behaves exactly like a human being. There is no 3p  
features that you could use at all to make a direct test. Now a theory  
which admits zombie, can have other features which might be testable,  
and so some indirect test are logically conceivable, relatively to  
some theory.  
Bruno  
  
  
 Roger Clough, rclo...@verizon.net  
 10/17/2012  
 Forever is a long time, especially near the end. -Woody Allen  
  
  
 - Receiving the following content -  
 From: Bruno Marchal  
 Receiver: everything-list  
 Time: 2012-10-17, 08:57:36  
 Subject: Re: Is consciousness just an emergent property of  
 overlycomplexcomputations ?  
  
  
  
  
 On 16 Oct 2012, at 15:33, Stephen P. King wrote:  
  
  
 On 10/16/2012 9:20 AM, Roger Clough wrote:  
  
 Hi Stephen P. King  
  
 Thanks. My mistake was to say that P's position is that  
 consciousness, arises at (or above ?)  
 the level of noncomputability. He just seems to  
 say that intuiton does. But that just seems  
 to be a conjecture of his.  
  
  
 ugh, rclo...@verizon.net  
 10/16/2012  
 Forever is a long time, especially near the end. -Woody Allen  
  
  
 Hi Roger,  
  
 IMHO, computability can only capture at most a simulation of the  
 content of consciousness, but we can deduce a lot from that ...  
  
  
  
 So you do say no to the doctor? And you do follow Craig on the  
 existence of p-zombie?  
  
  
 Bruno  
  
  
  
  
 http://iridia.ulb.ac.be/~marchal/  
  

Re: A test for solipsism

2012-10-20 Thread Bruno Marchal


On 20 Oct 2012, at 12:38, Roger Clough wrote:


Hi Bruno Marchal

In that definition of a p-zombie below, it says that
a p-zombie cannot experience qualia, and qualia
are what the senses tell you.


Yes. Qualia are the subjective 1p view, sometimes brought by percepts,  
and supposed to be processed by the brain.

And yes, a zombie has no qualia, as qualia require consciousness.







The mind then transforms
what is sensed into a sensation. The sense of red
is what the body gives you, the sensation of red
is what the mind transforms that into. Our mind
also can recall past sensations of red to compare
it with and give it a name red, which a real
person can identify as eg a red traffic light
and stop. A zombie would not stop



No, a zombie will stop at the red light. By definition it behaves like  
a human, or like a conscious entity.
By definition, if you marry a zombie, you will never be aware of  
that, your whole life.




(I am not allowing
the fact that red and green lights are in different
positions).
That would be a test of zombieness.


There already exist detectors of colors and smells capable of  
finer discrimination than humans.

I have heard of a machine that tests old wine better than human experts.

Machines evolve quickly. That is why the non-comp people are  
confronted with the idea that zombies might be logically possible for  
them.


Bruno







Roger Clough, rclo...@verizon.net
10/20/2012
Forever is a long time, especially near the end. -Woody Allen

- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-19, 03:47:51
Subject: Re: A test for solipsism

On 17 Oct 2012, at 19:12, Roger Clough wrote:
 Hi Bruno Marchal

 Sorry, I lost the thread on the doctor, and don't know what Craig
 believes about the p-zombie.

 http://en.wikipedia.org/wiki/Philosophical_zombie

 A philosophical zombie or p-zombie in the philosophy of mind and
 perception is a hypothetical being
 that is indistinguishable from a normal human being except in that
 it lacks conscious experience, qualia, or sentience.[1] When a
 zombie is poked with a sharp object, for example, it does not feel
 any pain though it behaves
 exactly as if it does feel pain (it may say ouch and recoil from
 the stimulus, or tell us that it is in intense pain).

 My guess is that this is the solipsism issue, to which I would say
 that if it has no mind, it cannot converse with you,
 which would be a test for solipsism,-- which I just now found in
 typing the first part of this sentence.
Solipsism makes everyone zombie except you.
But in some context some people might conceive that zombie exists,
without making everyone zombie. Craig believes that computers, if they
might behave like conscious individuals would be a zombie, but he is
no solipsist.
There is no test for solipsism, nor for zombieness. BY definition,
almost. A zombie behaves exactly like a human being. There is no 3p
features that you could use at all to make a direct test. Now a theory
which admits zombie, can have other features which might be testable,
and so some indirect test are logically conceivable, relatively to
some theory.
Bruno





 Roger Clough, rclo...@verizon.net
 10/17/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: Bruno Marchal
 Receiver: everything-list
 Time: 2012-10-17, 08:57:36
 Subject: Re: Is consciousness just an emergent property of
 overlycomplexcomputations ?




 On 16 Oct 2012, at 15:33, Stephen P. King wrote:


 On 10/16/2012 9:20 AM, Roger Clough wrote:

 Hi Stephen P. King

 Thanks. My mistake was to say that P's position is that
 consciousness, arises at (or above ?)
 the level of noncomputability. He just seems to
 say that intuiton does. But that just seems
 to be a conjecture of his.


 ugh, rclo...@verizon.net
 10/16/2012
 Forever is a long time, especially near the end. -Woody Allen


 Hi Roger,

 IMHO, computability can only capture at most a simulation of the
 content of consciousness, but we can deduce a lot from that ...



 So you do say no to the doctor? And you do follow Craig on the
 existence of p-zombie?


 Bruno




 http://iridia.ulb.ac.be/~marchal/


http://iridia.ulb.ac.be/~marchal/


Re: a criticism of comp

2012-10-20 Thread Bruno Marchal


On 20 Oct 2012, at 13:35, Roger Clough wrote:


Hi Bruno Marchal

Comp cannot give subjective content,


This is equivalent to saying that comp is false.

By definition of comp, our consciousness remains intact when we get  
the right computer, featuring the brain at a genuine description level.


Then the math confirms this, even in the ideal case of the  
arithmetically sound machine, and this by using the most classical  
definition of belief, knowledge, etc.





can only provide an
objective simulation on the BEHAVIOR of a person (or his physical  
brain).

This behavioral information can be dealt with by the
philosophy of mind called functionalism:

http://plato.stanford.edu/entries/functionalism/



Here you defend a reductionist conception of what machines and numbers  
are. It fails already at the 3p level, by the incompleteness phenomena.  
(Functionalism is an older version of comp, with the substitution  
level made implicit, and usually fixed at the neuronal level for the  
brain; in that sense comp is a weaker hypothesis than  
functionalism, as it does not bound the comp substitution level.)









Functionalism in the philosophy of mind is the doctrine that what  
makes something a mental
state of a particular type does not depend on its internal  
constitution, but rather on the way
it functions, or the role it plays, in the system of which it is a  
part. This doctrine is rooted in
Aristotle's conception of the soul, and has antecedents in Hobbes's  
conception of the mind as
a “calculating machine”, but it has become fully articulated (and  
popularly endorsed) only in
the last third of the 20th century. Though the term ‘functionalism’  
is used to designate a variety

of positions in a variety of other disciplines, including psychology,
sociology, economics, and architecture, this entry focuses  
exclusively on
functionalism as a philosophical thesis about the nature of mental  
states.


A criticism of functionalism and hence of comp is that if one only
considers his physical behavior (and possibily but not necessarily  
his brain's behavior),
a person can behave in a certain way but have a different mental  
content.


Good point, and this is a motivation for making the existence  
of the substitution level explicit in the definition.


To survive *for a long time* I would personally ask for a correct  
simulation of the molecular level of both the neurons and the glial  
cells in the brain.


The UD Argument does NOT depend on the choice of the substitution  
level, as long you get a finite digital description relatively to a  
universal number/theory/machine.


Bruno








Roger Clough, rclo...@verizon.net
10/20/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-19, 03:31:54
Subject: Re: I believe that comp's requirement is one of as if  
ratherthanis





On 17 Oct 2012, at 15:28, Stephen P. King wrote:


On 10/17/2012 8:45 AM, Bruno Marchal wrote:


On 16 Oct 2012, at 15:00, Stephen P. King wrote:


On 10/16/2012 8:23 AM, Craig Weinberg wrote:

On Tuesday, October 16, 2012 4:02:44 AM UTC-4, stathisp wrote:



There is of course the idea that the universe is actually a  
simulation but that is more controversial.


A tempting idea until we question what it is a simulation of?


   We can close this by considering when is a simulation of a real  
thing indistinguishable from the real thing!



What law states that computations exist ab initio, but the capacity  
to experience and participate in a simulated world does not?



   Good point! Why not both existing ab initio?


But they exist ab initio in the arithmetical truth. So with comp,  
we can postulate only the numbers, or the computations (they are  
ontologically equivalent); then consciousness is a semantical fixed  
point, existing for arithmetical reasons, yet not describable in  
direct arithmetical terms (like truth, by Tarski, or knowledge, by  
Scott-Montague). The Theaetetical Bp & p is very appealing in that  
setting, as it is not arithmetically definable, yet makes sense in  
purely arithmetical terms for each p in the language of the machine  
(arithmetic, say).


So we don't have to postulate consciousness to explain why machine  
will correctly believe in, and develop discourse about, some truth  
that they can know, and that they can also know them to be non  
justifiable, non sharable, and possibly invariant for digital self- 
transformation, etc.


Bruno


Hi Bruno,

   We seem to have a fundamental disagreement on what constitutes  
arithmetic truth. In my thinking, the truth value of a proposition  
is not separable from the ability to evaluate the proposition



I agree for mundane truth, but not for the truth we can accept to  
built a fundamental theory.



If you accept comp, you know that the ability to evaluate a  
proposition will be explained in term of a functioning machine, and 

Re: Solipsism = 1p

2012-10-20 Thread Bruno Marchal


On 20 Oct 2012, at 13:55, Roger Clough wrote:


Hi Bruno Marchal


I think if you converse with a real person, he has to
have a body or at least vocal chords or the ability to write.


Not necessarily. His brain can be in a vat, and then I talk to him by  
giving him a virtual body in a virtual environment.


I can also, in principle, talk with only his brain, by sending the  
message through the peripheral hearing system, or via the brain  
stem, decoding the nervous paths that act on the vocal-cord motor system.






As to conversing (interacting) with a computer, not sure, but  
doubtful:

for example how could it taste a glass of wine to tell good wine
from bad ?


I just answered this. Machines are becoming better than humans at  
smelling and tasting, though plausibly still far from the competence  
of dogs and cats.





Same is true of a candidate possible zombie person.


Keep in mind that zombie, here, is a technical term. By definition it  
behaves like a human. No humans at all can tell the difference. Only  
God knows, if you want.


Bruno






Roger Clough, rclo...@verizon.net
10/20/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-19, 14:09:59
Subject: Re: Solipsism = 1p


On 18 Oct 2012, at 20:05, Roger Clough wrote:


Hi Bruno Marchal

I think you can tell is 1p isn't just a shell
by trying to converse with it. If it can
converse, it's got a mind of its own.


I agree. It has a mind, and it has a soul (but no real
body; I can argue this follows from comp).

When you attribute 1p to another, you attribute to a shell to
manifest a soul or a first person, a knower.

Above a threshold of complexity, or reflexivity (Löbianity), a
universal number gets a bigger inside view than what it can ever see
outside.

Bruno









Roger Clough, rclo...@verizon.net
10/18/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-17, 13:36:13
Subject: Re: Solipsism = 1p


On 17 Oct 2012, at 13:07, Roger Clough wrote:


Hi Bruno

Solipsism is a property of 1p= Firstness = subjectivity


OK. And non solipsism is about attributing 1p to others, which needs
some independent 3p reality you can bet one, for not being only part
of yourself. Be it a God, or a physical universe, or an arithmetical
reality.

Bruno






Roger Clough, rclo...@verizon.net
10/17/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Alberto G. Corona
Receiver: everything-list
Time: 2012-10-16, 09:55:41
Subject: Re: I believe that comp's requirement is one of as if
rather thanis





2012/10/11 Bruno Marchal


On 10 Oct 2012, at 20:13, Alberto G. Corona wrote:


2012/10/10 Bruno Marchal :


On 09 Oct 2012, at 18:58, Alberto G. Corona wrote:


It may be a zombie or not. I can't know.

The same applies to other persons. It may be that the world is made
of
zombie-actors that try to cheat me, but I have an harcoded belief in
the conventional thing. Maybe it is, because otherwise I will act
in strange and self-destructive ways. I would act as a paranoiac,
after
that, as a psychopath (since they are not humans). That will not be
good for my success in society. Then, I doubt that I will have any
surviving descendant that will develop a zombie-solipsist
epistemology.

However there are people that believe these strange things. Some
autists do not recognize humans as beings like him. Some psychopaths
too, in a different way. There is no authistic or psichopathic
epistemology because the are not functional enough to make societies
with universities and philosophers. That is the whole point of
evolutionary epistemology.




If comp leads to solipsism, I will apply for being a plumber.

I don't bet or believe in solipsism.

But you were saying that a *conscious* robot can lack a soul. See
the
quote just below.

That is what I don't understand.

Bruno



I think that It is not comp what leads to solipsism but any
existential stance that only accept what is certain and discard what
is only belief based on conjectures.

It can go no further than cogito ergo sum




OK. But that has nothing to do with comp. That would conflate the 8
person points in only one of them (the feeler, probably). Only the
feeler is that solipsist, at the level were he feels, but the
machine's self manage all different points of view, and the living
solipsist (each of us) is not mandate to defend the solipsist
doctrine (he is the only one existing)/ he is the only one he can
feel, that's all. That does not imply the non existence of others
and other things.


That pressuposes a lot of things that I have not for granted. I have
to accept my beliefs as such beliefs to be at the same time rational
and functional. With respect to the others consciousness, being
humans or robots, I can only 

Re: The circular logic of Dennett and other materialists

2012-10-20 Thread Bruno Marchal


On 20 Oct 2012, at 14:04, Roger Clough wrote:


Hi Bruno Marchal

This is also where I run into trouble with the p-zombie
definition of what a zombie is.  It has no mind
but it can still behave just as a real person would.

But that assumes, as the materialists do, that the mind
has no necessary function. Which is nonsense, at least
to a realist.

Thus Dennett claims that a real candidate person
does not need to have a mind. But that's in his
definition of what a real person is. That's circular logic.


I agree with you on this.
Dennett is always on the verge of eliminativism. That is deeply wrong.

Now, if you want to eliminate the zombie, and keep comp, you have to  
eventually associate the mind to the logico-arithmetical relations  
defining a computation relative to a universal number; then a  
reasoning explains where the laws of physics come from (the numbers'  
dream statistics).


This leads also to the arithmetical understanding of Plotinus, and of  
all those rare people aware of both the importance of staying rational  
on these issues *and* of being open-minded about, if not aware of, the  
existence of consciousness and altered states of consciousness.


Bruno








Roger Clough, rclo...@verizon.net
10/20/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-19, 14:30:47
Subject: Re: A test for solipsism


On 19 Oct 2012, at 11:41, Roger Clough wrote:


Hi Russell Standish

Not so. A zombie can't converse with you, a real person can.



By definition a (philosophical) zombie can converse with you. A zombie
is an entity assumed not to have consciousness, nor any private
subjective life, and which behaves *exactly* like a human being.

Bruno






Roger Clough, rclo...@verizon.net
10/19/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Russell Standish
Receiver: everything-list
Time: 2012-10-18, 17:48:57
Subject: Re: Re: A test for solipsism


On Thu, Oct 18, 2012 at 01:58:29PM -0400, Roger Clough wrote:

Hi Stathis Papaioannou

If a zombie really has a mind it could converse with you.
If not, not.



If true, then you have demonstrated the non-existence of zombies
(zombies, by definition, are indistinguishable from real people).

However, somehow I remain unconvinced by this line of reasoning...

--  



Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpco...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au


--  
You received this message because you are subscribed to the Google
Groups Everything List group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en.




http://iridia.ulb.ac.be/~marchal/









http://iridia.ulb.ac.be/~marchal/






Re: A test for solipsism

2012-10-20 Thread Bruno Marchal


On 19 Oct 2012, at 23:41, Alberto G. Corona wrote:




2012/10/19 Bruno Marchal marc...@ulb.ac.be

On 19 Oct 2012, at 12:26, Alberto G. Corona wrote:

A few discoveries of evolutionary psychology (EP) may help. According  
to EP, the mind is composed of many functional modules, each one  
for a different purpose; many of them are specific to each species.  
Each of these modules is the result of the computation of certain  
areas of the brain. A functional module in the mind need not be an  
area of the brain. Because the model of the mind in EP assumes comp,  
and assumes a specific, testable model for mind-brain design  
(natural selection), it is well suited for issues like this.


Severe autists lack a module called "theory of mind". This module  
makes you compute the mental states of other people. It gathers  
information about their gestures, acts, etc. It makes people  
interesting objects to care about. Autists can learn rationally  
that other humans are like themselves; they can learn to  
take care of them. But they are not naturally interested in people.  
They don't care whether you have a mind, because they do not know  
what a mind means in another being; they just experience their own.  
For them, you are a robot that they do not understand.


That is possible, but I would say that empathy, your "module of a  
theory of mind", is already present for all universal machines  
knowing that they are universal. An autist, in your theory, would be a  
Löbian entity with some defect in that module, with respect to its  
local representation/body. Possible.


But the theory of mind in the case of a universal machine conscious  
of itself lacks the strong perception of other selves that humans  
have from the visual clues of the gestures and reactions of others.  
The human theory of mind is not an abstract theory of mind, but a  
human theory of mind, which evokes mirror feelings like worry,  
compassion, and anger that we would never have when contemplating a  
machine. It's not a philosophical-rational notion, but an instinctive  
one. And because of this, it does not permit one to fall into solipsism.  
Unless a robot mimics a human, it can never trigger this instinctive  
perception.


Yes. It is a general theory of the consciousness and matter of all  
universal (Löbian) machines. Humans are a special case.







On the other side, an autist may have the rational theory of mind of  
a universal machine, but lack the strong perception that there are  
"others like me" around. This is a very important difference for  
practical matters but also for theoretical ones, since the abstract,  
rational theory of mind is a rationalization that builds itself from  
our instinctive perception of a soul-mind in others.


No. The rational and the non-rational are part of every Löbian machine.
In a sense, Löbianity requires two universal machines, in front of each  
other. And one will dominate on the rational part of the truth, and  
the other will dominate on the not completely rational part.


The Löbian machine can recognize another machine, even when alone. Of  
course nature has exploited this a lot at many levels, and even more  
so with the mammals, including especially the humans.


Bruno










We ask ourselves about the existence of the mind in others because  
we have an innate capacity for perceiving and feeling the mind in  
others. However, a robot without human gestures, without human  
reactions, would not excite our theory-of-mind module, and we would  
not have the intuitive perception of a mind in that cold thing.


However, this has nothing to do with the real thing. The theory of  
mind module evolved because it was very important for social life.  
But this is compatible with a reality in which each one of us lives  
in a universe of zombies (some of them with postdocs in philosophy,  
church pastors, etc.) where we have the only soul. Of course I don't  
believe that. I have the "normal" belief. But this is one of the  
deepest and most widespread beliefs, because it is innate and you  
must fight against it to drop it. This belief saves you from a  
paralyzing solipsism. That's one of the reasons why I say "I  
believe, therefore I can act".


I follow you well. I agree. Comp is the inverse of solipsism, as it  
attributes a soul to a larger class of entities than usually thought:  
machines, and even relative numbers in arithmetic.


Bruno







2012/10/17 Roger Clough rclo...@verizon.net
Hi Bruno Marchal

Sorry, I lost the thread on the doctor, and don't know what Craig  
believes about the p-zombie.


http://en.wikipedia.org/wiki/Philosophical_zombie

A philosophical zombie or p-zombie in the philosophy of mind and  
perception is a hypothetical being
that is indistinguishable from a normal human being except in that  
it lacks conscious experience, qualia, or sentience.[1] When a  
zombie is poked with a sharp object, for example, it does not feel  
any pain though it behaves
exactly as if it does feel pain 

Re: Continuous Game of Life

2012-10-20 Thread Bruno Marchal


On 20 Oct 2012, at 07:15, John Clark wrote:

On Wed, Oct 17, 2012 at 10:13 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


 Darwin does not need to be wrong. Consciousness' role can be  
deeper, in the evolution/selection of the laws of physics from the  
coherent dreams (computations from the 1p view) in arithmetic.


I have no idea what that means, not a clue,


Probably for the same reason that you stop at step 3 in the UD Argument.

You assume a physical reality, and you assume that our consciousness  
is some phenomenon related exclusively to some construct (brain,  
bodies) in that physical reality.


But once you grasp the first person indeterminacy, and take into  
account its many invariance features (the experiencer can't  
immediately distinguish the real, the virtual, and the arithmetical,  
and can't be aware of the delays of reconstitution), you can see that  
comp makes the existence of a physical universe a form of wishful  
thinking, as your future, from your first person point of view, will  
depend on all the computations going through your actual current  
relative state(s).


Comp generalized Everett (on QM) to arithmetic.

No doubt we share deep linear computations. Everett saves comp from  
solipsism. But QM has to be retrieved from the number-dream statistics  
to confirm this.


Advantage? The subtlety of arithmetical self-reference makes it  
possible to distinguish many sorts of points of view, and suggests an  
explanation for the difference between the qualia and the quanta.






but I do know that Evolution can't select for something it can't see,


OK.



and I do know that Evolution can see intelligence because it  
produces behavior.


OK.




Evolution can't see consciousness directly any better than we can,


Plausible.





so if it produced it


No. With comp, consciousness was there before. It just gets lost in  
relatively coherent sheaves of computational histories.
We share dreams. (A dream is a computation to which a first person  
is attributable.)





(and it did unless Darwin was dead wrong)


Darwin explains the evolution of species, in an Aristotelian framework.

Comp refutes the Aristotelian framework, and saves the main part of  
Darwin; indeed, it generalizes it to a realm where the laws of physics  
themselves arise by a process of arithmetical self-selection.






then consciousness MUST be a byproduct of something that it can see.


On the contrary, if you say yes to the doctor by betting on comp,  
consciously.


I think anybody can see that once he/she/it takes comp seriously and  
stays a cold rationalist on the subject.


I don't think it is so much more alluring than Everett QM.

Bruno






  John K Clark





http://iridia.ulb.ac.be/~marchal/






Re: Continuous Game of Life

2012-10-20 Thread Craig Weinberg


On Friday, October 19, 2012 3:29:39 AM UTC-4, Bruno Marchal wrote:


 On 17 Oct 2012, at 17:04, Craig Weinberg wrote:



 On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal wrote:


 On 16 Oct 2012, at 18:56, Craig Weinberg wrote:

 Two men and two women live together. The woman has a child. 2+2=5


 You mean two men + two women + a baby = five persons. 

 You need the arithmetical 2+2=4, and 4+1 = 5, in your argument.

 Bruno


 I only see that one person plus another person can eventually equal three 
 or more people. 


 With the operation of sexual reproduction, not by the operation of 
 addition. 


Only if you consider the 2+2=5 to be a complex special case and 2+2=4 to be 
a simple general rule. It could just as easily be flipped. I can say 2+2=4 
by the operation of reflexive neurology, and 2+2=5 is an operation of 
multiplication. It depends on what level of description you privilege by 
over-signifying and the consequence that has on the other levels which are 
under-signified. To me, the Bruno view is near-sighted when it comes to 
physics (only sees numbers, substance is disqualified) and far-sighted when 
it comes to numbers (does not question the autonomy of numbers). What is it 
that can tell one number from another? What knows that + is different from 
* and how? Why doesn't arithmetic truth need a meta-arithmetic machine to 
allow it to function (to generate the ontology of 'function' in the first 
place)?

It's all sense. It has to be sense.





 It depends when you start counting and how long it takes you to finish.


 It depends on what we are talking about. Person with sex is not numbers 
 with addition.

 You are just changing definition, not invalidating a proof (the proof that 
 2+2=4, in arithmetic).


I'm not trying to invalidate the proof within one context of sense, I'm 
pointing out that it isn't that simple. There are other contexts of sense 
which reduce differently. 

Craig

 


 Bruno




 Craig
  




 http://iridia.ulb.ac.be/~marchal/






 http://iridia.ulb.ac.be/~marchal/








Re: Continuous Game of Life

2012-10-20 Thread Craig Weinberg


On Saturday, October 20, 2012 1:01:51 AM UTC-4, John Clark wrote:

 On Tue, Oct 16, 2012 at 12:56 PM, Craig Weinberg whats...@gmail.com wrote:

  So let's see, a giant junkyard magnet is a devastating logical 
 argument but a junkyard car crusher is not. Explain to me how that works.


  Because talking about how you want to kill me in an argument about 
 computers is pointless ad hominem venting, but talking about the effect of 
 magnetism on computers in an argument about computers is relevant


 A strong magnetic field will disrupt the operation of a computer and it 
 will disrupt the operation of your brain too, and a junkyard car crusher 
 will disrupt the operation of both as well.


I get your point, but at the same time, we aren't outfitting Apache 
helicopters with giant magnets to immobilize armies of people.

Craig
 


   John K Clark  





Re: Is consciousness just an emergent property of overly complex computations?

2012-10-20 Thread Bruno Marchal

Dear Stephen,


On 19 Oct 2012, at 19:44, Stephen P. King wrote:


On 10/19/2012 1:37 PM, Bruno Marchal wrote:


On 17 Oct 2012, at 22:02, Alberto G. Corona wrote:




2012/10/17 Alberto G. Corona agocor...@gmail.com


2012/10/17 Bruno Marchal marc...@ulb.ac.be

On 17 Oct 2012, at 10:12, Alberto G. Corona wrote:





Life may support mathematics.



Arithmetic may support life. It is full of life and dreams.



Life is a computation devoted to making guesses about the future  
in order to self-preserve. This is only possible in a world  
where natural computers are possible: in a world where the  
physical laws have a mathematical nature. Instead of comp  
creating a mathematical-physical reality, it is the mathematical  
reality that creates the computations in which we live.


So all kinds of arbitrary universes may exist, but only (some)  
mathematical ones can harbour self-preserving computations, that  
is, observers.


OK. But harboring self-preserving computation is not enough; it  
must do so in a way that wins the first-person measure on all  
computations going through our state. That's nice, as it explains  
that your idea of evolution needs to be extended up to the origin  
of the physical laws.



I don't think so. The difference between computation as an  
ordinary process of matter and the idea of computation as the  
ultimate essence of reality is that the first not only restricts  
the mathematical laws, but also forces a mathematicity of reality,  
because computation in living beings becomes a process with a  
cost that favours a low Kolmogorov complexity for reality. In  
essence, it forces a discoverable local universe...


 In contrast, the idea of computation as the ultimate nature of  
reality postulates computations devoid of restrictions by  
definition, so they may not restrict anything in the reality that  
we perceive. We may be Boltzmann brains; we may be a product not  
of evolution but of random computations; we may perceive  
elephants flying...


And still, many of your conclusions coming from the first person  
indeterminacy may hold by considering living beings as ordinary  
material personal computers.



Yes, that's step seven. If the universe is big enough to run a  
*significant* part of the UD. But I think that the white rabbits  
disappear only in the limit of the whole UD work (UD*).



Bruno



Dear Bruno,

Tell us more about how White Rabbits can appear if there is any  
restriction of mutual logical consistency between 1p content and any  
arbitrary recursion of 1p content?





We assume comp. If a digital computer processes the activity of your  
brain in a dream state with white rabbits, it means that such a  
computation with that dream exists in infinitely many local  
incarnations in the arithmetical (tiny, Turing-universal) reality.


If you do a physical experiment, the hallucination that everything  
goes weird at that moment also exists, in arithmetic. The measure  
problem consists in justifying, from consistency, self-reference, and  
universal numbers, their rarity; that is, why apparently special  
universal (Turing) laws prevail (keeping in mind the 1p, the  
1p-indeterminacy, the 3p relative distinctions, etc.).


Bruno



http://iridia.ulb.ac.be/~marchal/






I believe that comp's requirement is one of "as if" rather than "is"

2012-10-20 Thread Stathis Papaioannou
On Tue, Oct 16, 2012 at 11:23 PM, Craig Weinberg whatsons...@gmail.com wrote:

 The universe is algorithmic insofar as a small number of physical rules
 gives rise to everything that we see around us.


 Only if we infer that is the case. Physical rules don't give rise to
 anything, especially beings which experience some version of 'seeing
 everything around them'.

I'm not sure if you really don't understand what is meant by "a small
number of physical rules gives rise to everything that we see around us".
It means there are certain regularities in the universe which we call
rules or laws of nature. For example, the total momentum of two bodies
before they collide is the same as the total momentum after they collide,
which is called the law of conservation of momentum. This is not a law
from a parliament or a law from God but a description of what happens.
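
The regularity Stathis describes can be checked numerically. Below is a minimal sketch (in Python, with made-up masses and velocities) of a 1D elastic collision; the post-collision formulas are the standard textbook ones, and the point is only that total momentum before and after agree:

```python
# Minimal sketch: conservation of momentum in a 1D elastic collision.
# Masses and velocities here are arbitrary illustrative values.

def elastic_collision_1d(m1, v1, m2, v2):
    """Return post-collision velocities for a 1D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1 = 2.0, 3.0   # body 1: mass (kg), velocity (m/s)
m2, v2 = 1.0, -1.0  # body 2

u1, u2 = elastic_collision_1d(m1, v1, m2, v2)
p_before = m1 * v1 + m2 * v2  # total momentum before the collision
p_after = m1 * u1 + m2 * u2   # total momentum after the collision
print(p_before, p_after)      # the two totals agree
```

Any collision model, elastic or inelastic, shows the same agreement in total momentum; that observed regularity, not a legislated rule, is what the "law" names.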

 A philosophical zombie is not charged with an expectation of anything
 mental, that is one of its defining characteristics.


 That's what I mean by charged. If you define something as having no mental
 experience and give it a name of a generic undead person, you are charging
 your definition with an expectation of absent personhood. If I say puppet,
 there is no supernatural absence of personhood, there is a common sense
 notion of prosthetically extended personhood of the puppeteer through an
 inanimate object.

There's no puppeteer if the computer acts autonomously. If you are going to
insist that since the computer was programmed it is not acting autonomously
then consider the same computer that came about through matter falling
together randomly - certainly physically possible if very improbable. We
have two apparently identical computers, one manufactured and programmed by
humans, the other generated spontaneously. Is one potentially conscious and
the other not?

 It's begging the question if I make the assumption in the premises of
an
 argument that purports to prove it. But I propose it as a theory: if Bugs
 Bunny does do this in an interactive way, such as a real rabbit would,
then
 Bugs Bunny is indeed as conscious as a real rabbit.


 If I see an old YouTube of a dead celebrity talking to Johnny Carson, does
 that mean that both of them are indeed conscious? Playing the YouTube has
a
 power of resurrection? If not, please explain in detail why not.

Why do you keep bringing up this example? It is obvious to anyone within a
second that the video will not interact with you like the real Johnny
Carson through a video link would.

 What we observe is that when certain physical processes happen,
 consciousness happens.

 We observe that physical processes coincide with reports of particular
kinds
 of conscious experiences. We have no theory to link the two causally and
 even lack an understanding of anesthesia.

A theory is that consciousness happens whenever a system interacts with the
environment in the way conscious entities do, and that in fact
consciousness is no more than this. Anaesthetics knock out this interaction
and so knock out consciousness. Death also knocks out this interaction and
so knocks out consciousness.

 This is a minimal theory. It's like observing the inverse square law for
 gravitational attraction. As a minimal theory, it is enough until new
facts
 come along requiring further explanation.


 Enough to send us in the completely wrong direction.

So you say, but you need to explain what aspect of the theory goes against
observation.

 In light of

 The fact that intelligence has no pragmatic reason or opportunity to
create
 or use consciousness to accomplish any unconscious purpose (even
 accidentally).
 The fact that intelligence in all observed cases evolves naturally through
 the development of an infant into a child and from primitive to more
recent
 species.
 The fact that attempts at artificial intelligence thus far not only show
no
 glimmer of consciousness but to the contrary continue to embody the
 emptiness of mechanism.
 The fact that the regions of the human brain involving intelligence are
 preceded by limbic-emotional and thalamic-sensory consciousness.
 The fact that human beings cannot function as intelligent agents while
 unconscious, but can be conscious without developing intelligence.

It seems human level intelligence is sufficient but not necessary for
consciousness. A minimal ability to perceive and interact with the
environment seems to be necessary. Biological processes per se, however, are 
*not* sufficient. An anaesthetised human has most of his low-level 
neurological and other biological processes functioning normally but is not 
conscious. That is consistent with functionalism but not with the idea that
consciousness originates at the cellular or molecular level.


--
Stathis Papaioannou


-- 
Stathis Papaioannou


Re: Continuous Game of Life

2012-10-20 Thread John Clark
On Sat, Oct 20, 2012  Bruno Marchal marc...@ulb.ac.be wrote:

  I have no idea what that means, not a clue



 Probably for the same reason that you stop at step 3 in the UD Argument.


Probably. I remember I stopped reading after your proof of the existence of
a new type of indeterminacy never seen before because the proof was in
error, so there was no point in reading about things built on top of that;
but I don't remember if that was step 3 or not.

You assume a physical reality,


I assume that if physical reality doesn't exist then either the words
"physical" or "reality" or "exists" are meaningless, and I don't think any
of those words are.


  and you assume that our consciousness is some phenomenon related
 exclusively to some construct (brain, bodies)


If you change your conscious state then your brain changes, and if I make a
change in your brain then your conscious state changes too, so I'd say that
it's a good assumption that consciousness is interlinked with a physical
object, in fact it's a downright superb assumption.

   so if it [Evolution] produced it [consciousness]



No. With comp, consciousness was there before.


Well I don't know about you but I don't think my consciousness was there
before Evolution figured out how to make brains, I believe this because I
can't seem to remember events that were going on during the Precambrian.
I've always been a little hazy about what exactly "comp" meant but I had
the general feeling that I sorta agreed with it, but apparently not.

  John K Clark




Re: Why self-organization programs cannot be alive

2012-10-20 Thread John Clark
On Wed, Oct 17, 2012  Roger Clough rclo...@verizon.net wrote:

 Creating structure out of a random environment requires intelligence, the
 ability to make choices on one's own.


Thus we can conclude that when the sun evaporates salty water, salt
crystals do not form, because a liquid is an amorphous collection of
molecules while a salt crystal is a highly ordered lattice of atoms. The
sun, not being intelligent, simply could not have performed this task; so
you might want to contact the Morton salt company and inform them that
their product does not exist.

  John K Clark




Re: Continuous Game of Life

2012-10-20 Thread Stathis Papaioannou


On Oct 15, 2012, at 4:10 AM, Craig Weinberg whatsons...@gmail.com wrote:


  But since you misunderstand the first assumption you misunderstand the 
  whole argument. 
  
  
  Nope. You misunderstand my argument completely. 
 
 Perhaps I do, but you specifically misunderstand that the argument 
 depends on the assumption that computers don't have consciousness.
 
 No, I do understand that.

Good.

 You 
 also misunderstand (or pretend to) the idea that a brain or computer 
 does not have to know the entire future history of the universe and 
 how it will respond to every situation it may encounter in order to 
 function.
 
 Do you have to know the entire history of how you learned English to read 
 these words? It depends what you mean by know. You don't have to consciously 
 recall learning English, but without that experience, you wouldn't be able to 
 read this. If you had a module implanted in your brain which would allow you 
 to read Chinese, it might give you an acceptable capacity to translate 
 Chinese phonemes and characters, but it would be a generic understanding, not 
 one rooted in decades of human interaction. Do you see the difference? Do you 
 see how words are not only functional data but also names which carry 
 personal significance?

The atoms in my brain don't have to know how to read Chinese. They only need to 
know how to be carbon, nitrogen, oxygen etc. atoms. The complex behaviour which 
is reading Chinese comes from the interaction of billions of these atoms doing 
their simple thing. If the atoms in my brain were put into a Chinese-reading 
configuration, either through a lot of work learning the language or through 
direct manipulation, then I would be able to understand Chinese.

 What are some equivalently simple, uncontroversial things in 
 what you say that i misunderstand?
 
 You think that I don't get that Fading Qualia is a story about a world in 
 which the brain cannot be substituted, but I do. Chalmers is saying 'OK lets 
 say that's true - how would that be? Would your blue be less and less blue? 
 How could you act normally if you...blah, blah, blah'. I get that. It's 
 crystal clear.
 
 What you don't understand is that this carries a priori assumptions about the 
 nature of consciousness, that it is an end result of a distributed process 
 which is monolithic. I am saying NO, THAT IS NOT HOW IT IS.
 
 Imagine that we had one eye in the front of our heads and one ear in the 
 back, and that the whole of human history has been to debate over whether 
 walking forward means that objects are moving toward you or whether it means 
 changes in relative volume of sounds.
 
 Chalmers is saying, 'if we gradually replaced the eye with parts of the ear, 
 how would our sight gradually change to sound, or would it suddenly switch 
 over?' Since both options seem absurd, then he concludes that anything that 
 is in the front of the head is an eye and everything on the back is an ear, 
 or that everything has both ear and eye potentials.
 
 The MR model is to understand that these two views are not merely substance 
 dual or property dual, they are involuted juxtapositions of each other. The 
 difference between front and back is not merely irreconcilable, it is 
 mutually exclusive by definition in experience. I am not throwing up my hands 
 and saying 'ears can't be eyes because eyes are special', I am positively 
 asserting that there is a way of modeling the eye-ear relation based on an 
 understanding of what time, space, matter, energy, entropy, significance, 
 perception, and participation actually are and how they relate to each other.
 
 The idea that the newly discovered ear-based models out of the back of our 
 head is eventually going to explain the view eye view out of the front is not 
 scientific, it's an ideological faith that I understand to be critically 
 flawed. The evidence is all around us, we have only to interpret it that way 
 rather than to keep updating our description of reality to match the 
 narrowness of our fundamental theory. The theory only works for the back view 
 of the world...it says *nothing* useful about the front view. To the True 
 Disbeliever, this is a sign that we need to double down on the back end view 
 because it's the best chance we have. The thinking is that any other position 
 implies that we throw out the back end view entirely and go back to the dark 
 ages of front end fanaticism. I am not suggesting a compromise, I propose a 
 complete overhaul in which we start not from the front and move back or back 
 and move front, but start from the split and see how it can be understood as 
 double knot - a fold of folds.

I'm sorry, but this whole passage is a non sequitur as far as the fading qualia 
thought experiment goes. You have to explain what you think would happen if 
part of your brain were replaced with a functional equivalent. A functional 
equivalent would stimulate the remaining neurons the same as the part that 

Re: A test for solipsism

2012-10-20 Thread Stephen P. King

On 10/20/2012 10:11 AM, Bruno Marchal wrote:


On 20 Oct 2012, at 12:38, Roger Clough wrote:


Hi Bruno Marchal

In that definition of a p-zombie below, it says that
a p-zombie cannot experience qualia, and qualia
are what the senses tell you.


Yes. Qualia are the subjective 1p view, sometimes brought by percepts, 
and supposed to be treated by the brain.

And yes, a zombie has no qualia, as qualia need consciousness.







The mind then transforms
what is sensed into a sensation. The sense of red
is what the body gives you, the sensation of red
is what the mind transforms that into. Our mind
also can recall past sensations of red to compare
it with and give it a name red, which a real
person can identify as eg a red traffic light
and stop. A zombie would not stop



No, a zombie will stop at the red light. By definition it behaves like 
a human, or like a conscious entity.
By definition, if you marry a zombie, you will never be aware of 
it, your whole life.




(I am not allowing
the fact that red and green lights are in different
positions).
That would be a test of zombieness.


There already exist detectors of colors and smells capable of 
finer discrimination than humans.

I have heard of a machine that tests old wine better than human experts.

Machines evolve quickly. That is why the non-comp people are 
confronted with the idea that zombies might be logically possible for them.


Bruno


Hi Bruno and Roger,

What would distinguish, for an external observer, a p-zombie from 
a person that does not see the world external to it as anything other 
than an internal panorama with which it cannot interact?


--
Onward!

Stephen

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: The circular logic of Dennett and other materialists

2012-10-20 Thread Stephen P. King

On 10/20/2012 10:33 AM, Bruno Marchal wrote:


On 20 Oct 2012, at 14:04, Roger Clough wrote:


Hi Bruno Marchal

This is also where I run into trouble with the p-zombie
definition of what a zombie is.  It has no mind
but it can still behave just as a real person would.

But that assumes, as the materialists do, that the mind
has no necessary function. Which is nonsense, at least
to a realist.

Thus Dennett claims that a real candidate person
does not need to have a mind. But that's in his
definition of what a real person is. That's circular logic.


I agree with you on this.
Dennett is always on the verge of eliminativism. That is deeply wrong.

Now, if you want eliminate the zombie, and keep comp, you have to 
eventually associate the mind to the logico-arithmetical relations 
defining a computation relative to a universal number, and then a 
reasoning explains where the laws of physics come from (the number's 
dream statistics).


This also leads to the arithmetical understanding of Plotinus, and of 
all those rare people aware of both the importance of staying rational 
on those issues *and* of staying open-minded about, if not aware of, the 
existence of consciousness and altered states of consciousness.


Bruno




  Dear Bruno,

It seems, from this post that you do support some form of 
panprotopsychism! http://www.youtube.com/watch?v=rieo-BDTcko


--
Onward!

Stephen





Re: I believe that comp's requirement is one of as if rather than is

2012-10-20 Thread Craig Weinberg


On Saturday, October 20, 2012 12:50:55 AM UTC-4, John Clark wrote:

 On Fri, Oct 19, 2012  Craig Weinberg whats...@gmail.com wrote:

  If you can do something for your own personal reasons then you have free 
 will. If you demand that personal reasons still must always come from 
 outside of the person themselves[...]


 But I don't demand that at all! You might picked X and not Y entirely for 
 internal reasons, entirely because of the state of the neurons inside your 
 very own personal head.


The reasons of my neurons are not my personal reasons. Neurons deal in GABA 
and acetylcholine. I deal in paychecks and days off. Different levels of 
description. My neurons can influence my consciousness from a sub-personal 
level - say feeling unfulfilled when I get my paycheck, but they cannot 
decide for me to get a better job. I decide. Me. I might decide to deal 
steroids on the side instead, or make human jack-o'-lanterns out of the 
neighbors' heads and sell them at a garage sale. Of these three options, the 
steroids and the premeditated murder and decapitation carry a heavy 
super-signifying charge. My social experience and innate sensitivity 
circumscribe these acts as criminal, evil, or both, so my range of 
seemingly *realistic* options is quite a bit narrower than the full 
continuum of options available to me in theory. The full scope of what *
might* be available to me is relatively limitless. I can meet someone and 
go into business with them. I can have an idea and make money from it. I 
can get run over by a furniture truck and collect an insurance settlement. 
NONE of these possibilities are realizable on the sub-personal or 
super-signifying levels. They are native to the personal level of 
description. NOT my neurons or molecules, NOT my behavioral statistics, NOT 
determinism, NOT randomness. Personal Preference is the appropriate factor. 
Not the only factor, but a significant factor which you deny like it was 
the Holocaust.
 

 And my computer did X and not Y entirely because of the state of its 
 memory banks and microprocessor inside its very own personal aluminum box. 


Let's compare. Does your computer worry about its job? Does it get a 
feeling one way or another if it receives more or less volts? If you remove 
RAM does it miss it? You are welcome to believe any fairytale sophistry you 
like, but you can be sure that the belief in this level of stupidity dwarfs 
any organized religion. Rather than talking myself into entertaining the 
fantasy of a computer with feelings, I have a better explanation for why no 
computer has ever exhibited a personal preference. They have none. There is 
no 'they' there. Instead of a super-signifying level of acculturation and 
ecology, they have instruction codes which impress upon them functions 
which are utterly alien to whatever substance is being borrowed to do the 
computing. Instead of a personal level, they have only sub-personal logic, 
involuntary reflex dictated by the rigidity of the materials specially 
selected for that quality. These are not proto-organisms, they are 
amputated sculptures playing pre-recorded messages. They are sophisticated 
messages - useful messages, but ultimately nothing more than a very 
cleverly organized library.
 


 That would only be true if every event must have a cause, but there is 
 no law of logic that demands that must always be true


  Then maybe a new law of logic just appeared out of nowhere.


 Maybe, if so it wouldn't be the first time something appeared out of 
 nowhere. But I don't understand why I should be embarrassed to have an 
 answer to the question why is there something rather than nothing? that 
 is not entirely satisfactory, it's not as if you or anybody else can do 
 better.  


You don't need to be embarrassed at all. I do think that I have solved this 
problem though. The question assumes a background of nothing, whereas I see 
that in the absence of our own subjectivity, what is left is 
everythingness. Our awareness is a subtractive partitioning, or temporary 
diffraction from a boundaryless whole, not an evacuated absence. If you 
want to be embarrassed, it would be because you are a staunch critic of all 
possibilities which deviate even slightly from a reductionist logic of true 
or false, but don't see any contradiction in having a deck of infinite 
wild-cards of 'maybe whatever out of nowhere' in your pocket.


  X and Y are made up. Like Pepsi and Coke. They are notations. 


 Deep man deep, Plato and Socrates eat your heart out. 


Deep shmeep, I am pointing out that X and Y are information modeling 
symbols, not features of universal truth.
 

  

  Was there a part of that grousing and grumbling that resembled an 
 answer to my question? I am asking the purpose of preference in a universe 
 devoid of ... your favorite word.


 As I said before, you can't ask me why I did or wrote something because 
 according to you I have this thing called free will 

Re: Continuous Game of Life

2012-10-20 Thread Craig Weinberg


On Saturday, October 20, 2012 1:47:28 PM UTC-4, stathisp wrote:



 On Oct 15, 2012, at 4:10 AM, Craig Weinberg whats...@gmail.com 
 wrote:


  But since you misunderstand the first assumption you misunderstand the 
  whole argument. 
  
  
  Nope. You misunderstand my argument completely. 

 Perhaps I do, but you specifically misunderstand that the argument 
 depends on the assumption that computers don't have consciousness. 


 No, I do understand that.


 Good.

 You 
 also misunderstand (or pretend to) the idea that a brain or computer 
 does not have to know the entire future history of the universe and 
 how it will respond to every situation it may encounter in order to 
 function. 


 Do you have to know the entire history of how you learned English to read 
 these words? It depends what you mean by know. You don't have to 
 consciously recall learning English, but without that experience, you 
 wouldn't be able to read this. If you had a module implanted in your brain 
 which would allow you to read Chinese, it might give you an acceptable 
 capacity to translate Chinese phonemes and characters, but it would be a 
 generic understanding, not one rooted in decades of human interaction. Do 
 you see the difference? Do you see how words are not only functional data 
 but also names which carry personal significance?


 The atoms in my brain don't have to know how to read Chinese. They only 
 need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex 
 behaviour which is reading Chinese comes from the interaction of billions 
 of these atoms doing their simple thing. 


I don't think that is true. The other way around makes just as much sense, 
if not more: reading Chinese is a simple behavior which drives the behavior 
of billions of atoms to do a complex interaction. To me, it has to be both 
bottom-up and top-down. It seems completely arbitrary prejudice to presume 
one over the other just because we think that we understand the bottom-up 
so well.

Once you can see how it is the case that it must be both bottom-up and 
top-down at the same time, the next step is to see that there is no 
possibility for it to be a cause-effect relationship, but rather a dual 
aspect ontological relation. Nothing is translating the functions of 
neurons into a Cartesian theater of experience - there is nowhere to put it 
in the tissue of the brain and there is no evidence of a translation from 
neural protocols to sensorimotive protocols - they are clearly the same 
thing. 
 

 If the atoms in my brain were put into a Chinese-reading configuration, 
 either through a lot of work learning the language or through direct 
 manipulation, then I would be able to understand Chinese.


It's understandable to assume that, but no I don't think it's like that. 
You can't transplant a language into a brain instantaneously because there 
is no personal history of association. Your understanding of language is 
not a lookup table in space, it is made out of you. It's like if you walked 
around with Google translator in your brain. You could enter words and 
phrases and turn them into your language, but you would never know the 
language first hand. The knowledge would be impersonal - accessible, but 
not woven into your proprietary sense.
 


 What are some equivalently simple, uncontroversial things in 
 what you say that i misunderstand? 


 You think that I don't get that Fading Qualia is a story about a world in 
 which the brain cannot be substituted, but I do. Chalmers is saying 'OK 
 let's say that's true - how would that be? Would your blue be less and less 
 blue? How could you act normally if you...blah, blah, blah'. I get that. 
 It's crystal clear.

 What you don't understand is that this carries a priori assumptions about 
 the nature of consciousness, that it is an end result of a distributed 
 process which is monolithic. I am saying NO, THAT IS NOT HOW IT IS.

 Imagine that we had one eye in the front of our heads and one ear in the 
 back, and that the whole of human history has been to debate over whether 
 walking forward means that objects are moving toward you or whether it 
 means changes in relative volume of sounds.

 Chalmers is saying, 'if we gradually replaced the eye with parts of the 
 ear, how would our sight gradually change to sound, or would it suddenly 
 switch over?' Since both options seem absurd, then he concludes that 
 anything that is in the front of the head is an eye and everything on the 
 back is an ear, or that everything has both ear and eye potentials.

 The MR model is to understand that these two views are not merely 
 substance dual or property dual, they are involuted juxtapositions of each 
 other. The difference between front and back is not merely irreconcilable, 
 it is mutually exclusive by definition in experience. I am not throwing up 
 my hands and saying 'ears can't be eyes because eyes are special', I am 
 positively asserting that there is a 

Re: Measurability is not a condition of reality.

2012-10-20 Thread Alberto G. Corona
Then the measurement-addicted people believe in a lot of things that are not
measurable: they believe in an external reality. They believe in a certain
Pythagorean cult of measurement, which is itself not measurable. They believe
that their perception is transparent and that their mind plays no role,
because it delivers a completely objective and accurate view of reality.
Therefore the mind and its relation with matter is not worth studying. They
believe in things that are not measurable, like countries, especially their
own (they would laugh if I said that their country is a bunch of atoms;
apparently their reductionism is selective).

They believe in their loved ones who are dead (who do not exist according to
their point of view), yet they sometimes talk to them, dedicate books to
them, and act as if the dead are observing them. They bet on, trust, and
believe in persons, despite the fact that persons are not measurable. They
believe in their leaders. They believe some scientists who are liars, without
making measurements and experiments for themselves. It seems that almost all
they believe derives from a sense of authority, like any other person.

And they do well to believe in these non-measurable things, because if they
did not believe, they would be paralyzed, or would kill someone or kill
themselves.

2012/10/20 Roger Clough rclo...@verizon.net

 Hi Alberto G. Corona

 I have no problem with that, the problem I have
 is that I believe that nonphysical things (things,
 like Descartes' mind, not extended in space)
 like spirit, truly exist.  But to materialists,
 that's nonsense, because being inextended it
 can't be measured and so doesn't exist.
 And life is just a unique form of matter,
 so can be created.  And what is man but a
 bunch of atoms ?



 Roger Clough, rclo...@verizon.net
 10/20/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: Alberto G. Corona
 Receiver: everything-list
 Time: 2012-10-20, 08:48:39
 Subject: Re: Re: A test for solipsism


 Roger
 Different qualia are the result of different physical effects on the senses.
 So a machine does not need to have qualia to distinguish between physical
 effects. It only needs sensors that distinguish between them.


 A sensor can detect a red light and the attached computer can stop a car,
 with no problems:


 http://www.gizmag.com/mercedes-benz-smart-stop-system/13122/
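As an illustrative aside (the function names are invented for this sketch, not taken from any real braking system), the point that discrimination requires only sensors rather than qualia can be shown in a few lines of thresholding code:

```python
# Hypothetical sketch: a machine discriminates "red" from "green" with a
# bare threshold on RGB channel readings -- no experience of redness needed.

def classify_light(r: int, g: int, b: int) -> str:
    """Classify a traffic light from raw channel intensities in 0-255."""
    if r > 200 and g < 100:
        return "red"
    if g > 200 and r < 100:
        return "green"
    return "unknown"

def should_stop(r: int, g: int, b: int) -> bool:
    # The "attached computer" stops the car on red.
    return classify_light(r, g, b) == "red"

print(should_stop(240, 30, 20))   # True: bright red light
print(should_stop(40, 230, 60))   # False: green light
```

The thresholds here are arbitrary; the point is only that the discrimination is purely functional.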



 2012/10/20 Roger Clough

 Hi Bruno Marchal

 In that definition of a p-zombie below, it says that
 a p-zombie cannot experience qualia, and qualia
 are what the senses tell you. The mind then transforms
 what is sensed into a sensation. The sense of red
 is what the body gives you, the sensation of red
 is what the mind transforms that into. Our mind
 also can recall past sensations of red to compare
 it with and give it a name red, which a real
 person can identify as eg a red traffic light
 and stop. A zombie would not stop (I am not allowing
 the fact that red and green lights are in different
 positions).
 That would be a test of zombieness.
 Roger Clough, rclo...@verizon.net
 10/20/2012

 Forever is a long time, especially near the end. -Woody Allen

 - Receiving the following content -
 From: Bruno Marchal
 Receiver: everything-list

 Time: 2012-10-19, 03:47:51
 Subject: Re: A test for solipsism

 On 17 Oct 2012, at 19:12, Roger Clough wrote:
  Hi Bruno Marchal
 
  Sorry, I lost the thread on the doctor, and don't know what Craig
  believes about the p-zombie.
 
  http://en.wikipedia.org/wiki/Philosophical_zombie
 
  A philosophical zombie or p-zombie in the philosophy of mind and
  perception is a hypothetical being
  that is indistinguishable from a normal human being except in that
  it lacks conscious experience, qualia, or sentience.[1] When a
  zombie is poked with a sharp object, for example, it does not feel
  any pain though it behaves
  exactly as if it does feel pain (it may say ouch and recoil from
  the stimulus, or tell us that it is in intense pain).
 
  My guess is that this is the solipsism issue, to which I would say
  that if it has no mind, it cannot converse with you,
  which would be a test for solipsism,-- which I just now found in
  typing the first part of this sentence.
 Solipsism makes everyone zombie except you.
 But in some context some people might conceive that zombie exists,
 without making everyone zombie. Craig believes that computers, if they
 might behave like conscious individuals would be a zombie, but he is
 no solipsist.
 There is no test for solipsism, nor for zombieness. By definition,
 almost. A zombie behaves exactly like a human being. There is no 3p
 features that you could use at all to make a direct test. Now a theory
 which admits zombie, can have other features which might be testable,
 and so some indirect test are logically conceivable, relatively to
 some theory.
 Bruno
 
 
  Roger Clough, rclo...@verizon.net
  

Re: What's the difference between sense and sensation ?

2012-10-20 Thread Craig Weinberg


On Saturday, October 20, 2012 7:10:17 AM UTC-4, rclough wrote:


 The dictionary makes little or no differentiation between sense and 
 sensation, 
 but there is a difference to psychology.  Senses come from the body, 
 sensations are what the mind makes of the sensual input. Psychology 
 has this to say: 

 http://en.wikipedia.org/wiki/Sensation_%28psychology%29 

  In psychology, sensation and perception are stages of processing of the 
 senses in human and animal systems, 
 such as vision, auditory, vestibular, and pain senses. These topics are 
 considered part of psychology, and not anatomy or physiology, 
 because processes in the brain so greatly affect the perception of a 
 stimulus. Included in this topic is the study of illusions such as 
 motion aftereffect, color constancy, auditory illusions, and depth 
 perception. 

 Sensation is the function of the low-level biochemical and neurological 
 events that begin with the impinging of a 
 stimulus upon the receptor cells of a sensory organ. It is the detection 
 of the elementary properties of a stimulus.[1] 

 Perception is the mental process or state that is reflected in statements 
 like I see a uniformly blue wall, 
 representing awareness or understanding of the real-world cause of the 
 sensory input. The goal of sensation [I think they meant to say sense] is 
 detection, the goal of perception is to create useful information of the 
 surroundings.[2] 

 In other words, sensations are the first stages in the functioning of 
 senses to represent stimuli from the 
  environment, and perception is a higher brain function about interpreting 
 events and objects in the world.[3] Stimuli from the environment is 
 transformed into neural signals which are then interpreted by the brain 
 through a process called transduction. Transduction can be likened to a 
 bridge connecting sensation to perception. 

 Gestalt theorists believe that with the two together a person experiences 
 a personal reality that is greater than the parts.  


I say the Gestalt theorists have it right, and go further. It is not 
greater than the sum of its parts, it is less disconnected than the 
un-division of its parts. I call this trans-rational algebra or 
apocatastatic gestalts. The rejoining of broken parts by eliding their 
presumed granular, sub-personal differences. I think that transduction is 
figurative. Like the steering column turns the axle, not by transmitting a 
ghostly apparition of angular momentum on one plane to another but as a 
confluence of circumstance. The action taking place has multiple 
equivalents on multiple levels or ontological castes, from the micro to the 
macro, personal to impersonal, under-signifying to super-signifying.

Craig




 Roger Clough, rcl...@verizon.net 
 10/20/2012   
 Forever is a long time, especially near the end. -Woody Allen 





Re: Heisenberg Uncertainty Principle in Doubt

2012-10-20 Thread Craig Weinberg


On Thursday, October 18, 2012 11:19:46 PM UTC-4, Stephen Paul King wrote:

 On 10/18/2012 2:16 PM, freqflyer07281972 wrote: 
  Is anyone here aware of the following? 
  
  
 http://www.tgdaily.com/general-sciences-features/66654-heisenbergs-uncertainty-principle-in-doubt
  
  
  Does it have implications for MW interpretations of quantum physics? 
  
  I'd love to see comments about this. 
  
  Cheers, 
  
  Dan 
  -- 
 Hi Dan, 

  This article is rubbish. The writer does not understand the 
 subtleties involved and does not understand that nothing like the title 
 was found to be true. 


I agree. I see what they were trying to get at: Measurement can cause 
uncertainty but not all of the uncertainty. They leave open the question of 
what does cause the uncertainty - i.e. perhaps the very nature of quantum 
is uncertain or immeasurable.

The problem of course is in the assumption we're just going to make a 
*weak* measurement that won't have an effect on it. Sigh. I'll just stand 
in the bathroom with you...you won't even know I'm here. You can't fool the 
fabric of the universe. You can spoof it maybe, but you can't hide from it 
entirely.

Craig
 

 -- 
 Onward! 

 Stephen 







Re: Code length = probability distribution

2012-10-20 Thread Alberto G. Corona
Isn't this a consequence of Shannon's optimal coding, in which the
code length of a symbol is proportional to the negative logarithm of the
symbol's frequency?

What exactly is the comp measure problem?
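The Shannon-coding point raised here can be made concrete with a short sketch (an illustrative aside, not from the original thread): Shannon code lengths l(x) = ceil(-log2 p(x)) satisfy the Kraft-McMillan inequality, sum over x of 2^-l(x) <= 1, which guarantees a prefix-free code with those lengths exists. This is exactly the code-length/probability correspondence the quoted MDL passage invokes.

```python
import math

# Illustrative sketch: derive Shannon code lengths from a probability
# distribution and verify they satisfy the Kraft-McMillan inequality,
# so a prefix-free code with these lengths exists.

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon code length for each symbol: ceil(-log2 p)
lengths = {s: math.ceil(-math.log2(p)) for s, p in probs.items()}

# Kraft-McMillan inequality: sum of 2**-l must not exceed 1
kraft_sum = sum(2 ** -l for l in lengths.values())

print(lengths)          # {'a': 1, 'b': 2, 'c': 3, 'd': 3}
print(kraft_sum <= 1)   # True
```

Going the other direction, any code lengths satisfying the inequality induce a (sub)probability distribution via p(x) = 2^-l(x), which is the "vice versa" half of the correspondence.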

2012/10/19 Stephen P. King stephe...@charter.net

 Hi,

 I was looking up a definition and found the following:
 http://en.wikipedia.org/wiki/Minimum_description_length
 Central to MDL theory is the one-to-one correspondence between code
 length functions and probability distributions. (This follows from the
 Kraft-McMillan inequality.) For any probability distribution P, it is
 possible to construct a code C such that the length (in bits) of C(x) is
 equal to -log2(P(x)); this code minimizes the expected code length. Vice
 versa, given a code C, one can construct a probability distribution P such
 that the same holds. (Rounding issues are ignored here.) In other words,
 searching for an efficient code reduces to searching for a good probability
 distribution, and vice versa.

 Is this true? Would it be an approach to the measure problem of COMP?

 --
 Onward!

 Stephen






-- 
Alberto.




Re: A test for solipsism

2012-10-20 Thread meekerdb

On 10/20/2012 5:48 AM, Alberto G. Corona wrote:

2012/10/20 Roger Clough rclo...@verizon.net mailto:rclo...@verizon.net

Hi Bruno Marchal

In that definition of a p-zombie below, it says that
a p-zombie cannot experience qualia, and qualia
are what the senses tell you. The mind then transforms
what is sensed into a sensation. The sense of red
is what the body gives you, the sensation of red
is what the mind transforms that into. Our mind
also can recall past sensations of red to compare
it with and give it a name red, which a real
person can identify as eg a red traffic light
and stop. A zombie would not stop (I am not allowing
the fact that red and green lights are in different
positions).
That would be a test of zombieness.



Interestingly, my father began to drive in 1926 in Texas.  He was red-green color blind, 
and he couldn't tell a red traffic light from a green one.  And in those days there was no 
convention as to the order of the lights and there were no yellow lights.  So he had to be 
very careful approaching any signal light.  Of course he soon memorized the position of 
the red and green lights in the small town where he grew up.  Later, I think in the 1940's 
there became a convention of putting the red lights above the green with the yellow in 
between.  Still later, in the '50's, they standardized the spectrum of lights so that the 
colors did look different even to red-green color blind people.  But through all this, I 
believe my father was a real person.


Brent




Re: The Peirce-Leibniz triads Ver. 2

2012-10-20 Thread Craig Weinberg
Cool Roger,

It mostly makes sense to me, except I don't understand why I. is associated 
with objects and substance when it is feeling, perception, and first person 
quale.

To me, thinking is just as much first person as feeling, and they both are 
subjective qualia. Thinking is a meta-quale of feeling (which is a 
meta-quale of awareness/perception/sensation/sense).

That puts the whole subjective enchilada as Firstness and leaves objects 
and substance to Secondness. This is Self-Body distinction. What you have 
is like Lower-Self/Higher-Self distinction but with objects kind of 
shoehorned in there. Once you see matter as a public extension and self as 
a private intention, then Thirdness arises as the spatiotemporal 
interaction of formation and information.

That outlines one way of slicing the pizza. I don't know if you can see 
this but here:

https://lh3.googleusercontent.com/-Xz8OmKGPEjE/UIL6EtVeBEI/AZ4/iBhuMxBj9oU/s1600/trio_sml_entropy.jpg

That gives a better idea of the syzygy effect of the big picture, how they 
overlap in different ways and set each other off in a multi-sense way.

The Firstness, Secondness, and Thirdness relate respectively to the trios:

*I. Sense, Motive
II. Matter, Energy,
III. Space, Time*

to get to morality, you have to look at the black and white:

*IV. Signal *(escalating significance), *Entropy* aka Ent ntr rop opy 
(attenuating significance...fragmentation and redundancy obstructs 
discernment capacities...information entropy generates thermodynamic 
entropy through sense participation)

I did a post on this today, but it's pretty intense: 
http://s33light.org/post/33951454539

Craig


On Thursday, October 18, 2012 9:18:50 PM UTC-4, rclough wrote:

  
 https://lh3.googleusercontent.com/-Xz8OmKGPEjE/UIL6EtVeBEI/AZ4/iBhuMxBj9oU/s1600/trio_sml_entropy.jpg
 Hi Craig
  
 Thanks very much for your comments Craig. I still need to digest them.
 Meanwhile, a flood of new ideas came to me and I just want to set them 
 down.
 There are no doubt mistakes, esp. with regard to subjective/objective.  
  

 The Peirce-Leibniz triads Ver.2
  
 I.   Firstness    object/substance              perception (quale)   aesthetics   beauty     1st person   feeling    subjective
  
 II.  Secondness   sign/monad                    thought              logic        truth      2nd person   thinking   subj/obj  
 
 III. Thirdness    interpretant/supreme monad    expression           morality     goodness   3rd person   doing      objective  
  
  --



 It appears that Peirce's three categories match the Leibniz monadic 
 structures 


 as follows: 

 I. Firstness = object = Leibniz substance = quale 

 II. Secondness = sign = monad representing that substance. 
 In Peirce, the sign is a word for the experience of that object. 
 In Leibniz, the monads are mental, which I think means subjective. 

 III. Thirdness = interpretant (the meaning of I and II), given by the monad of monads. 


 In addition to this, Peirce says that his categories are predicates of 
 predicates, 

 where the first predicate (dog) is extensive and the second predicate (brown) 
 is intensive. 

 Then the overall object might be animal--dog--brown. 
 Leibniz says that a monad is a complete concept, meaning all of the possible 

 predicates. 

 I suggest that the first or extensive predicate (dog) is objective, 
 and the second predicate (brown) is qualitative or subjective, 
 so that the object as perceived is a quale or Firstness. 



 Roger Clough, rcl...@verizon.net javascript:  
 10/18/2012  
 Forever is a long time, especially near the end. -Woody Allen 






Re: Is consciousness just an emergent property of overly complex computations?

2012-10-20 Thread meekerdb

On 10/20/2012 10:22 AM, Bruno Marchal wrote:

Dear Stephen,


On 19 Oct 2012, at 19:44, Stephen P. King wrote:


On 10/19/2012 1:37 PM, Bruno Marchal wrote:


On 17 Oct 2012, at 22:02, Alberto G. Corona wrote:




2012/10/17 Alberto G. Corona agocor...@gmail.com mailto:agocor...@gmail.com



2012/10/17 Bruno Marchal marc...@ulb.ac.be mailto:marc...@ulb.ac.be


On 17 Oct 2012, at 10:12, Alberto G. Corona wrote:





Life may support mathematics.



Arithmetic may support life. It is full of life and dreams.




Life is a computation devoted to making guesses about the future in 
order to self-preserve. This is only possible in a world where natural 
computers are possible: in a world where the physical laws have a 
mathematical nature. Instead of comp creating a mathematical-physical 
reality, it is the mathematical reality that creates the computations in 
which we live. So all kinds of arbitrary universes may exist, but only 
(some) mathematical ones can harbour self-preserving computations, that 
is, observers.


OK. But harboring self-preserving computation is not enough; it must do 
so in a first-person-measure-winning way on all computations going 
through our state. That's nice, as it explains that your idea of 
evolution needs to be extended up to the origin of the physical laws.


I don't think so. The difference between computation as an ordinary 
process of matter and the idea of computation as the ultimate essence of 
reality is that the first restricts not only the mathematical laws, but 
also forces a mathematicity on reality, because computation in living 
beings becomes a process with a cost that favours a low Kolmogorov 
complexity for reality. In essence, it forces a discoverable local 
universe.

In contrast, the idea of computation as the ultimate nature of reality 
postulates computations devoid of restrictions by definition, so they 
may not restrict anything in the reality that we perceive. We may be 
Boltzmann brains, we may be a product not of evolution but of random 
computations, we may perceive elephants flying...

And still, much of your conclusions coming from the first-person indeterminacy may 
hold by considering living beings as ordinary material personal computers.



Yes, that's step seven. If the universe is big enough to run a *significant* part 
of the UD. But I think that the white rabbits disappear only in the limit of the whole 
UD work (UD*).



Bruno



Dear Bruno,

Tell us more about how White Rabbits can appear if there is any restriction of 
mutual logical consistency between 1p content and any arbitrary recursion of 1p content.





We assume comp.  If a digital computer processes the activity of your brain in a dream 
state with white rabbits, it means that such a computation with that dream exists in 
infinitely many local incarnations in the arithmetical (tiny, Turing universal) reality.


If you do a physical experiment, the hallucination that all goes weird at that moment 
also exists, in arithmetic. The measure problem consists in justifying, from consistency, 
self-reference, and universal numbers, their rarity,


And their very specific correlation with the physical brain states of sleep.

Brent




Re: Heisenberg Uncertainty Principle in Doubt

2012-10-20 Thread Stephen P. King

On 10/20/2012 3:10 PM, Craig Weinberg wrote:



On Thursday, October 18, 2012 11:19:46 PM UTC-4, Stephen Paul King wrote:

On 10/18/2012 2:16 PM, freqflyer07281972 wrote:
 Is anyone here aware of the following?



http://www.tgdaily.com/general-sciences-features/66654-heisenbergs-uncertainty-principle-in-doubt

http://www.tgdaily.com/general-sciences-features/66654-heisenbergs-uncertainty-principle-in-doubt


 Does it have implications for MW interpretations of quantum
physics?

 I'd love to see comments about this.

 Cheers,

 Dan
 --
Hi Dan,

This article is rubbish. The writer does not understand the
subtleties involved and does not understand that nothing like the
title was found to be true.


I agree. I see what they were trying to get at: measurement can cause 
uncertainty, but not all of the uncertainty. They leave open the 
question of what does cause the uncertainty - i.e. perhaps the very 
nature of the quantum is uncertain or immeasurable.


The problem of course is in the assumption we're just going to make a 
*weak* measurement that won't have an effect on it. Sigh. I'll just 
stand in the bathroom with you...you won't even know I'm here. You 
can't fool the fabric of the universe. You can spoof it maybe, but you 
can't hide from it entirely.


Craig

Hi Craig,

Uncertainty is in the geometric/statistical relationship between 
observables themselves.
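Stephen's point, that the uncertainty lives in the statistical relationship between the observables themselves rather than in any act of measurement, can be checked numerically: compute sigma_x and sigma_p for a Gaussian wavepacket directly from the wavefunction, with no measurement modeled anywhere. A rough sketch of my own (not from the thread), with hbar set to 1 and grid parameters chosen for illustration:

```python
import math

# Robertson bound check: sigma_x * sigma_p >= hbar/2, computed straight
# from a Gaussian wavefunction on a grid -- no "measurement" appears
# anywhere in the calculation. (Illustrative sketch; hbar = 1.)
hbar = 1.0
sigma = 1.5
n = 4001
xs = [-20.0 + 40.0 * i / (n - 1) for i in range(n)]
dx = xs[1] - xs[0]

norm = (2.0 * math.pi * sigma**2) ** -0.25
psi = [norm * math.exp(-x * x / (4.0 * sigma**2)) for x in xs]

# position spread: sqrt(<x^2>), since <x> = 0 for this symmetric state
sigma_x = math.sqrt(sum(x * x * p * p for x, p in zip(xs, psi)) * dx)

# momentum spread via <p^2> = hbar^2 * integral psi'(x)^2 dx (psi real),
# with psi' from central differences
dpsi = [(psi[i + 1] - psi[i - 1]) / (2.0 * dx) for i in range(1, n - 1)]
sigma_p = hbar * math.sqrt(sum(d * d for d in dpsi) * dx)

print(sigma_x * sigma_p)  # approximately 0.5, i.e. hbar/2
```

The Gaussian is the minimum-uncertainty state, so the product lands right at the bound; any other state would give a larger product.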


--
Onward!

Stephen




Re: Continuous Game of Life

2012-10-20 Thread John Mikes
Bruno,
especially in my identification as responding to relations.
Now the Self? IT certainly refers to a more sophisticated level of
thinking, more so than the average (animalic?)  mind. - OR: we have no
idea. What WE call 'Self-Ccness' is definitely a human attribute because WE
identify it that way. I never talked to a cauliflower to clarify whether
she feels like having a self? (In cauliflowerese, of course).
JM

On Thu, Oct 18, 2012 at 10:39 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 17 Oct 2012, at 19:19, Roger Clough wrote:

 Hi Bruno Marchal

 IMHO all life must have some degree of consciousness
 or it cannot perceive its environment.


 Are you sure?

 Would you say that the plants are conscious? I do think so, but I am not
 sure they have self-consciousness.

 Self-consciousness accelerates the information treatment, and might come
 from the need of this for a self-moving living creature having some
 important mass.

 "All life" is a very fuzzy notion.

 Bruno








 Roger Clough, rclo...@verizon.net
 10/17/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: Bruno Marchal
 Receiver: everything-list
 Time: 2012-10-17, 10:13:37
 Subject: Re: Continuous Game of Life




 On 16 Oct 2012, at 18:37, John Clark wrote:


 On Mon, Oct 15, 2012 at 2:40 PM, meekerdb  wrote:


  If consciousness doesn't do anything then Evolution can't see it, so
 how and why did Evolution produce it? The fact that you have no answer to
 this means your ideas are fatally flawed.


 I don't see this as a *fatal* flaw.  Evolution, as you've noted, is not a
 paradigm of efficient design.  Consciousness might just be a side-effect


 But that's exactly what I've been saying for months: unless Darwin was
 dead wrong, consciousness must be a side effect of intelligence, so an
 intelligent computer must be a conscious computer. And I don't think Darwin
 was dead wrong.





 Darwin does not need to be wrong. The role of consciousness can be deeper, in
 the evolution/selection of the laws of physics from the coherent dreams
 (computations from the 1p view) in arithmetic.


 Bruno




 http://iridia.ulb.ac.be/~marchal/

 --
 You received this message because you are subscribed to the Google Groups
 Everything List group.
 To post to this group, send email to 
 everything-list@googlegroups.**comeverything-list@googlegroups.com
 .
 To unsubscribe from this group, send email to
 everything-list+unsubscribe@**googlegroups.comeverything-list%2bunsubscr...@googlegroups.com
 .
 For more options, visit this group at http://groups.google.com/**
 group/everything-list?hl=enhttp://groups.google.com/group/everything-list?hl=en
 .


 http://iridia.ulb.ac.be/~marchal/









Re: Continuous Game of Life

2012-10-20 Thread Stephen P. King

On 10/20/2012 5:16 PM, John Mikes wrote:

Bruno,
especially in my identification as responding to relations.
Now the Self? IT certainly refers to a more sophisticated level of 
thinking, more so than the average (animalic?)  mind. - OR: we have no 
idea. What WE call 'Self-Ccness' is definitely a human attribute 
because WE identify it that way. I never talked to a cauliflower to 
clarify whether she feels like having a self? (In cauliflowerese, of 
course).

JM


If we were cauliflowers, we would have no concept of what it would 
be like to be human or, maybe, that humans even exist!




On Thu, Oct 18, 2012 at 10:39 AM, Bruno Marchal marc...@ulb.ac.be 
mailto:marc...@ulb.ac.be wrote:



On 17 Oct 2012, at 19:19, Roger Clough wrote:

Hi Bruno Marchal

IMHO all life must have some degree of consciousness
or it cannot perceive its environment.


Are you sure?

Would you say that the plants are conscious? I do think so, but I
am not sure they have self-consciousness.

Self-consciousness accelerates the information treatment, and
might come from the need of this for a self-moving living
creature having some important mass.

"All life" is a very fuzzy notion.

Bruno








Roger Clough, rclo...@verizon.net mailto:rclo...@verizon.net
10/17/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-17, 10:13:37
Subject: Re: Continuous Game of Life




On 16 Oct 2012, at 18:37, John Clark wrote:


On Mon, Oct 15, 2012 at 2:40 PM, meekerdb  wrote:


If consciousness doesn't do anything then Evolution
can't see it, so how and why did Evolution produce it?
The fact that you have no answer to this means your
ideas are fatally flawed.


I don't see this as a *fatal* flaw.  Evolution, as you've
noted, is not a paradigm of efficient design.
 Consciousness might just be a side-effect


But that's exactly what I've been saying for months: unless
Darwin was dead wrong, consciousness must be a side effect of
intelligence, so an intelligent computer must be a conscious
computer. And I don't think Darwin was dead wrong.





Darwin does not need to be wrong. The role of consciousness can
be deeper, in the evolution/selection of the laws of physics
from the coherent dreams (computations from the 1p view) in
arithmetic.


Bruno



--
Onward!

Stephen




Re: Re: Re: Re: Why self-organization programs cannot be alive

2012-10-20 Thread Russell Standish
On Sat, Oct 20, 2012 at 08:18:16AM -0400, Roger Clough wrote:
 Hi Russell Standish 
 
 But the robot plants could not grow more robot structure
 for free nor produce seeds. Or produce beautiful sweet-smelling
 flowers. If they could produce more robot structure,
 we ought to use them to produce more manufacturing capabilities
 (including producing more chips for free).
 

All of which are irrelevant to the stated task of using sunlight
to convert carbon dioxide to oxygen.

Nevertheless, self-reproducing robots exist as well, in case you're
wondering. Take a look at the RepRap project.

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Code length = probability distribution

2012-10-20 Thread Stephen P. King

On 10/20/2012 5:45 PM, Russell Standish wrote:

A UD generates and executes all programs, many of which are
equivalent. So some programs are represented more than others. The
COMP measure is a function over all programs that captures this
variation in program representation.

Why should this be unique, independent of the UD, or the universal Turing
machine it runs on? Because the UD executes every other UD, as well as
itself, the measure will be a limit over contributions from all UDs.
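The dovetailing trick that lets a UD "generate and execute all programs" without getting stuck on non-halting ones can be sketched in a few lines of Python. The toy "programs" here are generators, an illustrative stand-in for a real enumeration of Turing machines:

```python
# Dovetailing sketch: in round r, start program r, then advance every
# program started so far by one step. Every program gets unboundedly
# many steps even though none of them halt.

def program(n):
    """Toy program n: runs forever, yielding successive multiples of n."""
    k = 0
    while True:
        yield n * k
        k += 1

def dovetail(rounds):
    """Interleave execution; returns the trace of (program, output) steps."""
    running = []
    trace = []
    for r in range(rounds):
        running.append(program(r))        # start a new program each round
        for i, p in enumerate(running):   # one step for every program so far
            trace.append((i, next(p)))
    return trace

print(dovetail(3))  # 1 + 2 + 3 = 6 interleaved steps
```

Note that early programs get more steps than late ones, which is one intuition for why some computations are "represented more than others" in the measure.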

Hi Russell,

I worry a bit about the use of the word "all" in your remark. "All" 
is too big, usually, to have a single constructible measure! Why not 
consider some large enough but finite collection of programs, such as 
what would be captured by the idea of an equivalence class of programs 
that satisfy some arbitrary parameters (such as solving a finite NP-hard 
problem) given some large but finite quantity of resources?
Of course this goes against the grain of Bruno's theology, but 
maybe that is what is required to solve the measure problem. :-) I find 
myself being won over by the finitists, such as Norman J. Wildberger!
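The thread's title idea, code length as a probability distribution, can at least be illustrated finitely: for any prefix-free set of program codes, the weights 2^(-length) sum to at most 1 (Kraft's inequality), so the weighting is a well-defined (sub)probability distribution even over infinite sets. A minimal sketch, with an example code chosen purely for illustration:

```python
# "Code length = probability distribution" in miniature: weight each
# program code p by 2**(-len(p)). For a prefix-free set (no code is a
# prefix of another), the Kraft inequality guarantees the weights sum
# to at most 1. These four codes form a complete prefix-free code, so
# they sum to exactly 1.

programs = ["0", "10", "110", "111"]
weights = {p: 2.0 ** -len(p) for p in programs}

# sanity check: the set really is prefix-free
assert not any(p != q and q.startswith(p) for p in programs for q in programs)

total = sum(weights.values())
print(weights)  # {'0': 0.5, '10': 0.25, '110': 0.125, '111': 0.125}
print(total)    # 1.0
```

Shorter programs get exponentially more weight, which is the usual finite handle on the measure question raised above.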


--
Onward!

Stephen





a paper by Karl Svozil

2012-10-20 Thread Stephen P. King

Hi Folks,

For your amusement, delight and (hopefully) comment, I present a paper:

http://arxiv.org/abs/physics/0305048


 Computational universes

Karl Svozil http://arxiv.org/find/physics/1/au:+Svozil_K/0/1/0/all/0/1
(Submitted on 12 May 2003 (v1 http://arxiv.org/abs/physics/0305048v1), 
last revised 14 Apr 2005 (this version, v2))


   Suspicions that the world might be some sort of a machine or
   algorithm existing ``in the mind'' of some symbolic number cruncher
   have lingered from antiquity. Although popular at times, the most
   radical forms of this idea never reached mainstream. Modern
   developments in physics and computer science have lent support to
   the thesis, but empirical evidence is needed before it can begin to
   replace our contemporary world view.


--
Onward!

Stephen
