Re: The circular logic of Dennett and other materialists

2012-10-21 Thread Stathis Papaioannou
On Sat, Oct 20, 2012 at 11:04 PM, Roger Clough rclo...@verizon.net wrote:
 Hi Bruno Marchal

 This is also where I run into trouble with the p-zombie
 definition of what a zombie is.  It has no mind
 but it can still behave just as a real person would.

 But that assumes, as the materialists do, that the mind
 has no necessary function. Which is nonsense, at least
 to a realist.

 Thus Dennett claims that a real candidate person
 does not need to have a mind. But that's in his
 definition of what a real person is. That's circular logic.

Not really, he claims that zombies do not exist and if an entity
(human, computer, whatever) behaves as if it has a mind, then it does
have a mind.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Code length = probability distribution

2012-10-21 Thread Russell Standish
On Sat, Oct 20, 2012 at 07:07:14PM -0400, Stephen P. King wrote:
 On 10/20/2012 5:45 PM, Russell Standish wrote:
 A UD generates and executes all programs, many of which are
 equivalent. So some programs are represented more than others. The
 COMP measure is a function over all programs that captures this
 variation in program representation.
 
 Why should this be unique, independent of UD, or the universal Turing
 machine it runs on? Because the UD executes every other UD, as well as
 itself, the measure will be a limit over contributions from all UDs.
 Hi Russell,
 
 I worry a bit about the use of the word 'all' in your remark.
 'All' is usually too big to have a single constructible measure!
 Why not consider some large enough but finite collections of
 programs, such as what would be captured by the idea of an
 equivalence class of programs that satisfy some arbitrary parameters
 (such as solving a finite NP-hard problem) given some large but
 finite quantity of resources?
 Of course this goes against the grain of Bruno's theology, but
 maybe that is what it required to solve the measure problem. :-) I
 find myself being won over by the finitists, such as Norman J.
 Wildberger!

This may well turn out to be the case. Also Juergen Schmidhuber has
investigated this under the rubric of the speed prior.
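Russell's point — that the UD runs many equivalent programs, so some behaviors are "represented more than others" — can be sketched with a toy code-length prior. The semantics below (a binary string "computes" the integer it denotes) is my own stand-in for illustration, not the COMP measure itself:

```python
from collections import defaultdict
from itertools import product

def toy_measure(max_len):
    """Toy sketch of a code-length measure: every binary string of
    length L is treated as a 'program' with prior weight 2**-L, and
    its 'behavior' is simply the integer it denotes.  Behaviors
    reachable by more (and shorter) programs accumulate more measure.
    Real constructions restrict to prefix-free programs so that the
    total weight is <= 1 (Kraft inequality)."""
    measure = defaultdict(float)
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            behavior = int("".join(bits), 2)   # toy semantics
            measure[behavior] += 2.0 ** -length
    return measure

# Behavior 0 is computed by "0", "00", "000", ... and so dominates:
# toy_measure(3)[0] == 0.875, while toy_measure(3)[7] == 0.125.
```

Behaviors with many short programs get far more weight than behaviors with a single long program, which is the shape of a code-length-based probability distribution; Schmidhuber's speed prior additionally discounts each program by its running time.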

I should have a chat with Norm about that sometime. Maybe if I see him
at a Christmas party. I didn't realise he was a finitist. I knew he
has an interesting take on how trigonometry should be done.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Continuous Game of Life

2012-10-21 Thread Stathis Papaioannou
On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg whatsons...@gmail.com wrote:

 The atoms in my brain don't have to know how to read Chinese. They only
 need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex
 behaviour which is reading Chinese comes from the interaction of billions of
 these atoms doing their simple thing.


 I don't think that is true. The other way around makes just as much sense if
 not more: Reading Chinese is a simple behavior which drives the behavior of
 billions of atoms to do a complex interaction. To me, it has to be both
 bottom-up and top-down. It seems completely arbitrary prejudice to presume
 one over the other just because we think that we understand the bottom-up so
 well.

 Once you can see how it is the case that it must be both bottom-up and
 top-down at the same time, the next step is to see that there is no
 possibility for it to be a cause-effect relationship, but rather a dual
 aspect ontological relation. Nothing is translating the functions of neurons
 into a Cartesian theater of experience - there is nowhere to put it in the
 tissue of the brain and there is no evidence of a translation from neural
 protocols to sensorimotive protocols - they are clearly the same thing.

If there is a top-down effect of the mind on the atoms then we
would expect some scientific evidence of this. Evidence would
consist of, for example, neurons firing when measurements of
transmembrane potentials, ion concentrations etc. suggest that they
should not. You claim that such anomalous behaviour of neurons and
other cells due to consciousness is widespread, yet it has never been
experimentally observed. Why?

 If the atoms in my brain were put into a Chinese-reading configuration,
 either through a lot of work learning the language or through direct
 manipulation, then I would be able to understand Chinese.


 It's understandable to assume that, but no I don't think it's like that. You
 can't transplant a language into a brain instantaneously because there is no
 personal history of association. Your understanding of language is not a
 lookup table in space, it is made out of you. It's like if you walked around
 with Google translator in your brain. You could enter words and phrases and
 turn them into your language, but you would never know the language first
 hand. The knowledge would be impersonal - accessible, but not woven into
 your proprietary sense.

I don't mean putting an extra module into the brain, I mean putting
the brain directly into the same configuration it is put into by
learning the language in the normal way.

 I'm sorry, but this whole passage is a non sequitur as far as the fading
 qualia thought experiment goes. You have to explain what you think would
 happen if part of your brain were replaced with a functional equivalent.


 There is no functional equivalent. That's what I am saying. Functional
 equivalence when it comes to a person is a non-sequitur. Not only is every
 person unique, they are an expression of uniqueness itself. They define
 uniqueness in a never-before-experienced way. This is a completely new way
 of understanding consciousness and signal. Not as mechanism, but as
 animism-mechanism.



 A functional equivalent would stimulate the remaining neurons the same as
 the part that is replaced.


 No such thing. Does any imitation function identically to an original?

In a thought experiment we can say that the imitation stimulates the
surrounding neurons in the same way as the original. We can even say
that it does this miraculously. Would such a device *necessarily*
replicate the consciousness along with the neural impulses, or could
the two be separated?

 The original paper says this is a computer chip but this is not necessary
 to make the point: we could just say that it is any device, not being the
 normal biological neurons. If consciousness is substrate-dependent (as you
 claim) then the device could do its job of stimulating the neurons normally
 while lacking or differing in consciousness. Since it stimulates the neurons
 normally you would behave normally. If you didn't then it would be a
 miracle, since your muscles would have to contract normally. Do you at least
 see this point, or do you think that your muscles would do something
 different?


 I see the point completely. The problem is that you keep trying to
 explain to me what is obvious, while I am trying to explain to you something
 much more subtle and sophisticated. I can replace neurons which control my
 muscles because muscles are among the most distant and replaceable parts of
 'me'. These nerves are outbound efferent nerves and the target muscle cells
 are for the most part willing servants. The same goes for amputating my arm.
 I can replace it in theory. What I am saying though is that amputating my
 head is not even theoretically possible. Wherever my head is, that is where
 I have to be. If I replace my brain with other parts, the more parts 

Re: Continuous Game of Life

2012-10-21 Thread Evgenii Rudnyi

On 21.10.2012 10:05 Stathis Papaioannou said the following:

On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg
whatsons...@gmail.com wrote:



...


I don't think that is true. The other way around makes just as much
sense if not more: Reading Chinese is a simple behavior which
drives the behavior of billions of atoms to do a complex
interaction. To me, it has to be both bottom-up and top-down. It
seems completely arbitrary prejudice to presume one over the other
just because we think that we understand the bottom-up so well.

Once you can see how it is the case that it must be both bottom-up
and top-down at the same time, the next step is to see that there
is no possibility for it to be a cause-effect relationship, but
rather a dual aspect ontological relation. Nothing is translating
the functions of neurons into a Cartesian theater of experience -
there is nowhere to put it in the tissue of the brain and there is
no evidence of a translation from neural protocols to sensorimotive
protocols - they are clearly the same thing.


If there is a top-down effect of the mind on the atoms then we
would expect some scientific evidence of this. Evidence would


Scientific evidence, in my view, is the existence of science itself. Do you 
mean that, for example, scientific books have assembled themselves from 
atoms according to M-theory?


Evgenii




Re: Re: Heisenberg Uncertainty Principle in Doubt

2012-10-21 Thread Roger Clough
Hi Craig Weinberg

http://en.wikipedia.org/wiki/Uncertainty_principle


...the uncertainty principle is inherent in the properties of all wave-like 
systems
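The quoted line — that the principle is inherent in all wave-like systems — is essentially the Fourier bandwidth theorem. A pure-stdlib numerical sketch (the grid sizes are arbitrary choices of mine) checking that a Gaussian wave packet gives sigma_x * sigma_k close to 1/2:

```python
import cmath
import math

def uncertainty_product(sigma=1.0, n=256, span=20.0):
    """Numerically verify that a Gaussian wave packet saturates the
    Fourier bandwidth bound sigma_x * sigma_k = 1/2 -- the 'wave-like'
    fact underlying the uncertainty principle, prior to any
    measurement disturbance."""
    dx = span / n
    xs = [(i - n / 2) * dx for i in range(n)]
    psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]
    # position spread from |psi|^2 (packet is centered at 0)
    norm = sum(p * p for p in psi) * dx
    var_x = sum(x * x * p * p for x, p in zip(xs, psi)) * dx / norm
    # momentum-space amplitude via a direct O(n^2) Fourier sum
    dk = 2 * math.pi / span
    ks = [(j - n / 2) * dk for j in range(n)]
    phi = [abs(sum(p * cmath.exp(-1j * k * x) for x, p in zip(xs, psi)))
           for k in ks]
    norm_k = sum(q * q for q in phi) * dk
    var_k = sum(k * k * q * q for k, q in zip(ks, phi)) * dk / norm_k
    return math.sqrt(var_x * var_k)   # approaches 0.5 for a Gaussian
```

Any non-Gaussian packet gives a strictly larger product; this is the bound Δx·Δk ≥ 1/2 (equivalently Δx·Δp ≥ ħ/2) before any measurement-disturbance argument enters.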


Roger Clough, rclo...@verizon.net 
10/21/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Craig Weinberg  
Receiver: everything-list  
Time: 2012-10-20, 15:10:09 
Subject: Re: Heisenberg Uncertainty Principle in Doubt 




On Thursday, October 18, 2012 11:19:46 PM UTC-4, Stephen Paul King wrote: 
On 10/18/2012 2:16 PM, freqflyer07281972 wrote:  
 Is anyone here aware of the following?  
  
 http://www.tgdaily.com/general-sciences-features/66654-heisenbergs-uncertainty-principle-in-doubt
   
  
 Does it have implications for MW interpretations of quantum physics?  
  
 I'd love to see comments about this.  
  
 Cheers,  
  
 Dan  
 --  
Hi Dan,  

 This article is rubbish. The writer does not understand the  
subtleties involved and does not understand that nothing like the title  
was found to be true.  



I agree. I see what they were trying to get at: Measurement can cause 
uncertainty but not all of the uncertainty. They leave open the question of 
what does cause the uncertainty - i.e. perhaps the very nature of quantum is 
uncertain or immeasurable. 

The problem of course is in the assumption we're just going to make a *weak* 
measurement that won't have an effect on it. Sigh. I'll just stand in the 
bathroom with you...you won't even know I'm here. You can't fool the fabric of 
the universe. You can spoof it maybe, but you can't hide from it entirely. 

Craig 
  

--  
Onward!  

Stephen  






Re: Re: The Peirce-Leibniz triads Ver. 2

2012-10-21 Thread Roger Clough

CRAIG: Cool Roger, 

It mostly makes sense to me, except I don't understand why I. is associated 
with objects and substance when it is feeling, perception, and first person 
quale. 

ROGER: It is not uncommon to find such objective/subjective dyslexia in the 
literature. 
This stuff is hard to get a hold of.

CRAIG: To me, thinking is just as much first person as feeling, and they both 
are subjective qualia. 
Thinking is a meta-quale of feeling (which is a meta-quale of 
awarenessperceptionsensationsense) 

ROGER: Actually I have yet to find a clear or useful definition of thinking 
(how it works). 
In fact Wittgenstein at one point said that he does not know what thinking 
is (!).
But I believe you have to think if you compare objects across an equals 
sign,
so comparison (a dyad) seems to me to be a basic type of thinking.

CRAIG: That puts the whole subjective enchilada as Firstness and leaves objects 
and 
substance to Secondness. This is Self-Body distinction. What you have is 
like 
Lower-Self/Higher-Self distinction but with objects kind of shoehorned in 
there. 
Once you see matter as a public extension and self as a private intention, 
then 
Thirdness arises as the spatiotemporal interaction of formation and 
information. 

ROGER: Yes, distinction is another form of basic thought. But that requires the 
ability to compare.

CRAIG: That outlines one way of slicing the pizza. I don't know if you can see 
this but here: 

https://lh3.googleusercontent.com/-Xz8OmKGPEjE/UIL6EtVeBEI/AZ4/iBhuMxBj9oU/s1600/trio_sml_entropy.jpg
 

That gives a better idea of the syzygy effect of the big picture, how they 
overlap in different ways and set each other off in a multi-sense way. 

The Firstness, Secondness, and Thirdness relate respectively to the respective 
trios: 

I. Sense, Motive 
II. Matter, Energy, 
III. Space, Time 

ROGER: I could see it, but couldn't see how to interpret it, but that's OK.
The categories, like Hegel's dialectic, seem to be a basic take on 
existence,
so no doubt there are many approaches to defining them, yours included. 

CRAIG: to get to morality, you have to look at the black and white: 

IV. Signal (escalating significance), Entropy aka Ent ntr rop opy (attenuating 
significance...
fragmentation and redundancy obstruct discernment capacities...
information entropy generates thermodynamic entropy through sense 
participation) 

I did a post on this today, but it's pretty intense: 
http://s33light.org/post/33951454539 

ROGER: I welcome your thoughts on this. But as for myself, I try to keep things 
as simple as possible.
The truth is that I actually had a senior moment when I wrote 'morality'.
I should have recalled a better term, 'ethics'. That has to do with 
law and doing, both typical of III.


CRAIG: Craig 


On Thursday, October 18, 2012 9:18:50 PM UTC-4, rclough wrote: 

Hi Craig 

Thanks very much for your comments Craig. I still need to digest them. 
Meanwhile, a flood of new ideas came to me and I just want to set them down. 
There are no doubt mistakes, esp. with regard to subjective/objective. 


The Peirce-Leibniz triads Ver.2 

I Firstness: object, substance, perception (quale), aesthetics, beauty, 1st person, feeling, subjective 

II Secondness: sign, monad, thought, logic, truth, 2nd person, thinking, subj/obj 

III Thirdness: interpretant, supreme monad, expression, morality, goodness, 3rd person, doing, objective 








It appears that Peirce's three categories match the Leibniz monadic structures 

as follows: 

I. = object = Leibniz substance = quale 

II. Secondness = sign = monad representing that substance. 
In Peirce, the sign is a word for the experience of that object . 
In Leibniz, the monads are mental, which I think means subjective. 

III. Thirdness = interpretant (meaning of I and II) = by the monad of monads. 

In addition to this, Peirce says that his categories are predicates of 
predicates, 
where the first predicate (dog) is extensive and the second predicate (brown) 
is intensive. 
Then the overall object might be animal--dog--brown. 
Leibniz says that a monad is a complete concept, meaning all of the possible 
predicates. 

I suggest that the first or extensive predicate (dog) is objective 
and the second predicate (brown) is qualitative or subjective. 
So that the object as perceived is a quale or Firstness. 



Roger Clough, rcl...@verizon.net 
10/18/2012 
Forever is a long time, especially near the end. -Woody Allen 


emulating life is possible now

2012-10-21 Thread Roger Clough
Hi Russell Standish 

Thanks for that info.  If you put reprap into the search
window on youtube, you come up with a number of video
clips. It's a stunning achievement. 

So I'm wrong, at least you can emulate life. The
only thing that remains for it to emulate a plant
would be to have it run on solar cells, which
would be possible. 



Roger Clough, rclo...@verizon.net 
10/21/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Russell Standish  
Receiver: everything-list  
Time: 2012-10-20, 17:51:36 
Subject: Re: Re: Re: Re: Why self-organization programs cannot be alive 


On Sat, Oct 20, 2012 at 08:18:16AM -0400, Roger Clough wrote: 
 Hi Russell Standish  
  
 But the robot plants could not grow more robot structure 
 for free nor produce seeds. Or produce beautiful sweet-smelling 
 flowers. If they could produce more robot structure, 
 we ought to use them to produce more manufacturing capabilities 
 (including producing more chips for free). 
  

All of which are irrelevant to the stated task of using sunlight 
to convert carbon dioxide to oxygen. 

Nevertheless, self-reproducing robots exist as well, in case you're 
wondering. Take a look at the RepRap project. 

--  

 
Prof Russell Standish Phone 0425 253119 (mobile) 
Principal, High Performance Coders 
Visiting Professor of Mathematics hpco...@hpcoders.com.au 
University of New South Wales http://www.hpcoders.com.au 
 




Re: Re: The circular logic of Dennett and other materialists

2012-10-21 Thread Roger Clough
Hi Stathis Papaioannou 

You say that if a person behaves  as if he has a mind,
then he does have a mind.

 


Roger Clough, rclo...@verizon.net 
10/21/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Stathis Papaioannou  
Receiver: everything-list  
Time: 2012-10-21, 03:37:19 
Subject: Re: The circular logic of Dennett and other materialists 


On Sat, Oct 20, 2012 at 11:04 PM, Roger Clough  wrote: 
 Hi Bruno Marchal 
 
 This is also where I run into trouble with the p-zombie 
 definition of what a zombie is. It has no mind 
 but it can still behave just as a real person would. 
 
 But that assumes, as the materialists do, that the mind 
 has no necessary function. Which is nonsense, at least 
 to a realist. 
 
 Thus Dennett claims that a real candidate person 
 does not need to have a mind. But that's in his 
 definition of what a real person is. That's circular logic. 

Not really, he claims that zombies do not exist and if an entity 
(human, computer, whatever) behaves as if it has a mind, then it does 
have a mind. 


--  
Stathis Papaioannou 




Re: Re: A test for solipsism

2012-10-21 Thread Roger Clough
Hi Bruno Marchal 
 
 You say 

No, a zombie will stop at the red light. By definition it behaves like  
a human, or like a conscious entity. 

My problem is that the definition is an absurdity to begin with.
If he has no mind, he could not know what a red light means.
He could not know anything. So he could never behave as a 
real person would unless the response was instinctual. 

Note that you may be right, you could never know
if you married a zombie, but that does not follow
from the p-zombie definition. The definition is an absurdity.

Roger Clough, rclo...@verizon.net 
10/21/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-20, 10:11:25 
Subject: Re: A test for solipsism 




On 20 Oct 2012, at 12:38, Roger Clough wrote: 


Hi Bruno Marchal  

In that definition of a p-zombie below, it says that  
a p-zombie cannot experience qualia, and qualia 
are what the senses tell you. 


Yes. Qualia are the subjective 1p view, sometimes brought by percepts, and 
supposed to be processed by the brain. 
And yes, a zombie has no qualia, as qualia require consciousness. 












The mind then transforms 
what is sensed into a sensation. The sense of red 
is what the body gives you, the sensation of red  
is what the mind transforms that into. Our mind 
also can recall past sensations of red to compare 
it with and give it a name red, which a real 
person can identify as eg a red traffic light 
and stop. A zombie would not stop  




No, a zombie will stop at the red light. By definition it behaves like a human, 
or like a conscious entity.  
By definition, if you marry a zombie, you will never be aware of it, your 
whole life.  




(I am not allowing 
the fact that red and green lights are in different 
positions).  
That would be a test of zombieness. 


There already exist detectors of colors and smells capable of finer 
discrimination than humans. 
I have heard of a machine that tests old wine better than human experts. 


Machines evolve quickly. That is why the non-comp people are confronted with 
the idea that zombies might be logically possible for them. 


Bruno 











Roger Clough, rclo...@verizon.net  
10/20/2012  
Forever is a long time, especially near the end. -Woody Allen  

- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-19, 03:47:51  
Subject: Re: A test for solipsism  

On 17 Oct 2012, at 19:12, Roger Clough wrote:  
 Hi Bruno Marchal  
  
 Sorry, I lost the thread on the doctor, and don't know what Craig  
 believes about the p-zombie.  
  
 http://en.wikipedia.org/wiki/Philosophical_zombie  
  
 A philosophical zombie or p-zombie in the philosophy of mind and  
 perception is a hypothetical being  
 that is indistinguishable from a normal human being except in that  
 it lacks conscious experience, qualia, or sentience.[1] When a  
 zombie is poked with a sharp object, for example, it does not feel  
 any pain though it behaves  
 exactly as if it does feel pain (it may say ouch and recoil from  
 the stimulus, or tell us that it is in intense pain).  
  
 My guess is that this is the solipsism issue, to which I would say  
 that if it has no mind, it cannot converse with you,  
 which would be a test for solipsism,-- which I just now found in  
 typing the first part of this sentence.  
Solipsism makes everyone a zombie except you.  
But in some contexts some people might conceive that zombies exist,  
without making everyone a zombie. Craig believes that computers, even if  
they behaved like conscious individuals, would be zombies, but he is  
no solipsist.  
There is no test for solipsism, nor for zombieness. By definition,  
almost. A zombie behaves exactly like a human being. There are no 3p  
features that you could use to make a direct test. Now a theory  
which admits zombies can have other features which might be testable,  
and so some indirect tests are logically conceivable, relative to  
that theory.  
Bruno  



  
  
 Roger Clough, rclo...@verizon.net  
 10/17/2012  
 Forever is a long time, especially near the end. -Woody Allen  
  
  
 - Receiving the following content -  
 From: Bruno Marchal  
 Receiver: everything-list  
 Time: 2012-10-17, 08:57:36  
 Subject: Re: Is consciousness just an emergent property of  
 overlycomplexcomputations ?  
  
  
  
  
 On 16 Oct 2012, at 15:33, Stephen P. King wrote:  
  
  
 On 10/16/2012 9:20 AM, Roger Clough wrote:  
  
 Hi Stephen P. King  
  
 Thanks. My mistake was to say that P's position is that  
 consciousness, arises at (or above ?)  
 the level of noncomputability. He just seems to  
 say that intuiton does. But that just seems  
 to be a conjecture of his.  
  
  
 ugh, rclo...@verizon.net  
 10/16/2012  
 Forever is a long time, especially near the end. -Woody Allen  
  
  
 Hi Roger,  
  
 IMHO, computability 

Re: Re: a criticism of comp

2012-10-21 Thread Roger Clough

On 20 Oct 2012, at 13:35, Roger Clough wrote: 

(previously)  Hi Bruno Marchal 
 
 Comp cannot give subjective content, 

BRUNO: This is equivalent to saying that comp is false. 

By definition of comp, our consciousness remains intact when we get 
the right computer, featuring the brain at a genuine description level. 

Then the math confirms this, even in the ideal case of the 
arithmetically sound machine, and this by using the most classical 
definition of belief, knowledge, etc. 

ROGER: The problem is that since a computer cannot experience
anything (having no 1p), it cannot generate descriptions (3p) of experiences. 
Or the inverse, to produce experiences (1p) from descriptions (3p) of them. 
Computers are imprisoned in a 3p world.
 
(previously)  can only provide an 
 objective simulation on the BEHAVIOR of a person (or his physical 
 brain). 
 This behavioral information can be dealt with by the 
 philosophy of mind called functionalism: 
 
 http://plato.stanford.edu/entries/functionalism/ 


BRUNO: Here you defend a reductionist conception of what machines and numbers 
are. It fails already at the 3p level, by the incompleteness phenomena. 
(Functionalism is an older version of comp, with the substitution 
level made implicit, and usually fixed at the neuronal level for the 
brain; in that sense comp is a weaker hypothesis than 
functionalism, as it does not bound the comp substitution level.) 

ROGER: Perhaps, but if I were a brain theorist, I might
withhold judgment, as incompleteness might not spoil
everything, and my personal attitude is to leave
whatever cards you can play still on the table.
Sometimes you can solve (to some degree of satisfaction)
a mystery with an incomplete set of evidence.




 
 Functionalism in the philosophy of mind is the doctrine that what 
 makes something a mental 
 state of a particular type does not depend on its internal 
 constitution, but rather on the way 
 it functions, or the role it plays, in the system of which it is a 
 part. This doctrine is rooted in 
 Aristotle's conception of the soul, and has antecedents in Hobbes's 
 conception of the mind as 
 a 'calculating machine', but it has become fully articulated (and 
 popularly endorsed) only in 
 the last third of the 20th century. Though the term 'functionalism' 
 is used to designate a variety 
 of positions in a variety of other disciplines, including psychology, 
 sociology, economics, and architecture, this entry focuses 
 exclusively on 
 functionalism as a philosophical thesis about the nature of mental 
 states. 
 
 A criticism of functionalism and hence of comp is that if one only 
 considers his physical behavior (and possibily but not necessarily 
 his brain's behavior), 
 a person can behave in a certain way but have a different mental 
 content. 

Good point, and this is a motivation for making the existence 
of the substitution level explicit in the definition. 

To survive *for a long time* I would personally ask a correct 
simulation of the molecular levels of both the neurons and the glial 
cells in the brain. 

The UD Argument does NOT depend on the choice of the substitution 
level, as long you get a finite digital description relatively to a 
universal number/theory/machine. 

Bruno 



 
 
 
 
 Roger Clough, rclo...@verizon.net 
 10/20/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-19, 03:31:54 
 Subject: Re: I believe that comp's requirement is one of as if 
 rather than is 
 
 
 
 
 On 17 Oct 2012, at 15:28, Stephen P. King wrote: 
 
 
 On 10/17/2012 8:45 AM, Bruno Marchal wrote: 
 
 
 On 16 Oct 2012, at 15:00, Stephen P. King wrote: 
 
 
 On 10/16/2012 8:23 AM, Craig Weinberg wrote: 
 
 On Tuesday, October 16, 2012 4:02:44 AM UTC-4, stathisp wrote: 
 
 
 
 There is of course the idea that the universe is actually a 
 simulation but that is more controversial. 
 
 A tempting idea until we question what it is a simulation of? 
 
 
 We can close this by considering when is a simulation of a real 
 thing indistinguishable from the real thing! 
 
 
 What law states that computations exist ab initio, but the capacity 
 to experience and participate in a simulated world does not? 
 
 
 Good point! Why not both existing ab initio? 
 
 
 But they exist ab initio in the arithmetical truth. So with comp, 
 we can postulate only the numbers, or the computations (they are 
 ontologically equivalent); then consciousness is a semantical fixed 
 point, existing for arithmetical reasons, yet not describable in 
 direct arithmetical terms (like truth, by Tarski, or knowledge, by 
 Scott-Montague). The Theaetetical Bp & p is very appealing in that 
 setting, as it is not arithmetically definable, yet makes sense in 
 purely arithmetical terms for each p in the language of the machine 
 (arithmetic, say). 
 
 So we don't have to postulate consciousness to explain 

Re: Continuous Game of Life

2012-10-21 Thread Bruno Marchal


On 20 Oct 2012, at 19:18, Craig Weinberg wrote:




On Friday, October 19, 2012 3:29:39 AM UTC-4, Bruno Marchal wrote:

On 17 Oct 2012, at 17:04, Craig Weinberg wrote:




On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal  
wrote:


On 16 Oct 2012, at 18:56, Craig Weinberg wrote:


Two men and two women live together. The woman has a child. 2+2=5


You mean two men + two women + a baby = five persons.

You need the arithmetical 2+2=4, and 4+1 = 5, in your argument.

Bruno


I only see that one person plus another person can eventually equal  
three or more people.


With the operation of sexual reproduction, not by the operation of  
addition.


Only if you consider the 2+2=5 to be a complex special case and  
2+2=4 to be a simple general rule.


2+2 = 5 is not a special case of 2+2=4.



It could just as easily be flipped.


Errors are possible for complex subjects.



I can say 2+2=4 by the operation of reflexive neurology, and 2+2=5  
is an operation of multiplication. It depends on what level of  
description you privilege by over-signifying and the consequence  
that has on the other levels which are under-signified. To me, the  
Bruno view is near-sighted when it comes to physics (only sees  
numbers, substance is disqualified)


It means that you think that there is a flaw in UDA, as the 
non-materiality of physics is a consequence of the comp hypothesis. There  
is no choice in the matter (pun included).




and far-sighted when it comes to numbers (does not question the  
autonomy of numbers).


Because computer science explains in detail how numbers can be  
autonomous, or less simplified: how arithmetical realization can  
generate the beliefs in bodies, relative autonomy, etc. You seem to  
want to ignore the computer science behind the comp hypothesis.





What is it that can tell one number from another?


It is not simple to prove, but the laws of addition and multiplication  
are enough. I am not sanguine on numbers; I can take Fortran programs  
instead, with the same explanation for the origin of the  
consciousness/realities couplings.






What knows that + is different from * and how?



Because we know the definitions, and practice first-order logical  
language. Everything I say is a theorem in the theory:


x + 0 = x
x + s(y) = s(x + y)

x * 0 = 0
x * s(y) = x * y + x
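As an aside, these four laws really do fix + and * as distinct operations. A minimal sketch (my own illustration, not from the post) transcribing the two pairs of equations above, with numerals built as iterated successors of zero:

```python
# Numerals as iterated successors of zero: () is 0, ((),) is s(0), etc.
# An illustrative toy, directly transcribing the four laws above.
ZERO = ()

def s(x):
    return (x,)  # successor

def add(x, y):
    # x + 0 = x ;  x + s(y) = s(x + y)
    return x if y == ZERO else s(add(x, y[0]))

def mul(x, y):
    # x * 0 = 0 ;  x * s(y) = x*y + x
    return ZERO if y == ZERO else add(mul(x, y[0]), x)

def num(n):
    # build the numeral for a non-negative Python int
    return ZERO if n == 0 else s(num(n - 1))

def val(x):
    # read a numeral back as a Python int
    return 0 if x == ZERO else 1 + val(x[0])

assert val(add(num(2), num(2))) == 4  # 2+2=4 falls out of the laws
assert val(mul(num(2), num(3))) == 6  # and * is a genuinely different operation
assert add(num(2), num(3)) != mul(num(2), num(3))
```

Nothing outside the four defining equations is needed to tell + from *: the two recursions simply unfold differently.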






Why doesn't arithmetic truth need a meta-arithmetic machine to allow  
it to function (to generate the ontology of 'function' in the first  
place)?


It does not. That's the whole amazing point of theoretical computer  
science. The meta-arithmetic is already a consequence of the four laws  
above.


Bruno



It's all sense. It has to be sense.





It depends when you start counting and how long it takes you to  
finish.


It depends on what we are talking about. Persons with sex are not  
numbers with addition.


You are just changing definition, not invalidating a proof (the  
proof that 2+2=4, in arithmetic).


I'm not trying to invalidate the proof within one context of sense,  
I'm pointing out that it isn't that simple. There are other contexts  
of sense which reduce differently.


Craig



Bruno





Craig




http://iridia.ulb.ac.be/~marchal/




--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/QjkYW9tKq6EJ 
.

To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything- 
li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en 
.


http://iridia.ulb.ac.be/~marchal/






http://iridia.ulb.ac.be/~marchal/






Re: Continuous Game of Life

2012-10-21 Thread Bruno Marchal


On 20 Oct 2012, at 19:29, John Clark wrote:


On Sat, Oct 20, 2012  Bruno Marchal marc...@ulb.ac.be wrote:

  I have no idea what that means, not a clue

 Probably for the same reason that you stop at step 3 in the UD  
Argument.


Probably. I remember I stopped reading after your proof of the  
existence of a new type of indeterminacy never seen before because  
the proof was in error, so there was no point in reading about  
things built on top of that; but I don't remember if that was step 3  
or not.


From your error you have been obliged to say that in the WM  
duplication you will live both at W and at M, yet you agree that  
both copies will feel they live in only one place, so the error you have  
seen was due to a confusion between first person and third person. We  
were many to tell you this, and it seems you are stuck in that  
confusion.
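The 1p/3p distinction at issue can be made concrete with a toy simulation (my own illustration, not part of the argument itself): the third-person description of iterated W/M duplication is fully deterministic, yet every individual diary records exactly one city per duplication.

```python
# Toy model of iterated W/M self-duplication. Third-person view: both
# continuations always exist. First-person view: each diary contains a
# single city per step, so no copy ever experiences "being in both".

def duplicate(diary):
    return [diary + ["W"], diary + ["M"]]

def iterate(n_duplications):
    diaries = [[]]
    for _ in range(n_duplications):
        diaries = [d for diary in diaries for d in duplicate(diary)]
    return diaries

diaries = iterate(3)
assert len(diaries) == 8                    # 3p: all 2**3 histories exist
assert all(len(d) == 3 for d in diaries)    # 1p: one outcome per duplication
assert ["W", "M", "W"] in diaries           # any particular diary looks "random"
```

The deterministic 3p process and the unpredictable 1p diary coexist without contradiction, which is the point of the duplication thought experiment.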


By the way, it is irrational to stop in the middle of a proof.  
Obviously, reading the sequel can help you to see the confusion you  
are making.






You assume a physical reality,

I assume that if physical reality doesn't exist then either the  
words physical or reality or exists are meaningless, and I  
don't think any of those words are.


By assuming a physical reality at the start, you make it into a  
primitive ontology. But the physical reality can emerge or appear  
without a physical reality at the start, like in the numbers' dreams.






 and you assume that our consciousness is some phenomenon related  
exclusively to some construct (brain, bodies)


If you change your conscious state then your brain changes, and if I  
make a change in your brain then your conscious state changes too,  
so I'd say that it's a good assumption that consciousness is  
interlinked with a physical object, in fact it's a downright superb  
assumption.


But this is easily shown to be false when we assume comp. If your  
state appears in a faraway galaxy, what happens far away might  
change the outcome of an experiment you decided to do here. You  
believe in an identity thesis which can't work, unless you singularize  
both the mind and the brain matter with a special sort of infinities.






  so if it [Evolution] produced it [consciousness]

No. With comp, consciousness was there before.

Well I don't know about you but I don't think my consciousness was  
there before Evolution figured out how to make brains, I believe  
this because I can't seem to remember events that were going on  
during the Precambrian. I've always been a little hazy about what  
exactly comp meant but I had the general feeling that I sorta  
agreed with it, but apparently not.


You keep defending comp, in your dialog with Craig, but you don't  
follow its logical consequences,  
I guess by not wanting to take seriously the first person and  
third person distinction, which is the key of the UD argument.


You can attach consciousness to the owner of a brain, but the owner  
itself must attach his consciousness to all states existing in  
arithmetic (or in a physical universe if that exists) and realizing  
that brain state.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: A test for solipsism

2012-10-21 Thread Bruno Marchal


On 20 Oct 2012, at 19:47, Stephen P. King wrote:


On 10/20/2012 10:11 AM, Bruno Marchal wrote:


On 20 Oct 2012, at 12:38, Roger Clough wrote:


Hi Bruno Marchal

In that definition of a p-zombie below, it says that
a p-zombie cannot experience qualia, and qualia
are what the senses tell you.


Yes. Qualia are the subjective 1p view, sometimes brought by  
percepts, and supposed to be processed by the brain.

And yes, a zombie has no qualia, as qualia need consciousness.







The mind then transforms
what is sensed into a sensation. The sense of red
is what the body gives you, the sensation of red
is what the mind transforms that into. Our mind
also can recall past sensations of red to compare
it with and give it a name red, which a real
person can identify as eg a red traffic light
and stop. A zombie would not stop



No, a zombie will stop at the red light. By definition it behaves  
like a human, or like a conscious entity.
By definition, if you marry a zombie, you will never be aware of  
that, your whole life.




(I am not allowing
the fact that red and green lights are in different
positions).
That would be a test of zombieness.


There already exist detectors of colors and smells capable of  
finer discrimination than humans.
I have heard about a machine testing old wine better than human  
experts.


Machines evolve quickly. That is why the non-comp people are  
confronted with the idea that zombies might be logically possible  
for them.


Bruno


Hi Bruno and Roger,

What would distinguish, for an external observer, a p-zombie  
from a a person that does not see the world external to it as  
anything other than an internal panorama with which it cannot  
interact?


Nobody can distinguish a p-zombie from a human, even if that human is  
a solipsist, even a very special sort of solipsist like the one you  
describe.


Bruno




http://iridia.ulb.ac.be/~marchal/






Re: The circular logic of Dennett and other materialists

2012-10-21 Thread Bruno Marchal


On 20 Oct 2012, at 19:51, Stephen P. King wrote:


On 10/20/2012 10:33 AM, Bruno Marchal wrote:


On 20 Oct 2012, at 14:04, Roger Clough wrote:


Hi Bruno Marchal

This is also where I run into trouble with the p-zombie
definition of what a zombie is.  It has no mind
but it can still behave just as a real person would.

But that assumes, as the materialists do, that the mind
has no necessary function. Which is nonsense, at least
to a realist.

Thus Dennett claims that a real candidate person
does not need to have a mind. But that's in his
definition of what a real person is. That's circular logic.


I agree with you on this.
Dennett is always on the verge of eliminativism. That is deeply  
wrong.


Now, if you want to eliminate the zombie, and keep comp, you have to  
eventually associate the mind to the logico-arithmetical relations  
defining a computation relative to a universal number, and then a  
reasoning explains where the laws of physics come from (the  
number's dream statistics).


This leads also to the arithmetical understanding of Plotinus, and  
of all those rare people aware of both the importance of staying  
rational on those issues, *and* open minded on, if not aware of, the  
existence of consciousness and altered states of consciousness.


Bruno




 Dear Bruno,

   It seems, from this post that you do support some form of  
panprotopsychism!


? With comp, to have a mind you need a computer, or a universal number.

Bruno





http://www.youtube.com/watch?v=rieo-BDTcko

--
Onward!

Stephen






http://iridia.ulb.ac.be/~marchal/






Re: Is consciousness just an emergent property of overly complexcomputations ?

2012-10-21 Thread Bruno Marchal


On 20 Oct 2012, at 22:09, meekerdb wrote:


On 10/20/2012 10:22 AM, Bruno Marchal wrote:


Dear Stephen,


On 19 Oct 2012, at 19:44, Stephen P. King wrote:


On 10/19/2012 1:37 PM, Bruno Marchal wrote:


On 17 Oct 2012, at 22:02, Alberto G. Corona wrote:




2012/10/17 Alberto G. Corona agocor...@gmail.com


2012/10/17 Bruno Marchal marc...@ulb.ac.be

On 17 Oct 2012, at 10:12, Alberto G. Corona wrote:





Life may support mathematics.



Arithmetic may support life. It is full of life and dreams.



Life is a computation devoted to making guesses about the  
future in order to self-preserve. This is only possible in a  
world where natural computers are possible: in a world where  
the physical laws have a mathematical nature. Instead of comp  
creating a mathematical-physical reality, it is the mathematical  
reality that creates the computations in which we live.


So all kinds of arbitrary universes may exist, but only (some)  
mathematical ones can harbour self-preserving computations,  
that is, observers.


OK. But harboring self-preserving computation is not enough; it  
must do so in a way that wins the first-person measure on all  
computations going through our state. That's nice, as this  
explains that your idea of evolution needs to be extended up to  
the origin of the physical laws.



I don´t think so. The difference between computation as an  
ordinary process of matter and the idea of computation as the  
ultimate essence of reality is that the first restricts not only  
the mathematical laws, but also forces a mathematicity of reality,  
because computation in living beings becomes a process with a  
cost that favours a low Kolmogorov complexity for the reality.  
In essence, it forces a discoverable local universe.
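The low-Kolmogorov-complexity point above can be illustrated with compressed length as a crude upper-bound proxy for Kolmogorov complexity; the data and names here are my own illustration, not from the post:

```python
import os
import zlib

# Compressed length upper-bounds Kolmogorov complexity (up to a constant).
# A "lawful" sequence compresses far better than a typical random one,
# which is the sense in which a discoverable universe is cheap to describe.
lawful = bytes(i % 7 for i in range(10_000))   # generated by a short rule: "i mod 7"
random_ish = os.urandom(10_000)                # almost surely incompressible

c_lawful = len(zlib.compress(lawful, 9))
c_random = len(zlib.compress(random_ish, 9))
assert c_lawful < c_random  # the lawful sequence has a much lower complexity proxy
```

A compressor is of course far weaker than true Kolmogorov complexity (which is uncomputable), but the ordering it reveals here is robust.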


In contrast, the idea of computation as the ultimate nature of  
reality postulates computations devoid of restrictions by  
definition, so they may not restrict anything in the reality  
that we perceive. We may be Boltzmann brains, we may be a  
product not of evolution but of random computations;  
we may perceive elephants flying...


And still many of your conclusions coming from the first person  
indeterminacy may hold by considering living beings as ordinary  
material personal computers.



Yes, that's step seven. If the universe is big enough to run a  
*significant* part of the UD. But I think that the white rabbits  
disappear only in the limit of the whole UD work (UD*).



Bruno



Dear Bruno,

Tell us more about how White Rabbits can appear if there is  
any restriction of mutual logical consistency between 1p content and  
any arbitrary recursion of 1p content?





We assume comp. If a digital computer processes the activity of  
your brain in a dream state with white rabbits, it means that such a  
computation with that dream exists in infinitely many local  
incarnations in the arithmetical (tiny, Turing universal) reality.


If you do a physical experiment, the hallucination that all goes  
weird at that moment exists also, in arithmetic. The measure  
problem consists in justifying, from consistency, self-reference,  
universal numbers, their rarity,


And their very specific correlation with the physical brain states  
of sleep.


Of course. But this is taken into account in the theoretical reasoning,  
where we suppose the brain states are obtained by an (immaterial)  
machine doing the computation at the right level.


We cannot know our right level, so we are not trying to build an  
artificial brain. The measure problem comes from the fact that,  
whatever the level is, the physics has to be given by a measure on  
computations. That is enough to already derive the logic of the  
observable, and that is a step toward solving the measure problem,  
although some other possible manner might exist.


Bruno






Brent




http://iridia.ulb.ac.be/~marchal/






Re: Continuous Game of Life

2012-10-21 Thread Bruno Marchal

Hi John,

On 20 Oct 2012, at 23:16, John Mikes wrote:


Bruno,
especially in my identification as responding to relations.
Now the Self? IT certainly refers to a more sophisticated level of  
thinking, more so than the average (animalic?)  mind. - OR: we have  
no idea. What WE call 'Self-Ccness' is definitely a human attribute  
because WE identify it that way. I never talked to a cauliflower to  
clarify whether she feels like having a self? (In cauliflowerese, of  
course).


My feeling was first that all homeotherm animals have  
self-consciousness, as they have the ability to dream, easily related to  
the ability to build a representation of oneself. Then I have  
enlarged the spectrum up to some spiders and the octopi, just by  
reading a lot about them and watching videos.


But this is just a personal appreciation. For the plants, let us say I  
know nothing, although I suspect possible consciousness, related to  
different scalings.


The following theory seems to have consciousness, for a different reason  
(the main one is that it is Turing universal):


x + 0 = x
x + s(y) = s(x + y)

x * 0 = 0
x * s(y) = x * y + x

But once you add the very powerful induction axioms, which say that if  
a property F is true for zero, and is preserved by the successor  
operation, then it is true for all natural numbers, that is, the  
infinity of axioms:


(F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x),

with F(x) being any formula in the arithmetical language (and thus  
defined with 0, s, +, *),
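One instance of this schema (taking F(x) to be 0 + x = x) can be checked mechanically; a sketch in Lean 4, offered as my own illustration rather than anything from the post:

```lean
-- Induction on the successor structure of Nat, exactly in the shape
-- (F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x), with F(x) := 0 + x = x.
theorem zero_add' : ∀ n : Nat, 0 + n = n := by
  intro n
  induction n with
  | zero => rfl                         -- F(0)
  | succ k ih => rw [Nat.add_succ, ih]  -- F(k) -> F(s(k))
```

The base case and the successor step are precisely the two antecedents of the induction axiom; the theorem is the consequent.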


Then you get Löbianity, and this makes it as much conscious as you and  
me. Indeed, they get a rich theology about which they can develop  
maximal awareness, and even test it by comparing the physics  
retrievable from that theology with observation and inference on  
their most probable neighborhoods.


Löbianity is the threshold at which any new axiom added will create and  
enlarge the machine's ignorance. It is the ultimate modesty threshold.



Bruno







On Thu, Oct 18, 2012 at 10:39 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 17 Oct 2012, at 19:19, Roger Clough wrote:

Hi Bruno Marchal

IMHO all life must have some degree of consciousness
or it cannot perceive its environment.

Are you sure?

Would you say that the plants are conscious? I do think so, but I am  
not sure they have self-consciousness.


Self-consciousness accelerates the information treatment, and might  
come from the need of this for self-moving living creatures having  
some important mass.


"All life" is a very fuzzy notion.

Bruno








Roger Clough, rclo...@verizon.net
10/17/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-17, 10:13:37
Subject: Re: Continuous Game of Life




On 16 Oct 2012, at 18:37, John Clark wrote:


On Mon, Oct 15, 2012 at 2:40 PM, meekerdb  wrote:


If consciousness doesn't do anything then Evolution can't see it, so  
how and why did Evolution produce it? The fact that you have no  
answer to this means your ideas are fatally flawed.


I don't see this as a *fatal* flaw.  Evolution, as you've noted, is  
not a paradigm of efficient design.  Consciousness might just be a  
side-effect


But that's exactly what I've been saying for months, unless Darwin  
was dead wrong consciousness must be a side effect of intelligence,  
so a intelligent computer must be a conscious computer. And I don't  
think Darwin was dead wrong.






Darwin does not need to be wrong. The role of consciousness can be deeper,  
in the evolution/selection of the laws of physics from the  
coherent dreams (computations from the 1p view) in arithmetic.



Bruno




http://iridia.ulb.ac.be/~marchal/




http://iridia.ulb.ac.be/~marchal/









http://iridia.ulb.ac.be/~marchal/




Re: Re: The Peirce-Leibniz triads Ver. 2

2012-10-21 Thread Craig Weinberg


On Sunday, October 21, 2012 7:19:42 AM UTC-4, rclough wrote:

  
 CRAIG: Cool Roger, 

 It mostly makes sense to me, except I don't understand why I. is 
 associated with objects and substance when it is feeling, perception, and 
 first person quale. 

 ROGER: It is not uncommon to find such objective/subjective dyslexia in 
 the literature. 
 This stuff is hard to get a hold of.


It can be, yeah, although my model makes it really easy. Subject and object 
are poles on a continuum, with private, proprietary, solipsistic, 
trans-rational sense qualities on the East (Orienting) end and public, 
generic, nihilistic, logical realism quantities on the Western end. In the 
center region between the two poles, subjectivity and objectivity are 
clearly discernible as inner and outer body/world perception (I call this 
the mundane fold as it is like a crease which acts as a barrier). In the 
edge region, the East and West actually meet in the sort of transcendental 
oblivion of subjective union with the ultimate (nirvana, satori, 
enlightenment, etc)


 CRAIG: To me, thinking is just as much first person as feeling, and they 
 both are subjective qualia. 
 Thinking is a meta-quale of feeling (which is a meta-quale of 
 awarenessperceptionsensationsense) 

 ROGER: Actually I have yet to find a clear or useful definition of 
 thinking (how it works). 
 In fact Wittgenstein at one point said that he does not know what 
 thinking is (!).
 But I believe you have to think if you compare objects across an 
 equals sign,
 so comparison (a dyad) seems to me to be a basic type of thinking.


I think a comparison is a basic type of everything. As luck would have it, 
I just posted this definition of what a thought is yesterday:

What exactly is a thought? http://s33light.org/post/33997036879 A thought 
is a private, personal, directly participatory narrative subjective 
experience which is typically expressed in a verbal-gestural sense modality 
(as words or feelings easily converted to words by an agency of proprietary 
interior voice). Thoughts can be discerned from images, awareness, and 
perception by their potential purposefulness; they serve as the seeds for 
public action. Generally public actions which are understood to be 
voluntary are assumed to be the consequence of private thoughts. Behaviors 
which are ‘thoughtless’ are deemed to be unconscious, subconscious, 
accidental, or socially impaired.

 


 CRAIG: That puts the whole subjective enchilada as Firstness and leaves 
 objects and 
 substance to Secondness. This is Self-Body distinction. What you have 
 is like 
 Lower-Self/Higher- Self distinction but with objects kind of 
 shoehorned in there. 
 Once you see matter as a public extension and self as a private 
 intention, then 
 Thirdness arises as the spatiotemporal interaction of formation and 
 information. 

 ROGER: Yes, distinction is another form of basic thought. But that 
 requires the ability to compare.


First you have to be able to distinguish things before you can compare 
them, otherwise what would you be comparing? 


 CRAIG: That outlines one way of slicing the pizza. I don't know if you can 
 see this but here: 

 https://lh3.googleusercontent.com/-Xz8OmKGPEjE/UIL6EtVeBEI/AZ4/iBhuMxBj9oU/s1600/trio_sml_entropy.jpg
  


 That gives a better idea of the syzygy effect of the big picture, how they 
 overlap in different ways and set each other off in a multi-sense way. 

 The Firstness, Secondness, and Thirdness relate respectively to the 
 respective trios: 

 I. Sense, Motive 
 II. Matter, Energy, 
 III. Space, Time 
  
 ROGER: I could see it, but couldn't see how to interpret it, but's thats 
 OK.
 The categories, like Hegel's dialectic, seem to be a basic take on 
 existence,
 So no doubt there are many approaches to defining them, yours 
 included. 
  
 CRAIG: to get to morality, you have to look at the black and white: 

 IV. Signal (escalating significance), Entropy aka Ent ntr rop opy 
 (attenuating significance...
 fragmentation and redundancy obstructs discernment capacities...
 information entropy generates thermodynamic entropy through sense 
 participation) 

 I did a post on this today, but it's pretty intense: 
 http://s33light.org/post/33951454539 

  
 ROGER: I welcome your thoughts on this. But as for myself, I try to keep 
 things as simple as possible.
 The truth is that actually I had a senior moment when I wrote 
 morality.
 I should have recalled a better term, Ethics. That has to do with 
 law and doing, both typical of III.


In my view morality and ethics are manifestations of IV. They are distinct from 
law because they are not a scripted assumption of compliance; they are an 
internalized sensitivity to social considerations which drives law from 
above, rather than a consequence of the existence of a-signifying 
behavioral constraints. This is actually pretty important as it reveals why 
COMP 

Re: Code length = probability distribution

2012-10-21 Thread Stephen P. King

On 10/21/2012 3:48 AM, Russell Standish wrote:

On Sat, Oct 20, 2012 at 07:07:14PM -0400, Stephen P. King wrote:

On 10/20/2012 5:45 PM, Russell Standish wrote:

A UD generates and executes all programs, many of which are
equivalent. So some programs are represented more than others. The
COMP measure is a function over all programs that captures this
variation in program representation.

Why should this be unique, independent of UD, or the universal Turing
machine it runs on? Because the UD executes every other UD, as well as
itself, the measure will be a limit over contributions from all UDs.

Hi Russell,

 I worry a bit about the use of the word all in your remark.
All is too big, usually, to have a single constructable measure!
Why not consider some large enough but finite collections of
programs, such as what would be captured by the idea of an
equivalence class of programs that satisfy some arbitrary parameters
(such as solving a finite NP-hard problem) given some large but
finite quantity of resources?
 Of course this goes against the grain of Bruno's theology, but
maybe that is what it required to solve the measure problem. :-) I
find myself being won over by the finitists, such as Norman J.
Wildberger!

This may well turn out to be the case. Also Juergen Schmidhuber has
investigated this under the rubric of the speed prior.
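A toy sketch of the dovetailing-plus-measure idea under discussion (my own illustration; it glosses over prefix-freeness, the choice of universal machine, and actual program semantics): enumerate all bitstring "programs", dovetail their execution so every program gets unboundedly many steps, and weight each by 2^-length.

```python
from itertools import product

def programs(max_len):
    """Enumerate all bitstrings up to max_len, shortest first."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def dovetail(progs, stages):
    """Stage k gives one more step to each of the first k programs,
    so every program eventually receives unboundedly many steps."""
    schedule = []
    for k in range(1, stages + 1):
        for p in progs[:k]:
            schedule.append((p, k))  # (program, stage at which it got a step)
    return schedule

progs = list(programs(3))
weights = {p: 2.0 ** -len(p) for p in progs}  # Solomonoff-style length prior

assert progs[0] == "0" and weights["0"] == 0.5   # shorter programs dominate
assert weights["000"] == 0.125
assert len(dovetail(progs, 4)) == 1 + 2 + 3 + 4  # triangular growth of steps
```

The speed prior mentioned above would further discount programs in proportion to the compute they consume, which is one concrete way of taming the "all programs" worry raised in the thread.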

I should have a chat with Norm about that sometime. Maybe if I see him
at a Christmas party. I didn't realise he was a finitist. I knew he
has an interesting take on how trigonometry should be done.

Cheers


Hi Russell,

I will look at Juergen's stuff again. ;-)

--
Onward!

Stephen





Re: Continuous Game of Life

2012-10-21 Thread Craig Weinberg


On Sunday, October 21, 2012 4:06:16 AM UTC-4, stathisp wrote:

 On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg 
whats...@gmail.com 
 wrote: 

  The atoms in my brain don't have to know how to read Chinese. They only 
  need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex 
  behaviour which is reading Chinese comes from the interaction of 
 billions of 
  these atoms doing their simple thing. 
  
  
  I don't think that is true. The other way around makes just as much 
 sense of 
  not more: Reading Chinese is a simple behavior which drives the behavior 
 of 
  billions of atoms to do a complex interaction. To me, it has to be both 
  bottom-up and top-down. It seems completely arbitrary prejudice to 
 presume 
  one over the other just because we think that we understand the 
 bottom-up so 
  well. 
  
  Once you can see how it is the case that it must be both bottom-up and 
  top-down at the same time, the next step is to see that there is no 
  possibility for it to be a cause-effect relationship, but rather a dual 
  aspect ontological relation. Nothing is translating the functions of 
 neurons 
  into a Cartesian theater of experience - there is nowhere to put it in 
 the 
  tissue of the brain and there is no evidence of a translation from 
 neural 
  protocols to sensorimotive protocols - they are clearly the same thing. 

 If there is a top-down effect of the mind on the atoms then we 
 would expect some scientific evidence of this. 


These words are a scientific evidence of this. The atoms of my brain are 
being manipulated from the top down. I am directly projecting what I want 
to say through my mind in such a way that the atoms of my brain facilitate 
changes in the tissues of my body. Fingers move. Keys click. 

 

 Evidence would 
 constitute, for example, neurons firing when measurements of 
 transmembrane potentials, ion concentrations etc. suggest that they 
 should not. 


Do not neurons fire when I decide to type? 

What you are expecting would be nothing but another homunculus. If there 
was some special sauce oozing out of your neurons which looked like...what? 
pictures of me moving my fingers? How would that explain how I am inside 
those pictures. The problem is that you are committed to the realism of 
cells and neurons over thoughts and feelings - even when we understand that 
our idea of neurons are themselves only thoughts and feelings. This isn't a 
minor glitch, it is The Grand Canyon.

What has to be done is to realize that thoughts and feelings cannot be made 
out of forms and functions, but rather forms and functions are what 
thoughts and feelings look like from an exterior, impersonal perspective. 
The thoughts and feelings are the full-spectrum phenomenon, the forms and 
functions a narrow band of that spectrum. The narrowness of that band is 
what maximizes the universality of it. Physics is looking at a slice of 
experience across all phenomena, effectively amputating all of the meaning 
and perceptual inertia which has accumulated orthogonally to that slice. 
This is the looong way around when it comes to consciousness as 
consciousness is all about the longitudinal history of experience, not the 
spatial-exterior mechanics of the moment.
 

 You claim that such anomalous behaviour of neurons and 
 other cells due to consciousness is widespread, yet it has never been 
 experimentally observed. Why? 


Nobody except you and John Clark are suggesting any anomalous behavior. 
This is your blind spot. I don't know if you can see beyond. I am not 
optimistic. If there were any anomalous behavior of neurons, they would 
STILL require another meta-level of anomalous behaviors to explain them. 
Whatever level of description you choose for human consciousness - the 
brain, the body, the extended body, CNS, neurons, molecules, atoms, 
quanta... it DOESN'T MATTER AT ALL to the hard problem. There is still NO 
WAY for us to be inside of those descriptions, and even if there were, 
there is no conceivable purpose for 'our' being there in the first place.  
This isn't a cause for despair or giving up, it is a triumph of insight. It 
is to see that the world is round if you are far away from it, but flat if 
you are on the surface. You keep trying to say that if the world were round 
you would see anomalous dips and valleys where the Earth begins to curve. 
You are not getting it. Reality is exactly what it seems to be, and it is 
many other things as well. Just because our understanding brings us 
sophisticated views of what we are from the outside in does not in any way 
validate the supremacy of the realism which we rely on from the inside out 
to even make sense of science.
 


  If the atoms in my brain were put into a Chinese-reading configuration, 
  either through a lot of work learning the language or through direct 
  manipulation, then I would be able to understand Chinese. 
  
  
  It's understandable to assume that, but no I don't 

Re: Continuous Game of Life

2012-10-21 Thread Stephen P. King

On 10/21/2012 4:05 AM, Stathis Papaioannou wrote:

On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg whatsons...@gmail.com wrote:


The atoms in my brain don't have to know how to read Chinese. They only
need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex
behaviour which is reading Chinese comes from the interaction of billions of
these atoms doing their simple thing.


I don't think that is true. The other way around makes just as much sense if
not more: Reading Chinese is a simple behavior which drives the behavior of
billions of atoms to do a complex interaction. To me, it has to be both
bottom-up and top-down. It seems completely arbitrary prejudice to presume
one over the other just because we think that we understand the bottom-up so
well.

Once you can see how it is the case that it must be both bottom-up and
top-down at the same time, the next step is to see that there is no
possibility for it to be a cause-effect relationship, but rather a dual
aspect ontological relation. Nothing is translating the functions of neurons
into a Cartesian theater of experience - there is nowhere to put it in the
tissue of the brain and there is no evidence of a translation from neural
protocols to sensorimotive protocols - they are clearly the same thing.

If there is a top-down effect of the mind on the atoms then we
would expect some scientific evidence of this. Evidence would
constitute, for example, neurons firing when measurements of
transmembrane potentials, ion concentrations etc. suggest that they
should not. You claim that such anomalous behaviour of neurons and
other cells due to consciousness is widespread, yet it has never been
experimentally observed. Why?


Hi Stathis,

How would you set up the experiment? How do you control for an 
effect that may well be ubiquitous? Did you somehow miss the point that 
consciousness can only be observed in 1p? Why are you so insistent on a 
3p of it?





If the atoms in my brain were put into a Chinese-reading configuration,
either through a lot of work learning the language or through direct
manipulation, then I would be able to understand Chinese.


It's understandable to assume that, but no I don't think it's like that. You
can't transplant a language into a brain instantaneously because there is no
personal history of association. Your understanding of language is not a
lookup table in space, it is made out of you. It's like if you walked around
with Google translator in your brain. You could enter words and phrases and
turn them into you language, but you would never know the language first
hand. The knowledge would be impersonal - accessible, but not woven into
your proprietary sense.

I don't mean putting an extra module into the brain, I mean putting
the brain directly into the same configuration it is put into by
learning the language in the normal way.


How might we do that? Alter 1 neuron and you might not have the 
same mind.





I'm sorry, but this whole passage is a non sequitur as far as the fading
qualia thought experiment goes. You have to explain what you think would
happen if part of your brain were replaced with a functional equivalent.


There is no functional equivalent. That's what I am saying. Functional
equivalence when it comes to a person is a non-sequitur. Not only is every
person unique, they are an expression of uniqueness itself. They define
uniqueness in a never-before-experienced way. This is a completely new way
of understanding consciousness and signal. Not as mechanism, but as
animism-mechanism.



A functional equivalent would stimulate the remaining neurons the same as
the part that is replaced.


No such thing. Does any imitation function identically to an original?

In a thought experiment we can say that the imitation stimulates the
surrounding neurons in the same way as the original. We can even say
that it does this miraculously. Would such a device *necessarily*
replicate the consciousness along with the neural impulses, or could
the two be separated?


Is the brain strictly a classical system?




The original paper says this is a computer chip but this is not necessary
to make the point: we could just say that it is any device, not being the
normal biological neurons. If consciousness is substrate-dependent (as you
claim) then the device could do its job of stimulating the neurons normally
while lacking or differing in consciousness. Since it stimulates the neurons
normally you would behave normally. If you didn't then it would be a
miracle, since your muscles would have to contract normally. Do you at least
see this point, or do you think that your muscles would do something
different?


I see the point completely. That's the problem is that you keep trying to
explain to me what is obvious, while I am trying to explain to you something
much more subtle and sophisticated. I can replace neurons which control my
muscles because muscles are among the most distant and replaceable parts 

Re: a paper by Karl Svozil

2012-10-21 Thread Bruno Marchal

Hi Stephen,

Pleasing reading indeed. A bit old. You should easily find what is  
missing.


Answer: the mind-body problem. It is still a form of aristotelian  
physicalism, even if it has the correct natural numbers ontology.  
There is still an implicit use of the aristotelian identity thesis  
between a mind and a body.


Comp is finitist for the ontology, as there is only 0, s(0), s(s(0)),  
etc. (or K, S, KK, SK, ... ).


Then comp is infinitist, both for the subject, and for the math needed  
to solve the mind body problem in that setting, as we have to take  
into account non-enumerable sets of histories and random oracles, etc.


Now there might be a relation between the Conway Moore automaton and the  
Z logics, but I have not yet found one, despite some formal  
relationship/duality between them (please don't jump on the duality  
here, as it is another one than the one by Pratt).


It is nice that Svozil cites people like Descartes, Rossler,  
Boscovitch and Finkelstein, and also Galouye, but he was somehow  
closer to comp when he also referred to Everett, like in his Singapore  
book.


I have often (tried to) explain on this list that digital physics is  
self-contradictory, as it entails comp, but comp, if you keep the 1- 
indeterminacy in mind, entails ~digital physics, and ~digital  
theology, a priori.


The conscious person is still under the rug, I would say. It is still  
pre-first person indeterminacy, if you want, but it is also pre-the  
ASSA approaches, or the general everything approach.


Bruno


On 21 Oct 2012, at 05:15, Stephen P. King wrote:


Hi Folks,

For your amusement, delight and (hopefully) comment, I present a  
paper:


http://arxiv.org/abs/physics/0305048

Computational universes

Karl Svozil
(Submitted on 12 May 2003 (v1), last revised 14 Apr 2005 (this  
version, v2))
Suspicions that the world might be some sort of a machine or  
algorithm existing ``in the mind'' of some symbolic number cruncher  
have lingered from  antiquity. Although popular at times, the  
most radical forms of this idea never reached mainstream. Modern  
developments in physics and computer science have lent support to  
the thesis, but empirical evidence is needed before it can begin to  
replace our contemporary world view.


--
Onward!

Stephen

--
You received this message because you are subscribed to the Google  
Groups Everything List group.

To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com 
.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en 
.


http://iridia.ulb.ac.be/~marchal/






Re: Continuous Game of Life

2012-10-21 Thread Jason Resch
On Sun, Oct 21, 2012 at 8:56 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 Hi John,

 On 20 Oct 2012, at 23:16, John Mikes wrote:

 Bruno,
 especially in my identification as responding to relations.
 Now the Self? IT certainly refers to a more sophisticated level of
 thinking, more so than the average (animalic?)  mind. - OR: we have no
 idea. What WE call 'Self-Ccness' is definitely a human attribute because WE
 identify it that way. I never talked to a cauliflower to clarify whether
 she feels like having a self? (In cauliflowerese, of course).


 My feeling was first that all homeotherm animals have self-consciousness,
 as they have the ability to dream, easily related to the ability to build a
 representation of oneself. Then I have enlarged the spectrum up to some
 spiders and the octopi, just by reading a lot about them and watching videos.

 But this is just a personal appreciation. For plants, let us say I know
 nothing, although I suspect possible consciousness, related to different
 scalings.

 The following theory seems to have consciousness, for different reasons
 (the main one is that it is Turing Universal):

 x + 0 = x
 x + s(y) = s(x + y)

  x*0 = 0
  x*s(y) = x*y + x

 But once you add the very powerful induction axioms, which say that if a
 property F is true for zero, and preserved by the successor operation, then
 it is true for all natural numbers. That is the infinity of axioms:

 (F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x),

 with F(x) being any formula in the arithmetical language (and thus defined
 with 0, s, +, *),

 Then you get Löbianity, and this makes it as much conscious as you and me.
 Indeed, they get a rich theology about which they can develop maximal
 awareness, and even test it by comparing the physics retrievable by that
 theology, and the observation and inference on their most probable
 neighborhoods.

 Löbianity is the threshold at which any new axiom added will create and
 enlarge the machine's ignorance. It is the ultimate modesty threshold.



Bruno,

Might there be still other axioms (which we are not aware of, or at least
do not use) that could lead to even higher states of consciousness than we
presently have?

Also, it isn't quite clear to me how something needs to be added to Turing
universality to expand the capabilities of consciousness, if all
consciousness is the result of computation.

Thanks,

Jason




Re: Continuous Game of Life

2012-10-21 Thread Jason Resch
On Sun, Oct 21, 2012 at 8:17 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 20 Oct 2012, at 19:29, John Clark wrote:


 Well I don't know about you but I don't think my consciousness was there
 before Evolution figured out how to make brains, I believe this because I
 can't seem to remember events that were going on during the Precambrian.
 I've always been a little hazy about what exactly comp meant but I had
 the general feeling that I sorta agreed with it, but apparently not.


 You keep defending comp, in your dialog with Craig, but you don't follow
 its logical consequences,
 I guess, this is by not wanting to take seriously the first person and
 third person distinction, which is the key of the UD argument.

 You can attach consciousness to the owner of a brain, but the owner itself
 must attach his consciousness to all states existing in arithmetic (or in a
 physical universe if that exists) and realizing that brain state.


John,

I would also suggest that you read this link, it shows how an infinitely
large cosmos leads directly to quantum mechanics due to the observer's
inability to self-locate.  For someone who believes in both mechanism and
platonism, it is the exact scenario platonic programs should find
themselves in:

http://lesswrong.com/lw/3pg/aguirre_tegmark_layzer_cosmological/
http://arxiv.org/abs/1008.1066

Jason




Re: Continuous Game of Life

2012-10-21 Thread John Clark
On Sun, Oct 21, 2012  Bruno Marchal marc...@ulb.ac.be wrote:


   I stopped reading after your proof of the existence of a new type of
 indeterminacy never seen before because the proof was in error, so there
 was no point in reading about things built on top of that


  From your error you have been obliged to say that in the WM
 duplication, you will live both at W and at M


Yes.

yet you agree that both copies will feel they live in only one place


Yes.

 so the error you have seen was due to a confusion between first person
 and third person.


Somebody is certainly confused but it's not me. The fact is that if we are
identical then my first person experience of looking at you is identical to
your first person experience of looking at me, and both our actions are
identical for a third person looking at both of us. As long as we're
identical it's meaningless to talk about 2 conscious beings regardless of
how many bodies or brains have been duplicated.

Your confusion stems from saying you have been duplicated but then not
thinking about what that really means, you haven't realized that a noun
(like a brain) has been duplicated but an adjective (like Bruno Marchal) has
not been as long as they are identical; you are treating adjectives as if
they were nouns and that's bound to cause confusion. You are also confused
by the fact that if 2 identical things change in nonidentical ways, such as
by forming different memories, then they are no longer identical. And
finally you are confused by the fact that although they are not each other
any more after those changes both still have an equal right to call
themselves Bruno Marchal. After reading these multiple confusions in one
step of your proof I saw no point in reading more, and I still don't.

 By the way, it is irrational to stop in the middle of a proof.


If one of the steps in a proof contains a blunder then it would be
irrational to keep reading it.

 By assuming a physical reality at the start


That seems like a pretty damn good place to make an assumption.

  But the physical reality can emerge or appear without a physical reality
 at the start


Maybe maybe not, but even if you're right that wouldn't make it any less
real; and maybe physical reality didn't even need to emerge because there
was no start.


  If you change your conscious state then your brain changes, and if I
 make a change in your brain then your conscious state changes too, so I'd
 say that it's a good assumption that consciousness is interlinked with a
 physical object, in fact it's a downright superb assumption.


   But this is easily shown to be false when we assume comp.


It's not false and I don't need to assume it and I haven't theorized it
from armchair philosophy either, I can show it's true experimentally. And
when theory and experiment come into conflict it is the theory that must
submit not the experiment. If I insert drugs into your bloodstream it will
change the chemistry of your brain, and when that happens your conscious
state will also change. Depending on the drug I can make you happy-sad,
friendly-angry, frightened-calm, alert-sleepy, dead-alive, you name it.


   If your state appears in a far away galaxies [...]


Then he will be me and he will remain me until differences between that far
away galaxy and this one cause us to change in some way, such as by forming
different memories; after that he will no longer be me, although we will
still both be John K Clark because John K Clark has been duplicated, the
machine duplicated the body of him and the environmental differences caused
his consciousness to diverge. As I've said before this is an odd situation
but in no way paradoxical.

 You keep defending comp, in your dialog with Craig,


I keep defending my ideas, comp is your homemade term not mine, I have no
use for it.

 You can attach consciousness to the owner of a brain,


Yes, consciousness is what the brain does.

  but the owner itself must attach his consciousness to all states
 existing in arithmetic


Then I must remember events that happened in the Precambrian because
arithmetic existed even back then, but I don't, I don't remember existing
then at all. Now that is a paradox! Therefore one of the assumptions must
be wrong, namely that the owner of a brain must attach his consciousness
to all states existing in arithmetic.

  John K Clark




Re: Continuous Game of Life

2012-10-21 Thread Quentin Anciaux
2012/10/21 John Clark johnkcl...@gmail.com

 On Sun, Oct 21, 2012  Bruno Marchal marc...@ulb.ac.be wrote:


   I stopped reading after your proof of the existence of a new type of
 indeterminacy never seen before because the proof was in error, so there
 was no point in reading about things built on top of that


  From your error you have been obliged to say that in the WM
 duplication, you will live both at W and at M


 Yes.

 yet you agree that both copies will feel they live in only one place


 Yes.

  so the error you have seen was due to a confusion between first person
 and third person.


 Somebody is certainly confused but it's not me. The fact is that if we are
 identical then my first person experience of looking at you is identical to
 your first person experience of looking at me, and both our actions are
 identical for a third person looking at both of us. As long as we're
 identical it's meaningless to talk about 2 conscious beings regardless of
 how many bodies or brains have been duplicated.

 Your confusion stems from saying you have been duplicated but then not
 thinking about what that really means, you haven't realized that a noun
 (like a brain) has been duplicated but an adjective (like Bruno Marchal) has
 not been as long as they are identical; you are treating adjectives as if
 they were nouns and that's bound to cause confusion. You are also confused
 by the fact that if 2 identical things change in nonidentical ways, such as
 by forming different memories, then they are no longer identical. And
 finally you are confused by the fact that although they are not each other
 any more after those changes both still have an equal right to call
 themselves Bruno Marchal. After reading these multiple confusions in one
 step of your proof I saw no point in reading more, and I still don't.

  By the way, it is irrational to stop in the middle of a proof.


 If one of the steps in a proof contains a blunder then it would be
 irrational to keep reading it.

  By assuming a physical reality at the start


 That seems like a pretty damn good place to make an assumption.

   But the physical reality can emerge or appear without a physical
 reality at the start


 Maybe maybe not, but even if you're right that wouldn't make it any less
 real; and maybe physical reality didn't even need to emerge because there
 was no start.


  If you change your conscious state then your brain changes, and if I
 make a change in your brain then your conscious state changes too, so I'd
 say that it's a good assumption that consciousness is interlinked with a
 physical object, in fact it's a downright superb assumption.


   But this is easily shown to be false when we assume comp.


 It's not false and I don't need to assume it and I haven't theorized it
 from armchair philosophy either, I can show it's true experimentally. And
 when theory and experiment come into conflict it is the theory that must
 submit not the experiment. If I insert drugs into your bloodstream it will
 change the chemistry of your brain, and when that happens your conscious
 state will also change. Depending on the drug I can make you happy-sad,
 friendly-angry, frightened-calm, alert-sleepy, dead-alive, you name it.


   If your state appears in a far away galaxies [...]


 Then he will be me and he will remain me until differences between that
 far away galaxy and this one cause us to change in some way, such as by
 forming different memories; after that he will no longer be me, although we
 will still both be John K Clark because John K Clark has been duplicated,
 the machine duplicated the body of him and the environmental differences
 caused his consciousness to diverge. As I've said before this is an odd
 situation but in no way paradoxical.

  You keep defending comp, in your dialog with Craig,


 I keep defending my ideas, comp is your homemade term not mine, I have
 no use for it.

  You can attach consciousness to the owner of a brain,


 Yes, consciousness is what the brain does.

   but the owner itself must attach his consciousness to all states
 existing in arithmetic


 Then I must remember events that happened in the Precambrian because
 arithmetic existed even back then, but I don't, I don't remember existing
 then at all. Now that is a paradox! Therefore one of the assumptions must
 be wrong,


Therefore that shows that you do your best to turn the meaning of
everything you read to be able to marvel at yourself... but well, that only
fools you.

Quentin


 namely that the owner of a brain must attach his consciousness to all
 states existing in arithmetic.

   John K Clark


Re: Re: Solipsism = 1p

2012-10-21 Thread Roger Clough


On 20 Oct 2012, at 13:55, Roger Clough wrote: 

 Hi Bruno Marchal 
 
 
 I think if you converse with a real person, he has to 
 have a body or at least vocal cords or the ability to write. 

BRUNO:  Not necessarily. Its brain can be in a vat, and then I talk to him by  
giving him a virtual body in a virtual environment. 

I can also, in principle, talk with only its brain, by sending the  
message through the peripheral hearing system, or through the cerebral  
stem, and decoding the nervous path acting on the motor vocal cords. 

ROGER: I forget what my gripe was.  This sounds OK.

 
 As to conversing (interacting) with a computer, not sure, but  
 doubtful: 
 for example how could it taste a glass of wine to tell good wine 
 from bad ? 

BRUNO: I just answered this. Machines become better than humans at smelling  
and tasting, but are plausibly far from dog and cat competence. 

ROGER:  OK, but computers can't experience anything,
it would be simulated experience.  Not arbitrarily available.


 Same is true of a candidate possible zombie person. 

BRUNO:  Keep in mind that zombie, here, is a technical term. By definition it  
behaves like a human. No humans at all can tell the difference. Only  
God knows, if you want. 

ROGER: I  claim that it is impossible for any kind of zombie
that has no mind to act like a human. IMHO  that would
be an absurdity, because without a mind you cannot know
anything.  You would run into walls, for example, and
couldn't know what to do in any event. Etc. 
You couldn't understand language.

Bruno 



 
 
 Roger Clough, rclo...@verizon.net 
 10/20/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-19, 14:09:59 
 Subject: Re: Solipsism = 1p 
 
 
 On 18 Oct 2012, at 20:05, Roger Clough wrote: 
 
 Hi Bruno Marchal 
 
 I think you can tell is 1p isn't just a shell 
 by trying to converse with it. If it can 
 converse, it's got a mind of its own. 
 
 I agree with that. It has a mind, and it has a soul (but it has no real 
 body. I can argue this follows from comp). 
 
 When you attribute 1p to another, you attribute to a shell to 
 manifest a soul or a first person, a knower. 
 
 Above a threshold of complexity, or reflexivity (Löbianity), a 
 universal number gets a bigger inside view than what he can ever see 
 outside. 
 
 Bruno 
 
 
 
 
 
 
 
 
 Roger Clough, rclo...@verizon.net 
 10/18/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Bruno Marchal 
 Receiver: everything-list 
 Time: 2012-10-17, 13:36:13 
 Subject: Re: Solipsism = 1p 
 
 
 On 17 Oct 2012, at 13:07, Roger Clough wrote: 
 
 Hi Bruno 
 
 Solipsism is a property of 1p= Firstness = subjectivity 
 
 OK. And non-solipsism is about attributing 1p to others, which needs 
 some independent 3p reality you can bet on, for not being only part 
 of yourself. Be it a God, or a physical universe, or an arithmetical 
 reality. 
 
 Bruno 
 
 
 
 
 
 Roger Clough, rclo...@verizon.net 
 10/17/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Alberto G. Corona 
 Receiver: everything-list 
 Time: 2012-10-16, 09:55:41 
 Subject: Re: I believe that comp's requirement is one of as if 
 rather thanis 
 
 
 
 
 
 2012/10/11 Bruno Marchal 
 
 
 On 10 Oct 2012, at 20:13, Alberto G. Corona wrote: 
 
 
 2012/10/10 Bruno Marchal : 
 
 
 On 09 Oct 2012, at 18:58, Alberto G. Corona wrote: 
 
 
 It may be a zombie or not. I can't know. 
 
 The same applies to other persons. It may be that the world is made 
 of 
 zombie-actors that try to cheat me, but I have a hardcoded belief in 
 the conventional thing. Maybe it is, because otherwise I would act 
 in strange and self-destructive ways. I would act as a paranoiac, after 
 that, as a psychopath (since they are not humans). That would not be 
 good for my success in society. Then I doubt that I would have any 
 surviving descendants that would develop a zombie-solipsist 
 epistemology. 
 
 However there are people that believe these strange things. Some 
 autists do not recognize humans as beings like themselves. Some psychopaths 
 too, in a different way. There is no autistic or psychopathic 
 epistemology because they are not functional enough to make societies 
 with universities and philosophers. That is the whole point of 
 evolutionary epistemology. 
 
 
 
 
 If comp leads to solipsism, I will apply for being a plumber. 
 
 I don't bet or believe in solipsism. 
 
 But you were saying that a *conscious* robot can lack a soul. See 
 the 
 quote just below. 
 
 That is what I don't understand. 
 
 Bruno 
 
 
 
 I think that it is not comp that leads to solipsism but any 
 existential stance that only accepts what is certain and discards what 
 is only belief based on conjectures. 
 
 It can go no further than 'cogito ergo 

Re: Code length = probability distribution

2012-10-21 Thread Alberto G. Corona
This does not imply a reality created by a UD algorithm. It may be a
mathematical universe, that is, a superset of the computable universes. The
measure problem in the UD algorithm translates to the problem of the
effectiveness of the Occam razor, or the problem of the apparent simplicity
of the physical laws, or, in other words, their low Kolmogorov complexity,
which Solomonoff captures in his theory of inductive inference.
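The length weighting at issue here can be made concrete. In Solomonoff's universal prior, a program of length L bits (in a prefix-free encoding) receives weight 2^-L, so by Kraft's inequality the weights of all programs sum to at most 1 and shorter programs dominate the prediction. A toy sketch, where the bit lengths are made-up illustrative values:

```python
# Solomonoff-style weighting: each candidate program reproducing the
# observed data gets prior weight 2**-length (length in bits of a
# prefix-free code).
def universal_weights(lengths):
    return [2.0 ** -L for L in lengths]

def normalized(lengths):
    # Relative posterior over programs that all reproduce the data.
    w = universal_weights(lengths)
    total = sum(w)
    return [x / total for x in w]

# Two hypothetical programs printing the same observations:
# a 10-bit one and a 30-bit one.  The short program outweighs the
# long one by a factor of 2**20, so it carries essentially all of
# the probability mass -- the Occam razor in quantitative form.
posterior = normalized([10, 30])
```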

2012/10/21 Alberto G. Corona agocor...@gmail.com

 Ok

 I don't remember the reason why Solomonoff reduces the probability of
 programs according to their length in his theory of inductive inference. I
 read it a while ago. Solomonoff describes in his paper on inductive
 inference a clearer and more direct solution to the measure problem, but I
 thought that it was somehow ad hoc.

 I thought a while ago about the Solomonoff solution to the induction problem,
 and I reasoned as follows: living beings have to find, by evolution, at least
 partial and approximate inductive solutions in order to survive in their
 environment. This imposes a restriction on the laws of a local universe
 with life: it demands a low Kolmogorov complexity for the *macroscopic*
 laws. Otherwise these laws would not be discoverable, there would be no
 induction possible, so the living beings could not anticipate outcomes and
 they would not survive.

 Solomonoff is a living being in a local universe, so shorter programs are
 more probable and add more weight for induction.

 I´m just thinking aloud. I will look again to the solomonof inductive
 inference. I was a great moment when I read it the first time.


 2012/10/20 Russell Standish li...@hpcoders.com.au

 On Sat, Oct 20, 2012 at 09:16:54PM +0200, Alberto G. Corona  wrote:
  This is not a consequence of the Shannon optimal coding, in which the
  coding size of a symbol is inversely proportional to the logarithm of
  the frequency of the symbol?

 Not quite. Traditional Shannon entropy uses the probability of a symbol,
 whereas algorithmic complexity uses the probability of the whole
 sequence. Only if the symbols are independently distributed are the
 two the same. Usually, in most messages, the symbols are not i.d.
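
[Editorial illustration, not part of the original exchange.] The gap
between per-symbol Shannon entropy and whole-sequence (algorithmic)
complexity is easy to compute: a perfectly regular sequence can have
maximal per-symbol entropy while being generated by a very short
program. The helper name below is mine.

```python
import math
from collections import Counter

def per_symbol_entropy(seq):
    """Shannon entropy in bits per symbol, computed from single-symbol
    frequencies, i.e. as if the symbols were independently distributed."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

seq = "01" * 500
# Symbol frequencies are uniform, so the per-symbol estimate is the
# maximum of 1 bit/symbol (1000 bits for the whole message) ...
assert per_symbol_entropy(seq) == 1.0
# ... yet the whole sequence is reproduced by a program not much longer
# than the expression '"01" * 500', so the two measures diverge badly
# whenever symbols are correlated.
```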

 
  What exactly is the comp measure problem?

 A UD generates and executes all programs, many of which are
 equivalent. So some programs are represented more than others. The
 COMP measure is a function over all programs that captures this
  variation in program representation.

 Why should this be unique, independent of UD, or the universal Turing
 machine it runs on? Because the UD executes every other UD, as well as
 itself, the measure will be a limit over contributions from all UDs.
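
[Editorial sketch, mine rather than Russell's.] The dovetailing idea can
be shown in a few lines, with Python generators standing in for Turing
machines: the UD interleaves execution so that every program, including
other UDs, receives unboundedly many steps in the limit.

```python
def dovetail(programs, rounds):
    """Run each 'program' (a generator) one step per round, skipping any
    that have halted.  In the limit of infinitely many rounds, every
    program receives unboundedly many steps."""
    trace = []
    for _ in range(rounds):
        for i, prog in enumerate(programs):
            try:
                trace.append((i, next(prog)))  # one step of program i
            except StopIteration:
                pass  # a halted program is simply skipped
    return trace

def counter(start):  # a stand-in 'program' that never halts
    while True:
        yield start
        start += 1

trace = dovetail([counter(0), counter(100)], rounds=3)
# Both programs advance in lockstep:
# [(0, 0), (1, 100), (0, 1), (1, 101), (0, 2), (1, 102)]
```

Because programs that compute the same function appear many times in
such an enumeration, some computations are represented more often than
others, which is exactly the source of the measure discussed above.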

 Cheers
 --


 
 Prof Russell Standish  Phone 0425 253119 (mobile)
 Principal, High Performance Coders
 Visiting Professor of Mathematics  hpco...@hpcoders.com.au
 University of New South Wales  http://www.hpcoders.com.au

 





 --
 Alberto.




-- 
Alberto.




3p(1p) = FALSE, 3p(3p(1p)) = TRUE (?)

2012-10-21 Thread Roger Clough
SNIP
Dear Bruno, 

WHOEVER:   Tell us more about how White Rabbits can appear if there is any 
restriction of mutual logical consistency between 1p and in any arbitrary 
recursion of 1p content? 

BRUNO: We assume comp.  If a digital computer processes the activity of your 
brain in a dream state with white rabbits, 
it means that such a computation with that dream exists in infinitely many 
local incarnations in the arithmetical (tiny, Turing universal) reality. 


If you do a physical experiment, the hallucination that all goes weird at that 
moment also exists in arithmetic. The measure problem consists in justifying, 
from consistency, self-reference, universal numbers, and their rarity, why 
apparently special universal (Turing) laws prevail (keeping in mind the 
1p, the 1p-indeterminacy, the 3p relative distinctions, etc.)  

ROGER: IMHO, in either case, the dream or the hallucination, I maintain
that a computer cannot directly share your experience or even have
an experience, period. Or, to use notation, 3p(1p) is impossible.
That, I think, is the solipsism problem. 

If I have it right, 3p(1p) = FALSE

However, my report of the dream would be 3p(1p), and a computer
can, in the 3p sense, understand my 3p(1p). That is to say,

3p(3p(1p)) = TRUE.

Note that I am not at all proficient regarding logic.

Bruno 






http://iridia.ulb.ac.be/~marchal/




Re: Re: A test for solipsism

2012-10-21 Thread Roger Clough



WHOEVER:  Hi Bruno and Roger, 

What would distinguish, for an external observer, a p-zombie from a 
person that does not see the world external to it as anything other than an 
internal panorama with which it cannot interact? 


BRUNO: Nobody can distinguish a p-zombie from a human, even if that human is 
solipsist, even a very special sort of solipsist like the one you describe.  


Bruno 

ROGER: Previously I deduced that a p-zombie (or any zombie without a brain)
would be an absurdity (it would not be able, as required, to act as a real 
person) because 
any being without a brain could not know anything. It would not know
what to do in any event -- and as far as conversing with it, it could
not understand language. You'd also find it bumping into walls.






http://iridia.ulb.ac.be/~marchal/




Enumeration Without Representation

2012-10-21 Thread Craig Weinberg
I propose this simple, counter-COMP truth:

Without something to enumerate, numbers are meaningless.

Two plus two does not 'equal' anything without a fairly extensive list of a 
priori meta-arithmetic conditions. As far as I can tell, for two plus two 
to equal something, there must be:

Cause and effect
Logic
An experience of counting
A rigid reference body of equivalences
Function
Functional phase spaces which are in some sense independent of the 
reference body
A semiotic phase locking function which relates specific functions to the 
supreme ultimate reference body
Something which experiences the function as meaningful
Experience
A capacity for participation in experience
A capacity to direct and control experience, i.e. to cause some function to 
be enacted as a consequence of another
Reliable memory to switch between functions
Storage for isolating currently enacted functions from accumulations of 
sets of tacit functional results.

Lots of things.
In short, in order to have computation, you need... a computer. 

The items on this list all supervene on sense. The capacity to detect and 
project signal. Computation is a way of using signals to refer to other 
signals - figuratively. They are figures. Quantification is a way of 
bundling things with other things, but virtually, not literally. There is 
no actual bundling unless there is some *thing* doing the computing that 
some *thing* cares about.

Does a thing have to be a material object? Our imagination suggests that it 
does not, although the capacity to imagine is associated with living cells. 
We have no experience, however, with things that are neither physical 
objects, subjective experiences, nor subjective experiences of physical 
instruments interacting with physical objects. There is no experience of 
math existing ab initio.

This correlates with our cosmological investigations as well. Contrary to 
what we should expect from an inevitable multiverse of every possible 
combination of universes, our universe exhibits a distinct lack of 
unexplained chaos. It is one thing to expect that we would naturally find 
ourselves in one of the many universes which supports our existence, but it 
is another thing to extend that to the point that we find ourselves also in 
one of the universes which makes sense wherever we look, all of the time. 
If anything, the exhaustively granular orderliness of the cosmos defies the 
imagination, with each particle of sand requiring a team of trillions of 
lucky monkeys to have typed out the right string of ontological 
meta-characters, while at the same time synchronizing effortlessly with 
global, local, and regional harmonies of order. If all of this happens 
without sense, without anything making sense, then it seems infinitely 
unlikely that beings such as ourselves who require sense to navigate our 
own lives should exist, and exist in such a natural and seamless way to the 
rest of the unconscious universal mechanism.




Re: Re: Continuous Game of Life

2012-10-21 Thread Roger Clough
Hi Bruno Marchal  

1p is to know by acquaintance (only possible to humans).
I conjecture that any statement pertaining to humans containing 1p is TRUE.

3p is to know by description (works for both humans and computers).
I believe that any statement pertaining to computers containing 1p is FALSE.

Consciousness would be to know that you are conscious, or

for a real person, 1p(1p) = TRUE
and saying that he is conscious to others would be 3p(1p) = TRUE
or even (3p(1p(1p))) = TRUE


But a computer cannot experience anything (is blocked from 1p), or

for a computer, 3p (1p) = FALSE (or any statement containing 1p)
but 3p(3p) = TRUE (or any proposition not containing 1p = TRUE)  


Roger Clough, rclo...@verizon.net 
10/21/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-21, 09:56:39 
Subject: Re: Continuous Game of Life 


Hi John, 


On 20 Oct 2012, at 23:16, John Mikes wrote: 


Bruno, 
especially in my identification as responding to relations.  
Now the Self? IT certainly refers to a more sophisticated level of thinking, 
more so than the average (animalic?)  mind. - OR: we have no idea. What WE call 
'Self-Ccness' is definitely a human attribute because WE identify it that way. 
I never talked to a cauliflower to clarify whether she feels like having a 
self? (In cauliflowerese, of course).  


My feeling was first that all homeotherm animals have self-consciousness, as 
they have the ability to dream, easily related to the ability to build a 
representation of oneself. Then I have enlarged the spectrum up to some 
spiders and the octopi, just by reading a lot about them and watching videos. 


But this is just a personal appreciation. For the plants, let us say I know 
nothing, although I suspect possible consciousness, related to different 
scalings. 


The following theory seems to have consciousness, for different reasons (the 
main one is that it is Turing universal): 


x + 0 = x 
x + s(y) = s(x + y) 


x * 0 = 0 
x * s(y) = (x * y) + x
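
[Editorial gloss.] These four axioms are directly executable as recursive
definitions if s(y) is read as y + 1; the sketch below (function names
mine) computes + and * exactly as the equations dictate. The induction
scheme discussed next is, by contrast, a statement *about* all such
computations and has no comparably direct executable form.

```python
def add(x, y):
    """x + 0 = x ; x + s(y) = s(x + y)"""
    return x if y == 0 else add(x, y - 1) + 1

def mul(x, y):
    """x * 0 = 0 ; x * s(y) = (x * y) + x"""
    return 0 if y == 0 else add(mul(x, y - 1), x)

assert add(2, 3) == 5
assert mul(4, 3) == 12
```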


But once you add the very powerful induction axioms, which say that if a 
property F is true for zero, and is preserved by the successor operation, then 
it is true for all natural numbers. That is the infinite family of axioms: 


(F(0) ∧ ∀x(F(x) → F(s(x)))) → ∀x F(x), 


with F(x) being any formula in the arithmetical language (and thus defined with 
0, s, +, *),  


Then you get Löbianity, and this makes it as much conscious as you and me. 
Indeed, they get a rich theology about which they can develop maximal 
awareness, and even test it by comparing the physics retrievable from that 
theology with the observation and inference on their most probable 
neighborhoods. 


Löbianity is the threshold at which any new axiom added will create and enlarge 
the machine's ignorance. It is the ultimate modesty threshold. 




Bruno 











On Thu, Oct 18, 2012 at 10:39 AM, Bruno Marchal  wrote: 


On 17 Oct 2012, at 19:19, Roger Clough wrote: 


Hi Bruno Marchal 

IMHO all life must have some degree of consciousness 
or it cannot perceive its environment. 


Are you sure? 

Would you say that the plants are conscious? I do think so, but I am not sure 
they have self-consciousness. 

Self-consciousness accelerates information processing, and might come from 
the need of this for self-moving living creatures having some important mass. 

"All life" is a very fuzzy notion. 

Bruno 









Roger Clough, rclo...@verizon.net 
10/17/2012 
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content - 
From: Bruno Marchal 
Receiver: everything-list 
Time: 2012-10-17, 10:13:37 
Subject: Re: Continuous Game of Life 




On 16 Oct 2012, at 18:37, John Clark wrote: 


On Mon, Oct 15, 2012 at 2:40 PM, meekerdb  wrote: 



If consciousness doesn't do anything then Evolution can't see it, so how and 
why did Evolution produce it? The fact that you have no answer to this means 
your ideas are fatally flawed. 



I don't see this as a *fatal* flaw.  Evolution, as you've noted, is not a 
paradigm of efficient design.  Consciousness might just be a side-effect 


But that's exactly what I've been saying for months, unless Darwin was dead 
wrong consciousness must be a side effect of intelligence, so a intelligent 
computer must be a conscious computer. And I don't think Darwin was dead wrong. 





Darwin does not need to be wrong. Consciousness' role can be deeper, in the 
evolution/selection of the laws of physics from the coherent dreams 
(computations from the 1p view) in arithmetic. 


Bruno 




http://iridia.ulb.ac.be/~marchal/ 


Re: Re: Re: Measurability is not a condition of reality.

2012-10-21 Thread Roger Clough
Hi Alberto G. Corona, 

Yes, they are inconsistent. 


Roger Clough, rclo...@verizon.net 
10/21/2012 
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content - 
From: Alberto G. Corona 
Receiver: everything-list 
Time: 2012-10-20, 14:55:08 
Subject: Re: Measurability is not a condition of reality. 


Then the measurement-addicted people believe in a lot of things that are not 
measurable: they believe in an external reality. 
They believe in a certain Pythagorean cult of measurement, which is not 
measurable. They believe that their perception is transparent, 
and that their mind plays no role, because it conveys a completely objective 
and accurate view of reality. Therefore the 
mind and its relation with matter is not worth studying. They believe in 
things not measurable, 
like countries, especially their own (they would laugh if I said that their 
country is a bunch of atoms. 
Apparently their reductionism is selective). 


They believe in their loved ones who are dead (who do not exist according 
to their point of view), but they 
sometimes talk with them, dedicate books to them and act as if they are 
observing them. They bet, trust and 
believe in persons, despite the fact that they are not measurable. They 
believe in their leaders. They believe in 
some scientists who are liars, but they believe them without making 
measurements and experiments for themselves. 
It seems that almost all that they believe derives from a sense of authority, 
like any other person. 


And they do well in believing in these non-measurable things, because if 
they didn't believe, 
they would be paralyzed and would kill someone or kill themselves. 



2012/10/20 Roger Clough 

Hi Alberto G. Corona 

I have no problem with that; the problem I have 
is that I believe that nonphysical things (things, 
like Descartes' mind, not extended in space), 
like spirit, truly exist. But to materialists 
that's nonsense, because being unextended it 
can't be measured and so doesn't exist. 
And life is just a unique form of matter, 
so it can be created. And what is man but a 
bunch of atoms? 



Roger Clough, rclo...@verizon.net 
10/20/2012 
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content - 
From: Alberto G. Corona 
Receiver: everything-list 
Time: 2012-10-20, 08:48:39 
Subject: Re: Re: A test for solipsism 


Roger 
Different qualia are the result of different physical effects on the senses. So 
a machine does not need to have qualia to distinguish between physical effects. 
It only needs sensors that distinguish between them. 


A sensor can detect a red light and the attached computer can stop a car. With 
no problems. 


http://www.gizmag.com/mercedes-benz-smart-stop-system/13122/ 



2012/10/20 Roger Clough 

Hi Bruno Marchal 

In that definition of a p-zombie below, it says that 
a p-zombie cannot experience qualia, and qualia 
are what the senses tell you. The mind then transforms 
what is sensed into a sensation. The sense of red 
is what the body gives you, the sensation of red 
is what the mind transforms that into. Our mind 
also can recall past sensations of red to compare 
it with and give it a name, red, which a real 
person can identify as, e.g., a red traffic light 
and stop. A zombie would not stop (I am not allowing 
for the fact that red and green lights are in different 
positions). 
That would be a test of zombieness. 
Roger Clough, rclo...@verizon.net 
10/20/2012 

Forever is a long time, especially near the end. -Woody Allen 

- Receiving the following content - 
From: Bruno Marchal 
Receiver: everything-list 

Time: 2012-10-19, 03:47:51 
Subject: Re: A test for solipsism 

On 17 Oct 2012, at 19:12, Roger Clough wrote: 
 Hi Bruno Marchal 
 
 Sorry, I lost the thread on the doctor, and don't know what Craig 
 believes about the p-zombie. 
 
 http://en.wikipedia.org/wiki/Philosophical_zombie 
 
 A philosophical zombie or p-zombie in the philosophy of mind and 
 perception is a hypothetical being 
 that is indistinguishable from a normal human being except in that 
 it lacks conscious experience, qualia, or sentience.[1] When a 
 zombie is poked with a sharp object, for example, it does not feel 
 any pain though it behaves 
 exactly as if it does feel pain (it may say ouch and recoil from 
 the stimulus, or tell us that it is in intense pain). 
 
 My guess is that this is the solipsism issue, to which I would say 
 that if it has no mind, it cannot converse with you, 
 which would be a test for solipsism,-- which I just now found in 
 typing the first part of this sentence. 
Solipsism makes everyone zombie except you. 
But in some context some people might conceive that zombie exists, 
without making everyone zombie. Craig believes that computers, if they 
might behave like conscious individuals would be a zombie, but he is 
no solipsist. 
There is no test for solipsism, nor for zombieness. 

The p-zombie is a strawman argument

2012-10-21 Thread Roger Clough
Hi Stathis,

Sorry, my previous email was accidentally sent too early. 

You said that if a candidate person behaves as if he
has a mind, then he does. 

That may be OK, but if a person does NOT have a mind
(i.e., is a zombie), then my position is that in fact he 
cannot behave as one with a mind would.

For example, he could not converse with you.
He could not tell you where or when he was born,
because having no mind, he has no memory,
and cannot understand language at all.

So the p-zombie definition is an impossibility,
and it is a strawman argument, to begin with.


- Receiving the following content - 
From: Stathis Papaioannou 
Receiver: everything-list 
Time: 2012-10-21, 03:37:19 
Subject: Re: The circular logic of Dennett and other materialists 


On Sat, Oct 20, 2012 at 11:04 PM, Roger Clough wrote: 
 Hi Bruno Marchal 
 
 This is also where I run into trouble with the p-zombie 
 definition of what a zombie is. It has no mind 
 but it can still behave just as a real person would. 
 
 But that assumes, as the materialists do, that the mind 
 has no necessary function. Which is nonsense, at least 
 to a realist. 
 
 Thus Dennett claims that a real candidate person 
 does not need to have a mind. But that's in his 
 definition of what a real person is. That's circular logic. 

Not really, he claims that zombies do not exist and if an entity 
(human, computer, whatever) behaves as if it has a mind, then it does 
have a mind. 


-- 
Stathis Papaioannou 




Re: Re: Solipsism = 1p

2012-10-21 Thread Craig Weinberg


On Sunday, October 21, 2012 3:39:11 PM UTC-4, rclough wrote:



 BRUNO:  Keep in mind that zombie, here, is a technical term. By definition 
 it   
 behaves like a human. No humans at all can tell the difference. Only   
 God knows, if you want. 

 ROGER: I  claim that it is impossible for any kind of zombie 
 that has no mind to act like a human. IMHO  that would 
 be an absurdity, because without a mind you cannot know 
 anything.  You would run into walls, for example, and 
 couldn't know what to do in any event. Etc. 
 You couldn't understand language. 


Roger, I agree that your intuition is right - a philosophical zombie cannot 
exist in reality, but not for the reasons you are coming up with. Anything 
can be programmed to act like a human at some level of description. A 
scarecrow may act like a human in the eyes of a crow - well enough that it 
might be less likely to land nearby. You can make robots which won't run 
into walls or chatbots which respond to some range of vocabulary and 
sentence construction. The idea behind philosophical zombies is that we 
assume that there is nothing stopping us in theory from assembling all of 
the functions of a human being as a single machine, and that such a 
machine, it is thought, will either have some kind of human-like 
experience or else it would have to have no experience.

The "Absent Qualia, Fading Qualia" paper is about a thought experiment which 
tries to take the latter scenario seriously from the point of view of a 
person who is having their brain gradually taken over by these substitute 
sub-brain functional units. Would they see blue as being less and less blue 
as more of their brain is replaced, or would blue just suddenly disappear 
at some point? Each one seems absurd given that the sum of the remaining 
brain functions plus the sum of the replaced brain functions must, by 
definition of the thought experiment, equal no change in observed behavior.

This is my response to this thought experiment to Stathis:

*Stathis: In a thought experiment we can say that the imitation stimulates the 
surrounding neurons in the same way as the original.* 

Craig: Then the thought experiment is garbage from the start. It begs the 
question. Why not just say we can have an imitation human being that 
stimulates the surrounding human beings in the same way as the original? 
Ta-da! That makes it easy. Now all we need to do is make a human being that 
stimulates their social matrix in the same way as the original and we have 
perfect AI without messing with neurons or brains at all. Just make a whole 
person out of person stuff - like as a thought experiment suppose there is 
some stuff X which makes things that human beings think is another human 
being. Like marzipan. We can put the right pheromones in it and dress it up 
nice, and according to the thought experiment, let’s say that works. 

You aren’t allowed to deny this because then you don’t understand the 
thought experiment, see? Don’t you get it? You have to accept this flawed 
pretext to have a discussion that I will engage in now. See how it works? 
Now we can talk for six or eight months about how human marzipan is 
inevitable because it wouldn’t make sense if you replaced a city gradually 
with marzipan people that New York would gradually fade into less of a New 
York or that New York becomes suddenly absent. It’s a fallacy. The premise 
screws up the result.

Craig




a mistake -- and light at the end of the tunnel for comp

2012-10-21 Thread Roger Clough
Hi everything-list 

I said that any statement for a computer containing 1p would  be FALSE.

That is not exactly true.  A computer can deal with any 3p operator on the 
outside,
meaning it has already been converted to a descriptive (3p) form.

Suppose I tell the computer (or anybody else) that I had
a dream about rabbits.  Of course I could be lying,
but for now assume that I tell the computer the truth.

so either to others or to the computer 3p(1p) is TRUE,
although 3p(1p) is always distorted. Maybe the rabbits
were dancing and I forgot to say that. One can never 
guarantee that 3p(1p) is accurate except to oneself.

 
Roger Clough, rclo...@verizon.net 
10/21/2012  
Forever is a long time, especially near the end. -Woody Allen




Re: The p-zombie is a strawman argument

2012-10-21 Thread Stathis Papaioannou
On Mon, Oct 22, 2012 at 7:37 AM, Roger Clough rclo...@verizon.net wrote:
 Hi Stathis,

 Sorry, my previous email was accidentally sent too early.

 You said that if a candidate person behaves as if he
 has a mind, then he does.

 That may be OK, but if a person does NOT have a mind,
 ( is a zombie), then my position is that in fact he
 cannot behave as one with a mind would.

 For example, he could not converse with you.
 He could not tell you where or when he was born,
 because having no mind, he has no memory,
 and cannot understand language at all.

 So the p-zombie definition is an impossibility,
 and it is a strawman argument, to begin with.

Yes, that's Dennett's position. He has called the idea of zombies an
embarrassment to philosophy. So do you agree that if a computer could
converse with you like a human then it would have a mind?


-- 
Stathis Papaioannou




Re: AGI

2012-10-21 Thread John Mikes
Bruno: my apologies for this late, late reply; I am slow to decipher the
list-posts from the daily inundation of Roger-stuff, so I sometimes miss the
more relevant list-posts.

You wrote about the U-M:
*...an entity capable of computing all partial computable functions...*
**
I would be cautious with all since we know only SOME.
I plead ignorance of the difference between a Löbian and another type(?) of
universal machine. Is the Löbian restricted? In what sense? BTW: What is
'universal'?
I would think twice to deem something as

*... it might be intrinsically complex...*
**
*EVERYTHING* is intrinsically (too!) complex. We just take simplified
versions - adjusted to OUR mindful capabilities.

*intelligence vs competence?*
**
The 'oldies' (from yesterday back to the Greeks/Indians etc.) were
'competent' in the actual (then) inventory of the knowledge base of their
time. That gave their 'intelligence' (the way I defined it) so: no
controversy.

*Bohm* discussed with Krishnamurti before his association in London with
Hiley. The posthumous book the latter wrote in their combined(?) authorship
includes Bohm's earlier physical stances (~1952), even before his Brazilian
escape.
I do not accuse Hiley of impropriety, but he left out all the
Krishnamurtian mystique embraced by Bohm. Granted: Bohm later taught
advanced physical science in London, but as far as I know he never went back on
his interim (call it: metaphysical?) philosophy.

John M



On Wed, Oct 10, 2012 at 2:19 PM, Bruno Marchal marc...@ulb.ac.be wrote:

 John,

  On 09 Oct 2012, at 22:22, John Mikes wrote:

  Bruno,
  examples are not identification. I was referring to (your?) lack of a
  detailed description of what the universal machine consists of and how it
  functions (maybe: beyond what we know - ha ha). A comprehensive ID. Your
  lot of examples rather denies that you have one.


  A universal machine is any entity capable of computing all partial
  computable functions. There are many, and for many we can prove that they
  are universal machines. For many we can't prove that, or it might be
  intrinsically complex to do so.

  A Löbian machine is a universal machine which knows, in a weak, technically
  precise sense, that it is universal.

  Same remark as above: we can prove that some machines are Löbian, but we
  might not be able to recognize all those that are.



  And:
 'if it is enough FOR YOU to consider them, it may not be enough for me. I
 don't really know HOW conscious I am.


 Nor do I. Nor do they, when you listened to them, taking into account
 their silence.



 I like your  counter-point in competence and intelligence.
 I identified the wisdom (maybe it should read: the intelligence) of the
 oldies as not 'disturbed' by too many factual(?) known circumstances -
 maybe it is competence.


 You meant intelligence? I would agree.

  You know I prefer the Bohm who discussed with Krishnamurti to the Bohm
  (the same person, to be sure) who believed in quantum hidden variables.


  To include our inventory accumulated over the millennia as impediment
 ('blinded by').


  Above the Löbian threshold, the machine understands that, the more she
  knows, the more she is ignorant.

 Knowledge is only a lantern on a very big unknown. The more light you put
 on it, the bigger it seems.

 But we can ask question (= develop theories). And we can have experiences.


  Above the Löbian threshold, the machine understands that the more she can
  be intelligent, the more she can be stupid.

  And that competence is quite relative; it can be magnified uncomputably,
  but also (alas) unpredictably, with many simple heuristics, like:

 - tolerate errors,
 - work in union,
 - encourage changes of mind,

  etc.  (By results of Case and Smith, Blum and Blum, Gold, Putnam, etc.);
  references in the bibliography of Conscience et Mécanisme, at my URL.

 Bruno





 John M

 On Tue, Oct 9, 2012 at 11:01 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 08 Oct 2012, at 22:07, John Mikes wrote:

 Dear Richard, I think the lengthy text is Ben's article in response to
 D. Deutsch.
 Sometimes I was erring in the belief that it is YOUR text, but no.
 Thanks for copying.
 It is too long and too little organized for me to keep up with
 ramifications prima vista.
 What I extracted from it are some remarks I will try to communicate to
 Ben (a longtime e-mail friend) as well.

 I have my (agnostically derived) version of intelligence: the capability
 of reading 'inter'
 lines (words/meanings). Apart from such human distinction: to realize
 the 'essence' of relations beyond vocabulary, or 'physical science'
 definitions.
 Such content is not provided in our practical computing machines
 (although Bruno trans-leaps such barriers with his (Löb's) universal
 machine unidentified).



 Unidentified? I give a lot of examples: PA, ZF, John Mikes, me, and
 the octopus.

 In some sense they succeed well enough at the mirror test. That's enough for me
 to consider them, well, not just conscious, but as conscious as me and you.
 The 

Re: Continuous Game of Life

2012-10-21 Thread Jason Resch
On Sun, Oct 21, 2012 at 12:46 PM, John Clark johnkcl...@gmail.com wrote:

 On Sun, Oct 21, 2012  Bruno Marchal marc...@ulb.ac.be wrote:


   I stopped reading after your proof of the existence of a new type of
 indeterminacy never seen before because the proof was in error, so there
 was no point in reading about things built on top of that


  From your error you have been obliged to say that in the WM
 duplication, you will live both at W and at M


 Yes.

 yet you agree that both copies will feel they live in only one place


 Yes.

  so the error you have seen was due to a confusion between the first person
 and the third person.


 Somebody is certainly confused but it's not me. The fact is that if we are
 identical then my first person experience of looking at you is identical to
 your first person experience of looking at me, and both our actions are
 identical for a third person looking at both of us. As long as we're
 identical it's meaningless to talk about 2 conscious beings regardless of
 how many bodies or brains have been duplicated.

 Your confusion stems from saying you have been duplicated but then not
 thinking about what that really means, you haven't realized that a noun
 (like a brain) has been duplicated but an adjective (like Bruno Marchal) has
 not been as long as they are identical; you are treating adjectives as if
 they were nouns and that's bound to cause confusion. You are also confused
 by the fact that if 2 identical things change in nonidentical ways, such as
 by forming different memories, then they are no longer identical. And
 finally you are confused by the fact that although they are not each other
 anymore after those changes, both still have an equal right to call
 themselves Bruno Marchal. After reading these multiple confusions in one
 step of your proof I saw no point in reading more, and I still don't.


John,

I think you are missing something.  It is a problem that I noticed after
watching the movie The Prestige and it eventually led me to join this
list.

Unless you consider yourself to be only a single momentary atom of thought,
you probably believe there is some stream of thoughts/consciousness that
you identify with.  You further believe that these thoughts and
consciousness are produced by some activity of your brain.  Unlike Craig,
you believe that whatever horrible injury you suffered, even if every atom
in your body were separated from every other atom, in principle you could
be put back together, and if the atoms are put back just right, you will
be restored, alive and well, and conscious again.

Further, you probably believe it doesn't matter if we even re-use the same
atoms or not, since atoms of the same elements and isotopes are
functionally equivalent.  We could take apart your current atoms, then put
you back together with atoms from a different pile and your consciousness
would continue right where it left off (from before you were obliterated).
 It would be as if a simulation of your brain were running on a VM, we
paused the VM, moved it to a different physical computer and then resumed
it.  From your perspective inside, there was no interruption, yet your
physical incarnation and location has changed.

Assuming you are with me so far, an interesting question emerges: what
happens to your consciousness when duplicated?  Either an atom-for-atom
replica of yourself is created in two places or your VM image which
contains your brain emulation is copied to two different computers while
paused, and then both are resumed.  Initially, the sensory input to the two
duplicates could be the same, and in a sense they are still the same mind,
just with two instances, but then something interesting happens once
different input is fed to the two instances: they split.  You could say
they split in the same sense as when someone opens the steel box to see
whether the cat is alive or dead.  All the splitting in quantum mechanics
may be the result of our infinite instances discovering/learning different
things about our infinite environments.
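The split described above can be sketched as a minimal simulation. Everything here is a toy assumption (the `BrainVM` class and its inputs are hypothetical, not a real emulator); it only illustrates how two identical instances remain one mind under identical input and diverge at the first differing input:

```python
import copy

# Hypothetical minimal "brain emulation": the BrainVM class is a toy
# assumption, with just enough state to illustrate duplication and splitting.
class BrainVM:
    def __init__(self):
        self.memories = ["pre-duplication history"]

    def step(self, sensory_input):
        # Each instance's state evolves as a function of its input.
        self.memories.append(sensory_input)

paused = BrainVM()

# Duplication: two bit-identical instances of the same paused state.
instance_a = copy.deepcopy(paused)
instance_b = copy.deepcopy(paused)
assert instance_a.memories == instance_b.memories  # one mind, two instances

# Identical sensory input keeps the instances identical...
instance_a.step("white room")
instance_b.step("white room")
assert instance_a.memories == instance_b.memories

# ...but the first differing input splits them for good.
instance_a.step("a sign reading Washington")
instance_b.step("a sign reading Moscow")
assert instance_a.memories != instance_b.memories
```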

Jason



  By the way, it is irrational to stop in the middle of a proof.


 If one of the steps in a proof contains a blunder then it would be
 irrational to keep reading it.

  By assuming a physical reality at the start


 That seems like a pretty damn good place to make an assumption.

   But the physical reality can emerge or appear without a physical
 reality at the start


 Maybe maybe not, but even if you're right that wouldn't make it any less
 real; and maybe physical reality didn't even need to emerge because there
 was no start.


  If you change your conscious state then your brain changes, and if I
 make a change in your brain then your conscious state changes too, so I'd
 say that it's a good assumption that consciousness is interlinked with a
 physical object, in fact it's a downright superb assumption.


   But this is easily shown to be false when we assume comp.


 It's not false and I don't need to assume it and I 

Re: Continuous Game of Life

2012-10-21 Thread Stathis Papaioannou
On Mon, Oct 22, 2012 at 1:55 AM, Stephen P. King stephe...@charter.net wrote:

 If there is a top-down effect of the mind on the atoms then we
 would expect some scientific evidence of this. Evidence would
 constitute, for example, neurons firing when measurements of
 transmembrane potentials, ion concentrations etc. suggest that they
 should not. You claim that such anomalous behaviour of neurons and
 other cells due to consciousness is widespread, yet it has never been
 experimentally observed. Why?


 Hi Stathis,

 How would you set up the experiment? How do you control for an effect
 that may well be ubiquitous? Did you somehow miss the point that
 consciousness can only be observed in 1p? Why are you so insistent on a 3p
 of it?

A top-down effect of consciousness on matter could be inferred if
miraculous events were observed in neurophysiology research. The
consciousness itself cannot be directly observed.

 I don't mean putting an extra module into the brain, I mean putting
 the brain directly into the same configuration it is put into by
 learning the language in the normal way.


 How might we do that? Alter 1 neuron and you might not have the same
 mind.

When you learn something, your brain physically changes. After a year
studying Chinese it goes from configuration SPK-E to configuration
SPK-E+C. If your brain were put directly into configuration SPK-E+C
then you would know Chinese and have a false memory of the year of
learning it.

 In a thought experiment we can say that the imitation stimulates the
 surrounding neurons in the same way as the original. We can even say
 that it does this miraculously. Would such a device *necessarily*
 replicate the consciousness along with the neural impulses, or could
 the two be separated?


 Is the brain strictly a classical system?

No, although the consensus appears to be that quantum effects are not
significant in its functioning. In any case, this does not invalidate
functionalism.

 As I said, technical problems with computers are not relevant to the
 argument. The implant is just a device that has the correct timing of
 neural impulses. Would it necessarily preserve consciousness?


 Let's see. If I ingest psychoactive substances, there is a 1p observable
 effect. Is this a circumstance that is different in kind from that
 device?

The psychoactive substances cause a physical change in your brain and
thereby also a psychological change.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: The p-zombie is a strawman argument

2012-10-21 Thread Craig Weinberg


On Sunday, October 21, 2012 5:21:53 PM UTC-4, stathisp wrote:


 Yes, that's Dennett's position. He has called the idea of zombies an 
 embarrassment to philosophy. So do you agree that if a computer could 
 converse with you like a human then it would have a mind?


A movie can converse with you like a human if you are in the right state of 
mind.

Craig
 



 -- 
 Stathis Papaioannou 





Re: Interactions between mind and brain

2012-10-21 Thread Stephen P. King

On 10/21/2012 7:14 PM, Stathis Papaioannou wrote:

On Mon, Oct 22, 2012 at 1:55 AM, Stephen P. King stephe...@charter.net wrote:


If there is a top-down effect of the mind on the atoms then we
would expect some scientific evidence of this. Evidence would
constitute, for example, neurons firing when measurements of
transmembrane potentials, ion concentrations etc. suggest that they
should not. You claim that such anomalous behaviour of neurons and
other cells due to consciousness is widespread, yet it has never been
experimentally observed. Why?


Hi Stathis,

 How would you set up the experiment? How do you control for an effect
that may well be ubiquitous? Did you somehow miss the point that
consciousness can only be observed in 1p? Why are you so insistent on a 3p
of it?

A top-down effect of consciousness on matter could be inferred if
miraculous events were observed in neurophysiology research. The
consciousness itself cannot be directly observed.


Hi Stathis,

This would be true only if consciousness were separate from matter, 
as in Descartes' failed theory of substance dualism. In the dual-aspect 
theory that I am arguing for, there would never be any miracles 
that contradict physical law. At most there would be statistical 
deviations from classical predictions. Check out 
http://boole.stanford.edu/pub/ratmech.pdf for details. My support for 
this theory rather than materialism follows from materialism's demonstrated 
inability to account for 1p. Dual-aspect monism has 1p built in from 
first principles. BTW, I don't use the term dualism anymore, as what I 
am advocating is too easily confused with the failed version.





I don't mean putting an extra module into the brain, I mean putting
the brain directly into the same configuration it is put into by
learning the language in the normal way.


 How might we do that? Alter 1 neuron and you might not have the same
mind.

When you learn something, your brain physically changes. After a year
studying Chinese it goes from configuration SPK-E to configuration
SPK-E+C. If your brain were put directly into configuration SPK-E+C
then you would know Chinese and have a false memory of the year of
learning it.


Ah, but is that change, from SPK-E to SPK-E+C, one that is 
enumerable strictly in terms of the number of neurons changed? No. I would 
conjecture that it is a computational problem that is at least NP-hard. 
My reasoning is that if the change were emulable by a computation X 
*and* X could also be used to solve an NP-hard problem, then 
there should exist an algorithm that could easily translate any 
statement in one language into another, *and* finding that algorithm 
should require only some polynomial quantity of resources (relative to 
the number of possible algorithms). It should be easy to show that this 
is not the case.
I strongly believe that computational complexity plays a huge role 
in many aspects of the hard problem of consciousness and that the 
Platonic approach to computer science is obscuring solutions as it is 
blind to questions of resource availability and distribution.



In a thought experiment we can say that the imitation stimulates the
surrounding neurons in the same way as the original. We can even say
that it does this miraculously. Would such a device *necessarily*
replicate the consciousness along with the neural impulses, or could
the two be separated?


 Is the brain strictly a classical system?

No, although the consensus appears to be that quantum effects are not
significant in its functioning. In any case, this does not invalidate
functionalism.


Well, I don't follow the crowd. I agree that functionalism is not 
dependent on the type of physics of the system, but there is an issue of 
functional closure that must be met in my conjecture; there has to be 
some way for the system (that supports the conscious capacity) to be 
closed under the transformation involved.



As I said, technical problems with computers are not relevant to the
argument. The implant is just a device that has the correct timing of
neural impulses. Would it necessarily preserve consciousness?



 Let's see. If I ingest psychoactive substances, there is a 1p observable
effect. Is this a circumstance that is different in kind from that
device?

The psychoactive substances cause a physical change in your brain and
thereby also a psychological change.


Of course. As I see it, there is no brain change without a mind 
change and vice versa. The mind and brain are dual, as Boolean algebras 
and topological spaces are dual, the relation is an isomorphism between 
structures that have oppositely directed arrows of transformation. The 
math is very straightforward... People just have a hard time 
understanding the idea that all of matter is some form of topological 
space and there is no known calculus of variations for Boolean algebras 
(no one is looking for it, except for me, that 
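The duality appealed to here is Stone duality. In the finite case it reduces to something checkable by brute force: the points of the dual Stone space of a finite Boolean algebra are its ultrafilters, and for the powerset algebra of a finite set these correspond one-to-one with the set's elements. A sketch with a toy three-point space (illustrative only):

```python
from itertools import chain, combinations

# Finite Stone duality in miniature: for the Boolean algebra of all
# subsets of a finite set S, the ultrafilters (the points of the dual
# Stone space) are exactly the principal filters generated by the
# singletons {x}, giving a bijection between S and the dual space.

S = frozenset({0, 1, 2})

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

algebra = powerset(S)  # the 2^3 = 8 elements of the Boolean algebra

def is_ultrafilter(U):
    U = set(U)
    if not U or frozenset() in U:
        return False
    # Upward closed, closed under meet (intersection), and containing
    # exactly one of A and its complement for every A in the algebra.
    if not all(B in U for A in U for B in algebra if A <= B):
        return False
    if not all(A & B in U for A in U for B in U):
        return False
    return all((A in U) != ((S - A) in U) for A in algebra)

# Each point x of S yields the principal ultrafilter {A : x in A} ...
point_ufs = {frozenset(A for A in algebra if x in A) for x in S}
assert all(is_ultrafilter(U) for U in point_ufs)

# ... and brute force over all 2^8 candidate families shows these are
# the only ultrafilters: the dual space has exactly |S| points.
all_ufs = {frozenset(U) for U in powerset(algebra) if is_ultrafilter(U)}
assert all_ufs == point_ufs
print(len(all_ufs), "ultrafilters for", len(S), "points")
```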

Re: Is consciousness just an emergent property of overly complexcomputations ?

2012-10-21 Thread meekerdb

On 10/21/2012 6:43 AM, Bruno Marchal wrote:

And their very specific correlation with the physical brain states of sleep.


Of course. But this is taken into account in the theoretical reasoning, where we suppose 
the brain states are obtained by an (immaterial) machine doing the computation at the 
right level.


We cannot know our right level, so we are not trying to build an artificial brain. The 
measure problem comes from the fact that, whatever the level is, the physics has to be 
given by a measure on computations. That is enough to already derive the logic of the 
observable, and that is a step toward solving the measure problem, although other 
possible approaches might exist.


But I think that implies that consciousness (at least human-like consciousness) cannot 
exist without the physics; that materialism is not optional.


Brent




Re: Code length = probability distribution

2012-10-21 Thread Stephen P. King

On 10/21/2012 3:48 AM, Russell Standish wrote:

  I worry a bit about the use of the word all in your remark.
All is too big, usually, to have a single constructable measure!
Why not consider some large enough but finite collections of
programs, such as what would be captured by the idea of an
equivalence class of programs that satisfy some arbitrary parameters
(such as solving a finite NP-hard problem) given some large but
finite quantity of resources?
 Of course this goes against the grain of Bruno's theology, but
maybe that is what is required to solve the measure problem. :-)  I
find myself being won over by the finitists, such as Norman J.
Wildberger!

This may well turn out to be the case. Also Juergen Schmidhuber has
investigated this under the rubric of the speed prior.

Hi Russell,

How does Schmidhuber consider the physicality of resources?
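For context, Schmidhuber's speed prior discounts a program's contribution by the compute it consumes, not only by its description length. The sketch below is a schematic toy contrast, not Schmidhuber's exact construction (the program table and the 2^-l / t discount are illustrative assumptions):

```python
# Schematic contrast between a length-only "universal" weighting and a
# speed-prior-style weighting that also discounts compute time.
# Toy numbers; not Schmidhuber's exact construction.

# Hypothetical programs: (name, length_in_bits, steps_needed_to_print_x)
programs = [
    ("short_but_slow", 5, 10**9),
    ("long_but_fast", 20, 10**2),
]

def universal_weight(length_bits, steps):
    # Length-only prior: 2^-l, indifferent to running time.
    return 2.0 ** -length_bits

def speed_weight(length_bits, steps):
    # Speed-style discount: roughly 2^-l / t, penalising slow programs.
    return 2.0 ** -length_bits / steps

for name, l, t in programs:
    print(f"{name}: universal={universal_weight(l, t):.3g}, "
          f"speed={speed_weight(l, t):.3g}")

# Under the length-only prior the short program dominates; once running
# time is charged for, the fast program dominates despite its length.
assert universal_weight(5, 10**9) > universal_weight(20, 10**2)
assert speed_weight(20, 10**2) > speed_weight(5, 10**9)
```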

--
Onward!

Stephen

