Re: COMP test (ontology of COMP)

2012-03-02 Thread Bruno Marchal


On 01 Mar 2012, at 19:39, acw wrote:


On 3/1/2012 18:16, meekerdb wrote:

But the 1p view of this is to be
conscious *of something*, which you describe as the computation seen
from the inside. What is it about these threads through different
states that makes them an equivalence class with respect to the
computation seen from the inside?
If they happen to be implementing some particular machine being in some particular state. The problem is that the machine can be self-modifiable (or the environment can change it), and the machine won't know of this and may not always recognize the change. This seems like a highly non-trivial problem to me.



Yes. That's why I think we have to extract the equivalence class structure from the ability of the machine to refer to itself at the right level. It is not constructive, from the machine's point of view, but this does not change the correct view of the correct machine, in the correct situation, even though no one can define that correctness. It is not trivial at all, but the contrary would have been astonishing.


Bruno


http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: COMP test (ontology of COMP)

2012-03-02 Thread Bruno Marchal


On 01 Mar 2012, at 19:43, meekerdb wrote:


On 3/1/2012 10:23 AM, Bruno Marchal wrote:



On 01 Mar 2012, at 17:54, meekerdb wrote:


On 3/1/2012 1:01 AM, Bruno Marchal wrote:



On 29 Feb 2012, at 21:05, meekerdb wrote:


On 2/29/2012 10:59 AM, Bruno Marchal wrote:


Comp says the exact contrary: it makes matter and physical  
processes not completely Turing emulable.


But it makes them enough TE so that you can say yes to the doctor who proposes to replace some part of your brain (which is made of matter) with a Turing emulation of it?


The doctor does not need to emulate the matter of my brain. That is not completely Turing *emulable*. It is only (apparently) Turing simulable, that is, emulable at some digital truncation of my brain. Indeed matter is what emerges from the 1p indeterminacy on all more fine-grained computations reaching my current states in arithmetic/UD.


OK, but just to clarify: the emergent matter is not emulable because there are infinitely many computations at the fine-grained level reaching your current state. But it is simulable to an arbitrary degree.


If you can prove that.

I would say yes, but it does not seem obvious to prove. You have to emulate bigger and bigger portions of the UD*, and the 1-views are only defined in the limit, being unaware of the UD-delays. Not obvious. It might be true, but in some non-tractable sense. Hmm... Interesting question.


I will think more on this; I smell a busy beaver situation. The decimals of your prediction might take a very long time to stabilize. I dunno.
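For readers unfamiliar with the UD, the dovetailing idea itself is easy to make concrete. Here is a toy sketch (illustrative only; the bare counters standing in for "programs" are my own assumption, not Marchal's construction): interleave steps of many programs along a growing diagonal, so every program receives unboundedly many steps without any of them needing to halt first.

```python
# Toy dovetailer: advance program i by one step on every diagonal d >= i,
# so program 0 gets the most steps but every program is stepped forever.
# A real UD enumerates *all* programs; here "programs" are bare counters.

def dovetail(num_programs, rounds):
    state = [0] * num_programs          # state[i] = steps executed by program i
    for diagonal in range(rounds):
        for i in range(min(diagonal + 1, num_programs)):
            state[i] += 1               # one more step for program i
    return state

print(dovetail(4, 6))  # [6, 5, 4, 3]: earlier programs are further along
```

The diagonal schedule is the whole trick: it lets a single sequential process fairly share time among an unbounded family of computations.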







But I'm still unclear on what constitutes "my current states". Why is there more than one? Is it a set of states of computations that constitutes a single state of consciousness?


If you say yes to the doctor, and if the doctor is luckily  
accurate, the current state is the encoding of the universal  
number + data that he got from the scanning. Basically, it is what  
is sent through the teleportation.


From the 1-p view, that state is unique, indeed. It is you here  
and now at the moment of the scanning (done very  quickly  
for the sake of the argument).


There is no more than one. But its encoding, and its relevant  
decoding, are generated infinitely often in the UD*, with different  
continuations, leading to your current self-indeterminacy. It is  
the subjective same you, like the people in W and M before they  
open the teletransporter box, just before differentiation.


Oops, I see that I wrote "my current states", with an "s". So it means I was talking about the 3p computational states in the UD* corresponding to my (unique) current consciousness state. That exists, in the comp theory.


Hope I am clear enough; tell me otherwise if not.


Yes, that's what I thought you meant when I first studied your  
theory.  But then I am not clear on the relation of this unique  
current state to the many non-equivalent states at a lower, e.g.  
quantum, level that constitute it at the quasi-classical level.  Is  
the UD* not also computing all of those fine-grained states?


Yes, and it adds up to the domain of first person indeterminacy. Usually I invoke the rule "Y = II". That is, two equivalent computations (equivalent in the sense that they lead to the same conscious experience) do not add up, but if they diverge at some point, even in the far future, they will add up. It is like in QM: there is a need for possible distinction in principle.


Let me ask a question to everybody. Consider the WM duplication, starting from Helsinki, but this time, in W, you are reconstituted in two exemplars, in exactly the same environment. Is the probability, asked in Helsinki, to find yourself in W equal to 2/3 or to 1/2?
My current answer, not yet verified with the logics, is that if the two computations in W are exactly identical forever, then it is 1/2, but if they diverge sooner or later, then the probability is 1/2. But I am not sure of this. What do you think?
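For concreteness, the two candidate answers correspond to two ways of counting first-person histories. The sketch below illustrates only that bookkeeping, under a naive branch-counting assumption; it does not settle which count is the right one.

```python
# Helsinki -> {W, W, M}: two reconstitutions in Washington, one in Moscow.
# If the two W computations remain identical forever, they arguably support
# a single first-person history; if they ever diverge, they count as two.

def p_washington(w_copies_diverge):
    w_histories = 2 if w_copies_diverge else 1  # identical copies collapse into one
    m_histories = 1
    return w_histories / (w_histories + m_histories)

print(p_washington(False))  # 0.5: never-diverging copies count once
print(p_washington(True))   # 2/3: diverging copies count separately
```

The whole question is whether "identical forever" computations should be counted once or twice in the first-person statistics, which is exactly the Y = II rule applied to this thought experiment.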


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-03-02 Thread Bruno Marchal


On 01 Mar 2012, at 22:32, Craig Weinberg wrote:


On Mar 1, 7:34 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 29 Feb 2012, at 23:29, Craig Weinberg wrote:




There is no such thing as evidence when it comes to  
qualitative

phenomenology. You don't need evidence to infer that a clock
doesn't
know what time it is.



A clock has no self-referential ability.



How do you know?



By looking at the structure of the clock. It does not implement self-reference. It is a finite automaton, much lower in complexity than a universal machine.
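The contrast can be made concrete with a minimal sketch (the 12-state clock is a stand-in of my own choosing): a clock is a fixed finite-state transition system, and nothing in it reads or rewrites its own description, which is the kind of self-reference a universal machine supports.

```python
# A clock as a finite automaton: 12 states and one fixed transition.
# The object never inspects or modifies its own transition rule; it
# just moves through a fixed, finite state cycle.

class HourClock:
    def __init__(self):
        self.hour = 0                     # states 0..11

    def tick(self):
        self.hour = (self.hour + 1) % 12  # fixed, built-in transition

clock = HourClock()
for _ in range(25):
    clock.tick()
print(clock.hour)  # 25 mod 12 = 1
```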



Knowing what time it is doesn't require self reference.



That's what I said, and it makes my point.


The difference between a clock knowing what time it is, Google  
knowing

what you mean when you search for it, and an AI bot knowing how to
have a conversation with someone is a matter of degree. If comp  
claims

that certain kinds of processes have 1p experiences associated with
them it has to explain why that should be the case.


Because they have the ability to refer to themselves and understand the difference between 1p, 3p, the mind-body problem, etc.
That some numbers have the ability to refer to themselves is proved in computer science textbooks.
A clock lacks it. A computer has it.
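The textbook result alluded to here is Kleene's second recursion theorem; its most familiar corollary is the quine, a program whose output is its own source. A minimal Python instance:

```python
# A quine: running this prints the program's own source code, the
# elementary witness that programs can refer to themselves
# (a corollary of Kleene's second recursion theorem).

src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The trick is that `%r` re-quotes the template inside itself, so the printed text is a fixed point: the output is exactly the two-line program.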


This sentence refers to 'itself' too. I see no reason why any number
or computer would have any more of a 1p experience than that.


A sentence is not a program.










By comp it
should be generated by the 1p experience of the logic of the gears
of
the clock.



?



If the Chinese Room is intelligent, then why not gears?


The chinese room is not intelligent.


I agree.


The person which supervenes on some computation done by the chinese room might be intelligent.


Like a metaphysical 'person' that arises out of the computation ?


It is more like prime numbers arising from + and *. Or like a chess player arising from some program, except that prime numbers and chess players have (today) no universal self-referential abilities.



















By comp logic, the clock could just be part of a
universal timekeeping machine - just a baby of course, so we  
can't

expect it to show any signs of being a universal machine yet,
but by
comp, we cannot assume that clocks can't know what time it is  
just
because these primitive clocks don't know how to tell us that  
they

know it yet.



Then the universal timekeeping would be conscious, not the baby
clock.
Level confusion.



A Swiss watch has a fairly complicated movement. How many watches
does
it take before they collectively have a chance at knowing what
time it
is? If all self referential machines arise from finite automation
though (by UDA inevitability?), the designation of any Level at
all is
arbitrary. How does comp conceive of self referential machines
evolving in the first place?


They exist arithmetically, in many relative ways, that is, relative to universal numbers. Relative evolution exists in higher-level descriptions of those relations.
Evolution of species presupposes arithmetic and even comp, plausibly.
Genetics is already digital relative to QM.



My question though was how many watches does it take to make an
intelligent watch?


Difficult question. One hundred might be enough, but a good engineer might be able to optimize it. I would not be so much astonished that one clock is enough, to implement a very simple (and inefficacious) universal system, but then you have to rearrange all the parts of that clock.


The misapprehensions of comp are even clearer to me imagining a
universal system in clockwork mechanisms. Electronic computers sort of
mesmerize us because electricity seems magical to us, but having a
warehouse full of brass gears manually clattering together and
assuming that there is a  conscious entity experiencing something
there is hard to seriously consider. It's like Leibniz' Windmill.


Or like Ned Block's Chinese people computer. This is not convincing. It is just helpful to understand that consciousness relies on logical informational patterns rather than on matter. That problem is not a problem for comp, but for theories without a notion of first person. It breaks down when you can apply a theory of knowledge, which is the case for machines, thanks to incompleteness. Consciousness is in the true fixed point of self-reference. It is not easy to explain this shortly, and it relies on Gödel's and Tarski's work. There will be opportunities to come back on this.





If
you were able to make a living zygote large enough to walk into, it
wouldn't be like that. Structures would emerge spontaneously out of
circulating fluid and molecules acting spontaneously and
simultaneously, not just in chain reaction.




It doesn't really make sense to me if comp were
true that there would be anything other than QM.


?


Why would there be any other 'levels'?


So you assume QM in your theory. I do not.




No matter how complicated a
computer program is, it doesn't need to form some kind of 

Re: The Relativity of Existence

2012-03-02 Thread meekerdb

On 3/1/2012 7:37 PM, Richard Ruquist wrote:



On Thu, Mar 1, 2012 at 7:14 PM, meekerdb meeke...@verizon.net wrote:


On 3/1/2012 9:27 AM, Bob Zannelli wrote:

The Relativity of Existence
Authors: Stuart Heinrich
http://arxiv.org/find/physics/1/au:+Heinrich_S/0/1/0/all/0/1
Subjects: History and Philosophy of Physics (physics.hist-ph); General 
Relativity
and Quantum Cosmology (gr-qc); Quantum Physics (quant-ph)

Despite the success of physics in formulating mathematical theories that can
predict the outcome of experiments, we have made remarkably little progress 
towards
answering some of the most basic questions about our existence, such as: 
why does
the universe exist? Why is the universe apparently fine-tuned to be able to 
support
life? Why are the laws of physics so elegant? Why do we have three 
dimensions of
space and one of time? How is it that the universe can be non-local and 
non-causal
at the quantum scale, and why is there quantum randomness? In this paper, 
it is
shown that all of these questions are answered if existence is relative, and
moreover, it seems that we are logically bound to accept it.

http://arxiv.org/pdf/1202.4545.pdf




To be clear, the idea that our universe is really just a computer 
simulation is
highly controversial and not supported by this paper.
Of course there's no sense in which reality can be a computer 
simulation EXCEPT
if there is a Great Programmer who can fiddle with the program.  Otherwise 
the
simulation and the reality are the same thing.

By the principle of explosion, in any system that contains a single
contradiction, it becomes possible to prove the truth of any
other statement no matter how nonsensical[34, p.18]. There is
clearly a distinction between truth and falsehood in our reality,
which means that the principle of explosion does not apply to
our reality. In other words, we can be certain that our reality is
consistent.
Hmm? I'd never heard ex falso quodlibet referred to as the "principle of explosion" before. But in any case there are ways of preventing a contradiction from implying everything, cf. Graham Priest's In Contradiction. Contradictions are between propositions. Heinrich is saying that the lack of contradictions in our propositions describing the world implies the world is consistent. But at the same time he adopts a MWI which implies that contrary events happen all the time.
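For the record, "principle of explosion" has become the standard modern name for ex falso quodlibet, and in a proof assistant the principle is a one-line theorem (Lean 4 syntax, as an illustration):

```lean
-- Ex falso quodlibet / principle of explosion:
-- from a contradiction P ∧ ¬P, any proposition Q follows.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```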

In fact, there are an infinite number of ways to modify an axiomatic 
system while
keeping any particular theorem intact.
This is true if the axioms *and rules of inference* are strong enough to satisfy Gödel's incompleteness theorem, e.g. something with a rule of finite induction (isn't that technically a schema for an infinite set of axioms?). Then you are guaranteed infinitely many true propositions which are not provable from your axioms, and each of those can be added as an axiom. Otherwise I think you only get to add infinitely many axioms by creating arbitrary names, like aa and ab...

From the perspective of any self-aware being, something is real if it is 
true,
A very Platonic and dubious proposition. "True" applies to propositions, not things. "2+2=4" is true, but that doesn't imply anything is real. "Holmes' friend was Watson" is true too.

Recognizing this, the ultimate answer to the question of why our reality 
exists
becomes trivial: because self-awareness can be represented axiomatically, 
any
axiomatic system that can derive self-awareness will be perceived as being 
real
without the need for an objective manifestation.
This is what Bruno Marchal refers to as Lobianity, the provability within a system that there are unprovable true propositions. Marchal formulated this idea before Tegmark and has filled it out and made it more precise (and perhaps testable) by confining it to computation by a universal dovetailer - not just any mathematics.
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html
If you join the everything-list@googlegroups.com, he will explain it to you.

Not many things can be proven objectively true, because
any proof relying on axioms is not objective without proving
that the axioms are also objectively true.
This is confusion bordering on sophistry. He has introduced a new, undefined concept, "objective", and stated that any objectively true statement has an objective proof. "Proof" is well defined, since it means following from the axioms by the rules of inference. Proving something from no axioms just requires more powerful rules of inference. There's no 

Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 1, 8:12 pm, Stathis Papaioannou stath...@gmail.com wrote:


 You do assume, though, that brain function can't be replicated by a
 machine.

No, I presume that consciousness is not limited to what we consider to
be brain function. Brain function, as we understand it now, is already
a machine.

That has no firmer basis than a claim that kidney function
 cannot be replicated by a machine. After all, brains and kidneys are
 made out of the same stuff.

The difference is that I am not my kidneys, but the same cannot be
said about my brain. It doesn't matter to me if my kidneys aren't
aware, as long as they keep me alive. The brain is a completely
different story. Keeping my body alive is of no concern to anyone
unless I am able to participate and participate directly in the life
of that body. If a replicated brain has no awareness, or if its
awareness is not 'me', then it is no better than a kidney grafted onto
a spinal cord.

You could bite the bullet and declare
 yourself a vitalist.

I'm not though. I'm a panexperientialist. I only point out that there
is a difference between the experience of a kidney, a brain, and an
array of transistors. You can't make a jellyfish out of clocks or a
glass of water out of sand.

Craig




Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 2, 4:43 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 01 Mar 2012, at 22:32, Craig Weinberg wrote:

  There is no such thing as evidence when it comes to
  qualitative
  phenomenology. You don't need evidence to infer that a clock
  doesn't
  know what time it is.

  A clock has no self-referential ability.

  How do you know?

  By looking at the structure of the clock. It does not implement
  self-
  reference. It is a finite automaton, much lower in complexity
  than a
  universal machine.

  Knowing what time it is doesn't require self reference.

  That's what I said, and it makes my point.

  The difference between a clock knowing what time it is, Google
  knowing
  what you mean when you search for it, and an AI bot knowing how to
  have a conversation with someone is a matter of degree. If comp
  claims
  that certain kinds of processes have 1p experiences associated with
  them it has to explain why that should be the case.

  Because they have the ability to refer to themselves and understand
  the difference between 1p, 3p, the mind-body problem, etc.
  That some numbers have the ability to refer to themselves is proved
  in
  computer science textbook.
  A clock lacks it. A computer has it.

  This sentence refers to 'itself' too. I see no reason why any number
  or computer would have any more of a 1p experience than that.

 A sentence is not a program.

Okay, WHILE program > 0 DO program. Program = Program + 1. END WHILE

Does running that program (or one like it) create a 1p experience?
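Taking the fragment at face value (and assuming the archive dropped a ">" from "program > 0"), it is a bare incrementing loop; here is a Python rendering, with a step limit added only so the sketch terminates:

```python
# Craig's loop, rendered directly (assuming the lost operator was '>').
# Unbounded, it never halts; max_steps is added purely to make it runnable.

def run(program=1, max_steps=10):
    steps = 0
    while program > 0 and steps < max_steps:
        program = program + 1
        steps += 1
    return program

print(run())  # 1 incremented 10 times -> 11
```

As written it is a finite control driving one unbounded counter: no quotation of its own code, no interpretation of other programs.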




  By comp it
  should be generated by the 1p experience of the logic of the gears
  of
  the clock.

  ?

  If the Chinese Room is intelligent, then why not gears?

  The chinese room is not intelligent.

  I agree.

  The person which supervenes on some computation done by the chinese room might be intelligent.

  Like a metaphysical 'person' that arises out of the computation ?

 It is more like prime numbers arising from + and *. Or like a chess
 player arising from some program, except that prime number and chess
 player have (today) no universal self-referential abilities.

That sounds like what I said.




  By comp logic, the clock could just be part of a
  universal timekeeping machine - just a baby of course, so we
  can't
  expect it to show any signs of being a universal machine yet,
  but by
  comp, we cannot assume that clocks can't know what time it is
  just
  because these primitive clocks don't know how to tell us that
  they
  know it yet.

  Then the universal timekeeping would be conscious, not the baby
  clock.
  Level confusion.

  A Swiss watch has a fairly complicated movement. How many watches
  does
  it take before they collectively have a chance at knowing what
  time it
  is? If all self referential machines arise from finite automation
  though (by UDA inevitability?), the designation of any Level at
  all is
  arbitrary. How does comp conceive of self referential machines
  evolving in the first place?

  They exist arithmetically, in many relative way, that is to
  universal
  numbers. Relative Evolution exists in higher level description of
  those relation.
  Evolution of species, presuppose arithmetic and even comp,
  plausibly.
  Genetics is already digital relatively to QM.

  My question though was how many watches does it take to make an
  intelligent watch?

  Difficult question. One hundred might be enough, but a good engineers
  might be able to optimize it. I would not be so much astonished that
  one clock is enough, to implement a very simple (and inefficacious)
  universal system, but then you have to rearrange all the parts of
  that
  clock.

  The misapprehensions of comp are even clearer to me imagining a
  universal system in clockwork mechanisms. Electronic computers sort of
  mesmerize us because electricity seems magical to us, but having a
  warehouse full of brass gears manually clattering together and
  assuming that there is a  conscious entity experiencing something
  there is hard to seriously consider. It's like Leibniz' Windmill.

 Or like Ned Block's Chinese people computer. This is not convincing.

Why not? Because our brain can be broken down into components also and
we assume that we are the function of our brain? If so, that objection
evaporates when we use a symmetrical form & content model rather than
a cause & effect model of brain-mind.

 It is just helpful to understand that consciousness relies on logical
 informational patterns rather than on matter. That problem is not a problem
 for comp, but for theories without a notion of first person. It breaks
 down when you can apply a theory of knowledge, which is the case for
 machines, thanks to incompleteness. Consciousness is in the true
 fixed point of self-reference. It is not easy to explain this shortly
 and it relies on Gödel's and Tarski's work. There will be opportunities
 to come back on this.

All of that sounds still like the easy problem of consciousness.
Arithmetic can show 

Re: COMP test (ontology of COMP)

2012-03-02 Thread meekerdb

On 3/2/2012 1:03 AM, Bruno Marchal wrote:


On 01 Mar 2012, at 19:43, meekerdb wrote:


On 3/1/2012 10:23 AM, Bruno Marchal wrote:


On 01 Mar 2012, at 17:54, meekerdb wrote:


On 3/1/2012 1:01 AM, Bruno Marchal wrote:


On 29 Feb 2012, at 21:05, meekerdb wrote:


On 2/29/2012 10:59 AM, Bruno Marchal wrote:
Comp says the exact contrary: it makes matter and physical processes not 
completely Turing emulable. 


But it makes them enough TE so that you can say yes to the doctor who proposes to 
replace some part of your brain (which is made of matter) with a Turing emulation 
of it?


The doctor does not need to emulate the matter of my brain. This is completely not 
Turing *emulable*. It is only (apparently) Turing simulable, that is emulable at 
some digital truncation of my brain. Indeed matter is what emerges from the 1p 
indeterminacy on all more fine grained computations reaching my current states in 
arithmetic/UD.


OK, but just to clarify: The emergent matter is not emulable because there are 
infinitely many computations at the fine grained level reaching your current state.  
But it is simulable to an arbitrary degree.


If you can prove that.

I would say yes, but it does not seem obvious to prove. You have to emulate bigger and 
bigger portions of the UD*, and the 1-view are only defined in the limit, being 
unaware of the UD-delays. Not obvious. It might be true, but in some non tractable 
sense. Hmm... Interesting question.


I will think more on this; I smell a busy beaver situation. The decimals of your 
prediction might take a very long time to stabilize. I dunno.







But I'm still unclear on what constitutes "my current states". Why is there more 
than one? Is it a set of states of computations that constitutes a single state of 
consciousness?


If you say yes to the doctor, and if the doctor is luckily accurate, the current 
state is the encoding of the universal number + data that he got from the scanning. 
Basically, it is what is sent through the teleportation.


From the 1-p view, that state is unique, indeed. It is you here and now at the 
moment of the scanning (done very quickly for the sake of the argument).


There is no more than one. But its encoding, and its relevant decoding, are generated 
infinitely often in the UD*, with different continuations, leading to your current 
self-indeterminacy. It is the subjective same you, like the people in W and M before 
they open the teletransporter box, just before differentiation.


Oops, I see that I wrote "my current states", with an "s". So it means I was talking 
about the 3p computational states in the UD* corresponding to my (unique) current 
consciousness state. That exists, in the comp theory.


Hope I am clear enough; tell me otherwise if not.


Yes, that's what I thought you meant when I first studied your theory.  But then I am 
not clear on the relation of this unique current state to the many non-equivalent 
states at a lower, e.g. quantum, level that constitute it at the quasi-classical 
level.  Is the UD* not also computing all of those fine-grained states?


Yes, and it adds up to the domain of first person indeterminacy. Usually I invoke the 
rule "Y = II". That is, two equivalent computations (equivalent in the sense that they 
lead to the same conscious experience) do not add up, but if they diverge at some 
point, even in the far future, they will add up. It is like in QM: there is a need for 
possible distinction in principle.


Let me ask a question to everybody. Consider the WM duplication, starting from Helsinki, 
but this time, in W, you are reconstituted in two exemplars, in exactly the same 
environment. Is the probability, asked in Helsinki, to find yourself in W equal to 2/3 
or to 1/2?
My current answer, not yet verified with the logics, is that if the two computations in 
W are exactly identical forever, then it is 1/2, but if they diverge sooner or later, then 
the probability is 1/2. But I am not sure of this. What do you think?


I think there's a typo and the second 1/2 was intended to be 2/3. I wonder though why we 
should consider a hypothesis like "in exactly the same environment" (to the quantum 
level?), which is nomologically impossible.


Brent




Re: Yes Doctor circularity

2012-03-02 Thread Bruno Marchal


On 02 Mar 2012, at 18:03, Craig Weinberg wrote:


On Mar 2, 4:43 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 01 Mar 2012, at 22:32, Craig Weinberg wrote:


There is no such thing as evidence when it comes to
qualitative
phenomenology. You don't need evidence to infer that a clock
doesn't
know what time it is.



A clock has no self-referential ability.



How do you know?



By looking at the structure of the clock. It does not implement
self-
reference. It is a finite automaton, much lower in complexity
than a
universal machine.



Knowing what time it is doesn't require self reference.



That's what I said, and it makes my point.



The difference between a clock knowing what time it is, Google
knowing
what you mean when you search for it, and an AI bot knowing how to
have a conversation with someone is a matter of degree. If comp
claims
that certain kinds of processes have 1p experiences associated  
with

them it has to explain why that should be the case.



Because they have the ability to refer to themselves and understand
the difference between 1p, 3p, the mind-body problem, etc.
That some numbers have the ability to refer to themselves is proved
in
computer science textbook.
A clock lacks it. A computer has it.


This sentence refers to 'itself' too. I see no reason why any  
number

or computer would have any more of a 1p experience than that.


A sentence is not a program.


Okay, WHILE program > 0 DO program. Program = Program + 1. END WHILE

Does running that program (or one like it) create a 1p experience?


Very plausibly not. It lacks self-reference and universality.









By comp it
should be generated by the 1p experience of the logic of the  
gears

of
the clock.



?



If the Chinese Room is intelligent, then why not gears?



The chinese room is not intelligent.



I agree.



The person which supervene on the
some computation done by the chinese room might be intelligent.



Like a metaphysical 'person' that arises out of the computation ?


It is more like prime numbers arising from + and *. Or like a chess
player arising from some program, except that prime number and chess
player have (today) no universal self-referential abilities.


That sounds like what I said.






By comp logic, the clock could just be part of a
universal timekeeping machine - just a baby of course, so we
can't
expect it to show any signs of being a universal machine yet,
but by
comp, we cannot assume that clocks can't know what time it is
just
because these primitive clocks don't know how to tell us that
they
know it yet.



Then the universal timekeeping would be conscious, not the baby
clock.
Level confusion.


A Swiss watch has a fairly complicated movement. How many  
watches

does
it take before they collectively have a chance at knowing what
time it
is? If all self referential machines arise from finite  
automation

though (by UDA inevitability?), the designation of any Level at
all is
arbitrary. How does comp conceive of self referential machines
evolving in the first place?



They exist arithmetically, in many relative way, that is to
universal
numbers. Relative Evolution exists in higher level  
description of

those relation.
Evolution of species, presuppose arithmetic and even comp,
plausibly.
Genetics is already digital relatively to QM.



My question though was how many watches does it take to make an
intelligent watch?


Difficult question. One hundred might be enough, but a good  
engineers
might be able to optimize it. I would not be so much astonished  
that

one clock is enough, to implement a very simple (and inefficacious)
universal system, but then you have to rearrange all the parts of
that
clock.



The misapprehensions of comp are even clearer to me imagining a
universal system in clockwork mechanisms. Electronic computers  
sort of

mesmerize us because electricity seems magical to us, but having a
warehouse full of brass gears manually clattering together and
assuming that there is a  conscious entity experiencing something
there is hard to seriously consider. It's like Leibniz' Windmill.


Or like Ned Block's Chinese people computer. This is not convincing.


Why not? Because our brain can be broken down into components also and
we assume that we are the function of our brain?


We are relatively manifested by the function of our brain; we are not a function.





If so, that objection
evaporates when we use a symmetrical form & content model rather than
a cause & effect model of brain-mind.


Form and content are not symmetrical.
The dependence of content on form requires at least a universal machine.






It is just helpful to understand that consciousness relies on logical
informational patterns rather than on matter. That problem is not a problem
for comp, but for theories without a notion of first person. It breaks
down when you can apply a theory of knowledge, which is the case for
machines, thanks to incompleteness. Consciousness is in the true
fixed point of self-reference. 

Re: COMP test (ontology of COMP)

2012-03-02 Thread Bruno Marchal


On 02 Mar 2012, at 19:17, meekerdb wrote:


On 3/2/2012 1:03 AM, Bruno Marchal wrote:



On 01 Mar 2012, at 19:43, meekerdb wrote:


On 3/1/2012 10:23 AM, Bruno Marchal wrote:



On 01 Mar 2012, at 17:54, meekerdb wrote:


On 3/1/2012 1:01 AM, Bruno Marchal wrote:



On 29 Feb 2012, at 21:05, meekerdb wrote:


On 2/29/2012 10:59 AM, Bruno Marchal wrote:


Comp says the exact contrary: it makes matter and physical  
processes not completely Turing emulable.


But it makes them enough TE so that you can say yes to the doctor who proposes to replace some part of your brain (which is made of matter) with a Turing emulation of it?


The doctor does not need to emulate the matter of my brain. This is completely not Turing *emulable*. It is only (apparently) Turing simulable, that is, emulable at some digital truncation of my brain. Indeed, matter is what emerges from the 1p indeterminacy on all more fine-grained computations reaching my current states in arithmetic/UD.


OK, but just to clarify: The emergent matter is not emulable  
because there are infinitely many computations at the fine  
grained level reaching your current state.  But it is simulable  
to an arbitrary degree.


If you can prove that.

I would say yes, but it does not seem obvious to prove. You have to emulate bigger and bigger portions of the UD*, and the 1-views are only defined in the limit, being unaware of the UD-delays. Not obvious. It might be true, but in some non-tractable sense. Hmm... Interesting question.


I will think more on this; I smell a busy-beaver situation. The decimals of your prediction might take a very long time to stabilize. I dunno.







But I'm still unclear on what constitutes my current states.   
Why is there more than one?  Is it a set of states of  
computations that constitutes a single state of consciousness?


If you say yes to the doctor, and if the doctor is luckily  
accurate, the current state is the encoding of the universal  
number + data that he got from the scanning. Basically, it is  
what is sent through the teleportation.


From the 1-p view, that state is unique, indeed. It is you here  
and now at the moment of the scanning (done very quickly for  
the sake of the argument).


There is no more than one. But its encoding, and its relevant decoding, are generated infinitely often in the UD*, with different continuations, leading to your current self-indeterminacy. It is the subjectively same you, like the people in W and M before they open the teletransporter box, just before differentiation.


Oops, I see that I wrote my current states, with an s. So it means I was talking about the 3p computational states in the UD* corresponding to my (unique) current consciousness state. That exists, in the comp theory.


I hope I am clear enough; tell me otherwise if not.


Yes, that's what I thought you meant when I first studied your  
theory.  But then I am not clear on the relation of this unique  
current state to the many non-equivalent states at a lower, e.g.  
quantum, level that constitute it at the quasi-classical level.   
Is the UD* not also computing all of those fine-grained states?


Yes, and it adds up to the domain of first person indeterminacy. Usually I invoke the rule Y = II. That is, two equivalent computations (equivalent in the sense that they lead to the same conscious experience) do not add up, but if they diverge at some point, even in the far future, they will add up. It is like in QM: there is a need for a possible distinction in principle.


Let me ask a question to everybody. Consider the WM duplication, starting from Helsinki, but this time, in W, you are reconstituted in two exemplars, in exactly the same environment. Is the probability, asked in Helsinki, to find yourself in W equal to 2/3 or to 1/2?
My current answer, not yet verified with the logics, is that if the two computations in W are exactly identical forever, then it is 1/2, but if they diverge sooner or later, then the probability is 1/2. But I am not sure of this. What do you think?
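A hedged illustration (the framing and the equal-weight assumption are mine, not from the post): the question reduces to counting distinct first-person continuations, with one continuation in M and the two W reconstitutions counting as either one or two depending on whether they ever diverge.

```python
from fractions import Fraction

def p_washington(w_copies_diverge: bool) -> Fraction:
    """Probability, asked in Helsinki, of finding yourself in W.

    There is one continuation in M and two reconstitutions in W.
    If the two W computations stay identical forever, they support a
    single first-person experience and count once; if they diverge
    sooner or later, they count as two distinct continuations."""
    w_continuations = 2 if w_copies_diverge else 1
    return Fraction(w_continuations, w_continuations + 1)

print(p_washington(False))  # 1/2: the two W copies never differentiate
print(p_washington(True))   # 2/3: they eventually differentiate
```

Whether never-diverging copies should count once or twice is exactly the open question in the post; the sketch only makes the two candidate countings explicit.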


I think there's a typo and the second 1/2 was intended to be 2/3.


Oops.



I wonder though why we should consider a hypothesis like in exactly the same environment (to the quantum level?), which is nomologically impossible.


I meant an environment sufficiently similar so that the first-person experiences are identical. It is easier to use a virtual environment, so that we can use the comp subst level to make sure (thanks to the comp determinacy!) that the processing of the two brains will be exactly identical.


(Exactly identical is what we told the cleaning service, hoping they will not put some flowers, or anything different, in the two rooms, which could make the experiences diverge!)


So 1/2 or 2/3?

Bruno


http://iridia.ulb.ac.be/~marchal/




Re: COMP theology

2012-03-02 Thread John Clark
On Thu, Mar 1, 2012 Bruno Marchal marc...@ulb.ac.be wrote:

 The question is "If I throw a coin, what is the probability that I see it
 becoming a flying pig?" In front of the UD, that question is not trivial.


In this thought experiment the meaning of the word "I" is not obvious, and
in fact the entire point of the exercise is supposed to be to make clear
exactly what "I" means, and yet you throw out the word as if the meaning is
already clear. In one sense there is zero probability, because if you became
a flying pig you would not be Bruno Marchal anymore. And in some sense
there is zero probability the Helsinki man will be the Moscow man, because
the Moscow experience is what transformed the Helsinki man into the Moscow
man, so that although he may remember being him he is not the Helsinki man
anymore. So the answer to the question "If I change, what is the probability
I will remain the same?" is zero. And that's why I think this first person
indeterminacy stuff is just silly.

 Comp is just I can survive with a digital brain. It is about me, my
 consciousness, my body


Fine, but then how does that square with your comment "Comp makes
arithmetic a theory of everything"? Consciousness is not everything.

 comp makes matter into an appearance in the mind of universal numbers
 only.


Comp can certainly make a mind that through virtual reality can experience
matter that does not in fact exist, but even if the rock the mind feels
like he is holding does not exist, other matter does, in the form of the
computer that is simulating the rock, and the mind too. You claim you have
proven that a computer made of matter is not necessary to do a simulation
like this, but I'll be damned if I can see where you did this. In
Aristotle's metaphysics the potential and the actual are somehow one, but
is this really true? I don't know.

 OK. So you see that there is a 1p- indetermination.


I don't even think "1p-indetermination" has a clear meaning except "if
you change then you are not the same"; well yes, I can see that, it's true
but not very profound.

 the question does not bear on where he will be, but on where he will
 feels to be.


If I receive sense inputs from Washington I will feel like I'm in
Washington; if I receive sights and sounds from Moscow I will feel like I'm
in Moscow. You may ask "why are you the Moscow man and not the Washington
man?", and my answer is: because I received inputs from Moscow, not
Washington. So a legitimate question and a proper use of probabilities
would be "What is the probability I will receive sights and sounds from
Moscow but not Washington?". Unlike your question this one is perfectly
clear and well suited for statistical analysis, but I don't see what
deep philosophical insights can be gained from it.

 he knows that he will be in W and in M, but he knows that whatever he will
 feel to be, it will be in only one place, because he knows that he will not
 feel to be in two places at once.


Even that is not a given. This is virtual reality after all, it's the point
of your dovetail machine, so there is no reason you couldn't have the White
House in the middle of the Kremlin and the Washington Monument right next
to St. Basil's Cathedral.

 he is aware that he cannot predict which one, among the many, he will
 feel to be.


That is true ONLY if he does not know if he will receive signals from
Washington or Moscow, if he knew that, and there is no reason in theory he
could not, then he could make such a prediction.

 That is the 1-indeterminacy, which is crucial for the rest of the
 reasoning.


I know it's crucial, and so if that fails, and it does, then the entire
proof falls apart. Don't misunderstand me, I'm not saying your conclusions
about numbers are wrong and in fact my hunch is that they are probably
right or close to it, but I don't think you've proved it and I'm certain
this 1p indeterminate stuff is a dead end.

 There is no difficulty. Just the discovery of how to explain an objective
 account of a feeling of subjective indeterminacy in the mechanist
 framework.


The explanation is not difficult: you never know what's coming next. Forrest
Gump had a similar explanation that was every bit as deep: "Life is like a
box of chocolates... you never know what you're gonna get."

   Non-comp may not be contradictory but all the human practitioners of
 non-comp most certainly are, every single one, no exceptions.


   Many are, but why all, and why necessarily?


All non-comp fans say that knowing what someone or something does is not
enough to determine if it is conscious, you need to know HOW they do what
they do; and yet until very recently nobody had the slightest idea how the
brain worked and yet they still firmly believed that their fellow human
beings were conscious when they acted as if they were, that is to say when
they were not sleeping or dead. Even today 99.9% of the human population
thinks that how the brain works is so unimportant that they have not
bothered to learn the first thing 

Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 2, 2:49 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 02 Mar 2012, at 18:03, Craig Weinberg wrote:


  There is no such thing as evidence when it comes to
  qualitative
  phenomenology. You don't need evidence to infer that a clock
  doesn't
  know what time it is.

  A clock has no self-referential ability.

  How do you know?

  By looking at the structure of the clock. It does not implement
  self-reference. It is a finite automaton, much lower in complexity
  than a universal machine.

  Knowing what time it is doesn't require self reference.

  That's what I said, and it makes my point.

  The difference between a clock knowing what time it is, Google
  knowing
  what you mean when you search for it, and an AI bot knowing how to
  have a conversation with someone is a matter of degree. If comp
  claims
  that certain kinds of processes have 1p experiences associated
  with
  them it has to explain why that should be the case.

  Because they have the ability to refer to themselves and understand
  the difference between 1p, 3p, the mind-body problem, etc.
  That some numbers have the ability to refer to themselves is proved
  in computer science textbooks.
  A clock lacks it. A computer has it.

  This sentence refers to 'itself' too. I see no reason why any
  number
  or computer would have any more of a 1p experience than that.

  A sentence is not a program.

  Okay: WHILE program > 0 DO program = program + 1. END WHILE

  Does running that program (or one like it) create a 1p experience?

 Very plausibly not. It lacks self-reference and universality.

Why isn't a WHILE loop self-referential?
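A hedged aside, not part of the thread: the distinction Bruno seems to draw can be shown in code. A bare loop updates a counter but never touches its own description, while a self-referential program (the quine trick behind Kleene's recursion theorem) reconstructs its own source text. A toy Python sketch:

```python
# A plain loop: the state n changes, but nothing here mentions or
# manipulates the program's own description.
def count_down(n: int) -> int:
    steps = 0
    while n > 0:
        n -= 1
        steps += 1
    return steps

# Self-reference in the textbook sense: a program template that can
# reproduce its own source text (the classic quine construction).
quine_body = 's = %r\nprint(s %% s)'
own_source = quine_body % quine_body  # the text the full quine would print

print(count_down(3))                       # 3
print(own_source.startswith("s = 's = "))  # True: it quotes itself
```

The loop "talks about" the number n; the quine talks about its own code. That gap, not iteration itself, is what self-reference adds.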




  By comp it
  should be generated by the 1p experience of the logic of the
  gears
  of
  the clock.

  ?

  If the Chinese Room is intelligent, then why not gears?

  The Chinese room is not intelligent.

  I agree.

  The person which supervenes on some computation done by the Chinese
  room might be intelligent.

  Like a metaphysical 'person' that arises out of the computation ?

  It is more like prime numbers arising from + and *. Or like a chess
  player arising from some program, except that prime number and chess
  player have (today) no universal self-referential abilities.

  That sounds like what I said.

  By comp logic, the clock could just be part of a
  universal timekeeping machine - just a baby of course, so we
  can't
  expect it to show any signs of being a universal machine yet,
  but by
  comp, we cannot assume that clocks can't know what time it is
  just
  because these primitive clocks don't know how to tell us that
  they
  know it yet.

  Then the universal timekeeping would be conscious, not the baby
  clock.
  Level confusion.

  A Swiss watch has a fairly complicated movement. How many
  watches
  does
  it take before they collectively have a chance at knowing what
  time it
  is? If all self referential machines arise from finite
  automation
  though (by UDA inevitability?), the designation of any Level at
  all is
  arbitrary. How does comp conceive of self referential machines
  evolving in the first place?

  They exist arithmetically, in many relative way, that is to
  universal
  numbers. Relative Evolution exists in higher level
  description of
  those relation.
  Evolution of species, presuppose arithmetic and even comp,
  plausibly.
  Genetics is already digital relatively to QM.

  My question though was how many watches does it take to make an
  intelligent watch?

  Difficult question. One hundred might be enough, but a good
  engineers
  might be able to optimize it. I would not be so much astonished
  that
  one clock is enough, to implement a very simple (and inefficacious)
  universal system, but then you have to rearrange all the parts of
  that
  clock.

  The misapprehensions of comp are even clearer to me imagining a
  universal system in clockwork mechanisms. Electronic computers
  sort of
  mesmerize us because electricity seems magical to us, but having a
  warehouse full of brass gears manually clattering together and
  assuming that there is a  conscious entity experiencing something
  there is hard to seriously consider. It's like Leibniz' Windmill.

  Or like Ned Block's Chinese people computer. This is not convincing.

  Why not? Because our brain can be broken down into components also and
  we assume that we are the function of our brain?

 We are relatively manifested by the function of our brain; we are
 not that function.

That seems to make 'functionalism' a misnomer.


  If so, that objection
  evaporates when we use a symmetrical form & content model rather than
  a cause & effect model of brain-mind.

 Form and content are not symmetrical.
 The dependence of content on form requires at least a universal machine.

What if content is not dependent on form and requires nothing except
being real? I think that content and form are anomalous symmetries
inherent in all real things. It is only our perspective, as human

Re: Entropy and information

2012-03-02 Thread Stephen P. King

On 2/28/2012 8:20 PM, Alberto G.Corona wrote:

Dear Stephen,

A thing that I often ask myself concerning MMH is the question about
what is mathematical and what is not. The set of real numbers is a
mathematical structure, but so is the set of real numbers plus the
point (1,1) in the plane. The set of randomly chosen numbers {1, 4, 3, 4, .34, 3}
is too, because it can be described with the same descriptive
language of math. But the first of these structures has properties
and the others do not. The first can be infinite but can be described
with a single equation, while the last must be described
extensively. At least some random universes (the finite ones) can be
described extensively with the tools of mathematics, but they don't
count in the intuitive sense as mathematical.


Dear Alberto,

I distinguish between the existential and the essential aspects 
such that this question is not problematic. Let me elaborate. By 
Existence I mean the necessary possibility of the entity. By Essence I 
mean the collection of properties that are its identity. Existence is 
only contingent on whether or not said existence is self-consistent; in 
other words, if an entity's essence is such that it contradicts the 
possibility of its existence, then it cannot exist; otherwise entities 
exist, but nothing beyond the tautological laws of identity - A is A 
and Unicity http://www.thefreedictionary.com/Unicity - can be said to 
follow from that bare existence, and we consider those laws only 
after we reach the stage of epistemology.
Essence, in the sense of properties, seems to require a spectrum of 
stratification wherein properties can be associated and categories, 
modalities and aspects defined for such. It is this latter case of 
Essence that you seem to be considering in your discussion of the 
difference between the set of Real numbers and some set of randomly chosen 
numbers, since the former is defined as a complete whole by the set (or 
Category) theoretical definition of the Reals, while the latter is 
contingent on a description that must capture some particular collection; 
hence it is Unicity that matters, i.e. the wholeness of the set.
I would venture to guess that the latter case of your examples 
always involves particular members of an example of the former case, 
e.g. the set of randomly chosen numbers that you mentioned is a subset 
of the set of Real numbers. Do there exist sets (or Categories) that are 
wholes requiring the specification of every one of their members 
separately, such that no finite description can capture their essence? I am 
not sure, thus I am only guessing here. One thing that we need to recall 
is that we are, by appearances, finite and can only apprehend finite 
details and properties. Is this limitation the result of necessity or 
contingency?
Whatever the case, we should be careful not to draw 
conclusions about the inherent aspects of mathematical objects that 
follow from our individual ability to conceive of them. For example, I 
have a form of dyslexia that makes the mental manipulation of symbolic 
reasoning extremely difficult; I make up for this by reasoning in terms 
of more visual and proprioceptive senses, and thus can understand 
mathematical entities very well. Given this disability, I might claim 
that since I cannot understand the particular symbolic 
representations, I am a bit dubious of their existence or 
meaningfulness. Of course this is a rather absurd example, but I have 
often found that many claims by even eminent mathematicians boil down 
to a similar situation. Many of the claims against the existence of 
infinities can fall under this situation.




  What is usually considered genuinely mathematical is any structure
that can be described briefly. It must also have good properties,
operations, symmetries or isomorphisms with other structures, so the
structure can be navigated and related with other structures and the
knowledge can be reused. These structures have a low Kolmogorov
complexity, so they can be navigated with low computing resources.


So you are saying that finite describability is a prerequisite for 
an entity to be mathematical? What is the lowest upper bound on this 
limit, and what would necessitate it? Does this imply that mathematics is 
constrained to some set of objects that only sapient entities can 
manipulate, in a way that such manipulations are describable exactly in 
terms of a finite list or algorithm? Does this not seem a bit 
anthropocentric? But my question is more about the general direction and 
implication of your reasoning, and not meant to imply anything in 
particular. I have often wondered about many of the debates that go on 
between mathematicians, and wonder if we are all missing something deeper 
in our quest.
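A hedged illustration of Alberto's low-Kolmogorov-complexity point above: a briefly describable structure compresses far better than an arbitrarily chosen one. The zlib-compressed size used below is my own crude, computable stand-in for description length, not anything proposed in the thread:

```python
import random
import zlib

def description_cost(seq) -> int:
    """zlib-compressed size of the sequence's text encoding: a rough,
    computable upper bound on its description length."""
    return len(zlib.compress(",".join(map(str, seq)).encode(), 9))

n = 10_000
# A "genuinely mathematical" structure in Alberto's sense: one short rule.
periodic = [k % 7 for k in range(n)]
# A structure that must be listed extensively, member by member.
random.seed(0)
arbitrary = [random.randrange(10**6) for _ in range(n)]

print(description_cost(periodic) < description_cost(arbitrary))  # True
```

The compressor only gives an upper bound (true Kolmogorov complexity is uncomputable), but the gap between the two costs makes the "single equation versus extensive listing" distinction concrete.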
For example, why is it that there are multiple and different set 
theories that have as axioms concepts that are so radically different? 
Witness the way that a set 

Re: Yes Doctor circularity

2012-03-02 Thread Stathis Papaioannou
On Sat, Mar 3, 2012 at 3:01 AM, Craig Weinberg whatsons...@gmail.com wrote:
 On Mar 1, 8:12 pm, Stathis Papaioannou stath...@gmail.com wrote:


 You do assume, though, that brain function can't be replicated by a
 machine.

 No, I presume that consciousness is not limited to what we consider to
 be brain function. Brain function, as we understand it now, is already
 a machine.

You've moved on since I discussed this with you a few months ago;
back then you claimed that brain function (i.e. observable function
or behaviour) could not be replicated by machine. If you now accept
that it can, the further argument is that it is not possible to replicate
brain function without also replicating consciousness. This is valid
even if it isn't actually possible to replicate brain function. We've
discussed this before and I don't think you understand it.


-- 
Stathis Papaioannou




Re: COMP test (ontology of COMP)

2012-03-02 Thread Joseph Knight
On Fri, Mar 2, 2012 at 3:03 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 Let me ask a question to everybody. Consider the WM duplication, starting
 from Helsinki, but this time, in W, you are reconstituted in two exemplars,
 in exactly the same environment. Is the probability, asked in Helsinki,  to
 find yourself in W equal to 2/3 or to 1/2.
 My current answer, not yet verified with the logics, is that if the two
 computations in W are exactly identical forever, then it is 1/2, but if
 they diverge soon or later, then the probability is [2/3].


Why is that?


 But I am not sure of this. What do you think?


My intuition is that the probability should be 2/3 in either case.



 Bruno


 http://iridia.ulb.ac.be/~marchal/







-- 
Joseph Knight




Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 2, 7:46 pm, Stathis Papaioannou stath...@gmail.com wrote:
 On Sat, Mar 3, 2012 at 3:01 AM, Craig Weinberg whatsons...@gmail.com wrote:
  On Mar 1, 8:12 pm, Stathis Papaioannou stath...@gmail.com wrote:

  You do assume, though, that brain function can't be replicated by a
  machine.

  No, I presume that consciousness is not limited to what we consider to
  be brain function. Brain function, as we understand it now, is already
  a machine.

 You've moved on since I discussed this with you a few months ago,
 since then you claimed that brain function (i.e. observable function
 or behaviour) could not be replicated by machine.

No, there's no change. Brain function consists of physiological
processes, but physiology is too broad and generic to resolve subtle
anthropological processes. Eventually any machine replication will be
exposed to some human observer. This is because the idea of
'observable function or behavior' presumes a universal observer or
absolute frame of reference, which I have no reason to entertain as
legitimate. Are these words made of English letters or black pixels or
RGB pixels...colorless electrons..? A machine can produce the
electrons, the pixels, the letters, but not the cadence, the ideas,
the fluid presence of a singular voice over time. These are subtle
kinds of considerations but they make a difference over time. Machines
repeat themselves in an unnatural way. They are tone deaf and socially
awkward. They have no charisma. It shows. Brains have no charisma
either, so reproducing their function does not reproduce that. It is
the character which drives the brain function, not the other way
around.

 If you now accept
 this, the further argument is that it is not possible to replicate
 brain function without also replicating consciousness.

No, you're missing my argument now as you have in the past.

 This is valid
 even if it isn't actually possible to replicate brain function. We've
 discussed this before and I don't think you understand it.

I have understood your argument from the very beginning. I debate people
about it all week long with exactly the same view. It's by far the
most popular position I have encountered online. It is the
conventional wisdom position. There is nothing remotely new or
difficult to understand about it.

Craig




Re: Yes Doctor circularity

2012-03-02 Thread Terren Suydam
On Fri, Mar 2, 2012 at 8:55 PM, Craig Weinberg whatsons...@gmail.com wrote:
 On Mar 2, 7:46 pm, Stathis Papaioannou stath...@gmail.com wrote:
 On Sat, Mar 3, 2012 at 3:01 AM, Craig Weinberg whatsons...@gmail.com wrote:
  On Mar 1, 8:12 pm, Stathis Papaioannou stath...@gmail.com wrote:

  You do assume, though, that brain function can't be replicated by a
  machine.

  No, I presume that consciousness is not limited to what we consider to
  be brain function. Brain function, as we understand it now, is already
  a machine.

 You've moved on since I discussed this with you a few months ago,
 since then you claimed that brain function (i.e. observable function
 or behaviour) could not be replicated by machine.

 No, there's no change. Brain function consists of physiological
 processes, but physiology is too broad and generic to resolve subtle
 anthropological processes. Eventually any machine replication will be
 exposed to some human observer. This is because the idea of
 'observable function or behavior' presumes a universal observer or
 absolute frame of reference, which I have no reason to entertain as
 legitimate. Are these words made of English letters or black pixels or
 RGB pixels...colorless electrons..? A machine can produce the
 electrons, the pixels, the letters, but not the cadence, the ideas,
 the fluid presence of a singular voice over time. These are subtle
 kinds of considerations but they make a difference over time. Machines
 repeat themselves in an unnatural way. They are tone deaf and socially
 awkward. They have no charisma. It shows. Brains have no charisma
 either, so reproducing their function does not reproduce that. It is
 the character which drives the brain function, not the other way
 around.

 If you now accept
 this, the further argument is that it is not possible to replicate
 brain function without also replicating consciousness.

 No, you're missing my argument now as you have in the past.

 This is valid
 even if it isn't actually possible to replicate brain function. We've
 discussed this before and I don't think you understand it.

 I understand your argument from the very beginning. I debate people
 about it all week long with the same view exactly. It's by far the
 most popular position I have encountered online. It is the
 conventional wisdom position. There is nothing remotely new or
 difficult to understand about it.

 Craig

Or, maybe it's ... http://en.wikipedia.org/wiki/Dunning-Kruger




Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 2, 9:41 pm, Terren Suydam terren.suy...@gmail.com wrote:


 Or, maybe it's ...http://en.wikipedia.org/wiki/Dunning-Kruger

Or this...
http://www.alternet.org/health/154225/would_we_have_drugged_up_einstein_how_anti-authoritarianism_is_deemed_a_mental_health_problem




Re: COMP test (ontology of COMP)

2012-03-02 Thread Stephen P. King

On 2/29/2012 9:54 AM, Bruno Marchal wrote:


On 29 Feb 2012, at 13:50, Stephen P. King wrote:


On 2/28/2012 5:19 PM, Quentin Anciaux wrote:


2012/2/28 Stephen P. King stephe...@charter.net


On 2/28/2012 10:43 AM, Quentin Anciaux wrote:

Comp substitutes consciousness... such that you could not feel
any difference (in your consciousness, from your POV) if your
brain was substituted for a digital brain.


 Hi Quentin,

OK, but could you elaborate on this statement?


It means a hypothetical you after mind uploading would feel as 
conscious as you are now in your biological body, and you would still 
*feel*, and feel being you, and conscious, and all...


Hi Quentin,

We need to nail down exactly what continuity of self is. If there 
is no you, as Brent wrote yesterday, what is that which is 
invariant with respect to substitution?



As I said, Brent made a sort of pedagogical mistake, but a big one, 
which is often done, and which explains perhaps why some materialists 
become person eliminativists.


The you is a construct of the brain. It is abstract. You can see it 
as an information pattern, but a real stable one which can exist in 
many representations.


And you can build it for any machine by using Kleene's second 
diagonalization construction.


It is the key of the whole thing. So let me explain again. You can 
certainly construct a program D capable of doing some simple 
duplication of an arbitrary object x and applying any transformation T 
that you want on that duplicated object, perhaps with some parameters:

Dx gives T(... xx ...),

Then applying D to itself, that is, substituting x for D, leads to a 
self-referential program:

DD gives T(... DD ...).

You might add quotes to prevent an infinite loop:

Dx gives T(...'xx' ...) so that

DD gives T(... 'DD'...).

This is the trick used by Gödel, Kleene, Turing, Church, Post, ... in 
all incompleteness and insolubility results, but also in abstract 
biology (see my paper Amoeba, Planaria, and Dreaming Machines).
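The Dx / DD construction above can be rendered directly in code. A minimal Python sketch (the choice of T as a simple wrapper is mine, for visibility; any transformation would do):

```python
def T(text: str) -> str:
    """The arbitrary transformation applied to the duplicated code.
    A simple wrapper here, so the structure stays visible."""
    return "T(" + text + ")"

def D(x: str) -> str:
    # Duplication with quotes: x followed by a quoted copy of x,
    # handed to T -- i.e. Dx gives T(... 'xx' ...).
    return T(x + " " + repr(x))

d_source = "def D(x): return T(x + ' ' + repr(x))"
dd = D(d_source)  # DD gives T(... 'DD' ...): the output embeds D's full code

print(d_source in dd)                            # True: it refers to D's entire code
print(dd.startswith("T(") and dd.endswith(")"))  # True
```

Applying D to its own source is what turns plain duplication into self-reference: the result contains the complete text of the program that produced it, quoted copy included.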


That defines a relative you, trivially relative to you. It is the I 
of computer science. It allows you to write a program referring to its 
entire code/body in the course of its execution. In some programming 
languages, like the object-oriented Smalltalk, for example, it is a 
built-in control structure called SELF.


This gives, unfortunately, only a third-person notion of self. It is 
more my "body" than my "soul", and that is why, to do the math, we 
have to use the conjunction of truth with belief to get a notion of 
first person. By the non-definability of truth, this "I" cannot be 
defined by the machine concerned, but it still exists, even if doubly 
immaterial: it is abstract, and it is in relation with a truth that is 
not definable by the machine.


Both are invariant, by definition, when the comp substitution is done 
at the right level. It means that the reconstituted person will behave 
the same, and feel to be the same.




Dear Bruno,

Forgive the obvious question, but what you wrote here should be the 
blueprint for creating an AI, no? All that needs to be done is to 
create a special-purpose physical machine that can implement a program 
with this structure, such that it runs fast enough to 
interact with our world at our level.







Is the differentiation that one _might_ feel, given the wrong
substitution level, different from what _might_ occur if a
digital uploading procedure is conducted that fails to
generate complete continuity?


It depends on the wrongness of the substitution or the lack of 
continuity... it's not a binary outcome.


At some point it would have to be, for a digital system has a 
fine-grained level of sensitivity to differences, no? I am trying to 
nail down the details of this idea.


The details are in the mathematics of self-reference.


Where? How is the degree of resolution or scope of a 
computation coded in the computation itself? It seems that this is assumed in the 
notion of computer grammars and semantics, but has this question been 
addressed directly in the literature?







Those "does not feel any difference" terms are a bit ambiguous
and vague, IMHO.



Digital physics says that the whole universe can be substituted
with a program, which obviously implies comp (that we can
substitute your brain with a digital one); but comp shows that
to be inconsistent, because comp implies that any piece of
matter is non-computable... it is the limit of the infinities
of computations that go through your consciousness's current state.


Can you see how this would be a problem for the entire
digital uploading argument if functional substitution cannot
occur in a strictly classical way, for example by strictly
classical level measurement of brain structure?


Yes, and if it is, it is a big indication that comp is somehow wrong...


AFAIK, it would only prevent 

Re: Yes Doctor circularity

2012-03-02 Thread Stephen P. King

On 3/2/2012 10:17 PM, Craig Weinberg wrote:

On Mar 2, 9:41 pm, Terren Suydam terren.suy...@gmail.com wrote:


Or, maybe it's... http://en.wikipedia.org/wiki/Dunning-Kruger

Or this...
http://www.alternet.org/health/154225/would_we_have_drugged_up_einstein_how_anti-authoritarianism_is_deemed_a_mental_health_problem


Hear Hear!

Drug us into compliance, please! Ever read Brave New World 
(http://www.huxley.net/bnw/)? I have seen first-hand the effects of 
anti-ADD drugs...


Onward!

Stephen




Re: Yes Doctor circularity

2012-03-02 Thread Stathis Papaioannou
On Sat, Mar 3, 2012 at 12:55 PM, Craig Weinberg whatsons...@gmail.com wrote:

 I understand your argument from the very beginning. I debate people
 about it all week long with exactly the same view. It's by far the
 most popular position I have encountered online. It is the
 conventional wisdom position. There is nothing remotely new or
 difficult to understand about it.

I know that you understand the claim, but what you don't understand is
the reasoning behind it.


-- 
Stathis Papaioannou
