Re: Interesting paper on consciousness, computation and MWI

2011-08-31 Thread Pierz
Sophistry has a smell. Sometimes an argument smells of it, but it may
be a lot harder to pin down where the specious logic is – especially
when it’s all dressed up in a mathematical formalism that may be
inaccessible to the non-mathematician or logician. However, the
problem with the arguments relating to consciousness in this paper is
not so hard to pin down, and indeed Stephen King is on the right track
with his objection.

Eastmond argues that an infinite conscious lifetime is impossible
because, in ‘finding oneself’ at a particular point in that lifetime,
one would have to gain an infinite amount of knowledge, which is
absurd.  He concludes that such an infinite lifetime is in principle
impossible. The flaw lies in the way the author glosses over the
notion of “gaining information”. In examining the problem, he treats
this “gaining of information” as if it occurred magically the moment
one finds oneself at a certain point in a lifetime, but in fact such
information has to be acquired by a concrete computation. For example,
if I am to gain information about my current lifetime position, I need
to examine a calendar and compare this to stored or acquired knowledge
about my date of birth. For an infinite lifetime with a lower bound (a
lifetime with a starting point), the size of the required computation
is arbitrarily large but finite; for an infinite lifetime with no
lower bound, it is simply uncomputable. This is the same as saying that one
cannot calculate the age of a person who has always existed. The fact
that such a person’s age is uncomputable does not however mean that
such a person cannot exist.
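
To make this concrete, here is a minimal sketch (my illustration, not
Eastmond's; the dates are invented): computing an age is an ordinary,
finite computation precisely because a birth date exists as a lower
bound.

```python
# A minimal sketch: locating yourself on a timeline is a concrete
# computation that needs a reference point.
from datetime import date

def age_in_days(birth: date, today: date) -> int:
    """Finite computation: possible only because a lower bound exists."""
    return (today - birth).days

print(age_in_days(date(1980, 5, 1), date(2011, 8, 31)))  # -> 11444

# For a being with no beginning there is no 'birth' value to pass in:
# its age is not merely unknown but uncomputable.
```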

The favoured theory in modern cosmology suggests that the universe is
spatially infinite. How then do we calculate the position of our
planet in this universe? Even if astronomers had unrestricted access
to a map of the universe, they could never calculate our position,
because the calculation would be infinite. Given that time and space
are interconvertible, the same logic applies to locating an event on
an infinite timeline. The two situations are mathematically
indistinguishable, yet nobody takes this as a disproof of the
spatially infinite universe.

In an infinite lifetime with no lower bound, we can never know our
age, and the amount of information ‘gained’ when we find ourselves at
a point of time in such a lifetime is a function of how much
information we can process (concrete processing limitations) and the
amount of information available to us about our position. Whichever is
smaller forms the limit.

There is also a flaw in the reasoning in relation to the proposed
conscious computer which resets itself in order to generate repeated
(and therefore infinite) conscious moments. We must remember that the
information gain is made by the conscious entity and must form part of
its conscious computation. Otherwise where is the supposed gain
occurring? All we would have is an objective description of a
perfectly mathematically conceivable situation – an infinite set of
values for the set of conscious moments, or an infinitely long string
to define a moment within that set.  So the computer must gain the
information. But it cannot do so if it continually resets. The
invocation of thermodynamics does not help if the computer cannot
access information about entropy. It can escape this problem with an
endless incrementing loop, but then it needs an infinite memory to
store this growing string. Its computational limitations inevitably
force its incrementing register to 'clock over' (like Y2K) at some
point, causing it to repeat itself.
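
The clock-over point follows from the pigeonhole principle, as this
toy sketch (mine, not from the paper) illustrates: a machine whose
state fits in a fixed-width register can visit only finitely many
states before revisiting one.

```python
# A 4-bit register holds only 2**4 = 16 distinct values, so any
# endless incrementing loop must wrap around and repeat a state.
WIDTH = 4

def step(state: int) -> int:
    """Increment the register, wrapping around like Y2K."""
    return (state + 1) % (2 ** WIDTH)

seen = set()
state = 0
while state not in seen:
    seen.add(state)
    state = step(state)

print(f"state {state} recurs after {len(seen)} steps")  # 16 steps
```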

So unless we grant the possibility of an infinite mind/computer, an
infinite lifetime necessarily entails the repetition of conscious
experience (just as cosmologists grant that a spatially infinite
universe with locally finite information must entail a Nietzschean
eternal recurrence). Such a lifetime is perfectly imaginable; only
with infinite computational resources would an infinite lifetime
without repetition be possible.

Given these flaws, the paper's remaining arguments regarding the
impossibility of a deterministic and conscious computer need not even
be addressed, since they are built on unsound foundations.


On Aug 25, 8:12 am, David Nyman david.ny...@gmail.com wrote:
 This paper presents some intriguing ideas on consciousness, computation and
 the MWI, including an argument against the possibility of consciousness
 supervening on any single deterministic computer program (Bruno might find
 this interesting).  Any comments on its cogency?

 http://arxiv.org/abs/gr-qc/0208038

 David

Re: bruno list

2011-08-31 Thread Bruno Marchal


On 30 Aug 2011, at 19:23, Craig Weinberg wrote:


On Aug 30, 11:29 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 30 Aug 2011, at 14:43, Craig Weinberg wrote:




On Aug 30, 4:06 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 29 Aug 2011, at 20:07, Craig Weinberg wrote:



Definitely, but the reasons that we have for causing those changes in
the semiconductor material are not semiconductor logics. They use
hardware logic to get the hardware to do software logic, just as the
mind uses the brain's hardware to remember, imagine, plan, or execute
what the mind wants it to. What the mind wants is influenced by the
brain, but the brain is also influenced directly by the mind.



A hard-wired universal machine can emulate a self-transforming
universal machine, or a high level universal machine acting on its
low level universal bearer.



Ok, but can it emulate a non-machine?


This is meaningless.


If there is no such thing as a non-machine, then how can the term
machine have any meaning?


There are a ton of non-machines. Recursion theory is the study of
degrees of non-machineness.

What is meaningless is to ask a machine to emulate a non-machine,
which by definition is not emulable by a machine.

The point is just this one: do you or do you not make your theory
rely on something non-Turing emulable? If the answer is yes: what is
it?



Yes, biological, zoological, and anthropological awareness.


If you mean by this, 1-awareness,


No, I mean qualitatively different phenomenologies which are all types
of 1-awareness. Since 1-awareness is private, they are not all the
same.


Most plausibly.

comp explains its existence and its non Turing emulability, without
introducing ad hoc non Turing emulable beings in our physical
neighborhood.


Whose physical neighborhood are comp's non Turing emulable 1-awareness
beings living in? Or are they metaphysical?


They are (sigma_1) arithmetical, in the 3-view.
And unboundedly complex, in the 1-views (personal, plurals).

This is precise enough to be tested, and we can argue that some non
computable quantum weirdness, like quantum indeterminacy, confirms
this. The simple self-duplication quickly illustrates how comp makes
it possible to experience non computable facts without introducing
anything non computable in the third person picture.


I'm not suggesting anything non-computable in the third person
picture. Third person is by definition computable.


Of course not. I mean, come on: Gödel, 1931. Or just by Church's
thesis, as I explain from time to time (just one double
diagonalization). Third person truth is bigger than the computable.

Some of those
computations are influenced by 1p motives though.


OK. But the motive might be an abstract being or engram (programmed
by nature and evolution, that is, by deep computational histories).

No need to introduce anything non Turing emulable in the picture here.




Once those motives
are expressed externally, they are computable.


But with comp, you just cannot express them externally, just
illustrate them and hope others grasp. They are not computable,
because the experience is consciousness filtered by infinities of
'brains'.


Comp shows a problem. What problem does your theory show?

You can't always
reverse engineer the 1-p motives from the 3-p though.


You are right; that is why, with comp, most 1-p notions are not
3-definable. Still, comp allows us to study the case of the ideally
correct machine, and to have metatheories shedding light on that
non-communicability.

Feeling as
qualitatively distinct from detection.


Of course. Feeling is distinct from detection. It involves a person,


Yes! A person, or another animal. Not a virus or a silicon chip or a
computer made of chips.



This is racism.

It is a confusion between the person and its body. No doubt billions
of years of engramming make them hard to separate technologically, but
nothing prevents surviving with a digital brain, or even living in a
virtual environment, in principle, at some level, some day.
And in this picture we can formulate precise (sub)problems of the hard
mind-body problem.

which involves some (not big) amount of self-reference ability.


You don't have to be able to refer to yourself to feel something.


You don't have to refer to yourself explicitly, but *feeling* still  
involves implicit self-references, I think.

Pain
is primitive.


It is very simple at the base and very deep, but, hmm, I don't know;
perhaps 1-primitive (with some of the 1-views described by the
arithmetical or self-referential hypostases).


Not 3-primitive, with mechanism.

Not to disqualify machines implemented in a particular material -
stone, silicon, milk bottles, whatever - from having the normal
detection experiences of those substances and objects, but there is
nothing to tempt me to want to assign human neurological qualia to
milk bottles stacked up like dominoes. We know about 

Re: bruno list

2011-08-31 Thread Craig Weinberg
On Aug 31, 2:53 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 30 Aug 2011, at 19:23, Craig Weinberg wrote:

  A hard-wired universal machine can emulate a self-transforming
  universal machine, or a high level universal machine acting on its
  low level universal bearer.

  Ok, but can it emulate a non-machine?

  This is meaningless.

  If there is no such thing as a non-machine, then how can the term
  machine have any meaning?

 There are a ton of non-machines. Recursion theory is the study of
 degrees of non-machineness.

 What is meaningless is to ask a machine to emulate a non-machine,
 which by definition is not emulable by a machine.

Ok, so how do we know that human awareness is not both a machine and a
non-machine, and therefore not completely Turing emulable?

  The point is just this one: do you or do you not make your theory
  rely on something non-Turing emulable? If the answer is yes: what is it?

  Yes, biological, zoological, and anthropological awareness.

  If you mean by this, 1-awareness,

  No, I mean qualitatively different phenomenologies which are all types
  of 1-awareness. Since 1-awareness is private, they are not all the
  same.

 Most plausibly.



  comp explains its existence and its
  non Turing emulability, without introducing ad hoc non Turing
  emulable beings in our physical neighborhood.

  Whose physical neighborhood are comp's non Turing emulable 1-awareness
  beings living in? Or are they metaphysical?

 They are (sigma_1) arithmetical, in the 3-view.
 And unboundedly complex, in the 1-views (personal, plurals).

What makes them seem local to a spatiotemporal axis in a way that
seems simple in the 1p? How does an unboundedly complex phenomenon 'go
to the store for some beer'?

But back to this (sigma_1) arithmetical, in the 3-view. That's a yes
to the question of whether they are metaphysical, right?

  This is precise enough to be tested, and we can argue that some non
  computable quantum weirdness, like quantum indeterminacy, confirms
  this. The simple self-duplication quickly illustrates how comp makes
  it possible to experience non computable facts without introducing
  anything non computable in the third person picture.

  I'm not suggesting anything non-computable in the third person
  picture. Third person is by definition computable.

 Of course not. I mean, come on: Gödel, 1931. Or just by Church's
 thesis, as I explain from time to time (just one double
 diagonalization). Third person truth is bigger than the computable.

I don't know enough about it to say whether I agree yet, so I'll take
your word for it, but would you agree that third person truth is by
definition more computable than first person truth?

  Some of those
  computations are influenced by 1p motives though.

 OK. But the motive might be an abstract being or engram (programmed
 by nature and evolution, that is, by deep computational histories).
 No need to introduce anything non Turing emulable in the picture here.

Doesn't that just push first cause back a step? What motives influence
the abstract being, nature, or deep computational histories?

  Once those motives
  are expressed externally, they are computable.

 But with comp, you just cannot express them externally, just
 illustrate them and hope others grasp. They are not computable,
 because the experience is consciousness filtered by infinities of
 'brains'.

Illustrating them isn't an external expression? It sounds like you're
saying that nothing is computable now?

 Comp shows a problem. What problem does your theory show?

You mean what problem does my theory solve? Or what's an example of a
problem which arises from not using my model? It's the mind/body
problem. The role of awareness in the cosmos. The nature of our
relation to the microcosm and macrocosm. What energy, time, space, and
matter really are. The origins of the universe.

  You can't always
  reverse engineer the 1-p motives from the 3-p though.

 You are right; that is why, with comp, most 1-p notions are not
 3-definable. Still, comp allows us to study the case of the ideally
 correct machine, and to have metatheories shedding light on that
 non-communicability.

Sounds good to me. I think there is tremendous value in studying ideal
principles, although I would not limit them to arithmetic minimalism.
There's a whole universe of ideally correct non-machine intelligence
out there (in here) that needs metatheories too.

  Feeling as
  qualitatively distinct from detection.

  Of course. Feeling is distinct from detection. It involves a person,

  Yes! A person, or another animal. Not a virus or a silicon chip or a
  computer made of chips.

 This is racism.

A silicon chip is not a member of a race. It does nothing at all that
could be considered an expression of feeling. It might have feeling,
but whatever it has, we have something more, at least in our own eyes.
Racism is to look at another human being with prejudice, not to look
at an inanimate object and fail to give it 

10 Important Differences Between Brains and Computers

2011-08-31 Thread Craig Weinberg
http://scienceblogs.com/developingintelligence/2007/03/why_the_brain_is_not_like_a_co.php

10 Important Differences Between Brains and Computers

[ Artificial Intelligence, Cognitive Neuroscience, Computational
Modeling ]
Posted on: March 27, 2007 12:38 PM, by Chris Chatham

"A good metaphor is something even the police should keep an eye on."
- G.C. Lichtenberg

Although the brain-computer metaphor has served cognitive psychology
well, research in cognitive neuroscience has revealed many important
differences between brains and computers. Appreciating these
differences may be crucial to understanding the mechanisms of neural
information processing, and ultimately for the creation of artificial
intelligence. Below, I review the most important of these differences
(and the consequences to cognitive psychology of failing to recognize
them): similar ground is covered in this excellent (though lengthy)
lecture.

Difference # 1: Brains are analogue; computers are digital

It's easy to think that neurons are essentially binary, given that
they fire an action potential if they reach a certain threshold, and
otherwise do not fire. This superficial similarity to digital 1's and
0's belies a wide variety of continuous and non-linear processes that
directly influence neuronal processing.

For example, one of the primary mechanisms of information transmission
appears to be the rate at which neurons fire - an essentially
continuous variable. Similarly, networks of neurons can fire in
relative synchrony or in relative disarray; this coherence affects the
strength of the signals received by downstream neurons. Finally,
inside each and every neuron is a leaky integrator circuit, composed
of a variety of ion channels and continuously fluctuating membrane
potentials.
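
To see how continuous these dynamics are, here is a toy leaky
integrate-and-fire neuron (my sketch of the standard textbook model,
not code from the article; all parameter values are invented): the
membrane potential is an analogue quantity, and the binary spike is
only the final step.

```python
# Toy leaky integrate-and-fire neuron: V decays ("leaks") between
# inputs, so spike timing depends on continuous, analogue dynamics.
leak = 0.9        # fraction of potential retained per time step
threshold = 1.0   # spike when V crosses this
V = 0.0

inputs = [0.3, 0.3, 0.0, 0.3, 0.3, 0.3, 0.0, 0.3]
for t, i in enumerate(inputs):
    V = leak * V + i          # continuous integration with leak
    if V >= threshold:
        print(f"t={t}: spike (V={V:.2f})")
        V = 0.0               # reset after firing
    else:
        print(f"t={t}: no spike (V={V:.2f})")
```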

Failure to recognize these important subtleties may have contributed
to Minsky & Papert's infamous mischaracterization of perceptrons,
neural networks without an intermediate layer between input and
output. In linear networks, any function computed by a 3-layer network
can also be computed by a suitably rearranged 2-layer network. In
other words, combinations of multiple linear functions can be modeled
precisely by just a single linear function. Since their simple 2-layer
networks could not solve many important problems, Minsky & Papert
reasoned that larger networks also could not. In contrast, the
computations performed by more realistic (i.e., nonlinear) networks
are highly dependent on the number of layers - thus, perceptrons
grossly underestimate the computational power of neural networks.
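
The linear-collapse claim is easy to verify numerically; this sketch
(mine, with arbitrary random weights) shows two stacked linear layers
computing exactly what one merged layer computes.

```python
# In purely linear networks, stacking layers adds no power:
# W2 @ (W1 @ x) equals the single merged layer (W2 @ W1) @ x.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # layer 1: 3 inputs -> 4 hidden
W2 = rng.normal(size=(2, 4))   # layer 2: 4 hidden -> 2 outputs
x = rng.normal(size=3)

two_layer = W2 @ (W1 @ x)
one_layer = (W2 @ W1) @ x      # the "suitably rearranged" network

print(np.allclose(two_layer, one_layer))  # True: no extra power
```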

Difference # 2: The brain uses content-addressable memory

In computers, information in memory is accessed by polling its precise
memory address. This is known as byte-addressable memory. In contrast,
the brain uses content-addressable memory, such that information can
be accessed in memory through spreading activation from closely
related concepts. For example, thinking of the word "fox" may
automatically spread activation to memories related to other clever
animals, fox-hunting horseback riders, or attractive members of the
opposite sex.

The end result is that your brain has a kind of built-in Google, in
which just a few cues (key words) are enough to cause a full memory to
be retrieved. Of course, similar things can be done in computers,
mostly by building massive indices of stored data, which then also
need to be stored and searched through for the relevant information
(incidentally, this is pretty much what Google does, with a few
twists).
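
As a minimal sketch of the idea (my illustration; the stored strings
and cue are invented), retrieval by content can be as simple as
returning the stored item that best overlaps a partial cue:

```python
# Content-addressable recall: instead of fetching by address,
# retrieve the stored item that best overlaps a partial cue.
memories = {
    "fox hunting riders on horseback": {"fox", "hunting", "riders", "horseback"},
    "clever animals of the forest": {"clever", "animals", "forest", "fox"},
    "grocery list for tuesday": {"milk", "eggs", "bread"},
}

def recall(cue: set[str]) -> str:
    """Return the memory with the greatest overlap with the cue."""
    return max(memories, key=lambda m: len(memories[m] & cue))

print(recall({"fox", "riders"}))  # -> "fox hunting riders on horseback"
```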

Although this may seem like a rather minor difference between
computers and brains, it has profound effects on neural computation.
For example, a lasting debate in cognitive psychology concerned
whether information is lost from memory simply because of decay or
because of interference from other information. In retrospect, this
debate is partially based on the false assumption that these two
possibilities are dissociable, as they can be in computers. Many are
now realizing that this debate represents a false dichotomy.

Difference # 3: The brain is a massively parallel machine; computers
are modular and serial

An unfortunate legacy of the brain-computer metaphor is the tendency
for cognitive psychologists to seek out modularity in the brain. For
example, the idea that computers require memory has led some to seek
the memory area, when in fact these distinctions are far more messy.
One consequence of this over-simplification is that we are only now
learning that memory regions (such as the hippocampus) are also
important for imagination, the representation of novel goals, spatial
navigation, and other diverse functions.

Similarly, one could imagine there being a language module in the
brain, as there might be in computers with natural language processing
programs. Cognitive psychologists even claimed to have found this
module, based on patients with damage to a region of the brain known
as Broca's area. More recent 

Re: 10 Important Differences Between Brains and Computers

2011-08-31 Thread Bruno Marchal


On 31 Aug 2011, at 15:45, Craig Weinberg wrote:


[quoted article snipped]

Re: 10 Important Differences Between Brains and Computers

2011-08-31 Thread Craig Weinberg
On Aug 31, 10:01 am, Bruno Marchal marc...@ulb.ac.be wrote:


 Those are arguments against the comp metaphor, which compares the
 brain with man-made universal machines, and which is very naïve. They
 are not arguments against the comp hypothesis, which asserts the
 existence of a level at which we are Turing emulable.

Yes, it's just about the brain vs the contemporary electronic
semiconductor computer. I mainly wanted to post this as corroboration
of my position on the viability of artificial neurons and on the
conception of the psyche as a product of electric switching through
neurons.

I wouldn't say that it supports the comp hypothesis either, though,
whereas I would expect it to support it if the data fit that
interpretation. The point about the brain relying on continuous sense
connections with its outside world would seem to support my view that
sense is fundamental, and not solipsistic simulations or arithmetic
representations. Other points made, like content-addressable memory
and self-organization, seem to favor a signifying, 1-p architecture
rather than a 3-p a-signifying scripted organization.

Every point he mentions seems to go along well with my position, but
nothing in it compels me one way or the other about the comp
hypothesis. The #6 item, about hardware and software being distinctly
different in a typical PC but not in the brain, supports my contention
that our use of computers piggybacks our own human codes and
experiences onto a completely unfeeling inorganic substrate which has
no capacity to feel or learn to feel.

Craig




Re: bruno list

2011-08-31 Thread Bruno Marchal


On 31 Aug 2011, at 15:42, Craig Weinberg wrote:


On Aug 31, 2:53 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 30 Aug 2011, at 19:23, Craig Weinberg wrote:



A hard-wired universal machine can emulate a self-transforming
universal machine, or a high level universal machine acting on its
low level universal bearer.

Ok, but can it emulate a non-machine?

This is meaningless.

If there is no such thing as a non-machine, then how can the term
machine have any meaning?

There are a ton of non-machines. Recursion theory is the study of
degrees of non-machineness.

What is meaningless is to ask a machine to emulate a non-machine,
which by definition is not emulable by a machine.

Ok, so how do we know that human awareness is not both a machine and a
non-machine, and therefore not completely Turing emulable?


On the contrary, we know that if we have a Turing emulable body, then
our first person being is not Turing emulable.
Even the Universal Dovetailer cannot emulate one soul. By the first
person indeterminacy (but not only that) the soul emerges from the
whole block structure of the UD-work (which I often denote by UD*).
The notion of soul refers to truth, which is not even definable.

The point is just this one: do you or do you not make your theory
rely on something non-Turing emulable? If the answer is yes: what is
it?



Yes, biological, zoological, and anthropological awareness.



If you mean by this, 1-awareness,


No, I mean qualitatively different phenomenologies which are all
types of 1-awareness. Since 1-awareness is private, they are not all
the same.


Most plausibly.




comp explains its existence and its non Turing emulability, without
introducing ad hoc non Turing emulable beings in our physical
neighborhood.


Whose physical neighborhood are comp's non Turing emulable
1-awareness beings living in? Or are they metaphysical?


They are (sigma_1) arithmetical, in the 3-view.
And unboundedly complex, in the 1-views (personal, plurals).


What makes them seem local to a spatiotemporal axis in a way that
seems simple in the 1p? How does an unboundedly complex phenomenon 'go
to the store for some beer'?


Look at what beer is, in a first approximation. You need stars,
planets, life, ... up to the human story, including perhaps soccer,
advertising, the prohibition of cannabis, and incredibly complex
phenomena related to other complex phenomena.


It seems simple to you because a large part of that story is already
encapsulated in the complexity of your cells and brain, the depth of
the thirst sensation, etc. The 1-person finds it simple because it
looks at the process from its end.

But back to this (sigma_1) arithmetical, in the 3-view. That's a yes
to the question of whether they are metaphysical, right?


No, it means it is arithmetical, like "17 is prime".
And the 1-person is theological, if you want. Like "17 is prime" and
"17 is prime": the second "17 is prime" refers implicitly to truth,
which is arguably metaphysical or theological.

This is precise enough to be tested, and we can argue that some non
computable quantum weirdness, like quantum indeterminacy, confirms
this. The simple self-duplication quickly illustrates how comp makes
it possible to experience non computable facts without introducing
anything non computable in the third person picture.



I'm not suggesting anything non-computable in the third person
picture. Third person is by definition computable.


Of course not. I mean, come on: Gödel, 1931. Or just by Church's
thesis, as I explain from time to time (just one double
diagonalization). Third person truth is bigger than the computable.


I don't know enough about it to say whether I agree yet, so I'll take
your word for it, but would you agree that third person truth is by
definition more computable than first person truth?


I'm afraid it is not.
A famous theorem (following work by Post, Skolem, Kleene, and
Mostowski) makes it possible to classify the arithmetical
insolubilities by the alternation of the quantifiers in front of a
decidable predicate. Hereafter P(x, y, z, r, s, t ...) is a decidable
predicate containing only the symbols *, +, s, 0, together with
variables x, y, z, ..., and the usual propositional logical symbols
(&, V, ~, and parentheses), but NO quantifiers.


P(x, y, z, r, s, t ...) is Sigma_0, or Pi_0, or Delta_0. These are
recursive, decidable, completely computable.
ExP(x, y, z, r, s, t ...) is Sigma_1 = semi-decidable (decidable when
true) = partial computable = Turing emulable.
AxP(x, y, z, r, s, t ...) is Pi_1 = semi-refutable (decidable when
false) = already non computable, non Turing emulable.
ExAyP(x, y, z, r, s, t ...) is Sigma_2 (much more non computable).
AxEyP(x, y, z, r, s, t ...) is Pi_2 (even much more non computable).
ExAyEzP(x, y, z, r, s, t ...) is Sigma_3.
Etc.
Arithmetical truth can be seen as the union of all Sigma_i (or of all
Pi_i). Computability stops at Sigma_1.
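
To illustrate "decidable when true", here is a minimal sketch (my
example; the predicate is arbitrary) of a Sigma_1 search: it halts
with a witness if ExP(x) is true, and runs forever if it is false.

```python
# Semi-deciding a Sigma_1 sentence Ex P(x) by unbounded search:
# halts with a witness when the sentence is true, diverges otherwise.
def P(x: int) -> bool:
    # Any decidable predicate; here: x is a perfect square above 10**6.
    return x > 10**6 and int(x**0.5) ** 2 == x

def semi_decide_exists():
    """Halts iff Ex P(x) is true."""
    x = 0
    while True:
        if P(x):
            return x  # witness found: the sentence is true
        x += 1

print(semi_decide_exists())  # halts: 1002001 = 1001**2
```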


Now the 1-person 

Re: bruno list

2011-08-31 Thread Bruno Marchal


On 31 Aug 2011, at 18:26, meekerdb wrote:


On 8/31/2011 12:28 AM, Bruno Marchal wrote:
I don't understand. I insist all the time that IF the 3-we is a
machine (yes doctor), then neither matter nor consciousness is
computable/Turing emulable. The 1-p is not even representable,
although it is meta-representable (by Bp & p, for example).


I think that is the confusing part.  To say 'yes doctor' is to bet  
that we (our brain) can be replaced by a computer (in some general  
sense),


Yes.


but then you purport to show that 'we' are not computable.  So even  
what a (physical) computer does is not computable.



Well, to be an artificial brain, the physical computer has to do
something computable. But the physical part (body) of the computer,
when looked at in detail, has to result from a competition among
infinities of universal machines. That is just the comp many-worlds as
seen through our angle/history/histories. This should be apparent in
step seven, no? Just take the first person indeterminacy with UD* as
domain. For your current computational state, there is an infinity of
computational stories going through that state, run by infinities of
UMs. Your first person experience (including physical sensations) is
given by a sort of Gaussian on those histories. Normality comes from
the high relative number of normal histories, perhaps by some
(arithmetical) phase randomization.


No doubt this is confusing. We start by assuming the brain is some
material machine, and we conclude by making that machine a limiting
idea in the universal mind looking at itself. But it is only logically
confusing if you take some primitive matter for granted.
This gives physics a reason and a way to originate and evolve, and it
justifies the existence of the non communicable part of truth
(consciousness, notably).


Ask for any clarification if needed (but note that from tomorrow
until 9 September I have a lot of exams, so please be patient).



Bruno





http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-31 Thread Craig Weinberg
On Aug 31, 12:22 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 31 Aug 2011, at 15:42, Craig Weinberg wrote:

  Ok, so how do we know that human awareness is not both a machine and a
  non-machine, and therefore not completely Turing emulable?

 On the contrary, we know that if we have a Turing emulable body, then
 our first person being is not Turing emulable.
 Even the Universal Dovetailer cannot emulate one soul. By the first
 person indeterminacy (but not only that) the soul emerges from the
 whole block structure of the UD-work (which I often denote by UD*).
 The notion of soul refers to truth, which is not even definable.

I'm confused. I thought that the whole point of comp is to say that
our first person being can be emulated digitally.

  The point is just this one: do you or do you not make your theory
  rely on something non-Turing emulable? If the answer is yes: what is it?

  Yes, biological, zoological, and anthropological awareness.

  If you mean by this, 1-awareness,

  No, I mean qualitatively different phenomenologies which are all
  types of 1-awareness. Since 1-awareness is private, they are not all
  the same.

  Most plausibly.

  comp explains its existence and its
  non Turing emulability, without introducing ad hoc non Turing
  emulable beings in our physical neighborhood.

  Whose physical neighborhood are comp's non Turing emulable
  1-awareness beings living in? Or are they metaphysical?

  They are (sigma_1) arithmetical, in the 3-view.
  And unboundedly complex, in the 1-views (personal, plurals).

  What makes them seem local to a spatiotemporal axis in a way that
  seems simple in the 1p? How does an unboundedly complex phenomenon
  'go to the store for some beer'?

 Look at what beer is, in a first approximation. You need stars,
 planets, life, ... up to the human story, including perhaps soccer,
 advertising, the prohibition of cannabis, and incredibly complex
 phenomena related to other complex phenomena.

That's the 3-p externality, but why and how is there a 1-p simplicity
on top of that? Why do we experience a beer and not stars, planets,
life, human civilization, etc.? What is served by it seeming simple if
it isn't?

 It seems simple to you because a large part of that story is already
 encapsulated in the complexity of your cells and brain, the depth of
 the thirst sensation, etc. The 1-person finds it simple because it
 looks at the process from its end.

Why and how would complexity encapsulate itself?

  But back to this (sigma_1 )arithmetical, in the 3-view. That's a yes
  to the question of whether they are metaphysical, right?

 No, it means it is arithmetical, like "17 is prime".
 And the 1-person is theological, if you want. Like "17 is prime" and
 "17 is prime": the second "17 is prime" refers implicitly to truth,
 which is arguably metaphysical or theological.

To me the arithmetic truth is a human cognitive experience with a
large set of 3-p demonstrable consequences. Primeness is conceptual.
You could just name an imaginary number i17 that equals whatever
quantity 17 is divisible by other than one, or just alter your state
of consciousness until 17 seems even.

  This is precise enough to be tested, and we can argue that some non
  computable quantum weirdness, like quantum indeterminacy, confirms
  this. The simple self-duplication quickly illustrates how comp makes
  it possible to experience non computable facts without introducing
  anything non computable in the third person picture.

  I'm not suggesting anything non-computable in the third person
  picture. Third person is by definition computable.

  Of course not. I mean, come on: Gödel, 1931. Or just by Church's
  thesis, as I explain from time to time (just one double
  diagonalization). Third person truth is bigger than the computable.

  I don't know enough about it to say whether I agree yet, so I'll
  take your word for it, but would you agree that third person truth
  is by definition more computable than first person truth?

 I'm afraid it is not.
 A famous theorem (following work by Post, Skolem, Kleene, and
 Mostowski) makes it possible to classify the arithmetical
 insolubilities by the alternation of the quantifiers in front of a
 decidable predicate. Hereafter P(x, y, z, r, s, t ...) is a decidable
 predicate containing only the symbols *, +, s, 0, together with
 variables x, y, z, ..., and the usual propositional logical symbols
 (&, V, ~, and parentheses), but NO quantifiers.

 P(x, y, z, r, s, t ...) is Sigma_0, or Pi_0, or Delta_0. These are
 recursive, decidable, completely computable.
 ExP(x, y, z, r, s, t ...) is Sigma_1 = semi-decidable (decidable when
 true) = partial computable = Turing emulable.
 AxP(x, y, z, r, s, t ...) is Pi_1 = semi-refutable (decidable when
 false) = already non computable, non Turing emulable.
 ExAyP(x, y, z, r, s, t ...) is Sigma_2 (much more non computable).
 AxEyP(x, y, z, r, s, t ...) is Pi_2 (even much more non computable).
 ExAyEzP(x, y, z, r, s, t ...) is Sigma_3.
 Etc.


Re: bruno list

2011-08-31 Thread Stathis Papaioannou
On Wed, Aug 31, 2011 at 2:52 AM, Craig Weinberg whatsons...@gmail.com wrote:

 The subject feels he initiates and has control over the voluntary
 movement but not the involuntary movement. That's the difference
 between them.

 Ok, now you could understand what I'm talking about if you wanted to.
 All you have to do is realize that it is not possible for us to feel
 that there is a difference between them if there is not a difference
 between them. Doesn't mean that the difference is what we think it is
 - it could very well be only a feeling - but so what? What possible
 purpose could such a feeling have, and how could it possibly arise
 from particle mechanics? Where is the feeling located? Why is it
 there? Why don't we have the same feeling about our stomach digesting?

We have different feelings about different things and this means there
are different brain processes underlying them. Our neurology is not
set up to control our digestion, but I suppose it is possible for a
mutant to be born who has motor and sensory connections from the gut
to the cortex. I don't see how you will help your case if you claim
that there is some fundamental cellular difference between voluntary
and involuntary actions, since in broad terms all the cells in the
body follow the same basic plan.

 Both types of movement, however, are completely
 determined by the low level behaviour of the matter in the brain,

 You can say that if you want to, but it just means that the low level
 behavior of matter is magic, and that even though it's only a large
 molecule, it wants to drive a Bugatti.

If a movement is *not* determined by the low level behaviour of matter
in the brain that means that some part of the brain will do something
magical. An ion channel will open not because the appropriate ligand
has bound, but all by itself.

 which can in theory be modeled by a computer.
 No particle moves unless
 it is pushed by another particle or force,

 Force is metaphysical. It's just our way of understanding processes
 which are occurring outside of us rather than inside. My view is that
 it's all sense and that force is in the eye of the beholder.

Even if you believe there is no basic physical reality, there are
certain consistencies in the behaviour of objects in the apparent
reality, and that is the subject of science.

 otherwise it's magic, like
 a table levitating.

 Tables do levitate if they aren't stuck to a large planet. What's
 magic is that we think it's a table and not a cloud of atoms flying
 around a volume of empty space.

Tables only do what the forces on them make them do. Same with
everything else in the universe, whether it's particles inside cells
or inside stars.


-- 
Stathis Papaioannou




Re: bruno list

2011-08-31 Thread Stathis Papaioannou
On Wed, Aug 31, 2011 at 3:24 AM, Evgenii Rudnyi use...@rudnyi.ru wrote:

 The subject feels he initiates and has control over the voluntary
 movement but not the involuntary movement. That's the difference
 between them. Both types of movement, however, are completely
 determined by the low level behaviour of the matter in the brain,
 which can in theory be modeled by a computer. No particle moves
 unless it is pushed by another particle or force, otherwise it's
 magic, like a table levitating.

 I would appreciate it if you could be more specific about the
 mechanism by which the movement of atoms leads, for example, to the
 creation of a book about consciousness. Such a book is, after all,
 just a collection of atoms, this is true. For me, however, the
 self-assembly of such a book is just magic.

The atoms have to move in order to write the book. They have to move
inside the brain of the author, then his hands have to move, the keys
on the computer keyboard move, and so on. Also, things have to happen
prior to the book being written. The universe arises, stars and
planets form, life evolves, the author is born, photons from books he
has read on consciousness impact on his retina which then leads to
reactions in his visual cortex and language centre. It's all very
complex, of course, but there is a causal chain of events. If you had
the right physical theory and enough computing power you could start
with the Big Bang, run a computer simulation and end up with the book.
Quantum mechanics does not preclude such a simulation.


-- 
Stathis Papaioannou




Re: bruno list

2011-08-31 Thread Stathis Papaioannou
On Wed, Aug 31, 2011 at 11:33 AM, Craig Weinberg whatsons...@gmail.com wrote:

 A mechanistic world model can still accommodate human (and animal)
 feeling, imagination, creativity and compatibilist free will.

 How, specifically?

There is no onus on us to answer that question in order to show that
it can happen, since it is in fact what happens. It's like asking how
heavier than air flight is possible: birds are heavier than air, birds
fly, therefore heavier than air flight is possible.


-- 
Stathis Papaioannou
