On 01 Jan 2012, at 18:35, David Nyman wrote:

On 1 January 2012 02:04, meekerdb <meeke...@verizon.net> wrote:

Not to wish to pre-empt Bruno's reply, but I think you're mixing up 1-p and 3-p. From 3-p, all branches are conscious, but I only experience
myself on one branch at a time, probabilistically according to the
measure of computations. There's no individual soul, just in one sense
a single consciousness that experiences every possible state.


That seems incoherent to me.  How is it different from there are many
experiences? "I" is just a construct from a subset of experiences and there
can be many different subsets from which many different "I"s can be
constructed. But I don't know what it would mean to say there is just
one "I" or to say that "I" can jump from one thread of experience to
another. That would presuppose that consciousness, the "I", is something
apart from the experiences it jumps to.

This is a tricky one.  Pierz says above that "from 3-p, all branches
are conscious".  But perhaps it might be more accurate to say
something more like "from 3-p, all branches are in some measure
accessible to consciousness".

OK. And the measure is a conditional measure, relative to the state "you are in". Of course, from the internal view of the state, you are always in it.



Consciousness indeed supervenes on all
branches, but never "all at the same time".

Of course, this is a bit ambiguous. As Brent said (or will say), time is an internal construct related to the machine's ability to sum up its comp path, and to foresee/bet on its possible futures.



Supervenience is not an
identity claim.

Key point.



The putative supervenience base is an inclusive
category embracing all 3-p descriptions indifferently, whereas 1-p
experiences are characterised precisely by their mutual exclusivity.

Indeed.



I agree with you that ""I" is just a construct from a subset of
experiences and there
can be many different subsets from which many different "I"s can be
constructed".

Hmm...  (more below)



"I" in this objective sense can be coherently understood
as an ensemble of co-existing 3-p descriptions.  But any conscious
experience, by contrast, is always a singular occasion - a unique
"moment in time", if you like.  So we mustn't be misled into imagining
arrays of conscious moments as somehow sitting there "all together" in
timeless identity with their 3-p supervenience base, because to do so
would be to destroy all logical possibility of recovering the
uniqueness of the experiential moment.

OK.



It is this very "numerical problem" - the fact that there are many
bodies but only one conscious experience - that led Schrödinger to
make his remark about our consciousness being "not merely a piece of
this entire existence, but in a certain sense the whole".  Because
whenever we try to think of it as merely a "piece", the question will
always obtrude "but why only THIS piece right NOW?".  A criterion of
selection is implied which would be capable of transforming the
totality of 3-p indifferent co-existence into a unique 1-p
manifestation.  And this in turn entails, as Schrödinger observed,
that in some sense (to be resolved!) each individual conscious
"fragment of the present" must be a unique summation, by the system as
a whole, of itself.

I think I agree. But how is that possible? The answer is behind my "hmm..." above. I disagree with the idea that the self is a construct based on past experiences. Past experiences are important, but they do not constitute the "self", which is a much more primitive notion.

In fact we must, I think, distinguish between two notions of self: the third-person self (more or less your body, or your local relative "Gödel number", which, by Kleene's theorem, can be handled by the machine), and the first-person self, which is the same except that it is connected with truth (and to meaning through that connection). The machine can know its local third-person self-description entirely. That is not obvious, and it comes from the fact that if phi_i is a universal enumeration of the computable (partial and total) functions, we can solve equations of the kind (with F being any computable function)

phi_e(x, y, ...) = F(e, x, y, ...)

The program e is able to refer to its own code. This is not obvious (it is Kleene's result, although the mathematics for it is already in Gödel's proof). I can prove it later. It is proved by the diagonalization technique. As an example, phi_k( ) = k describes a program k, without input, which is able to output its own complete description (the elementary amoeba).
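To make that elementary amoeba concrete, here is a minimal sketch in Python of such a self-reproducing program (a "quine"); the choice of language and of the variable name d is purely illustrative, but the two code lines do reproduce themselves exactly, which is the programming analogue of solving phi_k( ) = k by diagonalization:

# A self-reproducing program: the string d plays the role of the program's
# own description, and the last line applies that description to itself.
# (The comment lines are not reproduced; only the two code lines below
# form the fixed point.)
d = 'd = {!r}\nprint(d.format(d))'
print(d.format(d))

Kleene's second recursion theorem is the general version: for any computable F, the same trick of applying a template to its own quoted description yields an index e with phi_e(x, y, ...) = F(e, x, y, ...); the quine is just the special case where F ignores its inputs and returns the description itself.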

The first person self is more like

phi_e(x, y, ...) = F(e, x, y, ...) & TRUE("F(e, x, y, ...)") (cf Bp & p)

From the proper theological point of view (G* minus G), those two relations are equivalent (e being self-referentially correct by Kleene's diagonalization), but e cannot know that, so the logics of those two relations differ a lot: e can prove the first relation, but cannot even express the second one in its language.
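To spell out the logical situation in standard provability-logic terms (a summary of standard facts, not a new claim: B is the machine's provability predicate, G is the logic of what the machine can prove about B, and G* is the logic of what is true about B):

% B = the machine's provability predicate; G and G* are Solovay's provability logics
\begin{align*}
G^{*} &\vdash Bp \rightarrow p && \text{(what a sound machine proves is true)}\\
G &\nvdash Bp \rightarrow p && \text{(the machine cannot prove its own soundness)}\\
G^{*} &\vdash Bp \leftrightarrow (Bp \wedge p) && \text{(so the two relations above are equivalent)}\\
G &\nvdash Bp \leftrightarrow (Bp \wedge p) && \text{(but the machine cannot know that)}
\end{align*}

The modal logic of the compound Bp & p is S4Grz, not G, and that difference in logic is the precise sense in which the first-person self obeys different laws from the third-person self, even though, for a sound machine, the two are extensionally the same.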

So the third-person "self" is more like a generic (and non Turing-universal) control structure. It is the same for all programs, once they have it. The first-person self mixes that self-control structure with truth, or reality (whatever it is). Because of this, it can no longer be defined by the machine, and from its 1-view it will look as if it were (much) more than just the self; and indeed, from its 1-view, it is much more, due to that relation with truth (or God, or the ONE, if you want to give it a "non-name").

Now, truth is very rich and contains all the counterfactuals, all the possible computational pasts and futures, and the 1-view becomes a local relative view opposing, locally, the whole and the tiny personal body (the 3-self). This gives some sense to Schrödinger's, and Maharshi's, intuition. Logicians separate syntax and semantics: the 3-self can be defined entirely by syntactic constructions, but the 1-self refers to truth and to the whole, and logical tools can help to make that clearer.

Bruno


On 12/31/2011 5:07 PM, David Nyman wrote:

On 31 December 2011 23:35, Pierz <pier...@gmail.com> wrote:

Not to wish to pre-empt Bruno's reply, but I think you're mixing up 1-p and 3-p. From 3-p, all branches are conscious, but I only experience
myself on one branch at a time, probabilistically according to the
measure of computations. There's no individual soul, just in one sense
a single consciousness that experiences every possible state.


That seems incoherent to me.  How is it different from there are many
experiences? "I" is just a construct from a subset of experiences and there
can be many different subsets from which many different "I"s can be
constructed. But I don't know what it would mean to say there is just
one "I" or to say that "I" can jump from one thread of experience to
another. That would presuppose that consciousness, the "I", is something
apart from the experiences it jumps to.

Brent

Yes, and the sense in which there is "a single consciousness that
experiences every possible state" is indeed an unusual one.  It's as
if we want to say that all such first-personal experiences "occur"
indifferently or even "simultaneously", but on reflection there can be no relation of simultaneity between distinguishable conscious events. The first-person is, by definition, always in the singular and present
NOW.

As Schrödinger remarked:

"This life of yours which you are living is not merely a piece of this entire existence, but in a certain sense the whole; only this whole is
not so constituted that it can be surveyed in one single glance."

David


When you write things like that I'm left with the impression that you
think one's
consciousness is a thing, a soul, that moves to different bundles of
computation so there
are some bundles that don't have any consciousness but could have if you
"jumped to them".

Not to wish to pre-empt Bruno's reply, but I think you're mixing up 1-p and 3-p. From 3-p, all branches are conscious, but I only experience
myself on one branch at a time, probabilistically according to the
measure of computations. There's no individual soul, just in one sense
a single consciousness that experiences every possible state.

As for Mars Rover I'm curious to know this: If we programmed it to
avoid danger, would it experience fear? Until we understand the
qualia, you're as in the dark as we are on this question. You assume
the affirmative, we assume the negative. That's why I sigh. Such
arguments go nowhere but a reassertion of our biases/intuitions, and
the result is unedifying.







http://iridia.ulb.ac.be/~marchal/



