On 17 Sep 2014, at 05:46, Russell Standish wrote:
On Tue, Sep 16, 2014 at 06:21:04PM +0200, Bruno Marchal wrote:
Hi Russell, Hi Others,
Sorry for the delay. Some comments on your (Russell) MGA paper
appear below.
Incidentally, when you see the complexity of the interactions between
tree roots and the soil, through chemicals and bacteria, and when you
accept, as some experiments suggest, that trees and plants
communicate, I am not so sure that trees and forests, perhaps on a
different time scale, do not have some awareness, and a self-awareness
of some sort. (I take awareness as synonymous with consciousness,
although I change my mind below!)
Intra-plant communication appears to be too simple to support
consciousness, but rhizosphere networks are indeed a different
story. We can leave it as an open problem whether the rhizosphere of a
forest could be conscious, just as we're prepared to consider ant
colonies as conscious.
That seems wise to me, except that I am open to the idea that all
universal beings are conscious, and so are trees, bacteria and
protozoans. I really don't know, but the study of their behaviors
surprises me enough to keep that possibility open.
OK, I will make an attempt. Awareness in its most basic form comes from
the ability to distinguish a good feeling from a bad feeling. The
amoeba, like us, knows (in a weak sense) that eating some paramecium
is good, but that a place which is too hot or too cold is bad, and
this makes it react accordingly with a high degree of relative
self-referential correctness.
This definition would grant consciousness to thermostats.
That does not follow, as thermostats are not universal, and in
particular have no universal goal, like doing anything to survive,
unlike amoebas and most animals and plants, which are Turing universal
and have that universal goal.
I don't
believe it is enough - it really evacuates the concept of
consciousness. But until there is some agreement on what
"consciousness" means, this will be a sterile debate.
I tend to think that thermostats are not conscious. They are not
universal.
...
I agree that to have awareness, you need a self, a third-person
self. But that role is well played by the relative body (actually
bodies, incarnated through the UD).
Maybe we should define consciousness by self-awareness, and then
self-consciousness would be the higher form of self-self-awareness?
That makes one "self" per reflexive loop.
What's the distinction?
Between what? Consciousness (self-awareness) needs, as you say, a
self. Self-consciousness (self-self-awareness) needs not only a self,
but an awareness that there is a self. The distinction is that in the
first case we don't have Kp -> KKp. It is the difference between
universal and Löbian, or between Robinson Arithmetic (RA) and Peano
Arithmetic (PA). Technically, universality implies the existence of
one reflexive loop, and Löbianity gives the cognitive faculty of being
able to know one's own universality.
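To make the distinction concrete, here is a minimal sketch in
provability-logic notation, reading Kp as "the machine proves p" (the
RA/PA contrast below just restates the standard derivability-condition
facts, and is only meant as an illustration):

  % Universal but not Löbian (e.g. Robinson Arithmetic, RA):
  % seen from the outside, every true \Sigma_1 sentence is provable,
  % but RA does not prove its own \Sigma_1-completeness, and so does
  % not prove the "4" axiom below about its own provability.
  %
  % Löbian (e.g. Peano Arithmetic, PA): PA proves the formalized
  % derivability conditions for its own provability predicate, in
  % particular the "4" axiom
  \[
    K p \;\rightarrow\; K K p ,
  \]
  % read: if the machine proves p, it proves that it proves p; the
  % machine knows, in that sense, about its own proving ability.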
Attacks on anthropic reasoning will work better by choosing a
reference class which is indisputably a subset of the reference class,
such as all human beings, and then demonstrating a contradiction. I
thought I had come up with such an example with my "Chinese paradox",
but it turned out anthropic reasoning was rescued from that by the
peculiar distribution of country population sizes that happens to hold
in reality. AR has proved remarkably resilient to empirical tests.
I am still a bit agnostic about its use in the fundamentals, as the
probabilities, with computationalism, are always relative. It is the
same in quantum mechanics, where the probabilities are not on states,
but on relative states: they always have the form |<a|b>|^2, the
probability of finding b when being in the state a.
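In standard notation (a small sketch, assuming the states are
normalized), the form mentioned above reads:

  \[
    P(b \mid a) \;=\; \bigl|\langle a \mid b \rangle\bigr|^{2},
  \]
  % the probability of finding the outcome b when the system is
  % prepared in the state a: always a probability of one state
  % relative to another, never an absolute probability attached to a
  % state in isolation.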
But we can extract useful information from the Anthropic principle,
and even from the most general Turing-thropic one. It just says that
the laws of physics should be a calculus of relative probabilities.
But the AP is applied relatively anyway. Indeed, there is evidence
that the absolute measure is not positive real-valued, so the only
meaningful probabilities are relative.
This is a bit unclear to me. I think I agree. Yet, to say that ants
are not conscious because, if they were, we would most likely be ants,
seems to me to use an absolute form of the AP, like in the doomsday
argument. This does not make sense to me.
PS: I have printed your MGA paper, and so will read it and comment on
it despite being in a busy period.
Let me state here, as we are in the right thread, two main points
where we might have a vocabulary issue, or perhaps disagree on
something, so you can think about them and be prepared :)
The first point concerns the relation between counterfactualness and
modal realism, which you link in a way that makes me a bit uneasy. I
do believe in some link between them, though it might not correspond
to yours. Examples will follow later.
The second point is the one we have already discussed, and concerns
the definition of supervenience. We both agree on the Stanford
definition, but I still think you are misusing it when applying it to
the Alice-and-Bob-in-the-classroom situation.
You agree that
C supervenes on B if to change C it is necessary to change B.
For example, consciousness C supervenes on a brain activity B,
because to change that consciousness you need to change that brain
activity.
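Schematically, the Stanford-style definition we both accept can be
written as follows (just a sketch; w1, w2 range over possible
situations, and B(w), C(w) stand for the B-facts and C-facts in them):

  \[
    C \text{ supervenes on } B
    \quad\Longleftrightarrow\quad
    \forall w_1, w_2 \;
      \bigl( B(w_1) = B(w_2) \;\rightarrow\; C(w_1) = C(w_2) \bigr),
  \]
  % i.e. there can be no difference in C without a difference in B:
  % to change C it is necessary to change B.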
I address this in the paper.
But my comments sum up where I disagree. I will comment more precisely
when I have more time.
What you go on to say is that consciousness C (i.e. the consciousness
attached to body C, which is in B) supervenes on B+A, which is
correct.
OK, so you agree that Alice's consciousness supervenes on Alice's body
+ Bob's body + the room + the entire universe + the entire UD*. OK?
But my point is that consciousness itself
(not necessarily attached to a particular body or person)
You mean the existence of consciousness?
is not
supervenient on B+A in this case, as the consciousness could be a C or
a D (where D supervenes on A).
?
I agree (always assuming some neuro-assumption to make things simple)
that Alice's consciousness does not supervene on Bob's brain activity,
but it does supervene on Alice's + Bob's brain activities.
Where this matters is that one cannot say consciousness supervenes on
the universal dovetailer.
I really don't see this. That contradicts the fact that if something
supervenes on B, it also supervenes on B+A.
If Alice's consciousness supervenes on, say, one computation in the
UD*, it supervenes on that computation + all the others.
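Spelled out in the same schematic notation (a sketch, with B+A read as
the conjunction of the B-facts and the A-facts), the monotonicity fact
is:

  \[
    \text{if }\;
    \forall w_1, w_2 \,\bigl( B(w_1)=B(w_2) \rightarrow C(w_1)=C(w_2) \bigr)
    \;\text{ then }\;
    \forall w_1, w_2 \,\bigl( (B{+}A)(w_1)=(B{+}A)(w_2) \rightarrow C(w_1)=C(w_2) \bigr),
  \]
  % because agreement on B+A implies agreement on B, which already
  % fixes C: enlarging the supervenience base never destroys
  % supervenience.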
Bruno
--
----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics [email protected]
University of New South Wales http://www.hpcoders.com.au
Latest project: The Amoeba's Secret
(http://www.hpcoders.com.au/AmoebasSecret.html)
----------------------------------------------------------------------------
http://iridia.ulb.ac.be/~marchal/