On 9/17/2014 6:47 AM, Bruno Marchal wrote:
On 17 Sep 2014, at 06:24, meekerdb wrote:
OOPS! I hit "send" when I intended to "close" my email. So here's another try at
replying.
On 9/16/2014 9:21 AM, Bruno Marchal wrote:
Hi Russell, Hi Others,
Sorry for the delay. Some comments on your (Russell) MGA paper appear below.
On 25 Aug 2014, at 00:30, Russell Standish wrote:
On Sun, Aug 24, 2014 at 01:22:51PM -0700, meekerdb wrote:
On 8/24/2014 12:55 AM, Russell Standish wrote:
I don't think that can be the case. I don't see how there can be anything
it is like to be a tree, yet trees are clearly DNA-based beings. So you
would get skewed results if you were to reason as though you could be
a tree.
Exactly. It's a reductio on the pattern of argument you used to
prove ants can't be conscious. I used it to prove ants can't be
DNA-based.
I don't understand. How is having DNA relevant to having
consciousness? It is quite plausible that non-DNA-based forms are
conscious (eg a computer running a suitable AI program), and that
some DNA-based forms are not conscious (trees, for example).
DNA isn't relevant to consciousness. It was just an example of something we share with
ants, to point out that an argument that ants aren't conscious because if they were
we'd be ants is invalid.
OK.
I agree.
Incidentally, when you see the complexity of the interaction between the roots of
trees and the soil, through chemicals and bacteria, and when you believe, as some
experiments suggest, that trees and plants communicate, I am not so sure that trees and
forests, perhaps on a different time scale, do not have some awareness, and a
self-awareness of some sort. (I take awareness as synonymous with consciousness,
although I change my mind below!)
The reference class cannot be larger than the class of conscious
beings. Obviously it can be quite a bit smaller, but there must be a
maximal reference class for which anthropic reasoning is valid,
although it is quite controversial what it is - some suggest it may
even be as small as those people capable of understanding the
anthropic argument, a sizable fraction of which inhabits this list!
That's what bothers me. If you exclude ants because they're not
conscious (and I assume you've read "Gödel, Escher, Bach") and
hence can't understand the argument, why not exclude people who
can't understand the argument?
"Ant Fugue" is about the possibility that ant _colonies_ might be
conscious. My argument has nothing to say about ant colonies, even
though I consider "Ant Fugue" to be just an interesting speculation,
rather than a serious claim about ant colonies.
I am a bit agnostic on this. But I have little doubt that individual ants have some
consciousness, though.
But why is "consciousness" or "understanding the argument" the relevant attribute of
"us"? Why not "breathes oxygen" or "metabolizes carbohydrates"?
Those are not Turing-complete activities with a universal goal like "help
yourself".
Oh - perhaps you mean "can't understand the argument" as in organisms
that can't understand the anthropic argument must be excluded from the
reference class. This seems a rather implausible claim - just because
the anthropic argument has not occurred to you yet shouldn't really
exclude you. The idea that self-awareness is a necessary requirement
of the reference class is perhaps a more believable claim - thinking
anthropically requires a concept of self - but then I'm still
not sure what it even means to be conscious but not self-aware. What
does it even mean to "be an amoeba", as Bruno seems to think is possible?
Yes, that's another way of asking the same question - why is "be an amoeba" the
important category?
I see that you see, below. OK.
OK, I will make a try. Awareness in its most basic forms comes from the ability to
distinguish a good feeling from a bad feeling. The amoeba, like us, knows (in a weak
sense) that eating some paramecium is good, but that hot or too cold places are bad, and
this makes it react accordingly with some high degree of relative self-referential
correctness. The genome of the amoeba, which is really a collection of many cooperating
genomes (lots of nuclei), is Turing universal or "complete", and the amoeba
incarnates it relative to her (our) probable lower substitution level (which defines,
by the FPI, the physical reality). So she gets a life, a first-person life, of some
sort. A little consciousness, if you want, because from the first-person view of the
amoeba it is the whole big thing. The lives of protozoans are similar to ours. They
keep moving to eat, try to avoid possible predators, get sleepy (very deeply
so) when it gets cold (the cell transforms into a sort of egg), and they really dislike
being eaten, and try to avoid it instinctively, but with a possible "bad"
experience.
here an amoeba eats two paramecia: https://www.youtube.com/watch?v=pvOz4V699gk
I agree. You're defining amoebas to be conscious because of the intelligence of their
behavior,
Yes. I can recognize myself somehow.
and as you note (though you've denied it elsewhere) they may be said to have a "little
consciousness" because their intelligence has a relatively narrow scope of action.
Little compared to ours, but the amoeba does not have that point of comparison, so it
might be that from her point of view that consciousness seems as big as ours does as
seen from our points of view.
I don't know what it means to say consciousness seems to have a size to itself.
What I denied is that there would be a notion of half consciousness. Either an entity is
conscious, or it is not. But when we compare the contents of consciousness, there might
be different intensities, volumes, etc. But from the first-person perspective seen from
the first-person perspective, the experience of consciousness might feel the same.
However, you shift around inconsistently and refer to the amoeba as a genome (or the
species?).
The genome of a cell is the Turing-universal part, like the brain of an animal. I made
the usual abuse of language; of course it is the first person associated to the amoeba,
through its genome, which would be conscious.
But you've confounded two different loci of consciousness. Are you now supposing that
people are conscious because of the computation in their brains AND they are also
conscious because of the possibility of universal computation by their genome? The
consciousnesses are certainly not conscious of the same thing.
I have a general problem with your identification of consciousness with Turing
universality. That is a potentiality or capability. It's not identifying some particular
class of computations that *are* conscious. Yet being conscious seems to be a temporal
phenomenon, one that happens "in the moment" - not just a potentiality. So what
potentiality needs to be realized to instantiate consciousness? Do I have to actually
be thinking of mathematical induction to be conscious?
The genome may be "intelligent" in a different way, in the sense that it can evolve
into, say, an Einstein and so solve some difficult problems - but this is not what we
generally mean by intelligence or consciousness.
OK. I was not making that move. I was talking about the amoeba's consciousness "here
and now".
But the metabolic functions of the genome don't provide Turing universality - they *could*
in principle, but there would be no way to produce that via natural selection. It is only
the genome+evolution that is universal.
It's not awareness of the environment except in one response: reproduce or not.
Or get sleepy, or move around to find the food, or mate, etc.
Those are not things the genome does.
All the rest of the "intelligence" comes from random variation.
Plausible (adding the selection).
Now, amoebas are universal, but not Löbian, and so they lack the Kp -> KKp law and are
not self-aware. Nor do they have memories, or only a few, so they live in the
instant present, happy when eating, unhappy when being eaten. At least they will not
philosophize and be unhappy when eating because they know they *might* be eaten, nor
happy when being eaten because they got the point that it is part of the game of life
and are serene about this, or because they believe in Christ or someone. You need to be
Löbian to develop those forms of craziness. I think this came with the lower
invertebrates, like jumping spiders and cuttlefishes. But they are lucky: their brains
are not big enough to develop much of the craziness. They probably live a little bit
less in the present, but still don't get the point of the existential question.
To be aware is to feel the cold, the hot, the yummy, the acidity level, and to be
capable of interpreting it "self-referentially" and reacting.
Right. It's to have values and to be able to act to attempt to realize them.
To be self-aware adds the memories and one more reflexive loop (which you get in RA
when adding the induction axioms, leading to PA). As long as you are correct, you obey
the modal logics G and G* in that case. But the 1p views obey the intensional variants.
But that smacks of parochialism, much like the notion of
geocentrism. I just haven't found a convincing argument that the
maximal reference class is not just the class of conscious organisms,
of beings for whom there is something it is like to be.
But my question (which you haven't answered) is what you think this
maximal reference class is from your four part classification of consciousness.
If I had to pick, I'd say it was those entities who were aware of
their own thoughts and had sufficient language to formulate Bayesian
inference.
The Bayesian theory is a bit stringent, don't you think? There are
plenty of formulations of the doomsday argument that don't use
Bayesian reasoning. Take Gott's version, for example.
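[Gott's delta-t version needs no priors at all: if the moment you happen to observe a
process is uniformly distributed over its total lifetime, then with 95% probability its
remaining duration lies between 1/39 and 39 times its current age. A minimal Monte
Carlo sketch of that claim - the function name and the lifetime distribution are
illustrative, not from the thread:]

```python
import random

def gott_interval_hits(trials=100_000, seed=0):
    """Estimate how often Gott's 95% interval contains the true future
    duration: draw a total lifetime T, observe at a uniform random moment
    t in (0, T), and check whether T - t lies in [t/39, 39*t]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        T = rng.uniform(1.0, 100.0)   # total lifetime (arbitrary scale)
        t = rng.uniform(0.0, T)       # uniformly random observation moment
        future = T - t
        if t / 39 <= future <= 39 * t:
            hits += 1
    return hits / trials

print(gott_interval_hits())  # ≈ 0.95 (the exact probability is 38/40)
```

[The 38/40 comes from noting that the interval condition holds exactly when
t falls in (T/40, 39T/40), regardless of T.]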
Self-awareness, as I mentioned, is a more defensible property. The
question is whether non-self-aware consciousness (your koi) is a
coherent concept.
I agree that to have awareness, you need a self, a third-person self. But that role is
well played by the relative body (actually bodies, incarnated through the UD).
Maybe we should define consciousness by self-awareness, and then self-consciousness
would be the higher form of self-self-awareness? That makes one "self" per reflexive
loop.
I think there are different levels of self-awareness, and maybe that's what you have in
mind. The first level is just being aware of one's self and nothing else. Amoebas have
this. The second level is being aware of one's self and of one's relation to other
things in the world. I'd say spiders and koi have this level.
So we agree.
Then there's being aware of one's self and of one's relation to other things and to
other self-aware beings. I'd say dogs and cats have this level.
OK. Spiders have this too *in principle*, but they cannot exploit it for "technical
reasons".
I don't know what you mean by a "technical reason". I'd say it's because spiders are not
social animals and don't have a "theory of mind" about other animals and insects. Dogs
and cats do - they have theories about when you're angry or happy with them.
Brent
You don't need to add axioms to PA to get this (like you need to add axioms to RA to
get this). The difference is only in the actual amount of "tape"/"memory" available.
And then there's being aware of one's self and of what others think of you and of how
you think of yourself.
PA has this, and spiders too, but the difference is due to the tape, not the program.
Just as a prisoner has as much free will as a free citizen (if that exists), yet
cannot exploit it, for the technical reason that he is not free, technically. I mean
that the difference is not conceptual.
Bruno
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.