On 17 Sep 2014, at 23:28, meekerdb wrote:

On 9/17/2014 6:47 AM, Bruno Marchal wrote:

On 17 Sep 2014, at 06:24, meekerdb wrote:

OOPS! I hit "send" when I intended to "close" my email. So here's another try at replying.

On 9/16/2014 9:21 AM, Bruno Marchal wrote:
Hi Russell, Hi Others,

Sorry for the delay. Some comments on your (Russell) MGA paper appear below.


On 25 Aug 2014, at 00:30, Russell Standish wrote:

On Sun, Aug 24, 2014 at 01:22:51PM -0700, meekerdb wrote:
On 8/24/2014 12:55 AM, Russell Standish wrote:

I don't think that can be the case. I don't see how there can be anything it is like to be a tree, yet trees are clearly DNA-based beings. So you would get skewed results if you were to reason as though you could be a tree.

Exactly.  It's a reductio on the pattern of argument you used to
prove ants can't be conscious. I used it to prove ants can't be DNA
based.

I don't understand. How is having DNA relevant to having
consciousness? It is quite plausible that non-DNA-based forms are
conscious (eg a computer running a suitable AI program), and that
some DNA-based forms are not conscious (trees, for example).

DNA isn't relevant to consciousness. It was just an example of something we share with ants, to point out that an argument that ants aren't conscious because if they were we'd be ants is invalid.

OK.






I agree.

Incidentally, when you see the complexity of the interaction between the roots of trees and the soil, both chemically and through bacteria, and when you believe, as some experiments suggest, that trees and plants communicate, I am not so sure that trees and forests, perhaps on a different time scale, do not have some awareness, and a self-awareness of some sort. (I take awareness as synonymous with consciousness, although I change my mind below!)







The reference class cannot be larger than the class of conscious
beings. Obviously it can be quite a bit smaller, but there must be a
maximal reference class for which anthropic reasoning is valid,
although it is quite controversial what it is - some suggest it may
even be as small as those people capable of understanding the
anthropic argument, a sizable fraction of which inhabits this list!

That's what bothers me.  If you exclude ants because they're not
conscious (and I assume you've read "Gödel, Escher, Bach") and
hence can't understand the argument, why not exclude people who
can't understand the argument?


"Ant Fugue" is about the possibility that ant _colonies_ might be
conscious. My argument has nothing to say about ant colonies, even
though I consider "Ant Fugue" to be just an interesting speculation,
rather than a serious claim about ant colonies.


I am a bit agnostic on this. But I have little doubt that individual ants have some consciousness.



But why is "consciousness" or "understanding the argument" the relevant attribute of "us"? Why not "breathes oxygen" or "metabolizes carbohydrates"?

Those are not Turing-complete activities, with a universal goal like "help yourself".








Oh - perhaps you mean "can't understand the argument" as in organisms that can't understand the anthropic argument must be excluded from the reference class. This seems a rather implausible claim - just because
the anthropic argument has not occurred to you yet shouldn't really
exclude you. The idea that self-awareness is a necessary requirement of the reference class is perhaps a more believable claim - in order to even think anthropically requires a concept of self - but then I'm still not sure what it even means to be conscious but not self-aware. What does it even mean to "be an amoeba", as Bruno seems to think is possible?

Yes, that's another way of asking the same question - why is "be an amoeba" the important category?

I see that you see, below. OK.





OK, I will make a try. Awareness in its most basic forms comes from the ability to distinguish a good feeling from a bad feeling. The amoeba, like us, knows (in a weak sense) that eating some paramecium is good, but that hot or too cold places are bad, and this makes it react accordingly, with a high degree of relative self-referential correctness. The genome of the amoeba, which is really a collection of many cooperating genomes (many nuclei), is Turing universal or "complete", and the amoeba incarnates it relatively to her (our) probable lower substitution level (which defines, by the FPI, the physical reality). So she gets a life, a first-person life, of some sort. A little consciousness, if you want, because from the first-person view of the amoeba it is the whole big thing. The lives of protozoans are similar to ours. They keep moving to eat, try to avoid possible predators, get sleepy (very deeply so) when it gets cold (the cell transforms into a sort of egg), and they really dislike being eaten, and try to avoid it instinctively, but with a possible "bad" experience.
Here an amoeba eats two paramecia: https://www.youtube.com/watch?v=pvOz4V699gk

I agree. You're defining amoebas to be conscious because of the intelligence of their behavior,


Yes. I can recognize myself somehow.




and as you note (though you've denied it elsewhere) they may be said to have a "little consciousness" because their intelligence has a relatively narrow scope of action.

Little compared to ours, but the amoeba does not have that point of comparison, so it might be that, from her point of view, that consciousness seems as big as ours does from our point of view.

I don't know what it means to say consciousness seems to have a size to itself.




It is related to the fact that a machine of any complexity C is too complex to be understood by a machine of complexity C. So it is meaningful that, from its first-person perspective, the basic consciousness is felt the same.

Nobody is clever enough to understand themselves, which is good, as it is part of the explanation of free will (related to self-indeterminacy, not to the FPI, to be sure).



What I denied is that there would be a notion of half consciousness. Either an entity is conscious, or it is not. But when we compare consciousness contents, there might be different intensities, volumes, etc. Yet the first-person perspective, seen from the first-person perspective, might feel the same.






However, you shift around inconsistently and refer to the amoeba as a genome (or the species?).

The genome of a cell is its Turing-universal part, like the brain of an animal. I made the usual abuse of language; of course it is the first person associated to the amoeba, through its genome, which would be conscious.

But you've confounded two different loci of consciousness. Are you now supposing that people are conscious because of the computation in their brains AND they are also conscious because of the possibility of universal computation by their genome?

I don't know, and I don't think so. Our cells have their own consciousness, independent of the incarnation of our consciousness through our nervous system. If our consciousness depended on the consciousness of our cells and on the treatment of information at that level, it would mean our substitution level is the molecular level. The genomes or the universal machines are never conscious. It is the (immaterial) person associated to the (infinitely many) relative representations of those programs which is conscious.




The consciousnesses are certainly not conscious of the same thing.

I agree. I explained this in the thread by Craig.




I have a general problem with your identification of consciousness with Turing universality.

I never did this, and I gave a simple idea to keep in mind so as never to do this: Turing universality is a 3p notion, and consciousness is 1p, so we can't identify them. But we can attribute a believer to a Turing machine, like we do with RA. The beliefs of RA are Turing complete, so there is a Turing-complete box "[]" associated to RA. It does not obey G nor G*, but it has its own (ultra-weak) modal logic, and to attribute consciousness to RA we can use the same idea used for the Löbian machine, by considering the formal link between the box []p and the consistency <>p, or the truth p. That "[]p & p" can be shown to be non-definable by any predicate, like truth and consciousness. Unlike PA, RA will not prove the arithmetical []p -> [][]p, making RA, a bit like the amoeba, deprived of "mental memory"; she lives in a constant (pleasing or non-pleasing) time, without inducing a past, or a future, or anything. This comes with PA, and the self-consciousness of the Löbian entities.
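For reference, the standard provability-logic facts alluded to in this paragraph can be sketched as follows (this is textbook material on the logic G, also called GL; the reading of it in terms of RA, amoebas, and consciousness is the interpretation defended in this thread, not part of the textbook results):

```latex
% G (GL) extends classical propositional logic with:
%   K:     [](p -> q) -> ([]p -> []q)
%   Löb:   []([]p -> p) -> []p
% For PA's provability predicate the "4" law
%   []p -> [][]p
% is derivable, but RA, lacking induction, does not prove it of its own box.
% The "knower" is the Theaetetical variant Kp := []p & p,
% which, like Tarskian truth, is not definable by any arithmetical
% predicate of the machine itself.
\[
  \Box(\Box p \rightarrow p) \rightarrow \Box p
  \qquad\qquad
  K p \;:\equiv\; \Box p \wedge p
\]
```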





That is a potentiality or capability. It's not identifying some particular class of computations that *are* conscious. Yet being conscious seems to be a temporal phenomenon,

Some states of consciousness, and indeed the mundane ones, are like that. But not all states of consciousness are like that. I have explained that reading salvia reports, and then experimenting, destroyed my prejudice that consciousness needs to be a temporal phenomenon. Some altered states of consciousness involve going out of time, and space, and everything physical. That is what opened me to the idea that RA is already conscious, and this helps to understand the possible main difference between raw consciousness and the reflexive, higher-level self-consciousness.




one that happens "in the moment" - not just a potentiality.

I agree with you. When I talk about consciousness, it is always the first-person experience (of a person, thus) and in the "here and now". But, amazingly and paradoxically, that experience here-and-now can be unrelated to any notion of "here and now". To make sense of this, you need to really abandon the materialist identity thesis of brain and mind. The consciousness of the universal numbers is attached to all universal numbers, in all universal computations, and is basically the consciousness which will differentiate along the stories, by the mixing of (all) computations and the FPI.



So what potentiality needs to be realized to instantiate consciousness?

You need only the "universal absolute arithmetical sigma_1 truth".
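For reference, the standard definition of a sigma_1 sentence of arithmetic is as follows (textbook material; the role sigma_1 truth plays in the claim above is its author's own):

```latex
% A sentence of arithmetic is \Sigma_1 when it has the form
\[
  \exists x_1 \cdots \exists x_n \; \varphi(x_1,\ldots,x_n),
\]
% where \varphi contains only bounded quantifiers.
% A Turing-universal theory such as RA is \Sigma_1-complete:
% it proves every true \Sigma_1 sentence, so the \Sigma_1 truths are
% exactly what such a machine can verify.
```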




Do I have to actually be thinking of mathematical induction to be conscious?

No. You need that ability to be self-conscious, to believe in realities and in others behind the more directly pleasurable and painful experiences.




The genome may be "intelligent" in a different way, in the sense that it can evolve into, say, an Einstein and so solve some difficult problems - but this is not what we generally mean by intelligence or consciousness.

OK. I was not doing that move. I was talking on some amoeba's consciousness "here and now".

But the metabolic functions of the genome don't provide Turing universality - they *could* in principle, but there would be no way to produce that via natural selection.

?

The human genome does instantiate universality by "running" the human brain, which is typically Turing universal (and Löbian in the ideal correct case).

Then the work by René Thomas shows that the Escherichia coli genome is Turing universal. The genetic regulation of a bacterium's genome is Turing universal, as it already by itself computes the "goto" instruction, the "if then else", and some blocking or activating instructions.
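A toy illustration of the kind of control flow meant here (this is a minimal Boolean-network sketch, not René Thomas's actual kinetic-logic formalism, and the three genes are hypothetical): activation and repression links between genes give "if/then/else"-style branching.

```python
# Toy Boolean gene-regulatory network. Each gene's next state is a
# Boolean function of the current state; repression gives the
# "if not b then c" branch Bruno's remark alludes to.

def step(state, rules):
    """Synchronously update every gene from the current state."""
    return {gene: rule(state) for gene, rule in rules.items()}

# Three hypothetical genes: 'a' activates 'b'; 'b' represses 'c'.
rules = {
    "a": lambda s: s["a"],       # constitutively expressed input
    "b": lambda s: s["a"],       # activated by a
    "c": lambda s: not s["b"],   # repressed by b
}

state = {"a": True, "b": False, "c": False}
for _ in range(3):
    state = step(state, rules)
print(state)  # with 'a' on, 'b' switches on and silences 'c'
```

Iterating the update rule until the state stops changing is the discrete analogue of the regulatory circuit settling into an expression pattern.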





It is only the genome+evolution that is universal.

Not at all. One bacterium cannot exploit its universality, as it has too little tape. But many bacteria can do that, and indeed the whole of life is the result of an ongoing conversation between universal numbers.








It's not awareness of the environment except in one response: reproduce or not.

Or get sleepy, or move around to find the food, or mate, etc.

Those are not things the genome does.

Nor does a brain. Sorry, I am talking about the immaterial (but real) person using those genomic or brain "material" (in the FPI sense) machines.





All the rest of the "intelligence" comes from random variation.

Plausible (adding the selection).






Now, amoebas are universal, but not Löbian, and so they lack the Kp -> KKp law and are not self-aware. Nor do they have memories, or only a few, so they live in the instant present, happy when eating, unhappy when being eaten. At least they will not philosophize and be unhappy when eating because they know they *might* be eaten, nor happy when being eaten because they got the point that it is part of the game of life and are serene about this, or because they believe in Christ or someone. You need to be Löbian to develop those forms of craziness. I think this came with the lower invertebrates, like jumping spiders and cuttlefishes. But they are lucky: their brains are not big enough to develop much of the craziness. They probably live a little bit less in the present, but still don't get the point of the existential question.

To be aware is to feel the cold, the hot, the yummy, the acidity level, and to be capable of interpreting them "self-referentially", and reacting.

Right. It's to have values and to be able to act to attempt to realize them.


To be self-aware adds the memories and one more reflexive loop (which you get in RA when adding the induction axioms, leading to PA). As long as you are correct, you obey the modal logics G and G* in that case. But the 1p views obey the intensional variants.




But that smacks of parochialism, much like the notion of
geocentrism. I just haven't found a convincing argument that the
maximal reference class is not just the class of conscious organisms,
of beings for whom there is a something it is like to be.

But my question (which you haven't answered) is what you think this maximal reference class is from your four part classification of consciousness.

If I had to pick, I'd say it was those entities who were aware of
their own thoughts and had sufficient language to formulate Bayesian
inference.


The Bayesian theory is a bit stringent, don't you think? There are
plenty of formulations of the doomsday argument that don't use
Bayesian reasoning. Take Gott's version, for example.
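Gott's "delta t" version mentioned here needs no Bayesian machinery and fits in a few lines. This is a minimal sketch of the standard calculation (the 200,000-year input for our species is only an illustrative figure): if you observe a phenomenon at a uniformly random fraction r of its total lifetime, then with 95% confidence r lies in [0.025, 0.975], which bounds the remaining lifetime between t_past/39 and 39*t_past.

```python
# Gott's delta-t argument: the observed past duration t_past is assumed
# to be a uniformly random fraction r of the total lifetime, so the
# future duration is t_past * (1/r - 1); cutting equal tails from r
# gives a confidence interval on the remaining lifetime.

def gott_interval(t_past, confidence=0.95):
    """Return (low, high) bounds on the remaining lifetime."""
    eps = (1.0 - confidence) / 2.0     # probability mass cut from each tail
    low = t_past * eps / (1.0 - eps)   # r near 1: the phenomenon is almost over
    high = t_past * (1.0 - eps) / eps  # r near 0: it has only just begun
    return low, high

low, high = gott_interval(200_000)  # e.g. Homo sapiens, ~200,000 years old
print(round(low), round(high))      # roughly t_past/39 and 39 * t_past
```

At 95% confidence this gives a remaining span between about 5,100 and 7,800,000 years for that illustrative input, which is the usual headline form of Gott's argument.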

Self-awareness, as I mentioned, is a more defensible property. The
question is whether non-self-aware consciousness (your koi) is a
coherent concept.

I agree that to have awareness, you need a self, a third-person self. But that role is well played by the relative body (actually bodies, incarnated through the UD).

Maybe we should define consciousness by self-awareness, and then self-consciousness would be the higher form of self-self-awareness? That makes one "self" per reflexive loop.

I think there are different levels of self-awareness and maybe that's what you have in mind. The first level is just being aware of one's self and nothing else. Amoeba have this. The second level is being aware of one's self and of one's relation to other things in the world. I'd say spiders and koi have this level.

So we agree.



Then there's being aware of one's self and of one's relation to other things and to other self-aware beings. I'd say dogs and cats have this level.

OK. Spiders have this too *in principle*, but they cannot exploit this, for "technical reasons".

I don't know what you mean by a "technical reason". I'd say it's because spiders are not social animals and don't have a "theory of mind" about other animals and insects. Dogs and cats do - they have theories about when you're angry or happy with them.

Perhaps. Jumping spiders are amazing. With patience, you can make them into a sort of pet. They just fear you a lot, but they seem to know very well that you are a living being, and you can have amazing relations with them once they stop fearing you, not unlike with cats or dogs to some degree. The same with octopuses and cuttlefishes. You don't get this with usual spiders, at least there is no evidence, and still less with flies. Unlike with many animals, giving food to a jumping spider does not help (and is not easy to do, as they want live insects); you can tame them only by letting them walk on you and explore you. Doing that a lot, they seem to develop some relationship, and can sometimes just come and look at you for hours; then, if you extend your hand, they jump on it. They can stay on you for a long time. After a molt, the spider seems to remember you. Once pregnant, the spider ignores you and concentrates on her nest tasks. But she can keep coming back after the spider babies are gone, which they do very quickly (better open the windows at that moment). They most probably have no theory of mind as developed as that of cats and dogs, but I would not exclude some "theories of mind" in jumping spiders. They do have a theory of reality, as they show bewilderment when looking behind a mirror, like in the video I referred to some time ago. Ah, this one: https://www.youtube.com/watch?v=iND8ucDiDSQ

Bruno

http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.