On Saturday, September 26, 2015 at 2:24:36 AM UTC+10, Bruno Marchal wrote:
>
>
> On 25 Sep 2015, at 14:11, Pierz wrote:
>
> I disagree with most of the theorising about this scenario, which seems to 
> me to be coming from a much too theoretical place. Humans may or may not be 
> computational at base, but we are not PCs. We are not blank slates, waiting 
> for an operating system to be installed. Our brains and bodies imply an 
> environment and a developmental trajectory in which social interaction and 
> emotional nurture play a critical role. (Consider the famous "wire mother" 
> experiments performed on rhesus monkeys: 
> http://pages.uoregon.edu/adoption/studies/HarlowMLE.htm) A person raised 
> like a brain in a vat, "sans eyes, sans teeth, sans everything", is not an 
> amoeba, but a torture victim. 
>
>
> Preventing a universal baby from growing in a plausibly deep available 
> history might be unfair, but "torture" seems a bit too much. 
>
Not at all. A baby born into an isolation tank experiment is not a 
"universal baby" (I don't have a clue what a "plausibly deep available 
history" is), it is an individual person deprived of pretty much everything 
except nutrition. Babies cry for their mothers, and as the "wire mother" 
experiment has shown, if they are denied the experience of physical contact 
and nurture, they develop serious emotional problems. Fundamentally, it is 
no different to placing any other person in conditions of solitary 
confinement and sensory deprivation. People disintegrate rapidly under such 
conditions, precisely because the nature of embodied consciousness is such 
that it depends on an environment. Isolation is one of the most effective 
forms of torture available. And this dependency on a social and physical 
environment is not learned, it is inbuilt. A baby expects to be born into a 
world. 
 

>
> Certainly they would fail to develop into something we would recognize as 
> a person, 
>
>
> I think it might be us who fail to recognize the person, just because such 
> a person is non-human. I see such a blank state as the consciousness state of 
> the universal person. It seems to be a curious conscious yet static state. 
> I am not sure of this (and don't use such idea in the derivation I try to 
> explain).
>
>
>
>
> but they would not and could not "fail to become a person", because 
> personhood is not a function of the development of specific physical or 
> cognitive capabilities. 
>
>
> So I think we do agree. Maybe I was unclear.
>
>
>
> People with locked-in syndrome are also often not recognised as persons 
> because they cannot communicate their personhood. My own suspicion with 
> regards to this scenario is that the victim would die, as people sometimes 
> do who are deprived of all hope. Physical health is not merely a product of 
> the input of certain nutrients, the removal of wastes and so on. Like 
> development itself, it is a function of a relationship between a body and 
> an environment to which that body, mind and soul are adapted. That is the 
> meaning of being an organism, and it's why the computational metaphor sits 
> badly with me sometimes (I don't agree with much of what Craig Weinberg 
> used to say, but I suspect we're aligned on that point).  If we're 
> computers, it is only in the most abstract sense, 
>
>
> We are computers (actually at different levels at once) and we are 
> entangled in deep long complex histories. But not as a metaphor, in the 
> mathematical sense that those deep histories belong to the universal 
> dovetailing, or, to put it another way, are "realized" in the tiny sigma_1 
> part of the arithmetical reality. 
>

Oh I know the sense you mean it in. But when you say "we are computers" you 
*are* invoking a metaphor as well as making an ontological and mathematical 
claim. Consider how differently we think about humans if we describe them 
as "evolved organisms", "economically rational agents", "linguistic, 
meaning-making animals", "biological machines" etc. Those accounts are all 
valid, even if you happen to believe that the computational one is 
ontologically prior. The metaphor we're using enables us to illuminate 
certain types of phenomena (consider the economic, evolutionary and 
cognitive-behavioural accounts of human relationship behaviour for 
example), but it also tends to blind us to other perspectives. Ideas are 
glasses we look through, and like any lens, they can become invisible after 
a while, so we don't see how they are blinding us as well as granting us 
vision. This is a little like the "priming effect" in psychology: if we're 
asked to, say, recount the ten commandments, we're more likely to behave 
morally at a later point. Thinking of the human as a computer subtly 
influences our entire vision of others, and I am arguing that if you believe 
raising a baby in an isolation tank will do anything other than produce a profoundly 
emotionally and psychologically damaged person, then you've been blinded by 
your own metaphor (and that is quite independent of the validity of the 
computational claim).
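For what it's worth, Bruno's "universal dovetailing" is a concrete procedure rather than a metaphor: a dovetailer interleaves the execution of every program, one step at a time, so that each program eventually receives arbitrarily many steps even though none ever finishes. A minimal Python sketch follows; the "programs" here are hypothetical stand-ins (simple non-halting generators), not real Turing machines, so this only illustrates the interleaving schedule itself:

```python
from itertools import count

def program(n):
    """Toy stand-in for the n-th program: a generator that yields
    the successive multiples of n and never halts."""
    k = 0
    while True:
        yield n * k
        k += 1

def dovetail(steps):
    """Interleave the execution of (in principle) infinitely many
    programs: at stage s, launch program s, then run one step of each
    of programs 0..s. Returns the first `steps` (program, output) pairs."""
    machines = []
    produced = []
    for stage in count():
        machines.append(program(stage))   # launch a new program each stage
        for i, m in enumerate(machines):
            produced.append((i, next(m)))  # one step of program i
            if len(produced) == steps:
                return produced

trace = dovetail(10)
# every already-launched program keeps getting steps, so none is starved
```

The schedule guarantees that program i receives a step at every stage from i onward, which is the sense in which the dovetailer "realizes" all computations without ever committing to any one of them.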
 

>
> This is used to shed light on the mind and matter problem in the frame of 
> computationalism. It is almost irrelevant to the human condition, 
>
Yes.
 

> at least not until we "practice comp", or think about the afterlife 
> question. The virgin universal machine, once she believes in enough 
> induction axioms, as in PA or ZF, has an incredibly rich theology 
> (arithmetically true propositions (but not necessarily justifiable) 
> concerning itself and its different points of view) 
>
>
>
> but the metaphor tends to extend itself, as metaphors do. The infant 
> "expects" a mother, without being able to name that expectation. It expects 
> a lot more too - to put it simply, it expects a *world*. In some sense, 
> it *is* that world, having evolved over billions of years as a response 
> to it. 
>
>
> I prefer to avoid any ontological commitment, and actually, the TOE does 
> not assume worlds, which are explained by the persisting and sharable first 
> person points of view. 
>
Fine, but I'm not really making any "ontological commitments". Surely you 
are not disputing that we *are* evolved organisms, whatever the ontological 
status of matter is. "Worlds" is just a relative term here. 

>
>
> This makes the "experiment" extremely artificial - as a philosopher of 
> mind we might be tempted to think of the subject as a kind of computational 
> tabula rasa, as Bruno appears to do. But in truth he or she is much more 
> like the most extreme form of amputee. 
>
>
> Except that the person is saved both from elimination (à la Dennett or 
> Churchland) and from all reductionist theories, as the 
> machine's knower (defined by the method of Theaetetus applied to Gödel's 
> predicate) is provably NOT a machine from its first person point of view.
>
> The analogy with the baby is not so good, as the baby (in the tank) 
> already has a big brain, itself having a long history. It is controversial, 
> but I find it plausible that fetuses can dream, just as apes can already 
> climb trees at birth, and might train themselves thanks to wired programs 
> selected through evolution.
> The blank state is more like the state you are in before your parents met. 
> It is consciousness before any distinction or differentiation.
>

Yes OK. I'm much more prepared to go along with that than with the idea 
that a baby can represent such a universal person. Like you, I've had the 
salvia experience of personal identity being erased and sinking into a 
state of person-less consciousness that seems to be prior to identity. 
Scared the willies out of me to be honest, but probably because it is the 
nature of ego to preserve itself against threats of dissolution.
 

> I was against that idea, like Brouwer, but I have to say that salvia has 
> thrown a big doubt on my prejudice that consciousness is necessarily 
> temporal, and begins with a first distinction. I also realized that it is 
> somehow more coherent with computationalism, and the "Galois connexion" 
> theory of consciousness, even if it looks strongly counter-intuitive.
>
> The tabula rasa holds only for Aristotle's primary matter, which I thought 
> for a long time to be only a Christian superstition. Bigoted materialists 
> proved me the contrary.
>
> It is important to keep distinct the use of computer as a metaphor, which 
> might be a good or a bad metaphor depending on the computer chosen, and the 
> computationalist hypothesis, which assumes only the existence of a level 
> where our bodies are Turing emulable in such a way that our consciousness 
> is preserved.
>

My argument is that it is indeed important to remember the distinction, and 
it is also difficult, in much the same way as it is difficult for the 
economist not to see the world in terms of market forces, or the Freudian 
in terms of psychodynamics.

>
> Then whatever can be done by a physical Turing machine can be done, and is 
> already done an infinity of times in arithmetic. But the truth and the 
> meaning of those computations lie in the higher, more complex, 
> non-Turing-emulable number relations. 
>
> Bruno
>
>
>
>
>
>
>
> On Monday, September 21, 2015 at 10:55:55 PM UTC+10, Bruno Marchal wrote:
>>
>> Hi Brian, Telmo and others,
>>
>> On 21 Sep 2015, at 02:49, Telmo Menezes wrote:
>>
>> Hi Brian,
>>
>> That's an interesting question. My take is this: I think trying to 
>> understand that experience is like trying to understand what it feels like 
>> to be an amoeba. It's just too alien.
>>
>>
>> I am not sure. I can imagine an amoeba having "proto-feeling" comparable 
>> to ours. An amoeba or a paramecium might feel something like some urge to 
>> find food when hungry, some urge to find a mate, some urge to build a cyst 
>> due to pollution, ... A monocellular eukaryotic organism is a cell playing 
>> the roles of liver cells, digestive cells, skin cells, neuronal cells, 
>> muscular cells, etc. In the case of paramecium, this is more or less 
>> confirmed by the molecular structure of the cells, in which key molecules 
>> playing the corresponding role of each organ can be found. In particular we 
>> can anesthetize a paramecium, we can block its locomotion with inhibitors 
>> similar to those that can inhibit our muscles, etc. 
>> (Note that muscular and neuronal key molecules of that type have been 
>> found at the base of the roots of plants too, and we can indeed anesthetize 
>> plants).
>> The only difficulty for us in imagining what it is like to be an amoeba 
>> might come when it divides itself, but then on this list that should no 
>> longer be a problem (except for Clark and Peck, I guess).
>>
>> But this does not solve Brian's difficult question. A human totally 
>> deprived of an environment would be more like an encysted amoeba, never 
>> going out of its "egg" (cyst), I think its consciousness might be similar 
>> to the consciousness of the virgin machine (the non programmed computer), 
>> which I think is similar to the consciousness we can have during some phase 
>> of sleep or with some drug (notably salvia). That is a consciousness state 
>> that we can hardly conceive, because it is not time-related, nor 
>> space-related. To imagine it, some thought experiments can be given, but 
>> they will have to involve "total amnesia", and even this will just be a 
>> sort of approximation. In fact such a state of consciousness, even when 
>> lived, is not, strictly speaking, memorable. If some theories are correct, 
>> the feeling can be like a "home feeling". It looks like some people 
>> getting to that state describe it as the usual, normal consciousness in 
>> the absence of any hallucination: it is being like "you" before you begin 
>> any differentiation. Even among those who describe it as "home", some are 
>> quite positive about it (like bliss) and some are negative about it. Note 
>> that people doing input-deprivation experiments in isolation tanks are 
>> actually trying to get closer to such a state (and some claim to have found 
>> it in that way).
>>
>> Now, it is only recently (well, since 2008) that I think that all 
>> universal machines are maximally conscious, and why that sort of "blank 
>> state", when given information/input, is somehow distracted, and will 
>> confuse that "out-of-time" consciousness with its growing content. If such 
>> a state were too easily accessible, we would retreat into it when we have 
>> problems, instead of solving them, and would probably accept equally to eat 
>> and to be eaten, which is a good state at the end of life, but handicapping 
>> when young, when we are supposed to take life, and its information flux, 
>> very seriously. We quickly forget our nocturnal dreams for a probably 
>> similar reason.
>>
>>
>>
>>
>>
>> We have some clues. For example, it is known that if children don't learn 
>> a language by a certain age, they become forever incapable of learning 
>> one. There are some instances of this, with children being 
>> raised by animals in the wild.
>>
>> I believe we depend on a lot of information that is encoded in the 
>> environment to become human. What you describe would be a life form, but 
>> not human as we understand it. A developing brain is capable of growing 
>> into what we understand to be a human brain, but not by itself.
>>
>>
>> "Humanness" is encoded in the environment, it transcends single bodies or 
>> what DNA can encode by itself.
>>
>>
>> I agree with this and with what the others said. If you are never fed any 
>> input, you are in the state that "you" had before birth and, with some 
>> luck, after clinical death (when you don't backtrack on different 
>> continuations, if that is possible, as plant and experience reports suggest 
>> to be possible). It is not a human state of mind.
>>
>> To be honest, I have made a simplification here, as a fetus *might* have 
>> preprogrammed human experiences and skills. Babies seem to be able to swim 
>> and walk immediately after birth (like horses), but quickly forget those 
>> skills (and perhaps the associated experience) to learn them again through 
>> the trial-and-error way typical of how babies learn (unlike horses, which 
>> will just walk instinctively). So, a real human born in a deprivation tank 
>> might have some experience, due to the fact that it has some brain, will 
>> get food, etc.; he will have the sleep phases, and might dream that he is 
>> hungry, for example. I doubt that he will be able to imagine colors and 
>> shapes, though. (To be sure, I read that some people born blind did see 
>> colors when taking some drugs, which is not so hard to conceive, as color 
>> might be partially preprogrammed in the brain too.) But the reports might 
>> be fake, and that might be circumstantial; my answer is on the principle, 
>> not in practice, where such an experiment would be ... inhuman to do.
>>
>> Best,
>>
>> Bruno
>>
>> *We are not human beings having from time to time divine experiences. We 
>> are divine beings having from time to time human experiences (de Chardin).*
>>
>>
>>
>>
>> Telmo.
>>
>> On Mon, Sep 21, 2015 at 1:21 AM, Brian Tenneson <ten...@gmail.com> wrote:
>>
>>> I wonder what would happen to someone's mind if they were born in a 
>>> white (or any color) isolation tank. What would happen as the years wore on? 
>>> Would the person ever hallucinate anything? He has only seen the tank 
>>> his whole life, so what would inspire him to hallucinate something? Can he 
>>> hallucinate, say, a friend staring at him from across the void without ever 
>>> having seen a friend, or anything for that matter, except the white isolation 
>>> tank? Would he dream, and what would he dream of? Would dreaming become 
>>> one with waking? Would he even know what a dream is? He has never heard the 
>>> word "dream" spoken out loud. But he knows which worlds decay faster or are 
>>> more "curvy" in the world-line sense: dreams decay faster, or are more 
>>> "curvy", than waking events. So, locally, we usually know when it's a dream. 
>>> When the event world-line is straight, does that mean we pretty much never 
>>> know what is a dream and what is "real"?
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to everything-li...@googlegroups.com.
>>> To post to this group, send email to everyth...@googlegroups.com.
>>> Visit this group at http://groups.google.com/group/everything-list.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
>>
>>
>> http://iridia.ulb.ac.be/~marchal/
>>
>>
>>
>>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>

