On 29 May 2012, at 16:32, Jason Resch wrote:

On Tue, May 29, 2012 at 2:02 AM, Colin Geoffrey Hales <cgha...@unimelb.edu.au > wrote:

From: everything-list@googlegroups.com [mailto:everything-list@googlegroups.com ] On Behalf Of Jason Resch
Sent: Tuesday, 29 May 2012 3:45 PM
To: everything-list@googlegroups.com
Subject: Re: Church Turing be dammed.

Natural physics is a computation. Fine.

But a computed natural physics model is NOT the natural physics... it is the natural physics of a computer.


I recently read the following excerpt from "The Singularity is Near" on page 454:

"The basis of the strong (Church-Turing thesis) is that problems that are not solvable on a Turing Machine cannot be solved by human thought, either. The basis of this thesis is that human thought is performed by the human brain (with some influence by the body), that the human brain (and body) comprises matter and energy, that matter and energy follow natural laws, that these laws are describable in mathematical terms, and that mathematics can be simulated to any degree of precision by algorithms. Therefore there exist algorithms that can simulate human thought. The strong version of the Church-Turing thesis postulates an essential equivalence between what a human can think or know, and what is computable."

So which of the following four links in the logical chain do you take issue with?

A. that the human brain (and body) comprises matter and energy
B. that matter and energy follow natural laws
C. that these laws are describable in mathematical terms
D. that mathematics can be simulated to any degree of precision by algorithms
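Link D is the only one of the four that can be checked at a keyboard. A toy sketch of what "simulated to any degree of precision" means, using Euler integration of dy/dt = -y (exact solution e^-t); the step sizes are arbitrary illustrative choices, not part of anyone's argument above:

```python
# Sketch of link D: an algorithm (Euler stepping) approximates the
# mathematics (the ODE dy/dt = -y) to any desired precision, because
# shrinking the step size shrinks the error.
import math

def euler_decay(dt: float, t_end: float = 1.0) -> float:
    """Approximate y(t_end) for dy/dt = -y, y(0) = 1, with Euler steps."""
    n = round(t_end / dt)
    y = 1.0
    for _ in range(n):
        y += dt * (-y)  # one Euler step of size dt
    return y

exact = math.exp(-1.0)
err_coarse = abs(euler_decay(0.1) - exact)
err_fine = abs(euler_decay(0.001) - exact)
# The finer step size gives a strictly smaller error.
```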




Hi Jason,

Brain physics is there to cognise the (external) world. You do not know the external world.

Your brain is there to apprehend it. The physics of the brain inherits properties of the (unknown) external world. This is natural cognition. Therefore you have no model to compute. Game over.

If I understand this correctly, your point is that we don't understand the physics and chemistry that are important in the brain? Assuming this is the case, it would be only a temporary barrier, not a permanent reason prohibiting AI in practice.

You are right. That would prohibit neither AI nor comp.

There are also reasons to believe we already understand the mechanisms of neurons to a sufficient degree to simulate them. There are numerous instances where computer-simulated neurons apparently behaved in the same way as biological neurons have been observed to behave. If you're interested I can dig up the references.
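For readers unfamiliar with what "simulating a neuron" involves, here is a minimal sketch of the simplest standard model, the leaky integrate-and-fire neuron. The parameter values are textbook-style illustrations, not taken from any of the studies alluded to above:

```python
# Minimal leaky integrate-and-fire (LIF) neuron. The membrane voltage
# drifts toward v_rest + R*I; when it crosses threshold, the neuron
# "spikes" and resets. All parameter values are illustrative.
def lif_spikes(input_current: float, steps: int = 1000, dt: float = 1e-4) -> int:
    """Count spikes of a LIF neuron driven by a constant input current (A)."""
    tau = 0.02         # membrane time constant (s)
    v_rest = -70e-3    # resting potential (V)
    v_thresh = -50e-3  # spike threshold (V)
    v_reset = -70e-3   # reset potential after a spike (V)
    r_m = 1e7          # membrane resistance (ohm)

    v = v_rest
    spikes = 0
    for _ in range(steps):
        # Euler step of dv/dt = (v_rest - v + R*I) / tau
        v += dt * (v_rest - v + r_m * input_current) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

quiet = lif_spikes(1e-9)   # 1 nA: subthreshold drive, no spikes
active = lif_spikes(5e-9)  # 5 nA: suprathreshold drive, regular spiking
```

Whether such a model captures everything relevant about a biological neuron is, of course, exactly the point under dispute in this thread.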

Meaning: there are reasonable levels to bet on.

Here, for once, I will give my opinion, if you don't mind. First, about the level: the question will be "this level this year, or that finer-grained level next year?", because technology evolves. In the meantime it *is* a possible Pascal's wager, in the sense that if you have a fatal brain disease, you might not be able to afford the time to wait for possibly deeper technological levels.

And my opinion is that I can imagine saying "yes" to a doctor for a cheap "neuronal simulator", but I would expect an altered state of consciousness, and some awareness of it. Like being stoned or something. For the long run, I doubt we can copy the brain without respecting the entire electromagnetic relations among its constituents. I think it is highly plausible that we are indeed digital with respect to the laws of chemistry, and my feeling is that the brain is above all a drug designer, a machine where only part of the communication uses the "cable". So I would ask the doctor to take into account the glial cells, which seem to communicate a lot, by mechano-chemical diffusion waves, including some chatting with the neurons. Those immensely complex dialogues are mainly chemical. This is quite close to the Heisenberg uncertainty level, which is probably our first-person plural level (in which case comp is equivalent to QM).

Also, by the first-person indeterminacy, something curious happens when you accept an artificial brain built at a level above the level corresponding to the first-person plural. From your point of view, you survive, but with a larger spectrum of possibilities, just because you miss the finer-grained constraints. (It is the "Galois connection", probably, where logical time reverses the arrow and "becomes" physical time, to please Stephen.) In that situation, an observer of the candidate for a high-level artificial brain (higher than the first-person plural level) will, with higher probability, end up in realities disconnected from yours. The candidate's mind might even live a "Harry Potter" type of experience.

To see this, the following thought experiment can help. Some guy won a prize consisting of a visit to Mars by teleportation. But his state's law forbids the annihilation of humans, so he underwent a teleportation to Mars without annihilation. The version on Mars is very happy, and the version on Earth complains, and so tries again, and again, and again... You are the observer, and from your point of view, you can of course only see the guy who gets the feeling of being infinitely unlucky: with P = 1/2, staying on Earth through n experiences has probability 1/2^n (that is the Harry Potter experience). Assuming infinite iteration, the guy has a probability near one of quickly getting to Mars.
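The arithmetic in the thought experiment can be checked by simulation. A sketch, assuming (as the story does) that each duplication gives the Earth-continuation and the Mars-continuation equal weight, so that staying on Earth through n rounds has probability 1/2^n:

```python
# Monte Carlo check of the iterated-duplication arithmetic: follow one
# first-person history through n rounds; at each round the continuation
# stays on Earth with probability 1/2, and once on Mars it stays there.
import random

def earth_survival_fraction(n_rounds: int, trials: int = 100_000,
                            seed: int = 0) -> float:
    """Fraction of sampled histories still on Earth after n duplications."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    on_earth = 0
    for _ in range(trials):
        if all(rng.random() < 0.5 for _ in range(n_rounds)):
            on_earth += 1
    return on_earth / trials

frac = earth_survival_fraction(10)  # expected near 1/2**10, about 0.001
```

After ten rounds only about one history in a thousand is still on Earth, which is the "infinitely unlucky" observer of the story.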

Someone with a lesser brain might have different first-person expectations, disconnected from your history.

This leads to another related question, rarely tackled, and actually difficult with respect to the physics/arithmetic reversal.

What is a brain, and what does a brain do, notably with respect to the conscious person? Well, with comp we know what a brain is: it is a local, relative, universal number.

The question I have in mind is "Does a brain produce consciousness, or does the brain filter consciousness?"

We "know" that consciousness is in "platonia", and that local brains are just relative universal numbers making it possible for a person (in a large sense which can include an amoeba) to manifest itself relative to its most probable computation/environment. But this does not completely answer the question. I think many believe that the bigger a brain is, the more conscious it can be, which is not so clear when you take the reversal into account. It might be the exact contrary.

And this might be confirmed by studies showing that missing some part of the brain, like half a hippocampus, can lead to a permanent feeling of presence. Recently this has been supported by findings that LSD and psilocybin decrease the activity of the brain during the hallucinogenic phases. And dissociative drugs disconnect parts of the brain, with a similar increase in first-person experience. Clinical studies of near-death experiences might also provide evidence in that direction. Aldous Huxley made a similar proposal for mescaline.

This is basically explained by the Bp & Dt hypostases. By suppressing material in the brain you make the "B" poorer (you eliminate beliefs), but you thereby augment the possibilities, so you make the consistency Dt stronger. Eventually you come back to the universal consciousness of the virgin, simple universal numbers, perhaps.

Here are some recent papers on this:




PS: I asked Colin on the FOR list if he is aware of the European Brain Project, which is relevant to this thread, especially since they are aware of "simulating nature at some level":


If you have _everything_ in your model (external world included), then you can simulate it. But you don’t. So you can’t simulate it.

Would you stop behaving intelligently if the gravity and light from Andromeda stopped reaching us? If not, is _everything_ truly required?

The C-T Thesis is 100% right, but 100% irrelevant to the process at hand: encountering the unknown.

It is not irrelevant in the theoretical sense. It implies: "_If_ we knew what algorithms to use, we could implement human-level intelligence in a computer." Do you agree with this?

The C-T Thesis is irrelevant, so you need to get a better argument from somewhere and start to answer some of the points in my story:

Q. Why doesn’t a computed model of fire burst into flames?

If this question is serious, it indicates to me that you might not understand what a computer is. If it's not serious, why ask it?

There is a burst of flames (in the computed model). Just as in a computed model of a brain, there will be intelligence within the model. We can peer into the model to obtain the results of the intelligent behavior, as intelligent behavior can be represented as information.

Similarly, we can peer into the model of the fire to obtain an understanding of what happened during the combustion and see all the by-products. What we cannot do is reach into the simulated fire to extract the physical by-products of the combustion. Nor can we peer into the model of the simulated brain and extract neurotransmitters or blood vessels.

To me, this "fire argument" is as empty as saying "We can't take physical objects from our dreams with us into our waking life. Therefore we cannot dream."

This should be the natural expectation of anyone who thinks a computed model of cognition physics is cognition. You should be expected to answer this. Until this is answered I have no need to justify my position on building AGI. That is what my story is about. I am not assuming an irrelevant principle, or that I know how cognition works. I will build cognition physics and then learn how it works by using it. Like we normally do.

What will you build them out of? Biological neurons, or something else? What theory will you use to guide your pursuit, or will you, like Edison, try hundreds or thousands of different materials until you find one that works?

I don’t know how computer science got to the state it is in, but it’s got to stop. In this one special area it has done us a disservice.

This is my answer to everyone. I know all I'll get is the usual party lines. Priestley had his phlogiston. I've got computationalism. Lucky me.



You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com . For more options, visit this group at http://groups.google.com/group/everything-list?hl=en .
