On 08 Mar 2010, at 06:46, Jack Mallah wrote:

--- On Tue, 3/2/10, David Nyman <david.ny...@gmail.com> wrote:
computationalist theory of mind would amount to the claim that consciousness supervenes only on realisations capable of instantiating this complete range of underlying physical activity (i.e. factual + counterfactual) in virtue of relevant physical laws.

Right (assuming physicalism). Of course, implementing only part of the range of a computation that leads to consciousness might lead to the same consciousness, if it is the right part.

What do you mean by "right part"?




In the case of a mechanism with the appropriate arrangements for counterfactuals - i.e. one that in principle at least could be "re-run" in such a way as to elicit the counterfactual activity - the question of whether the relevant "physical law" is causal, or merely inferred, would appear to be incidental.

Causality is needed to define implementation of a computation because otherwise we only have correlations. Correlations could be coincidental or due to a common cause (such as the running of a movie).

Is it physical causality, or computational causality? The latter needs only a universal mathematical base (like arithmetic, combinators, ...). I am trying to understand your "physical or Platonic".




--- On Fri, 3/5/10, Stathis Papaioannou <stath...@gmail.com> wrote:
If the inputs to the remaining brain tissue are the same as they would have been normally then effectively you have replaced the missing parts with a magical processor, and I would say that the thought experiment shows that the consciousness must be replicated in this magical processor.

No, that's wrong. Having the right inputs could be due to luck (which is conceptually the cleanest way), or it could be due to pre-recording data from a previous simulation. The only consciousness present is the partial one in the remaining brain.

I have no clue what you mean by "partial consciousness". Suppose a brain implements a computation to which some pain can be associated for the person owning that brain. Suppose that neuron B is never used, and remains inactive, during that computation. Eliminating neuron B does not change the physical activity, but it would change the counterfactuals. Would such an elimination of an inactive neuron alleviate the pain? But then the person would change his behavior (taking a less powerful painkiller, for example), despite the brain implementing the same computation. Continuing in that direction, we could build a partial zombie. Partial consciousness does not make sense to me.
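To illustrate what I mean, here is a minimal sketch in Python (a toy circuit of my own invention, not a model of any real brain): one "gate" is never consulted on the actual input, so removing it leaves the recorded activity of the run strictly identical, while the behaviour on other, counterfactual inputs changes.

# Toy sketch (hypothetical example): a tiny "brain" made of two gates.
# On the actual input, gate B is never consulted, so removing it leaves
# the physical activity (the trace) unchanged, while the counterfactual
# behaviour on other inputs differs.

def run(x, has_B=True):
    """Return (output, trace) for input bit x."""
    trace = []
    if x == 1:                       # the actual input exercises only gate A
        trace.append("A fires")
        return 1, trace
    if has_B:                        # counterfactual branch uses gate B
        trace.append("B fires")
        return 0, trace
    trace.append("no gate available")
    return None, trace

# Actual run: same output and same trace whether or not B is present.
print(run(1, has_B=True))   # (1, ['A fires'])
print(run(1, has_B=False))  # (1, ['A fires'])

# Counterfactual runs: the two machines now differ.
print(run(0, has_B=True))   # (0, ['B fires'])
print(run(0, has_B=False))  # (None, ['no gate available'])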



"Computationalism" doesn't necessarily mean only digital computations, and it can include super-Turing machines that perform infinite steps in finite time. The main characteristic of computationalism is its identification of consciousness with systems that causally solve initial-value math problems given the right mapping from system to formal states.

That is weird. Can you give a reference?


I should also note that if you _can't_ make a partial quantum brain, you probably don't have to worry about the things my argument is designed to attack, either, such as substituting _part_ of the brain with a movie (with no change in the rest) and invoking the 'fading qualia' argument.

Like in the movie graph? Look at the MGA3 thread from last year.

Bruno, do you have the link? I searched the list archive but the only references to fading qualia I could find are to the argument I mentioned, in which a brain is progressively replaced by a movie, as Bishop does to attack computationalism. It _is_ different from Chalmers, who substitutes components that _do_ have the right counterfactuals - Chalmers' argument is a defense of computationalism (albeit from a dualist point of view), not an attack on it.

Search the threads MGA1, MGA2 and MGA3. I don't use the expression "fading qualia"; I ask whether consciousness disappears or not. It is an old argument, already in my 1988 papers and earlier talks.

All of the 'fading qualia' arguments fail, for the reason I discussed in my PB paper: consciousness could be partial, not faded. I am sure that yours is no different in that regard.

You are the one saying there is something wrong, so you are the one who should be sure about this, and cite the passage you have refuted. What do you mean by partial consciousness? In what sense would it defeat the movie graph? You don't give any clues.


If consciousness supervenes on the physical realization of a computation, including the inactive part, it means you attach consciousness to an unknown physical phenomenon. It is a magical move which blurs the difficulty.

There is no new physics or magic involved in taking laws and counterfactuals into account, obviously. So you seem to be just talking nonsense.

If consciousness supervenes on laws, which laws? The (physical) supervenience thesis is that consciousness supervenes on a particular instantiation of the laws, and if physically inactive parts play a role (to get the counterfactuals), then physical supervenience is already false. So by saying "physical or Platonic", you are just agreeing with what I try to explain, except that the label "physical" then loses its explanatory power both for consciousness and for physical appearance (by UDA step seven). If you don't agree with this, it means you do give a computational role to the inactive parts for a particular instantiation of consciousness, but then I am not allowed to say "yes" to a doctor who proposes to me an emulator of my brain's computation which may work differently from my current brain.

The only charitable interpretation of what you are saying that I can think of is that, like Jesse Mazer, you don't think that details of situations that don't occur could have any effect on consciousness. Did you follow the 'Factual Implications Conjecture' (FIC)? I do find it basically plausible, and it's no problem for physicalism.

For example, suppose we have a pair of black boxes, A and B. The external functioning of each box is simple: it takes a single bit as input, and as output it gives a single bit which has the same value as the input bit. So they are trivial gates. We can insert them into our computer with no problem. Suppose that in the actual run, A comes into play, while B does not.

The thing about these boxes is, while their input-output relations are simple, inside are very complex Rube Goldberg devices. If you study schematics of these devices, it would be very hard to predict their functioning without actually doing the experiments.

Now, if box A were to function differently, the physical activity in our computer would have been different. But there is a chain of causality that makes it work. If you reject the idea that such a system could play a role in consciousness, I would characterize that as a variant of the well-known Chinese Room argument. I don't agree that it's a problem.

It's harder to believe that the way in which box B functions could matter. Since it didn't come into play, perhaps no one knows what it would have done. That's why I agree that the FIC is plausible. However, in principle, there would be no 'magic' involved even if the functioning of B did matter. It's a part of the overall system, and the overall system implements the computation.

I don't think so. The overall system implements all the possible computations making the subject react in possibly different situations. For any particular computation, a part of the system may implement the relevant computation needed for consciousness. Or, again, you attach consciousness to the laws in general, and digitality then entails that consciousness supervenes on the Platonic computation, not on any particular universal machine (be it physical) running it. You say you find the FIC plausible, but you argue as if it were not. What happens in A depends on the level of substitution.
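As a toy illustration of the boxes A and B (my own sketch in Python; the internals of box B are invented for the example): both boxes realize the same trivial input-output function, but with very different internals, and in the actual run the internals of B never execute at all.

# Toy sketch (hypothetical): two "black boxes" with identical external
# behaviour (an identity gate on one bit) but very different internals.
# In the actual run only box A is called; the internals of B never execute.

def box_A(bit):
    # a straightforward identity gate
    return bit

def box_B(bit):
    # a Rube Goldberg identity gate: hard to predict from the "schematics"
    # without running it, yet extensionally the same function as box_A
    x = bit
    for k in range(1, 6):
        x = (x + k * k) % 2   # toggle the bit (when k is odd) ...
        x = (x + k * k) % 2   # ... and toggle it back again
    return x

def computer(input_bit, use_A=True):
    # the surrounding "computer": in the actual run A comes into play, B does not
    gate = box_A if use_A else box_B
    return gate(input_bit)

assert computer(0) == 0 and computer(1) == 1
# Whether box_B computes identity, negation, or nothing at all makes no
# difference to the activity of this run -- only to the counterfactuals.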

If consciousness supervenes, in "real time and place", on a physical activity realizing a computation, and this "qua computatio", then consciousness supervenes on the movie (MGA2).

We already agreed that it _doesn't_ supervene on the activity alone. It requires the counterfactuals too.


But how could the neurons, or the person, know whether those counterfactual neurons are present or not, given that they have no activity at all, neither physical nor computational? It looks like pure magic to me.


But if those definitions lead to a Turing-emulable process, you just lift the difficulty to another level.

As far as we know, physics is computable (although analog). So what? That brings in no difficulty.

It brings the fact that if it is Turing emulable, then it occurs an infinity of times in the universal dovetailing, and whatever I can observe from my current state must emerge from a statistics on the computations going through that state.
Do you agree with the notion of first-person indeterminacy?
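For readers who have not met the term, here is a minimal sketch of a universal dovetailer in Python (a toy version of my own: two hand-written generators stand in for a genuine enumeration of all programs of a universal machine).

# Toy universal dovetailer: interleave the execution of every program,
# giving each one a larger and larger step budget, so that every step of
# every (possibly non-halting) computation is eventually generated.

def prog_counter():
    n = 0
    while True:              # a program that never halts
        yield ("counter", n)
        n += 1

def prog_squares():
    n = 0
    while True:
        yield ("square", n * n)
        n += 1

PROGRAMS = [prog_counter, prog_squares]   # stand-in for "all programs"

def dovetail(rounds):
    """Round k: restart every program and run it for k steps."""
    for budget in range(1, rounds + 1):
        for make_prog in PROGRAMS:
            prog = make_prog()
            for _, step in zip(range(budget), prog):
                print(budget, step)

dovetail(3)
# No single program is ever run "to completion", yet every step of every
# program is reached after finitely many rounds.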


In the end, you talk as if we knew which universal system supports us.

No, I don't and don't need to.

Why then do you refer all the time to physical systems, and to causality? Causality is not a notion of computer science; or else it is defined in terms of higher-level logical constructions, with some modal logic, or by referring to a fixed universal system.

Chalmers' dualist beliefs are not relevant to what we have been discussing - I only mentioned him because he wrote the most widely cited paper on implementation of computations - but I don't think he would agree that the person would have any telepathic connection between the places even epiphenomenally. It seems more likely he meant something else, like that he would define them both to be the same person even though they each feel like they're the full original.

Well, he explicitly rejected this. He would have accepted the local first-person indeterminacy. But his later paper shows that he may have seen the point since.

That is especially likely given that he does favor the MWI, which already involves similar splits and obviously no telepathy between branches.

This is not clear at all, given that the MWI was invented (discovered) to restore monism.

Didn't you see the quotes around 'physical'? As should have been obvious, I use it as short for "physical or Platonic". Really that's not important here.

As long as you claim to have found something wrong in the derivation, I would say it is the key point here. If it is Platonic, and thus arithmetical, then the mind-body problem is reduced to the problem of justifying the laws of physics from a statistics on computations.


The point is that MGA fails because it doesn't take the need for the laws into account.

Which laws: the ones of the arithmetical Platonia (elementary arithmetic), or the physical laws?

OK. I see you have not got the point. MGA does not need the notion of counterfactuals. It just shows that IF neurons, or basic entities, have no prescience, then the movie has the same physical activity, as far as realizing a computation in real time, as the Boolean graph. So, to *avoid* prescience, we have to make consciousness supervene on the counterfactuals. But those related to the computation are mathematical, immaterial, defined only in computer science. The physical counterfactuals have to be either non-relevant or Turing emulable.

The above paragraph doesn't make any sense. Why do the physical counterfactuals have to be non-relevant?

Because MGA shows that a universal computing machine cannot distinguish a "real" computation from a Platonic computation. The notion of computation is a purely mathematical one, not a physical one. It happens that it looks as if we are embedded in a (quantum) computational system, but below our substitution level all computations statistically interfere.

That would be the case if the work did not pass the academic test.

Sadly, some crackpots do get their work published and get people who should know better to agree with them. The argument from authority holds no water with me. Penrose is a crackpot, as is John Cramer, and none more so than Joy Christian (who claims to have disproved Bell's theorem and got himself a nice position at the Perimeter Institute).

Penrose is not a crackpot. He made an error, and that's all. Any logician can see the error. But you have not yet found an error; I guess you would have shown it to me by now. You refer to another theory, which is very fuzzy, seems to rely on physics, admits partially conscious beings, etc. You cannot invoke a personal theory to show an error in a reasoning; you have to show what is not valid.

Who are these people?  You are unable to explain yourself,

The subject is difficult. But all the people to whom I explain it understand, sooner or later.


so maybe it would be better for me to correspond with someone more articulate. Who would you say is best equipped to explain the "prescience" aspect of MGA among these other people? I guess it should be one of the physicists; they are more likely to talk in clear language.

You may read the detailed report by Gochet (in French). Physicists know nothing about mathematical logic and theoretical computer science. Or just tell me what you don't understand. You are the one saying that a piece of a machine having no role in a computation still has a role for the consciousness attached to that computation. That makes the computer science notion of emulation senseless. Actually it makes computer science a branch of physics, as Landauer defends, but this leads to physicalism, and to the idea that to simulate a brain I would have to simulate much more than what is necessary for the computations (including the counterfactuals).
Is there someone else who can explain this?

I would also like to know who these people are. What media has discussed your work? Also you have left me (a physicist) out; I have been saying it for years here and to your virtual face.


I got the prize for the best thesis in 1998 from the journal Le Monde, but this only extended the moral harassment from Brussels to Paris (and elsewhere). If you insist on having names I can give them to you off-list. None of them have published anything in this field. They told me (in the seventies) that the concept of "consciousness" is crackpot, but also quantum computation, etc. I think you just have a prejudice against me, not based on any reading of my work. You never mention AUDA and my use of the self-reference logics, which provide an utterly clear arithmetical interpretation of what I say in UDA (but need mathematical logic). I do derive the embryo of the physical laws in the manner made obligatory by the UDA argument. In your paper you mention "all computations" without mentioning the universal dovetailer; you don't mention the first-person indeterminacy. Most on this list are now familiar with those notions.

I don't. There is no motivation given for such a passage. Counterfactuals are needed but they could be Platonic or physical.

The whole point is there, especially if you address physicists. Of course they are not too well placed to appreciate it, given that it makes physics emerge from the non-physical (the mathematical, or the arithmetical).

What you may consider insults (and I can only guess) is nothing you have not brought on yourself.

You are the one repeating terms like "nonsense" or "crackpot" instead of saying things like "I don't understand this or that, here or there, in this or that paper or post". When asked for more precision, you refer to your own theory, which is not the way science works. You may use your theory to say that an argument is invalid, but then you have to actually do it.

To say that consciousness supervenes on laws and counterfactuals, and then to blur the Platonic/physical distinction, does not help to clarify what you are saying.

Do you have a problem with the first seven steps of the UDA? I can only guess at a problem with the 8th step (MGA), based on the fact that you claim that an inactive piece of matter is needed for the particular consciousness (and not for the particular computation).

You don't seem aware that computation is a notion definable in arithmetic, one which has a priori nothing to do with physics.
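Here is one small, concrete way to see that (my own toy construction in Python, not part of UDA or AUDA): the whole configuration of a two-counter machine can be coded as one natural number, and a step of the computation is then nothing but arithmetic on that number, with physics nowhere in sight.

# Toy sketch: running a program is just arithmetic on natural numbers.
# A configuration <pc, a, b> of a two-counter machine is coded as
# 2^pc * 3^a * 5^b; one computation step is pure arithmetic on that number.

def encode(pc, a, b):
    return 2 ** pc * 3 ** a * 5 ** b

def exponent_of(n, p):
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

# Program: move counter a into counter b, then halt.
PROGRAM = [
    ("dec_a_or", 3),     # 0: if a > 0 then a -= 1, go to 1; else go to 3
    ("inc_b_goto", 0),   # 1: b += 1, go to 0
    ("unused",),         # 2: never reached
    ("halt",),           # 3: stop
]

def step(n):
    pc, a, b = exponent_of(n, 2), exponent_of(n, 3), exponent_of(n, 5)
    op = PROGRAM[pc]
    if op[0] == "dec_a_or":
        return encode(pc + 1, a - 1, b) if a > 0 else encode(op[1], a, b)
    if op[0] == "inc_b_goto":
        return encode(op[1], a, b + 1)
    return None   # halt

n = encode(0, 4, 0)               # start with a = 4, b = 0
while n is not None:
    prev, n = n, step(n)
print("final counters:", exponent_of(prev, 3), exponent_of(prev, 5))   # 0 4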

Bruno Marchal
http://iridia.ulb.ac.be/~marchal/


