On 26 Mar 2015, at 08:05, Bruce Kellett wrote:

Bruno Marchal wrote:
On 25 Mar 2015, at 12:25, Quentin Anciaux wrote:
Multiple realisation does not undermine physical supervenience... what undermines it is that you are forced to accept (with the movie graph argument) that consciousness supervenes on the movie + broken gate... which is absurd, and the conclusion is that either physical supervenience is false or computationalism is...
Good summary. If you accept physical supervenience, you need to accept that the non-active parts of the brain play an active role, basically. It makes clear that it is not the material brain or material computer which does the thinking, but the abstract person run by any sufficiently robust program, with robustness defined relative to its most plausible computations above and below its substitution level.

I think that all the MGA establishes is that if the film taken of the physical states of the brain is a good copy, then consciousness can supervene on that copy as well as on the original.

Let me try to summarize the argument as I see it. We are conscious and we have brains that seem to be connected with the conscious state, such that a reasonable first model is that consciousness supervenes on the physical brain -- we alter the brain, we affect the conscious state, and the conscious state, being deterministic, reciprocally affects the brain. (Changed thoughts are correlated with changed brain states.)

The observation is then made that we could, quite probably, simulate the brain state to any desired level in a computer (universal Turing machine). The question is: does consciousness supervene on the physical state, or on the abstract calculational state represented by the computer?

Given that the computer simulation has the same conscious state as the original brain, it follows that copies of the conscious state can be made. In so far as these are accurate copies of the original physical state, they are all the same conscious moments -- we only create different consciousnesses when the inputs differ between copies -- and then the states are no longer identical.

None of this argues against consciousness supervening on the physical rather than on an abstraction in Platonia. The MGA, as I understand it, was designed to undermine this conclusion. The movie image projected on the original neural plate recreates the original conscious state. But we can degrade the neural plate. As long as we project the same movie copy, the conscious state is unchanged. It is argued that this is absurd. As far as I can tell, such an argument hinges on the notion of counterfactual equivalence: the original movie and the degraded plate are not counterfactually equivalent.

I simply say: so what! Counterfactual equivalence does not have any independent justification, and it is highly unlikely to be sensible, even in the context of computationalism.

You are quick here.
I might explain the stroboscope machinery, which might help me ask you what you mean by consciousness supervening on a recording, given that no computation at all is involved in the recording.



Basically, because the simulation of any given conscious state can be carried out on any computer -- whatever the architecture, physical construction, or programming language. As long as the original state is accurately simulated, the conscious state will be the same.

It is not the states which must be correctly simulated; it is the relations between those states. You need the truth of propositions like "IF the input changes, THEN I do this or that". That truth is part of what defines my person, and it does not apply in the movie. The movie is a sequence of descriptions of states, not a sequence of states related by some universal number.
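For concreteness, the program/recording contrast above can be sketched in code (my own toy illustration, not from the thread -- the names `person` and `movie` are hypothetical): a program carries its counterfactuals in its conditionals, while a recording is just a fixed trace of one particular run and answers the same way whatever the input is.

```python
# A "person" program: the counterfactual "IF the input changes, I do this
# or that" is true of it, because the conditional is part of the program.
def person(stimulus: str) -> str:
    if stimulus == "pinprick":
        return "ouch"
    return "calm"

# A "movie": a recorded sequence of state descriptions from one run.
# It has no conditional structure; frame t is frame t regardless of input.
movie = ["calm", "calm", "ouch", "calm"]

assert person("pinprick") == "ouch"   # responds to a changed input
assert person("silence") == "calm"
assert movie[2] == "ouch"             # the trace just replays, it cannot respond
```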





But these different instances of the calculation are generally not counterfactually equivalent, nor need they be -- they only have to simulate the original state to the required degree of accuracy

But there is no simulation here, in the technical sense of simulation.

I say that x simulates y if, for all z, the sentence phi_x(y, z) = phi_y(z) is provable in RA.
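To make the equation phi_x(y, z) = phi_y(z) concrete, here is a toy model of my own (not Bruno's formalism): take "programs" to be Python source strings defining a function `f`, so that an interpreter plays the role of the simulating x.

```python
def run(program: str, z):
    """phi_y(z): execute program y directly on input z."""
    env = {}
    exec(program, env)
    return env["f"](z)

def simulate(program: str, z):
    """phi_x(y, z): a universal x that interprets program y on input z.
    This is the trivial interpreter; infinitely many different
    interpreters would satisfy the same equation."""
    return run(program, z)

square = "def f(z):\n    return z * z"
assert run(square, 7) == simulate(square, 7) == 49
```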




-- they may differ to any degree whatsoever for their calculated states before and after the target conscious moment.

If one x simulates y, an infinity of different x will simulate y as well. We agree that there is a measure problem.



This comes back to my original question: since all possible programs are run by the dovetailer, how do we ensure that conscious beings see an ordered and predictable world?
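For readers unfamiliar with the dovetailer mentioned here, the core trick can be sketched in a few lines (my own minimal sketch, assuming programs are indexed by natural numbers): interleave program indices with step counts diagonally, so every program eventually gets arbitrarily many steps even though some programs never halt.

```python
from itertools import count

def dovetail_pairs():
    """Enumerate (program_index, step_count) pairs diagonally:
    every pair (i, n) is reached after finitely many iterations."""
    for total in count(0):            # total = i + n
        for i in range(total + 1):
            yield (i, total - i)

# First few pairs visited by the dovetailer:
gen = dovetail_pairs()
pairs = [next(gen) for _ in range(6)]
assert pairs == [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

A real dovetailer would run program i for one more step at each visited pair; the diagonal order is what lets a single sequential process "run all possible programs" without getting stuck on a non-halting one.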

Indeed, that is the question. To answer it I have asked a Löbian number. The answer, to put it simply, is that if you abstract from falsities (for knowledge) or illusions (for physics), the logic of self-reference constrains the problem and shows that in those directions reality kicks back and is structured, so we can extend the formal "measure one" into a physics (a calculus of uncertainty).



Only a set of measure zero among all possible programs would give that result.

I suspect you don't take the first person views (the modal variant of relational justification) into account.

I don't pretend it is simple. You need to understand some theorems in computer science, which have been exploited a lot to get those machine logics.

Bruno




Bruce

--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

http://iridia.ulb.ac.be/~marchal/


