On 09 Jul 2014, at 04:59, LizR wrote:

On 9 July 2014 14:03, meekerdb <meeke...@verizon.net> wrote:
On 7/8/2014 6:14 PM, LizR wrote:
So suppose we have a conscious computer frozen in state S1. We start it running and let it interact with its environment via, say, a body in the form of a Mars Rover. We record all the inputs it receives from its sensors, incoming signals from anywhere else, etc. After, say, 10 minutes we stop the recording and turn to another computer, on Earth, with no body, also in state S1, and play back the inputs we recorded from the first one. Why would the second computer not behave exactly like the first one, believing that it's interacting with the surface of Mars? And if it does, why would it be any less conscious than the first one?
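A minimal sketch of that record-and-replay setup, assuming nothing beyond determinism (the Machine class and its transition rule below are made-up stand-ins for whatever the rover's computer actually runs):

    # Hypothetical sketch: a deterministic machine driven first by live
    # sensor inputs, then by a recording of those same inputs.

    class Machine:
        def __init__(self, state):
            self.state = state          # e.g. the frozen state S1
            self.outputs = []

        def step(self, sensor_input):
            # Any fixed, deterministic transition rule will do for the argument.
            self.state = (self.state * 31 + hash(sensor_input)) % 10**9
            self.outputs.append(self.state % 256)

    # First run: the Mars rover, interacting with its environment.
    rover = Machine(state=1)                      # state S1
    recording = [("camera", 42), ("radar", 7)]    # made-up sensor events
    for inp in recording:
        rover.step(inp)

    # Second run: the bodiless Earth computer, also started in S1,
    # fed the recording instead of live sensors.
    replay = Machine(state=1)
    for inp in recording:
        replay.step(inp)

    # Same initial state + same inputs => same states and same outputs.
    assert replay.state == rover.state
    assert replay.outputs == rover.outputs

On that assumption, nothing in the state trajectory distinguishes the second run from the first.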
I'd say that if it instantiates conscious thoughts, then they take their meaning from Mars, even though it's "second hand".

So you are happy that the replay is conscious, and has the same experiences and state of consciousness as the original? If so, then we may as well drop this stuff about meaning, which only seemed to be there to distinguish the "really real" first-time-around consciousness from the "not really real" second-time-around consciousness.

I hope you see that the MGA is a reductio ad absurdum, and that you are OK with the fact that a record of a computation is not a computation, at least assuming comp, as nothing computes in a record of a computation. OK?
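A toy way to see that distinction in code (my illustration, not Bruno's formal argument): a record only covers the inputs that actually occurred, while the program also settles every counterfactual input.

    # Toy contrast: a computation versus a mere record of that computation.

    def program(x):
        # The computation itself: defined for every input,
        # including counterfactual ones that never actually occur.
        return x * x + 1

    # Run it and record what happened.
    actual_inputs = [2, 5]
    record = {x: program(x) for x in actual_inputs}

    # The record faithfully reproduces the run that happened...
    assert all(record[x] == program(x) for x in actual_inputs)

    # ...but nothing in it computes: on a counterfactual input it is silent.
    print(record.get(3))   # None -- the record has no answer
    print(program(3))      # 10  -- the computation still does

This counterfactual gap is also what Maudlin's extra machinery, discussed below, is meant to close.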

That is why we just abandon physical supervenience. Consciousness has nothing to do with the physical activity of the brain or the computer. Consciousness has everything to do with the immaterial person, and all of its realizations in arithmetic. Eventually a brain is just a way for that consciousness to manifest itself relative to some 1p-plural stable universal neighbor.

Maudlin adds extra machinery to provide counterfactual computations. This must assume interaction with some environment so that the counterfactual events can be defined.

Yes, I didn't get that. All that unused machinery ... the MGA seems a lot tidier, at least.

Or, looked at another way, suppose there were a different Europa rover which had different sensors and programs and actuators, but by coincidence of its interaction with the environment it happened to have a sequence of inputs and outputs from its CPU exactly the same as a sequence that occurred in the Mars rover. So when the sequence is played back in a simulation on Earth, does the simulation experience being on Mars or on Europa?

If they are the SAME inputs then it experiences whatever the Mars AND Europa rovers experienced, according to comp (or according to materialism, for that matter). At least it does assuming the two rovers have identical experiences, by which I assume you mean they started at some point in time in the same machine state (otherwise the Mars one knows it's on Mars anyway, and can't have the same conscious states as the Europa one, which knows it's on Europa).

So if you make their states of consciousness identical at the start time (by hypothesis, this means that they are both equivalent to Turing machines in a specific state) and they happen to have the same inputs, and the whole thing gets replayed by a Turing machine on Earth, then that machine has the same experience (which would have to be along the lines of "Where am I? I don't know, but it looks like a bunch of rocks..." or whatever it happens to look like).

The MGA assumes you start the system in some specified state and replay the inputs. I can't see any wiggle room for this to be a different conscious experience no matter how many times you do it. Comp says it's literally the same states of consciousness.
My point is that consciousness may be more holistic than supposed, i.e. it depends on the environment and maybe even on the evolutionary history.

I think comp covers this when Bruno says that you may have to simulate more than just the person's physical form, but perhaps their surroundings too. But in any case "depends on" is irrelevant if consciousness is Turing emulable: as far as I know, a state of a Turing machine doesn't care how it got into that state, it's simply in it.
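A toy illustration of that last point, assuming only determinism (the transition rule here is arbitrary, not a model of anything real):

    # Two different histories that happen to end in the same machine state
    # are indistinguishable from then on.

    def run(state, inputs):
        for inp in inputs:
            state = (state * 31 + inp) % 1000   # any fixed transition rule
        return state

    s_a = run(0, [3, 5])    # one path into the state: -> 98
    s_b = run(0, [98])      # a different path into the same state: -> 98
    assert s_a == s_b

    # Given any common future inputs, the two behave identically.
    future = [7, 7, 7]
    assert run(s_a, future) == run(s_b, future)

The state records nothing about how it was reached, so any "dependence on history" would have to show up in the state itself.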

OK.
Asking for the presence of the environment is like asking for a lower level, and does not change anything when confronted with UD* or the arithmetical reality. It only makes the high level used by many neuro-philosophers less plausible, and makes steps 1-6 harder, without reason, as step 7 works for all levels, with all sorts of Turing-emulable *generalized* brains, including oracles.

Bruno

http://iridia.ulb.ac.be/~marchal/