On 16 Jan 2014, at 19:10, meekerdb wrote:

On 1/16/2014 12:34 AM, Bruno Marchal wrote:

On 15 Jan 2014, at 20:40, meekerdb wrote:

On 1/15/2014 12:34 AM, Bruno Marchal wrote:
And the answer is "yes, he would know that, but not immediately".

So it would not change the indeterminacy, as he will not immediately see that he is in a simulation. But unless you intervene repeatedly in the simulation, or manipulate his mind directly, he can see that he is in a simulation by comparing the comp physics ("in his head") with the physics in the simulation. The simulation is locally finite, and the comp physics is necessarily infinite (it emerges from the 1p indeterminacy on the whole of UD*), so sooner or later he will bet that he is in a simulation (or that comp is wrong).
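
[For readers who have not followed the list: UD* is the complete, never-ending execution trace of the Universal Dovetailer, the program that generates and runs all programs in an interleaved way, so that no non-halting computation blocks the others. Below is a minimal sketch of the dovetailing idea only; representing "programs" as Python generators, and the names program and universal_dovetailer, are illustrative choices, not Bruno's formal construction.

    from itertools import count

    def program(n):
        # Stand-in for the n-th program: a generator yielding its
        # successive states forever. (Illustrative only: a real UD
        # enumerates all programs of a universal language.)
        def run():
            for step in count():
                yield (n, step)
        return run()

    def universal_dovetailer():
        # At stage k, start program k, then advance every program
        # started so far by one step. Every program receives
        # infinitely many steps, even though ever more programs
        # are running concurrently.
        running = []
        for stage in count():
            running.append(program(stage))
            for p in running:
                yield next(p)

    ud = universal_dovetailer()
    for _ in range(10):
        print(ud.__next__())   # (0,0) (0,1) (1,0) (0,2) (1,1) (2,0) ...

The output order shows the point: the trace visits every (program, step) pair, so the whole of UD* is generated even though no single computation is ever finished first.]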


But if it is sufficiently large he won't find it is finite.

Hmm... OK. But he will, sooner or later. We are talking "in principle", assuming the emulated person has all the time ...



Also, I don't understand why finding his world is finite

Finite, or even just computable (recursively enumerable).



would imply comp is wrong. In a finite world, it seems it would be even easier to be confident in saying "yes" to the doctor.

I don't know how you could know that the universe is finite. But comp makes it non-finite (and non-computable), so if you have a good reason to believe that the universe is finite, you have a good reason to believe that comp is wrong, and to say "no" to the doctor. That *is* counter-intuitive, but it follows from steps 7 and 8.
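
[An aside on "non-computable": the key asymmetry is between confirming and refuting membership. A toy illustration, with my own illustrative encoding of computations as zero-argument generator factories and a budget parameter: membership in a recursively enumerable set, such as the halting set, can be confirmed in finite time, but non-membership can never be confirmed, which is why such a set is enumerable without being computable.

    def halts_within(computation, budget):
        # Run a computation for at most `budget` steps.
        #   True -> it halted: membership in the halting set confirmed.
        #   None -> no verdict: maybe it halts later, maybe never.
        # There is deliberately no False branch: that one-sidedness is
        # what "recursively enumerable but not computable" means.
        g = computation()
        try:
            for _ in range(budget):
                next(g)
        except StopIteration:
            return True
        return None

    def halting():          # halts after three steps
        yield
        yield
        yield

    def looping():          # never halts
        while True:
            yield

    print(halts_within(halting, 100))   # True
    print(halts_within(looping, 100))   # None (never "False")

]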



I think you equivocate on "comp"; sometimes it means that an artificial brain is possible, other times it means that plus the whole UDA.

Comp is where the UDA is valid. By "comp" I mean, according to the degree of understanding of the UD-Argument by the person I am speaking to, either just the hypothesis, or the hypothesis together with its logical consequences.

But that comes from your assumption that belief=provable

UDA does not use that assumption.

And AUDA uses only the assumption that you believe what PA can prove, and that you are willing to be cautious about believing anything more, as UDA suggests.




and that consciousness requires proving there are unprovable true sentences.

Consciousness does not require that. Worms are conscious, and I doubt they prove incompleteness. But as finite entities, incompleteness applies to them, so they live, or experience, the incompleteness. It is true for them, but worms are not Löbian, and so they cannot explicitly explain this to themselves, as a more introspective (Löbian) being can.
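
[For readers: "Löbian" has a precise meaning in provability logic. Writing Bp for "the machine proves p" (the notation used in AUDA), a machine is Löbian when it proves Löb's formula about its own provability predicate. Gödel's second incompleteness theorem is the special case where p is falsity f, since Bf -> f is the machine's consistency statement:

    Löb's formula:            B(Bp -> p) -> Bp
    Gödel II (take p = f):    B(Bf -> f) -> Bf

In words: if the machine proves its own consistency, it is inconsistent. A worm-level machine falls under incompleteness without being able to prove, or even state, these formulas about itself.]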


Those are both much more dubious than "an artificial neuron can replace a biological one."

Yes, and that is why I prove what I assert from that assumption and from definitions (which always simplify things).

Bruno



Brent


http://iridia.ulb.ac.be/~marchal/


