> On 10 Jun 2020, at 01:14, Jason Resch <[email protected]> wrote:
>
> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <[email protected]> wrote:
>
>> On Wed, 10 Jun 2020 at 03:08, Jason Resch <[email protected]> wrote:
>>
>>> For the present discussion/question, I want to ignore the testable
>>> implications of computationalism on physical law, and instead focus on the
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact computational
>>> emulation, meaning exact functional equivalence. Then let's say we can
>>> exactly control sensory input and perfectly monitor motor control outputs
>>> between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, identical
>>> inputs yield identical internal behavior (nerve activations, etc.) and
>>> identical outputs, in terms of muscle movement, facial expressions, and speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them both
>>> to describe the pain, both will speak identical sentences. Both will say it
>>> hurts when asked, and if asked to write a paragraph describing the pain, will
>>> provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific,
>>> objective, third-person analysis or test is doomed to fail to find any
>>> distinction in behaviors, and thus necessarily fails in its ability to
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before it
>>> reaches this testing roadblock?
>>
>> We can’t know if a particular entity is conscious, but we can know that if it
>> is conscious, then a functional equivalent, as you describe, is also
>> conscious.
> This is the subject of David Chalmers’ paper:
>
> http://consc.net/papers/qualia.html
>
> Chalmers' argument is that if a different brain is not conscious, then
> somewhere along the way we get either suddenly disappearing or fading qualia,
> which I agree are philosophically distasteful.
>
> But what if someone is fine with philosophical zombies and suddenly
> disappearing qualia? Is there any impossibility proof for such things?
This would not make sense with Digital Mechanism. Now, by assuming some NON-mechanism, maybe someone can still make sense of this. With mechanism, qualia and quanta are automatically present in *any* Turing universal realm (the model, or semantics, of any Turing universal, i.e. sigma_1 complete, theory). That is why physicalists need to abandon mechanism: physicalism invokes a non-Turing-emulable reality (like a primitive material substance) to make consciousness real for some types of universal machine, and unreal for others. As there is no evidence so far for such primitive matter, this is a bit like adding complexity to avoid the consequences of a simpler theory.

Bruno

> Jason
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CA%2BBCJUjnn2DQwit%2Bj%3DYdXbXZbwHTv_PZa7GRKXwdo31gTAFygg%40mail.gmail.com.

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/D6984742-4AF5-4445-873B-27B3B56CCA70%40ulb.ac.be.
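P.S. The notion of functional equivalence in Jason's thought experiment can be illustrated with a small sketch (purely illustrative, not from the thread): two "minds" with entirely different internals that produce identical reports for every stimulus. Any black-box, third-person behavioural test over the probed inputs fails to distinguish them, which is the testing roadblock the question points at. The function names and the "pain level" encoding here are hypothetical.

```python
# Illustrative sketch: two functionally equivalent "minds".
# A purely behavioural (third-person) test cannot tell them apart.

def mind_biological(stimulus: int) -> str:
    # Hypothetical internals: step-by-step "nerve" accumulation.
    level = 0
    for _ in range(stimulus):
        level += 1
    return f"pain level {level}" if level > 0 else "no pain"

def mind_emulated(stimulus: int) -> str:
    # Different internals (direct arithmetic), same observable behaviour.
    return f"pain level {stimulus}" if stimulus > 0 else "no pain"

def behaviourally_distinguishable(a, b, stimuli) -> bool:
    # Black-box test: compare the two minds' reports over all probed inputs.
    return any(a(s) != b(s) for s in stimuli)

print(behaviourally_distinguishable(mind_biological, mind_emulated, range(100)))
```

Of course this only probes finitely many inputs; the philosophical claim is that even an exhaustive behavioural test would find no difference, since equivalence holds by construction.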

