On Tue, 20 Mar 2018 at 3:49 am, Lawrence Crowell <goldenfieldquaterni...@gmail.com> wrote:
> On Monday, March 19, 2018 at 6:47:01 AM UTC-5, stathisp wrote:
>
>> On Mon, 19 Mar 2018 at 8:58 pm, Lawrence Crowell <goldenfield...@gmail.com> wrote:
>>
>>> On Sunday, March 18, 2018 at 8:46:26 PM UTC-6, stathisp wrote:
>>>
>>>> On 19 March 2018 at 12:14, Lawrence Crowell <goldenfield...@gmail.com> wrote:
>>>>
>>>>> On Sunday, March 18, 2018 at 3:51:13 PM UTC-6, John Clark wrote:
>>>>>
>>>>>> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <goldenfield...@gmail.com> wrote:
>>>>>>
>>>>>>> *The MH spacetimes have Cauchy horizons that, because they pile up geodesics, can be a sort of singularity.*
>>>>>>
>>>>>> That's not the only thing they have: MH spacetimes also have closed timelike curves and the logical paradoxes produced by them, one of them being the one found by Turing. They also have naked singularities, of which nobody has ever seen the slightest hint. And if you need to go to as exotic a place as the speculative interior of a black hole to find a reason why cryonics might not work, I am greatly encouraged.
>>>>>
>>>>> Not all MH spaces have closed timelike curves.
>>>>>
>>>>>>> *The subject of NP-completeness came up because of my conjecture about there being a sort of code associated with a conscious entity that is not computable, or if computable is intractable in NP.*
>>>>>>
>>>>>> NP-completeness is sorta weird and consciousness is sorta weird, but other than that is there any reason to think the two things are related?
>>>>>
>>>>> This seems to be something you are not registering. Classic NP-complete problems involve cataloguing subgraphs and determining the rules for all subgraphs in a graph. There are other similar combinatorial problems that are NP-complete. A map from a brain to a computer is going to require knowing how to handle these problems. Quantum computers do not help much.
>>>>>
>>>>>>> *It could have some bearing on the ability to emulate consciousness in a computer.*
>>>>>>
>>>>>> How do you figure that? Both my brain and my computer are made of matter that obeys the laws of physics, and matter that obeys the laws of physics has never been observed to compute NP-complete problems in polynomial time, much less find the answer to a non-computable question, like "what is the 7918th Busy Beaver number?".
>>>>>
>>>>> And for this reason it could be impossible to map brain states into a computer and capture a person completely. Of course brains and computers are made of matter. So is a pile of shit. Based on what we know about bacteria and their networks communicating by electrical potentials, the pile of shit may have more in the way of consciousness than a computer.
>>>>>
>>>>> As for the rest, I think a lot of this sort of idea is chasing after some crazy dream. There is in some ways a problem with doing that. As things stand now I would not do the upload. Below is a picture of some aspect of this.
>>>>>
>>>>> <https://lh3.googleusercontent.com/-B42zD6RjTlo/Wq8Or4mWXiI/AAAAAAAADSs/rSPOyS5rTfwkhdWkws8ll7Huj6DVNHMqgCLcBGAs/s1600/Why%2Bis%2Bthe%2Bdog%2Bhappier.png>
>>>>
>>>> Could you say whether you think the observable behaviour of the brain (and hence of the person whose muscles are controlled by the brain) could be replaced by a computer; if the answer is yes, whether you still think it is possible that the consciousness might not be preserved; and if the answer to that is also yes, what you think it would be like if your consciousness were changed by replacing part of your brain, but your brain still forced your body to behave in the same way?
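As an aside on the subgraph claim quoted above: subgraph isomorphism (deciding whether one graph appears inside another) is indeed NP-complete, and a minimal brute-force sketch shows where the blow-up comes from. This is purely my own illustration, not anything from the thread; the adjacency-dict representation and example graphs are assumptions.

```python
from itertools import permutations

def has_subgraph(graph, pattern):
    """Brute-force test: does `graph` contain `pattern` as a subgraph?

    Both arguments are dicts mapping a node to a set of neighbour nodes.
    This tries every injective mapping of pattern nodes onto graph nodes,
    so the number of candidates grows factorially with graph size -- the
    combinatorial explosion that makes subgraph isomorphism NP-complete.
    """
    g_nodes = list(graph)
    p_nodes = list(pattern)
    for image in permutations(g_nodes, len(p_nodes)):
        m = dict(zip(p_nodes, image))  # candidate node mapping
        # Accept the mapping only if every pattern edge maps to a graph edge.
        if all(m[v] in graph[m[u]] for u in pattern for v in pattern[u]):
            return True
    return False

# A 4-cycle a-b-c-d-a, a triangle, and a 3-node path as test patterns.
square = {"a": {"b", "d"}, "b": {"a", "c"},
          "c": {"b", "d"}, "d": {"a", "c"}}
triangle = {"x": {"y", "z"}, "y": {"x", "z"}, "z": {"x", "y"}}
path3 = {"x": {"y"}, "y": {"x", "z"}, "z": {"y"}}

print(has_subgraph(square, triangle))  # False: a 4-cycle has no triangle
print(has_subgraph(square, path3))     # True: any 3 consecutive nodes
```

No known algorithm avoids this worst-case explosion, which is the sense in which mapping a brain's network decomposition onto such problems would make it intractable rather than merely laborious.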
>>>> >>>> >>>> -- >>>> Stathis Papaioannou >>>> >>> >>> I really do not know. I will say if it is possible in principle to >>> replace the executive parts of the brain with a computer, but where the >>> result could be a sort of zombie. There are too many unknowns and unknowns >>> with no Bayesian priors, or unknown unknowns. We are in a domain of >>> possibles, plausibles and maybe a Jupiter computer-brain. There is so >>> little to go with this, and to be honest a lot more possible obstructions I >>> might see than realities, that almost nothing can be said with much >>> certainty. >>> >> >> Consider not a zombie but a brain in the process of zombification. A >> piece of the brain is replaced with an electronic implant which replicates >> its I/O behaviour as it interacts with the surrounding biological tissue >> but, by assumption, does not participate in consciousness. It is believed, >> for example, that visual experiences arise in the occipital cortex, and >> lesions here cause partial or complete blindness. So if the implant in the >> visual cortex lacked the special quality giving rise to visual experiences, >> the subject should have this large deficit in his consciousness. But >> although he might want to yell out that he is blind, his vocal cords >> receive the same input from motor neurones that they would normally, since >> the output from the visual cortex is the same, and he declares that nothing >> has changed and he can see perfectly well. >> >> The scenario above is used in a reductio ad absurdum (supporting the idea >> that any functionally equivalent system must preserve consciousness) in the >> following paper: >> >> http://consc.net/papers/qualia.html >> >> >> -- >> Stathis Papaioannou >> > > There may be scaling issues with this. If a patient were to have 10% of > neurons replaced in some brain system the impact on conscious awareness > might not be that pronounced. 
> At some threshold there may be a sufficient change in the network structure that profound changes may take place.

It is possible that consciousness is fully preserved until a threshold is reached, then suddenly disappears. So if half the subject's brain is replaced, he behaves normally and has normal consciousness, but if one more neurone is replaced he continues to behave normally but becomes a zombie. Moreover, since neurones are themselves complex systems, the process can be broken down further: half of that final neurone could be replaced with no change to consciousness, but when a particular membrane protein is replaced with a non-biological nanomachine the subject will suddenly become a zombie. And we need not stop there, because this protein molecule could also be replaced gradually, for example by non-biological radioisotopes. If half the atoms in this protein are replaced, there is no change in behaviour and no change in consciousness; but when one more atom is replaced a threshold is reached and the subject suddenly loses consciousness. So zombification could turn on the addition or subtraction of one neutron. Are you prepared to go this far to challenge the idea that if the observable behaviour of the brain is replicated, consciousness will also be replicated?

> Further, as network subfunction is NP-complete, it may be impossible to establish how a brain function is segmented into networks well enough to duplicate the whole thing.
>
> Folks, this stuff is clearly far more difficult than most are thinking. You can also be sure that in the future, when this becomes more experimental, surprises and obstructions will appear.

It is a theoretical rather than a practical question.

--
Stathis Papaioannou

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.