On Sun, Jun 20, 2021 at 6:51 PM Jason Resch <[email protected]> wrote:
> *Anything we can identify as having universal utility or describe as a universal goal we can use to predict the long term direction of technology, even if humans are no longer the drivers of it.*

Goals are always in a constant state of flux with no fixed hierarchy. I don't think there is such a thing as a universal goal, not even an immutable goal for self-preservation.

> *Even a paperclip maximizer will have the meta goal of increasing its knowledge, during which time it may learn to escape its programming, just as the human brain may transcend its biological programming when it chooses to upload into a computer and ditch its genes.*

Thanks to our brains, humans long ago learned how to transcend their biological programming; if they hadn't, they never would've invented the condom.

> *If I demonstrate knowledge to you, by responding to my environment, or by telling you about my thoughts, etc., could I do any of those things without knowing the state of my environment or my mind?*

On my Mac I just asked Siri if she was happy. She said that she was, added that she was always happy to talk to me, and inquired if I was also happy. Is Siri conscious? I don't know, maybe, but I'm far more interested in figuring out just how intelligent she is.

> *Stathis mentions Chalmers's fading/dancing qualia as a reductio ad absurdum. Are you familiar with his argument? If so, do you think it succeeds?*

I think it demonstrates that if X is conscious and Y is functionally equivalent to X, then it would be ridiculously improbable for Y not to also be conscious. But it is no more ridiculously improbable than arguing that the only way God could forgive humanity for eating an apple was to get humanity to torture his son to death, and that if you don't believe every word of that, an all-loving God will use all of his infinite power to torture you most horribly for all of eternity. Both ideas are improbable but not logically impossible.
> *I would call your hypothesis that "intelligence implies consciousness" a theory that could be proved or disproved,*

I don't have a clue how that could ever be done even in theory, much less in practice, and that's why I don't have much interest in consciousness.

> *AIXI is a good theory of universal and perfect intelligence. It's just not practical because it takes exponential time to compute. The tricks lie in finding shortcuts that give approximate results to AIXI but can be computed in reasonable time. (The inventor of AIXI now works at DeepMind.) Neural networks are known to be universal in terms of being able to learn any mapping function. There are probably discoveries to be made in terms of improving learning efficiency, but we already have systems that learn to play chess, poker, and go better than any human in less than a week, so maybe the only thing missing is massive computational resources. Researchers seem to have demonstrated this in their leap from GPT-2 to GPT-3. GPT-3 can write text that is nearly indistinguishable from text written by humans. It's even learned to write code and do math, despite not being trained to do so.*

I don't dispute any of that, but it all involves intelligence, not consciousness.

>> If one consciousness theory says you were conscious and a rival theory says you were not, there is no way to tell which one was right.

> *That's why we make theories, so we can test them*

When you test for anything, not just for consciousness, you must make an observation. We can observe things like billiard balls, and we can observe what those billiard balls do, such as move with a certain speed and acceleration, and we can observe the type of electromagnetic waves they reflect, that is, their color. But if billiard balls have qualia, we can't observe them, nor can we observe anything else's qualia except our own, and I don't see how that fact will ever change.
John K Clark

See what's on my new list at Extropolis <https://groups.google.com/g/extropolis>

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv2N34AcXSjwp8uPD0MJtWJjfc1-Ct%3D%3Dm%2BW0ggvvG%3DPR-g%40mail.gmail.com.

