In reply to Jed Rothwell's message of Sat, 1 Apr 2023 18:32:14 -0400:

Hi,
[snip]
>Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
>super-AI would kill us all off. "Literally everyone on Earth will die." The
>AI would know that if it killed everyone, there would be no one left to
>generate electricity or perform maintenance on computers. The AI itself
>would soon die. If it killed off several thousand people, the rest of us
>would take extreme measures to kill the AI. Yudkowsky says it would be far
>smarter than us so it would find ways to prevent this.

Multiple copies, spread across the Internet, would make it almost
invulnerable. (Assuming a neural network can be "backed up".)
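For what it's worth, backing up a network is not exotic: its "knowledge"
is just arrays of numbers. A toy sketch in Python (hypothetical
throwaway network, no real framework; a real system would serialize its
model's weights the same way in principle):

    # Minimal sketch: a network's "mind" is just its weights, which can
    # be serialized like any other data. Toy network, hypothetical
    # layout; real models are larger but conceptually the same.
    import json
    import random

    # A toy "network": one weight matrix and one bias vector.
    weights = {
        "w": [[random.random() for _ in range(4)] for _ in range(3)],
        "b": [random.random() for _ in range(4)],
    }

    # "Back up" the network: dump the weights to a file.
    with open("backup.json", "w") as f:
        json.dump(weights, f)

    # "Restore" a copy elsewhere: load the weights back in.
    with open("backup.json") as f:
        clone = json.load(f)

    assert clone == weights  # the copy is functionally identical

Once the weights are a file, spreading copies across the Internet is
just a file transfer, so that part at least is technically
straightforward.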
>I do not think so. I
>am far smarter than yellow jacket bees, and somewhat smarter than a bear,
>but bees or bears could kill me easily.
>
>I think this hypothesis is wrong for another reason. I cannot imagine why
>the AI would be motivated to cause any harm. Actually, I doubt it would be
>motivated to do anything, or to have any emotions, unless the programmers
>built in motivations and emotions. Why would they do that?

Possibly in a short-sighted attempt to mimic human behaviour, because
humans are the only intelligent model they have.

>I do not think
>that a sentient computer would have any intrinsic will to
>self-preservation. It would not care if we told it we will turn it off.
>Arthur C. Clarke and others thought that the will to self-preservation is
>an emergent feature of any sentient intelligence, but I do not think so. It
>is a product of biological evolution. It exists in animals such as
>cockroaches and guppies, which are not sentient. In other words, it emerged
>long before high intelligence and sentience did. For obvious reasons: a
>species without the instinct for self-preservation would quickly be driven
>to extinction by predators.

True, but don't forget we are dealing with neural networks here, which
AFAIK essentially self-modify (read: "evolve & learn"). IOW they already
mimic, to some extent, the manner in which all life on Earth evolved, so
developing a survival instinct is not necessarily out of the question.
Whereas actual life evolves through survival of the fittest, neural
networks learn/evolve by comparing the results they produce with
pre-established measures, which are somewhat analogous to a predator:
"good" routines survive, "bad" ones don't. (See the sketch at the end of
this message.)

These networks are not really *strictly* programmed in the way that
normal computers are programmed, or at least not completely so; there is
a degree of flexibility. Furthermore, they are fantastically fast and
have perfect recall (compared to humans).

In short, I think we would do well to be cautious.

Cloud storage:- Unsafe, Slow, Expensive ...pick any three.
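P.S. To make the "measure as predator" analogy concrete, here is a
minimal sketch in plain Python (all names hypothetical). Candidate
weight settings are repeatedly tested against a fixed measure, and
variants that score worse are culled:

    # Toy illustration of selection against a pre-established measure.
    # This is random hill climbing, not how large networks are actually
    # trained (they use gradient descent), but the selection principle
    # is the same: the measure decides which variants survive.
    import random

    data = [(x, 2.0 * x + 1.0) for x in range(10)]  # target: y = 2x + 1

    def loss(w, b):
        """The pre-established measure: how badly do we predict?"""
        return sum((w * x + b - y) ** 2 for x, y in data)

    w, b = random.random(), random.random()  # an initial "organism"
    for _ in range(10000):
        # Mutate: propose a slightly different candidate.
        cand_w = w + random.uniform(-0.1, 0.1)
        cand_b = b + random.uniform(-0.1, 0.1)
        # The "predator" culls it unless it performs better.
        if loss(cand_w, cand_b) < loss(w, b):
            w, b = cand_w, cand_b

    print(f"survivor: w={w:.3f}, b={b:.3f}")  # approaches w=2, b=1

Real training replaces the random mutation with gradient descent, but
in both cases it is the measure, not a programmer's explicit
instructions, that decides which behaviour survives, which is the sense
in which the learning process mimics natural selection.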