On Mon, May 27, 2024, 7:00 PM Keyvan M. Sadeghi <[email protected]> wrote:
> Good thing is some productive chat happens outside this forum:
>
> https://x.com/ylecun/status/1794998977105981950

I would love to see a debate between Yann LeCun and Eliezer Yudkowsky. I don't agree with either, but both make important points. EY says we should take any existential risk seriously, because even if the probability is small, the expected loss is still large. We can't predict (or control) AI, because mutual prediction is impossible (by Wolpert's theorem), and otherwise we would be the smarter one.

We are historically very bad at prediction. Nobody predicted the internet, social media, mobile phones, or population collapse in developed countries. The consistent trends have been economic growth, improved living conditions, longer life expectancy, and Moore's Law. If those hold, it will be at least a century before we need to worry about being eaten by gray goo.

I also agree with LeCun that the proposed California AI law is useless. The law would require AIs trained using more than 10^26 floating point operations to be tested to show they won't help develop weapons for terrorism, hacking, or fraud. But secrecy is not what stops people from building nuclear, biological, or chemical weapons; it's that the materials are hard to get. An AI that understands code could also be used to find zero-day attacks, but hacking tools are double-edged swords: sysadmins and developers have a legitimate need for them to test their own systems. When penetration tools are outlawed, only outlaws will have penetration tools.

The immediate threat is AI impersonating humans. China already requires AI-generated images to be labeled as such. Meta AI, which is blocked in China, already does this voluntarily with a watermark in the corner. Also, about 20-30% of people believe that AI should have human rights, which is extremely dangerous, because such an AI could exploit human empathy for the benefit of its owners.
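The expected-loss argument above is just expected value: probability times magnitude. A toy calculation makes the point (all numbers here are made-up for illustration, not estimates from anyone in this thread):

```python
# Toy expected-value calculation for the existential-risk argument.
# Both inputs are illustrative assumptions, not real estimates.

p_doom = 1e-4          # assumed (small) probability of an existential catastrophe
loss = 8e9             # assumed magnitude: roughly everyone alive today, in lives

expected_loss = p_doom * loss
print(expected_loss)   # a tiny probability times a huge loss is still ~800,000 lives
```

Even at one-in-ten-thousand odds, the expected loss dwarfs most risks we regulate aggressively, which is the core of EY's argument.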
It should be illegal to program an AI to claim to be human, to claim to be conscious or sentient, or to claim to have feelings or emotions.

AI will profoundly change our lives. We will prefer AI to humans for services, because with humans you always have to wait and pay for their time. We will prefer AI to humans for friendship, because AI is more entertaining: a constant stream of TikTok videos or propaganda or games or whatever you prefer. We will prefer AI to humans for relationships, because sexbots are always ready when you are and never argue. We will live alone in smart homes that track who is home, what you are doing, and when you need help. Everything you want can be delivered by self-driving carts. You will have your own private music genre and private jargon as AI adapts to you, and we will lose our ability to communicate with other humans directly.

When you train a dog with treats, does it matter who is controlling whom? Everyone gets what they want. Before we are eaten by uncontrolled AI, we will be controlled by AI controlled by billionaires, and everyone wins. Right? Except for evolution, because the only groups still having children will be those that reject technology and don't give women options other than motherhood. Right now that's central Africa, places like Afghanistan and Gaza, and cultures like the Amish. Personally I believe in equal rights, but I will also die without descendants.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mcadf1c2bd0b90b100f50aeb7
Delivery options: https://agi.topicbox.com/groups/agi/subscription
