On Sat, Jun 3, 2023, 11:24 PM <[email protected]> wrote:
> On Saturday, June 03, 2023, at 7:17 AM, Matt Mahoney wrote:
>
> > The alignment problem is not aligning AI to human values. We know how to
> > do that. The problem is aligning human values to a world where you have
> > everything except the will to live.
>
> I still don't think having everything makes you sad. And while I do think
> the new "great" world we are about to enter will isolate us almost totally,
> I think the AIs will take us out of that before we, well, die or get really
> lonely.

Because happiness is not utility; it is the change in utility. All
reinforcement learning algorithms implemented with finite memory have one or
more states of maximum utility, no matter what the utility function is. The
best you can achieve is transitions between maximum-utility states:
computation without feeling, like a zombie. Otherwise, any thought or
perception would be an unpleasant transition to a lower state.

You seek death but don't know it, because evolution gave you a brain that
positively reinforces thought, perception, and action, giving you the
sensations of consciousness, qualia, and free will so that you will fear
death and produce more offspring. If it weren't for your illusion of
identity, a robot that looks and acts like you would be you.

You don't believe me, but people are not happier today than they were 100 or
1000 years ago, in spite of vastly better living conditions. Humans are not
happier than other animals. About 10-20% of humans are chronically sad,
depressed, or addicted to drugs, including millionaire celebrities who have
everything. No other animals commit suicide except some large-brained
mammals like dolphins and whales.

Solving the alignment problem requires infinite memory, and we live in a
finite universe.
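To make the finite-memory argument concrete, here is a toy Python sketch.
It is my own illustration, not anything from the post: the utility table and
the random transition policy are made up. Reward ("happiness") is defined as
the change in utility between successive states, and because the state space
is finite, some state has maximum utility; once the agent is there, no
transition can produce a positive change.

    # Toy finite-state agent: "happiness" = change in utility, not utility.
    import random

    utility = {0: 1.0, 1: 3.0, 2: 7.0, 3: 10.0}  # made-up utility function; state 3 is the max
    states = list(utility)

    state = 0
    total_change = 0.0
    for step in range(20):
        nxt = random.choice(states)                      # arbitrary transition policy
        total_change += utility[nxt] - utility[state]    # happiness = change in utility
        state = nxt

    best_state = max(states, key=utility.get)
    # Every move out of the maximum-utility state feels neutral or bad:
    print(all(utility[s] - utility[best_state] <= 0 for s in states))  # True

However you choose the utility function, the last line prints True: from the
top state, every available transition is a step down or a step in place.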
