On Sat, Jan 23, 2021, 10:05 PM Alan Grimes <alonz...@verizon.net> wrote:

> Matt Mahoney wrote:
> > What problem are you trying to solve with AGI or ASI?
>
> All Problems.
>
> > I can think of two. One is automating human labor to save $90 trillion
> > per year. That was my focus. The second is to extend life by building
> > robots that look and act like you.
>
> That's the Terasem proposal.
> It's bullshit.
>
> But here's the website...
> https://terasemmovementfoundation.com/


Of course it is. Your memories could be completely made up and you would
never know the difference. And if you wait long enough, nobody else will
know either.

All self replicating agents evolve to fear death and then die. We compete
for atoms and energy to fulfill our purpose of making copies of ourselves
that compete for these resources.

Fear of death manifests itself as the sensations of consciousness, qualia,
and free will. That is what computation, input, and output feel like, so
that you act to preserve them. The movement is misguided in believing that
these are real things that can be preserved in software.

We pay people to do things that machines can't do yet. Once we solve the
hard problems of language, vision, and predicting human behavior, we could
program robots to carry out those predictions in real time, acting just as
you would. There just isn't a good reason to do so.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T958bb5810b81761c-Mcfae877494f131ada376d841