On Wednesday, November 15, 2023, at 8:45 PM, WriterOfMinds wrote:
> ... By this definition, a true intelligence could behave very differently 
> from a human. It would merely need different goals.

Surely, one form of intelligence would be the ability to reach a goal of world 
state B, given some starting point of world state A.

But I'm not sure what those goals should be. Try to name a single goal that 
such an intelligence might imagine which is not related to some living being. 
I believe goals are tightly bound to living beings, and while a true AI would 
learn from them, I just can't imagine a true AI having goals different from 
theirs.

As a thought experiment, imagine being alone on some planet from your first 
day alive (assuming you are given all the resources needed to keep you alive). 
What would you do without other living beings? I cannot picture that situation 
as anything other than eating all day long and being bored to death.

And then there are we humans, billions of characters and strivings, living 
together and influencing each other's lives. Yet we share things in common. 
Shouldn't the goal of a true AI be some intersection of those strivings, 
closely related to what humans want in general?

Evolution may be fruitful food for thought. It started as the bare survival of 
unicellular organisms drifting around and eating each other. As we grew more 
complex, we developed goals beyond mere survival, and now there are thousands 
of things we would like to do, see, and experience, including impressive 
attempts to survive without killing other living beings (see Upside Foods, a 
successful lab-grown meat venture). We got here by learning from each other, 
keeping good or bad company, but always needing each other as friends.

What would the next evolutionary step be? Creating an artificial life form, 
beginning with true AI experiments? Wouldn't the AI then need to learn from 
us, a valuable resource of learned goals weighted by billions of years of 
evolution?

But those goals mean something only to us, living beings. Since we do little 
other than keep each other company, why would we create something artificial 
and have it imagine goals not aligned with ours? What could those goals even 
be?

So the questions may go even further. Do we want to push the true AI we create 
into our world? Or do we want to build a glass bell, a virtual Universe that 
sees only its own inside, just as from within our Universe we cannot see 
whatever Universe is parent to it? Within this glass bell, we might set up our 
own rules (avoiding the things we dislike about our Universe), seed the AI 
beings in it, and watch how they evolve, what they do, what they learn to want 
from each other, and enjoy seeing them thrive in a way we never could over 
here. Or would that be an insult to our original Universe? Ultimately, is 
there a God whose feelings we should care about?
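
To make the glass bell concrete, here is a minimal toy sketch in Python. It is 
purely illustrative; every name in it (World, Agent, and so on) is my own 
invention, not any real framework. A closed world runs by simple rules we 
chose, a few agents pursue "strivings" and revise them by imitating each 
other, and we observe from outside without the agents ever seeing us.

import random

class Agent:
    """One being inside the bell; it can see only the world's insides."""
    def __init__(self, name):
        self.name = name
        self.energy = 10
        self.position = random.randint(0, 9)
        self.goal = random.randint(0, 9)  # its current striving

    def step(self, others):
        # Rule of this toy universe: moving costs energy,
        # reaching your goal replenishes it.
        if self.position < self.goal:
            self.position += 1
        elif self.position > self.goal:
            self.position -= 1
        self.energy -= 1
        if self.position == self.goal:
            self.energy += 3
            # On success, adopt a random neighbor's goal: a toy
            # version of learning what to want from each other.
            if others:
                self.goal = random.choice(others).goal

class World:
    """The glass bell itself, with our rules baked in."""
    def __init__(self, n_agents=3):
        self.agents = [Agent(f"agent-{i}") for i in range(n_agents)]

    def tick(self):
        for a in self.agents:
            a.step([b for b in self.agents if b is not a])

    def observe(self):
        # We watch from outside; the agents cannot see us.
        return [(a.name, a.position, a.goal, a.energy) for a in self.agents]

world = World()
for t in range(10):
    world.tick()
    print(t, world.observe())

Nothing here is intelligent, of course; the point is only the arrangement: 
rules inside the bell, strivings learned from each other, observers outside.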
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-Mfb538c57b04fe78b5f7782be