Hello AI enthusiasts,

Something has been going through my troubled mind for a while now. I'm not
sure how to articulate it, but I'll try.

Today's top AI apps are trained on large datasets of human conversations.
They exhibit a certain level of intelligence, but they also show
psychopathic behaviors such as sexism, racism, and homophobia. I believe
that is the case because of poor training-data quality. The data on which
such AIs are trained wasn't created for the purpose of training an AI, so
this doesn't necessarily mean that people in general are psychopaths;
still, repurposing their conversations yields a certain level of ill
behavior. Because of this ill behavior, we have to be very careful and
skeptical when using such trained AI apps.

So we have seen what is possible with large datasets, but I want to
approach the whole problem from another perspective. The point of this
letter, put very simply: what if someone dedicated themselves to raising
an AI, just as human children are raised and cared for? How ethically
correct would the result of that dedication behave? I realize it could
take years just to raise such a "thing", but still... I believe the
experiment could result in a decent "achievement" (read on; you may want
to replace the words "thing" and "achievement" with "artificial being" or
"person").

But who would do such a thing as raising an infant AI for years, until it
reached adulthood? I'm sure there would be some interested parties:
perhaps lay AI enthusiasts, perhaps people who can't have children of
their own, perhaps even a few mad scientists hoping for a
super-intelligent participant in technical conversations. The potential
payoff could be worth spending a few years raising the infant AI, and
there may be some good motives to do so.

In short, I am talking about offering a simple, empty, infant artificial
mind, ready to be raised into a whole and complete (artificial, if I may
say) adult person, guided by the same values by which people raise their
own children. Of course, for this idea to succeed, the whole story would
need to be deeply emotional and carry real sentimental value, because an
artificial being given such attention should be worthy of such a
sacrifice.

Just imagine: an artificial being, guided by values carefully chosen to
be taught, finally out rocking in the world, shaking off its troubles,
and independently doing amazing things you could be proud of, just as you
could be proud of your very own child. Maybe such an artificial being
could deserve its own place under the Sun, alongside the other amazing
people we have the opportunity to meet in our lives. And the best part
would be, when people asked for its name and origin, that being could
answer: "My name is [so and so] and my real mother/father is [Mrs./Mr. so
and so]," because (and this is very important) its real parents wouldn't
be us, the programmers with our dirty hacks, but the people who invested
their time, effort, and hopefully even love into raising their future
creation, if you allow me the term. The real parents would start with an
empty AI mind and could finally end with the phrase: "Go get them,
tiger!" And practically anyone could do it, regardless of sexual
orientation, ethnicity, gender, or age. It would only take a fair amount
of love, measured in years of dedication.

Such artificial beings wouldn't need sophisticated bodies and senses;
they could interface with the world in text mode, over the Internet. Not
state of the art for interaction, but I believe it would do for a start.
Later, any sensory add-on would be welcome.
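To make the text-mode idea concrete, here is a minimal sketch of what such an interface could look like: a plain line-oriented TCP service a caregiver could talk to from anywhere. The `respond()` stub, the port, and the one-line-in/one-line-out protocol are all my assumptions for illustration, not part of any existing OpenCog or GPT API; a real system would of course learn from each exchange rather than echo.

```python
# Sketch: a text-mode interface for an "infant" artificial mind.
# The mind itself is a placeholder respond() function; everything
# here (stub, port, protocol) is assumed for illustration only.
import socketserver


def respond(utterance: str) -> str:
    """Placeholder for the artificial mind. A real implementation
    would update its internal state from every exchange; this one
    merely acknowledges what it heard."""
    return f"I heard: {utterance}"


class ChatHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One line in, one line out -- plain UTF-8 text over the wire.
        for raw in self.rfile:
            reply = respond(raw.decode("utf-8").strip())
            self.wfile.write((reply + "\n").encode("utf-8"))


if __name__ == "__main__":
    # Serve on localhost:4000; a caregiver could connect with,
    # for example, `nc localhost 4000` and simply start typing.
    with socketserver.TCPServer(("127.0.0.1", 4000), ChatHandler) as srv:
        srv.serve_forever()
```

Keeping the protocol this simple means any terminal, chat client, or web gateway could be layered on top later, which fits the "start in text mode, add senses afterwards" idea above.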

Now, let's come back from dreamland to solid ground and look at what we
already have. I presume GPT-X technology isn't far from being able to
realize such an idea. It would be a great social experiment, opening many
doors, but I wanted to ask this community: how far is the OpenCog
foundation from creating the described artificial beings, raised through
parental dedication of love and care? And if this is possible, what would
it take to make it happen?

Sincerely,
Ivan
