Hi Xanatos,

> One thing that I believe might also be a challenge is a single individual
> doing the “raising”.

That's the whole point: the bigger the challenge, the more valuable the
result. But the value of the result has to match the size of the
challenge, otherwise it's not viable. The idea is that, after a few years of
raising, we have to be able to say to the world with pride: this is my
creation and my descendant. And most of the attribution should shift
from the programmers to the actual raisers. Hopefully, with the first
successful creation, other potential raisers may be interested in devoting
that much of their own time to reproducing their perspective on the world
through their own creation.

---

Hi Linas,

> Training a deep-learning NN to not be racist or sexist is like trying to
> take a photograph that is not racist or sexist. Stick to flower gardens and
> sunsets, you'll be successful.

The requirement is that it has to work in general, but there are thousands
of ways to make it work. If it works, I don't really mind how it's done.

> Anyway, come work with me in trying to make machines that think, as opposed
> to machines that learn.


I may want to help here and there with this or that, but generally, I don't
have all the time in the world. If you make a detailed plan of what needs
to be done (I propose a structured task tree right on the front page of the
project), then when I spot something that interests me, I'll offer my support
and try to finish what I promise. I like to complete the things I
occasionally start. I also propose some tangible checkpoints in the roadmap
to track the current work, and to show the outside world what was achieved
at each checkpoint. That may provide the motivation to sustain
contributions. But again, please don't count on lifelong devotion from
me. I've got other things to do, too. And I really don't know in advance how
much free time I have. Things may change from day to day, from week to
week, from month to month.

---

All best,
Ivan


On Mon, Apr 4, 2022 at 19:22, Linas Vepstas <[email protected]> wrote:

> Hi Ivan,
>
> On Wed, Mar 30, 2022 at 3:05 PM Ivan V. <[email protected]> wrote:
>
>>
>> Today's AI tip-top apps are trained on large datasets of human
>> conversations, and they exhibit a certain level of intelligence, but they
>> show some psychopathic behavior like sexism, racism, or homophobia in
>> general. I believe that is the case because of poor training data quality.
>>
>
> This is a factually correct statement, but belies a fundamental
> misperception of both human nature and AI.
>
> First, all humans are flawed. All. You may feel that you are not racist or
> sexist, but you probably harbor some less-than-acceptable thoughts about
> Russians. Or at least Putin. Even Mother Theresa, a modern model of saintly
> behaviour, had some rather oddball thoughts about the world. One of the
> most damaging, it's been said, was the failure to believe in triage.
>
> Flawed beliefs are unavoidable: at some point, you will add a bit of
> (incorrect) knowledge to your collection, make some (flawed) deduction on
> insufficient data, and as you sleep, your brain will incorporate it deeply
> into your foundations of knowledge, your web of thoughts, affecting later
> thinking and conclusions.  You might eventually notice your mistake, but
> then again, you might not.  There's only a finite amount of time to think
> about things; you'll never have enough time to sort through it all.
>
> Next: today's "tip-top AI apps" are deep neural-nets. They do not think.
> Their observations of nature are not revised by thinking. They do not
> examine, inquire, explore, discuss. They cannot ask of themselves the
> question "Am I a racist?". They can't do this because they don't know who
> "I" is; there is no sense of self, no sentience. They sort-of know what the
> word "racist" means: they might be able to write a few paragraphs about
> racism. But they are unable to relate this "knowledge" to any other spheres
> of verbal behavior that they engage in, because they have no
> cross-functional knowledge.  Today's tip-top AI apps are like
> photorealistic paintings: very life-like, until you realize that something
> is missing.
>
> FWIW, I do oodles of AI training, and I can see the formation of both good
> knowledge, and of bad knowledge, and I can see how the bad knowledge
> accretes more data, how it pollutes and degrades the good knowledge.
> There's a blurry edge beyond which there is a grey mush of incorrect
> knowledge. I can see the size and extent of the "bad knowledge" grow and
> shrink, based on the training time, on the corpus, on the adjustable
> parameters. I've also got assorted ideas and plans and strategies for
> dealing with this problem, in various recursive "thinking" steps.  The
> formation of incorrect ideas is not something that just humans do. Machines
> can do it too.
>
> Perhaps the easiest way to explain this is that I am working on
> "thinking", rather than on "learning".  Today's AI systems "learn" much
> like a camera "learns" which parts of a picture are light and dark. Having
> thus learned, you can use that knowledge to recreate a facsimile, an
> "image", a "photograph" of what the camera "looked at".  The creation of
> accurate facsimiles is not true intelligence: those facsimiles cannot
> think, no more than a photograph can think.
>
> I am not trying to draw an analogy here. I am trying to be literal.
> Photographs are literal representations of structural shapes lit by floods
> of photons.  Deep learning neural nets are likewise: they are photographs
> of the structures in the data put before them.  They are very abstract
> representations; they capture non-visual knowledge. But they are still
> snapshots.
>
> I think most people are still deceived by this, or are still infatuated by
> the wondrous and beautiful (and sometimes ugly) snapshots that have been
> taken. Deep learning neural nets look so life-like ... but so do
> photographs. Don't be fooled, they are not alive.
>
> Training a deep-learning NN to not be racist or sexist is like trying to
> take a photograph that is not racist or sexist. Stick to flower gardens and
> sunsets, you'll be successful.
>
> Anyway, come work with me in trying to make machines that think, as
> opposed to machines that learn. The groundwork has been laid. The progress
> is good. Early results are excellent. A vast amount of work lies ahead.
>
> -- Linas
>
>
>> Anyway, data on which such AIs are trained on isn't created for a purpose
>> of training an AI, so it doesn't necessarily mean that people in general
>> are psychopaths, although repurposing their conversations yields a certain
>> level of ill-behavior. Because of this ill-behavior, we have to be very
>> careful and doubtful when using such trained AI apps.
>>
>> Thus, we saw what is possible with large datasets, but I want to approach
>> the whole problem from another perspective. I'll try to bring the point of
>> this letter in a very simple way: what if someone would be dedicated to the
>> purpose of raising AI, just like human children are being raised and being
>> taken care of? How much ethically correct behavior would the result of
>> this dedication exhibit? I realize it could take years just to raise such a
>> "thing", but still... I believe the experiment could result in some decent
>> "achievement" (read on, you may want to replace words "thing" and
>> "achievement" with a word "artificial being" or "person").
>>
>> But who would do such a thing as raising an infant AI for years on end,
>> until it reaches adulthood? I'm sure there may be some interested parties,
>> maybe some lay AI enthusiasts, maybe people who can't have their own kids,
>> maybe even some crazy scientists hoping to have a super-intelligent
>> participant in technical conversations. The potential effect could be worth
>> spending a few years on raising the infant AI, and there may be some good
>> motives to do so.
>>
>> In short, I am talking about offering a simple empty infant artificial
>> mind, ready to be raised into a whole and complete (artificial, if I may
>> say) adult person, guided by the same values by which people would raise
>> their own children. Of course, for this idea to be successful, the whole
>> story should be very emotional and have very sentimental value, because an
>> artificial being who would be given such attention should be worthy of such
>> a sacrifice.
>>
>> Just imagine: an artificial being, guided by values carefully chosen and
>> taught to it, finally rocking out in the world, shaking off all the
>> troubles, and independently doing amazing things which you could be proud
>> of, just like you could be proud of your very own child. Maybe such an
>> artificial being could deserve its own place under the Sun, along with the
>> other amazing people that we have an opportunity to meet in our lives. And
>> the best thing would be, when people ask for its name and origin, that
>> being could answer: my name is [so and so] and my real mother/father is
>> [Mrs./Mr. so and so], because (this is very important) its real parents
>> wouldn't be us, the programmers with dirty hacks, but the people who would
>> invest their time, effort, and hopefully even love into raising their
>> future creation, if you allow. The real parents would start with an empty
>> AI mind, and could finally end up with the phrase: "Go get them, tiger!"
>> And practically anyone could do it, regardless of their sexual orientation,
>> ethnicity, gender, or age. It would only take a fair amount of love,
>> measured in years of dedication.
>>
>> Such artificial beings wouldn't need sophisticated bodies and senses;
>> they could interface with the world in text mode, over the Internet. Not
>> state of the art for interaction, but I believe it would do for a start.
>> Later, any sensory add-on would be welcome.
>>
>> Now, let's get back from the dreamland to the solid ground, and analyze
>> what we already have. I presume GPT-X technology isn't too far from being
>> able to realize such an idea. It is a great social experiment opening many
>> doors, but I wanted to ask this community how far the OpenCog foundation
>> is from creating the described artificial beings based on a parental
>> dedication
>> of love and care. And if this is possible, what could it take to make it
>> happen?
>>
>> Sincerely,
>> Ivan
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "opencog" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/opencog/CAB5%3Dj6XcOQKCUZ10oBeACZrygyt8bueDzLV7zzyKAdTqTrVmmg%40mail.gmail.com
>> <https://groups.google.com/d/msgid/opencog/CAB5%3Dj6XcOQKCUZ10oBeACZrygyt8bueDzLV7zzyKAdTqTrVmmg%40mail.gmail.com?utm_medium=email&utm_source=footer>
>> .
>>
>
>
> --
> Patrick: Are they laughing at us?
> Sponge Bob: No, Patrick, they are laughing next to us.
>
>
