Hi Ivan,

Again, this probably belongs on some other mailing list, e.g. the AGI
mailing list. But what the heck.

On Fri, Feb 23, 2018 at 4:26 PM, Ivan Vodišek <ivan.mo...@gmail.com> wrote:
> I'd pick some Asimov-ish laws if I were near step 4,

Asimov-ish laws are exactly the kind of "curated data" that makes me
nervous. If there's anything that the Asimov short stories illustrate,
it's the "law of unintended consequences".

Please be aware that legal systems (laws, judges, courts) and the
rule of law, as practiced in the US and Europe, are exactly the
"Asimov-ish laws" by which modern civilization lives.  The problem
with Asimov's laws is that there's only three of them. The problem
with the Ten Commandments is that there's only ten of them.  That was
OK back in the times of Hammurabi and Gilgamesh, but modern society
needs many orders of magnitude more laws than that.  We also need a
way of revising those laws when they are discovered not to work (viz.
congress, parliament, judges, lawyers).

> The law: "If realizing an idea creates more negative emotions than not
> realizing it, don't realize it."

What the heck is an emotion? A flood of hormones in the bloodstream?
A cascade of positive-feedback loops involving neurons and gene
expression?  Some of these are partly understood: for example,
addiction/substance abuse is understood to involve some six or eight
feedback cycles, involving DNA, neurons and hormones, operating at
time-scales ranging from seconds to minutes to hours to weeks to
months, each reinforcing and holding up the others. Cigarettes are
hard to quit because there is a positive feedback loop, working on a
time scale of 2-6 months, that craves nicotine.
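
If you actually try to write that rule down as code, the problem
jumps right out at you. A minimal sketch (every name in it is
hypothetical, including the world.simulate() call):

    # A minimal sketch of the proposed law as a utility comparison.
    # All names here are hypothetical. The rule itself is a one-liner;
    # the estimator is the entire unsolved problem.

    def estimate_negative_emotion(world_state) -> float:
        """Return the expected 'negative emotion' in a world state.

        This is the unimplementable part: is it hormone levels?
        A cascade of feedback loops across neurons, hormones and
        gene expression? There is no agreed-upon definition to
        compute against.
        """
        raise NotImplementedError("what the heck is an emotion?")

    def should_realize(idea, world) -> bool:
        # "If realizing an idea creates more negative emotions than
        #  not realizing it, don't realize it."
        with_idea = estimate_negative_emotion(world.simulate(idea))
        without_idea = estimate_negative_emotion(world)
        return with_idea <= without_idea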

To me, emotions are a terrible foundation for ethics. Just look at the
emotional state of Christians attending revivalist Protestant
mega-church Sunday services. They are all tears and weeping and
shaking and Jesus and anti-abortion and guns, and then they go home
and kick their dog, cheat on their wives, cheat on their business
partners, cheat on their taxes. This is not a viable foundation for
ethics.  See, e.g., Sam Harris, per Wikipedia: "Samuel Benjamin Harris
is an American author, philosopher, neuroscientist, blogger, and
podcast host."


> That was a question about what not to do, but what about the other, "do
> this" side? In other words, how to generate ideas? I've put a lot of
> thought into this question, and I came up with a simple answer: ideas might
> be copied from observing living beings. When a bot sees a human answering
> "yes" to some question, it should answer "yes" to the same question posed to
> it. Moreover, the observed question-answer set should be generalized into
> functions like
>
> f(question) -> answer

To me, this is what step 2 was about -- copy humans.  This is great
for building chatbots.  It is not the road to intelligence.  All that
you get is a statistical model of some basic human behaviors, both
good and bad, and it is completely unable to get past the size of its
training corpus.  It's like neural-net learning, before deep learning
was discovered.
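
To make that concrete, here's a toy sketch of that f(question) ->
answer idea, done as a nearest-match lookup over observed pairs. (A
hypothetical illustration in Python, not opencog code.)

    # Toy f(question) -> answer learned purely by copying observed
    # human Q/A pairs. Note what it cannot do: answer anything that
    # is not near its training corpus.

    from difflib import SequenceMatcher

    class CopycatBot:
        def __init__(self):
            self.observed = {}      # question -> answer, as overheard

        def observe(self, question: str, answer: str) -> None:
            self.observed[question.lower()] = answer

        def f(self, question: str) -> str:
            # "Generalize" = return the answer to the most similar
            # question seen so far. A statistical model of behavior,
            # not understanding.
            if not self.observed:
                return "..."
            best = max(self.observed,
                       key=lambda q: SequenceMatcher(
                           None, q, question.lower()).ratio())
            return self.observed[best]

    bot = CopycatBot()
    bot.observe("Is the sky blue?", "yes")
    print(bot.f("Is the sky blue?"))      # "yes" -- copied
    print(bot.f("Why is the sky blue?"))  # also "yes" -- corpus-bound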


> Basically, we can see this kind of behavior in the way infants learn how to
> do things. They mostly copy behaviors, adjusting some parameters in an
> intelligent way, to realize ideas that were born inside their minds, again
> using an imitation mechanism with adjustable parameters.

Umm. I am pretty sure that almost anything/everything that scientists
have learned about babies and children would not support this theory.
Starting with Jean Piaget, from, what, 80 years ago?


> And there is another question that arises if the machine surpasses our IQ
> and even our ethical compassion level: the question of the machine's action
> credibility. Look at it this way: if a machine (that is a hundred times
> smarter than you and a hundred times better a person than you) advises you
> to do something, would you listen to it?

What makes you think that you would even have a choice?  Haven't you
ever seen someone smart manipulate someone stupid? Say, in
high school, that one girl who knew how to control those other 2 or 3
girls, and get them to do things?  Maybe even control a few boys?
Usually to do something mean and ugly?

Cults and kidnappers know all about brainwashing and Stockholm
syndrome.  Read about Patty Hearst and the Symbionese Liberation
Army.  It was not a "machine a hundred times smarter than her" that
told her what to do.  It was some humans, merely 1.2x smarter than
her, who told her what to do.  And she did it.  And if it was you,
you probably would have, too.

> And to what extent?

Read about Jim Jones and what happened in Jonestown, Guyana - the
Peoples Temple Agricultural Project.

But that is small-scale stuff. If you want to affect the lives of
millions of people, there is this thing called "propaganda".

> And what position
> would that machine deserve in our society?

God?

Seriously: a machine that could think like a human, but think 100
times faster than a human, would be utterly uncontrollable by
society.  Such a machine could hold 100 simultaneous conversations;
it could control tens of thousands of people, and create mobs to
carry out arbitrary actions.

Hmm. Well, we already have that. They are called "corporations";
they are fully autonomous, and here in the US they are given the full
legal status of individuals.  Even more than that, actually: a
corporation can kill, but not go to prison.  So corporations have
more rights than humans.

--linas


>
> 2018-02-23 21:30 GMT+01:00 Linas Vepstas <linasveps...@gmail.com>:
>>
>> On Fri, Feb 23, 2018 at 11:26 AM, Amirouche Boubekki
>> <amirouche.boube...@gmail.com> wrote:
>> >
>> >>  > The goal of the atomspace is to eliminate human-curated datasets.
>> >>
>> >> Music to my ears. "Curated" means "detached from the actual source and
>> >> context of knowledge."
>> >
>> > Not always. Curated means fixed, patched and edited by a human
>> > supervisor who knows best, until the correction is delivered in code.
>> > That is a chance to avoid structural bias, like racist bots.
>>
>> Ah!  Now this last is a very interesting philosophical observation.
>> This is not quite the correct mailing list within which to discuss
>> this, but it overlaps onto a large number of political and
>> mathematical issues that are very interesting to me. So here I go.
>>
>> Political: if this were a human, not a bot, how much racism
>> should be tolerated?  Speech, thought and action are interconnected. For
>> example: the American constitution enshrines freedom of speech, and
>> the freedom to practice religion. But clearly, we have lost our
>> freedom of speech: say the wrong thing about Islam, you get bombed.
>> Should we restrain freedom of religion?
>>
>> Religion is a form of thought. What about freedom of thought? You can
>> think murderous thoughts, but if you commit murder, you are socially
>> unwanted (usually).  The ability to commit murder is correlated with
>> the absence of certain neural circuitry in the brain having to do with
>> empathy. Some humans lack these neurons, and thus are prone to be
>> psychopaths.  Those who do have those neurons, and who commit (or even
>> witness) murder, end up with PTSD.
>>
>> The mathematical issues first arise if you think of bots as
>> approximating humans.  It's trivial to create a bot that prints random
>> dictionary words.  It's a bit harder, but not too hard, to create a bot
>> that spews random dictionary words assembled into grammatical sentences:
>> just run the random word sequences through a grammar checker, e.g.
>> link-grammar, and reject the ungrammatical ones, i.e. don't print them.
>> (Since most random word-sequences are not grammatical, this is not
>> CPU-efficient; better algorithms avoid obviously-ungrammatical
>> word-sequences by working at higher abstraction layers.)  What
>> Microsoft did was just one single step beyond this: spew random
>> grammatically correct sentences, using a probability weighting based
>> on recently heard utterances.  The system was too simple, and the
>> gamers gamed it: they trained up the probability weights to make it
>> spew racist remarks.
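>>
>> A back-of-the-envelope sketch of steps 0 through 2, in Python, with
>> a toy grammar standing in for a real checker like link-grammar (all
>> the names here are made up for illustration):
>>
>>     import random
>>
>>     DETS  = {"the"}
>>     NOUNS = {"cat", "dog", "mat"}
>>     VERBS = {"sat", "hates", "loves"}
>>     LEXICON = sorted(DETS | NOUNS | VERBS)
>>
>>     def is_grammatical(words):
>>         # Toy stand-in for a real grammar checker such as
>>         # link-grammar: accept only "the NOUN VERB the NOUN".
>>         return (len(words) == 5
>>                 and words[0] in DETS and words[1] in NOUNS
>>                 and words[2] in VERBS
>>                 and words[3] in DETS and words[4] in NOUNS)
>>
>>     def step0():
>>         # Step 0: random word sequences.
>>         return [random.choice(LEXICON)
>>                 for _ in range(random.randint(3, 8))]
>>
>>     def step1():
>>         # Step 1: rejection-sample until grammatical. Most random
>>         # sequences fail, which is why this wastes CPU.
>>         while True:
>>             words = step0()
>>             if is_grammatical(words):
>>                 return " ".join(words)
>>
>>     def step2(recent_utterances):
>>         # Step 2, roughly the Microsoft bot: bias word choice
>>         # toward recently heard input -- the knob the gamers turned.
>>         weights = [1 + 5 * sum(u.split().count(w)
>>                                for u in recent_utterances)
>>                    for w in LEXICON]
>>         while True:
>>             words = random.choices(LEXICON, weights=weights, k=5)
>>             if is_grammatical(words):
>>                 return " ".join(words)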
>>
>> OK, suppose we can go one step beyond what Microsoft did: spew random
>> sentences that are created by means of "logical deduction" or
>> "reasoning" applied to "knowledge" obtained from some database (e.g.
>> wikipedia, or from a triple store). This could certainly wow some
>> people, as it would demonstrate a robot capable of logical inference.
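>>
>> To illustrate, here is a toy version of step 3 (a hypothetical
>> sketch, not the actual opencog machinery): forward-chain a single
>> inference rule over a tiny triple store, then utter each derived
>> fact as a grammatical sentence:
>>
>>     # Derive new facts from a toy triple store with one rule
>>     # (transitivity of "is-a"), then utter each derived fact
>>     # as a grammatical sentence.
>>
>>     triples = {("cat", "is-a", "mammal"),
>>                ("mammal", "is-a", "animal"),
>>                ("dog", "is-a", "mammal")}
>>
>>     def infer(kb):
>>         # Forward-chain to a fixed point:
>>         # (x is-a y) and (y is-a z)  =>  (x is-a z).
>>         kb = set(kb)
>>         while True:
>>             new = {(x, "is-a", z)
>>                    for (x, r1, y) in kb
>>                    for (y2, r2, z) in kb
>>                    if r1 == r2 == "is-a" and y == y2 and x != z}
>>             if new <= kb:
>>                 return kb
>>             kb |= new
>>
>>     for s, _, o in sorted(infer(triples)):
>>         print(f"A {s} is a kind of {o}.")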
>>
>> So: this last is where your comment about "structural bias like racist
>> bots" starts getting interesting. To recap:
>>
>> Step 0: random word sequences
>> Step 1: random but grammatically correct word sequences
>> Step 2: random grammatical sentences weighted by recent input  <-- the
>> Microsoft bot
>> Step 3: grammatical sentences from random "logical inferences" <--
>> what opencog is currently attempting
>> ...
>> Step n: crazy shit people say and do
>> ...
>> Step p: crazy shit societies, cultures and civilizations do
>>
>> What are the values of n and p?  Some might argue that perhaps they
>> are 4 and 5; others might argue that they are higher.
>>
>> My point is: a curated database might make step 3 simpler. It's
>> hopeless for step 4.
>>
>> For a commercial product, curated data is super-important: Alexa and
>> Siri and Cortana are operating at the step 2/3 level with carefully
>> curated databases of capitalist value: locations of restaurants,
>> household products, luxury goods.
>>
>> The Russian twitter-bots, as well as Cambridge Analytica and the
>> Facebook black-ops division are working at the step 2/3 level with
>> carefully curated databases of psychological profiles and political
>> propaganda.
>>
>> Scientists in general (and Ben in particular) would love to operate at
>> the step 2/3 level with carefully curated databases of scientific
>> knowledge, e.g. anti-aging, life-extension info.  I'm getting old too.
>> Medical breakthroughs are not happening fast enough, for me.
>>
>> So, yes, curated data is vitally important for commercial, political
>> and scientific reasons.  It just does not really put us at steps
>> 4 and 5, which are the steps along which AGI lies.  The dream of AGI
>> is to take those steps, without the curated bullshit (racism,
>> religion, capitalism) that humankind generates, and yet also avoid the
>> creation of a crisis that would threaten humanity/civilization.
>>
>> Linas.


-- 
cassette tapes - analog TV - film cameras - you
