Vladimir,

At great risk of stepping in where angels fear to tread...

This is an IMPORTANT discussion which several others have attempted to
start, but for which there is an entrenched self-blinding minority
(majority?) here who fail to see its EXTREME value. I believe that answers
to some of these questions should guide AGI development but probably will
fail to do so, resulting in most of the future harm that AGIs may do. In
short, the danger is NOT in AGI itself, but in the willful ignorance of its
developers, which may also be your concern. "Protective" mechanisms to
restrict their thinking and action will only make things WORSE.

My own present opinion varies slightly from yours, in that I believe that
even if a (supposedly) FAI could be developed, it would only become the
tool for our own self-destruction, given present illogical prejudices about
what is "right", even when it is in direct conflict with "best".

Continuing with comments...

On 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam <[EMAIL PROTECTED]>
> wrote:
> > --- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> >
> >> You start with "what is right?" and end with
> >> Friendly AI, you don't
> >> start with "Friendly AI" and close the circular
> >> argument. This doesn't
> >> answer the question, but it defines Friendly AI and thus
> >> "Friendly AI"
> >> (in terms of "right").
> >
> > In your view, then, the AI never answers the question "What is right?".
> > The question has already been answered in terms of the algorithmic
> process
> > that determines its subgoals in terms of Friendliness.


I might be interested in an AGI who was working for "best", but I would be
the first to swing a sledgehammer onto one that was working for "right".
Where "right" and "best" differ, SOMETHING is wrong and it must be
understood before causing great damage. Usually, it is "right" that is
wrong, but how do you convince a religious constituency that their God-given
book is wrong in SO many ways?

Curiosity: Aside from its content, the New Testament is arguably the worst
written religious book now in common use. For example, where most other
religious books present clear instruction, the New Testament is full of
"parables" that have many possible interpretations. The only way that anyone
could claim such a poorly written work to be "from God" is through ignorance
of other works. Amazingly, this has survived for ~2,000 years and continues
to be the prevailing standard for "right", flaws and all.

However, "right" as narrowly defined by present subgoals more closely
equals "emasculated".

> There is a symbolic string "what is right?" and what it refers to, the
> thing that we are trying to instantiate in the world.


Different groups have different goals. For an in-your-face outrageous example,
a major reason that most people now die during their second half-century is
because of our quaint social practice of pairing like-aged couples, thereby
removing all Darwinian pressure to evolve into longer lived individuals. If
we were to socially enforce pairing young and old individuals, we could
reverse this. Of course, there aren't as many old individuals as there are
young, so the "pairing" would have to include more young people. Of course,
all this flies TOTALLY in the face of all prevailing shitforbrains
religions, even though it was originally practiced by Abraham. Any
"friendly" AGI would work AGAINST extending lifespan in such ways because
its subgoals would prohibit it from working against the younger majority to
restrict their freedom of choice to live with the mates they prefer.

Of course, there are always the "best" advocates (me among them) whose AGI
would possibly be more like Colossus. BTW, has anyone here read the 2nd and
3rd books in the Colossus trilogy yet? They reverse some of the lessons of
the first book/movie that people often comment on here.

> The whole
> process of answering the question is the meaning of life, it is what
> we want to do for the rest of eternity (it is roughly a definition of
> "right" rather than over-the-top extrapolation from it).


IMHO, a primary reason for an AGI is to see past present human prejudices
and make better decisions, which greatly favors "best" over "right". Indeed,
this uses "best" to discover the errors in "right", whereas you would
(apparently attempt to) work the other way.

> It is an
> immensely huge object, and we know very little about it, like we know
> very little about the form of a Mandelbrot set from the formula that
> defines it, even though it entirely unfolds from this little formula.
> What's worse, we don't know how to safely establish the dynamics for
> answering this question, we don't know the formula, we only know the
> symbolic string, "formula", that we assign some fuzzy meaning to.


What we DO have is a world full of different societies that have DIFFERENT
problems. We could easily learn from them how to produce a supersociety with
a minimum of problems. Unfortunately, many/most of those societies respect
different religions, while we here in the U.S. live in a distinctly
Christian society that believes in rejecting the lessons of other societies
because those lessons would expose some of the many failings of
Christianity.

> There is no final answer, and no formal question,


While a supersociety synthesized as above would still be open to
improvement, it doesn't take an AGI to make GIANT improvements over our
present very screwed up society, if only the religious freaks would let it
happen.

> so I use
> question-answer pairs to describe the dynamics of the process, which
> flows from question to answer, and the answer is the next question,
> which then follows to the next answer, and so on.
>
> With Friendly AI, the process begins with the question a human asks to
> himself, "what is right?". From this question follows a technical
> solution, initial dynamics of Friendly AI, that is a device to make a
> next step, to initiate transferring the dynamics of "right" from human
> into a more reliable and powerful form. In this sense, Friendly AI
> answers the question of "right", being the next step in the process.
> But initial FAI doesn't embody the whole dynamics, it only references
> it in the humans and learns to gradually transfer it, to embody it.
> Initial FAI doesn't contain the content of "right", only the structure
> to absorb it from humans.


Like learning a sense of smell from a sewer. If this is to be based on
present religiously-inspired human wants, then it is doomed to fail as
present human society now does. This could only serve to AMPLIFY the
failings of present society.

> Of course, this is simplification, there are all kinds of
> difficulties. For example, this whole endeavor needs to be safeguarded
> against mistakes made along the way, including the mistakes made
> before the idea of implementing FAI appeared, mistakes in everyday
> design that went into FAI, mistakes in initial stages of training,
> mistakes in moral decisions made about what "right" means.


I believe that ALL such decisions are a mistake. When "right" diverges from
"best", it is time to STOP and examine both.

> Initial
> FAI, when it grows up sufficiently, needs to be able to look back and
> see why it turned out to be the way it did, was it because it was
> intended to have a property X, or was it because of some kind of
> arbitrary coincidence, was property X intended for valid reasons, or
> because programmer Z had a bad mood that morning, etc. Unfortunately,
> there is no objective morality, so FAI needs to be made good enough
> from the start to eventually be able to recognize what is valid and
> what is not, reflectively looking back at its origin, with all the
> depth of factual information and optimization power to run whatever
> factual queries it needs.
>
> I (vainly) hope this answered (at least some of the) other questions as
> well.


This all reminds me of my Psychology 101 course. When I signed up for it, I
presumed that psychology was a science and that I should know about it.
After one quarter, I had been convinced by countless arguments that this
definitely was NOT a science, but yes I did need to know about it because it
was DANGEROUS.

I believe that "friendly" AGI is probably the most dangerous form of all.

There is a really good reason for most human conflict, and that is an
absolute resistance to seeing the world through the other person's eyes.
While you may or may not be a Christian, you have probably been raised in
our Christian society that "builds in" many shitforbrains concepts at such
an early age that we never subsequently question them. We believe in voting
our prejudices rather than finding mutually workable solutions. We believe
in keeping our neighbors from doing "wrong" and "immoral" things, even
though our neighbors belong to a religion that sees their actions as "right"
and our actions as "wrong" and "immoral". We believe in locking criminals up
in prisons, even though this has been shown to do no one any good at all. We
believe in "women's rights", even though these rights do NOT include the
right to a stable home nor assurance that their children will grow up with
their fathers. I could go on with this list all day, but suffice it to say
that the one common element in all of these items is Christianity and its
refusal to import the solutions to these problems from other societies.
ANYTHING, including "friendly" AGI, that supports and spreads these
shitforbrains ideas as "right" should be killed in its crib.

Fortunately, there is still much of the world that has NOT fallen into this
trap. If "friendly" AGI is to have any chance at all, then it MUST be
developed in one of these other societies, and export to the U.S. must be
PROHIBITED.

While you obviously are unable to see the following and hence will be unable
to agree with it, much of what you have written literally SCREAMS Christian
programming. Only Christians claim that we don't have the answers to our
present conflicts, while we continue to upset other societies who DO have
many of the answers. In Muslim societies, they have community property and
divorce is freely available, yet they have <2% divorce rate and nearly all
children grow up with their fathers. Why? There have been books written
about this, but until you understand this and countless other existing
solutions and why they are NOT practiced here, you (and others) have no hope
of promoting a truly safe "friendly" AGI, no matter how open minded and
questioning it might seem to be. Just hire a priest, as they already have
all the answers, and they don't require any high-tech development.

Conclusion: Give up on "right", except possibly as a cross-check on "best"
to possibly evoke further analysis where they differ. Where "right" and
"best" indicate conflicting actions, part of the output should be a detailed
explanation as to how "right" is wrong. Of course, passing judgment on
present religious belief will place our new AGI above our present Gods, but
I see no other rational choice. Do you?

Steve Richfield



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com