From: Derek Zahn <[EMAIL PROTECTED]>
>What do you suggest is a rational approach for AGI research to follow?
That's a very broad question. If I narrow it to something relevant to
the recent conversation, I get:
What do you suggest is a rational approach to preventing AIs
from doing something grossly different from what is desired?
When an AI is going to make a change, at some point an analysis has to
be done that estimates the consequences of the change and takes the
social context into account to figure out whether the change is likely
to have undesired consequences. (The social context is relevant
because it determines the meaning of "undesired".) Right now that
analysis is generally done by humans, but we'll have to automate it
once either too many changes are happening or the consequences are
not something humans can accurately estimate.
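To make that concrete, here's a minimal sketch of what an automated
version of that check might look like. Everything in it (Consequence,
predict_consequences, an undesirability score supplied by the social
context) is a made-up placeholder rather than a description of any
real system; the point is just the shape of the analysis: predict the
consequences, score them against what the social context counts as
undesired, and reject the change if the expected harm is too high.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Consequence:
        description: str
        probability: float  # estimated chance this consequence actually occurs

    def change_is_acceptable(
        predict_consequences: Callable[[str], List[Consequence]],
        undesirability: Callable[[str], float],  # 0 = fine, 1 = grossly undesired, per the social context
        proposed_change: str,
        risk_budget: float = 0.05,
    ) -> bool:
        """Estimate the consequences of a proposed change and accept it
        only if the expected undesirability stays within a small risk
        budget."""
        expected_harm = sum(
            c.probability * undesirability(c.description)
            for c in predict_consequences(proposed_change)
        )
        return expected_harm <= risk_budget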
You know when you're walking down a narrow hallway, and you see
someone else coming toward you going the opposite way, and you do this
little nonverbal negotiation to figure out how you get around each
other? Humans fairly reliably infer what other humans want from their
behavior, and they routinely act upon these inferences. I'd like an
AI to infer from behavior what the entities in its environment want,
and to react appropriately, so it could work out this nonverbal
hallway negotiation on its own.
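As a toy illustration of that sort of inference (the numbers and the
whole setup are invented for the example, not a proposal for the real
mechanism): watch which side of the hallway the other person drifts
toward, keep a Bayesian belief about which side they intend to take,
and step to the other one.

    def infer_passing_side(observed_drifts, prior_left=0.5, reliability=0.8):
        """Update the belief that the oncoming person intends to pass on
        our left, given a sequence of noisy drift observations
        ('left'/'right', both from our point of view)."""
        p_left = prior_left
        for drift in observed_drifts:
            like_left = reliability if drift == "left" else 1.0 - reliability
            like_right = reliability if drift == "right" else 1.0 - reliability
            p_left = like_left * p_left / (like_left * p_left + like_right * (1.0 - p_left))
        return p_left

    # If they're probably heading for our left, we take the right, and vice versa.
    belief = infer_passing_side(["left", "left", "right", "left"])
    our_move = "step right" if belief > 0.5 else "step left"

The arithmetic isn't the point; the point is that the other agent's
intention is inferred from its behavior rather than being told to us.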
If the same sort of reasoning can motivate more sophisticated
behavior, then so far as I can tell we would have a solution to the
Friendly AI problem.
Maybe someone has already done this.
I have a theoretical solution that's partially written up. I'll have
more details later.
--
Tim Freeman http://www.fungible.com [EMAIL PROTECTED]