On Thu, Aug 28, 2008 at 12:29 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>I challenge anyone who believes that Friendliness is attainable in principle 
>to construct a scenario in which there is a clear right action that does not 
>depend on cultural or situational context.

It does depend on culture and other things, but at some point mankind may become unified by a single culture, and things like our DNA, our learning methods, and our responses to particular stimuli may become highly controlled and standardized. The meaning of friendliness and goodness may then become well defined and standardized (for us as well as for our machines). If that ever happens, it will probably be in such a distant future that I don't think today's AGI developers need to spend much time on theoretical analysis of those scenarios. In short, the more differences there are between us, the more meaningless hardcoded friendliness is.

At this point, we should IMO focus instead on AGIs that get their goals/rules from authorized subjects. And those subjects, when specifying the goals/rules, should IMO be more specific than what some members of this list seem to be envisioning. I don't think it's a good idea to give just very high-level orders to our AGIs and let them work it out. When you say to your AGI, "Figure out what's good for us and make sure we all get as much of it as possible!", the results may not really be that good. What would you do in the AGI's shoes in that case? Search the Internet for phrases like "Ohhhh, it feels sooo good!" and compile statistics about what all those scenarios involve (including checks for negative consequences), so you could "force" (and intensify) the "good" stuff on a global scale? I think we can well guess what would top such a list.
Our AGIs will IMO need to be tightly controlled (by people) for a while to prevent ridiculous solutions. Our "written record" (the Internet) is filled with data that could be very misleading for thinking machines that did not learn in very human-like ways. And we just don't have the technology for implementing human senses and the related data-processing mechanisms. We can still make AGI work (using other architectures), but I just don't think you could then simply send your AGI off to learn from the Internet in its first major learning phase. It will IMO be more painful than that. It's not that simple or fast for our babies to understand the basics of our world, either.

As for the thread subject: no, embodiment is not necessary for AGI. There is a certain (not too high) number of core semantic concepts that you can support with built-in grounding, and such a semantic baseline can later be used to ground later-gained knowledge. Limited in certain ways? Sure, but we humans have limitations of that nature as well.
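A minimal sketch of that baseline idea, under toy assumptions of my own (the concept names, tags, and helpers `CORE`, `define`, and `grounding` are all hypothetical): a small hand-built set of core concepts carries the built-in grounding, and every later-gained concept is accepted only if its definition bottoms out in that baseline:

```python
# Hypothetical built-in grounding: core concept -> sensor/actuator tag.
CORE = {
    "hot":  "thermo_sensor_high",
    "cold": "thermo_sensor_low",
    "move": "motor_command",
    "near": "range_sensor_low",
}

learned = {}  # later-gained knowledge, definable only via existing concepts

def define(concept, in_terms_of):
    """Accept a new concept only if every part of its definition is already
    grounded (directly in CORE, or transitively via earlier definitions)."""
    for part in in_terms_of:
        if part not in CORE and part not in learned:
            raise ValueError(f"'{part}' is ungrounded; cannot define '{concept}'")
    learned[concept] = in_terms_of

def grounding(concept):
    """Expand a concept all the way down to the built-in sensor/actuator tags."""
    if concept in CORE:
        return {CORE[concept]}
    tags = set()
    for part in learned[concept]:
        tags |= grounding(part)
    return tags

# Later-gained concepts, each grounded through the semantic baseline:
define("approach", ["move", "near"])
define("flee_heat", ["move", "hot"])
```

The limitation mentioned above shows up directly: anything not expressible through the baseline (a definition mentioning an ungrounded part) is simply rejected, much as our own concepts are bounded by our senses.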

Regards,
Jiri Jelinek


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Powered by Listbox: http://www.listbox.com
