Hi Ben,

First of all, thanks for inviting me to this list. It looks really exciting.

Because this is moonlighting work for me, I typically participate in forums and post variants of my comments to my HyperAdvance blog (http://www.hyperadvance.com). Please let me know if this is not o.k. in this forum.

Before I comment, I need to let everyone on this list know that I'm probably coming at this from a much different perspective than most of the people here. I came to be interested in AI through in-depth studies of human creativity, which led me into the study of knowledge, and then into a more precise definition of knowledge creation that incorporates terms like creativity, innovation, genius, and problem solving.

That said, please excuse any technical ignorance in my postings. I've helped implement very complex systems, including expert systems, but have never personally written code to that end.

Anyway, to get to this 'friendly AI' topic, I first have to state that I see AI itself as a misnomer. I define intelligence as knowledge that is stored and retrievable, be it in/from a brain or a system. Under this definition, AI has obviously already occurred. In my opinion, the piece that has not yet occurred in a comprehensible way is 'artificial knowledge creation.' In my opinion, which is definitely not the popular view on this, the realization of artificial knowledge creation is synonymous with Singularity.

Knowledge is a complex, logical structure of symbols, chosen by a society. Natural language, which is used in our society to 'work' knowledge, is linear, while the knowledge structure itself is three-dimensional.

Looking at the natural language interface, which is weak to start with (because it is linear), AI research could easily get sidetracked with how to make a machine that can recognize and respond to natural language...and miss the point of advancing the knowledge that is behind the natural language interface.

Key point being...one can advance knowledge without too much concern for a natural language interface. And one can leave this behind entirely if a new set of symbols is chosen (for example, a new, three-dimensional language).

Artificial knowledge creation (AKC) is a more appropriate term because the intent then is to artificially advance knowledge (through knowledge creation). Granted, one intent may be to create a machine that can interact with humans, but that is simply an exercise in replication of the human paradigm and not true AI. True AI, or AKC, is the automatic advance of knowledge in a society to the limits of the artificial storage capacity.

Emotion, then, really doesn't enter into this equation. Emotion is a part of replicating the human paradigm, but does not have to be involved at all in automating or mechanizing knowledge advance. Knowledge creation appears to be serendipitous, but in reality it is a cold, hard, logical process with no feelings in it. It operates by converting questions, which are a perceived lack of knowledge structure, into knowledge, which is the logical structure of symbols. This is the process behind all the 'creativity' terms and methods across all disciplines and industries. It is very predictable and could theoretically be mechanized.
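To make that last point concrete, here is a minimal toy sketch (my own illustration, not a formalism from this discussion) of the view that knowledge is a structure of linked symbols, a question is a perceived gap in that structure, and knowledge creation mechanically converts the question into new structure. The `KnowledgeStructure` class and the fire/heat/energy symbols are entirely hypothetical examples.

```python
# Toy model: knowledge as a non-linear structure of linked symbols.
# A "question" probes whether two symbols are connected; "knowledge
# creation" fills a detected gap by adding the missing link. This is
# only an illustrative sketch of the process described above.

class KnowledgeStructure:
    def __init__(self):
        # Each symbol maps to the set of symbols it is linked to.
        self.links = {}

    def add_link(self, a, b):
        # Links are symmetric: record the relation in both directions.
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def question(self, a, b):
        # A question: is there any path of links between two symbols?
        # Breadth-unordered traversal over the link structure.
        seen, frontier = set(), [a]
        while frontier:
            node = frontier.pop()
            if node == b:
                return True
            if node in seen:
                continue
            seen.add(node)
            frontier.extend(self.links.get(node, ()))
        return False

    def create_knowledge(self, a, b):
        # "Knowledge creation": if the question reveals a gap,
        # convert it into structure by adding the missing link.
        if not self.question(a, b):
            self.add_link(a, b)


ks = KnowledgeStructure()
ks.add_link("fire", "heat")
ks.add_link("heat", "energy")
print(ks.question("fire", "energy"))  # already linked via "heat"
print(ks.question("fire", "water"))   # a gap: an open question
ks.create_knowledge("fire", "water")
print(ks.question("fire", "water"))   # the gap has been filled
```

Note that nothing in this loop is emotional or serendipitous; the gap detection and gap filling are purely mechanical, which is the sense in which the process "could theoretically be mechanized."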

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




----Original Message Follows----
From: "Ben Goertzel" <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: [email protected]
Subject: [singularity] Is Friendly AI Bunk?
Date: Sun, 10 Sep 2006 01:42:25 -0400

Hi all!

Over 50 people have subscribed to this new list since its creation a
couple days ago, which is pretty exciting.   This seems like a large
enough crew that it's worth launching a discussion.

There are a lot of things I'd like to talk with y'all about on this
list -- in fact, I'd planned to start things off with a discussion of
the possible relevance of quantum theory and John Wheeler's notion of
"It from Bit" to the Singularity.  But now I've decided to save that
for just a little later, because my friend Shane Legg posted an
interesting and controversial blog entry

http://www.vetta.org

entitled "Friendly AI is Bunk" which seems to me worthy of discussion.
Shane wrote it after a conversation we (together with my wife
Izabela) had in Genova last week.  (As a bit of background, Shane is
no AGI slacker: he is currently a PhD student of Marcus Hutter working
on the theory of near-infinitely-powerful AGI, and in the past he worked
with me on the Webmind AI project in the late 1990's, and with Peter
Voss on the A2I2 project.)

Not to be left out, I also wrote down some of my own thoughts
following our interesting chat in that Genova café (which of course
followed up a long series of email chats on similar themes), which you
may find here:

http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf

and which is also linked from (and briefly discussed in, along with a
bunch of other rambling) this recent blog entry of mine:

http://www.goertzel.org/blog/blog.htm

My own take in the above PDF is not as entertainingly written as
Shane's (a bit more technical) nor quite as extreme, but we have the
same basic idea, I think.  The main difference between our
perspectives is that, while I agree with Shane that achieving
"Friendly AI" (in the sense of AI that is somehow guaranteed to remain
benevolent to humans even as the world and it evolve and grow) is an
infeasible idea ... I still suspect it may be possible to create AGI's
that are very likely to maintain other, more abstract sorts of
desirable properties (compassion, anyone?) as they evolve and grow.
This latter notion is extremely interesting to me and I wish I had
time to focus on developing it further ... I'm sure though that I will
take that time, in time ;-)

Thoughtful comments on Shane's or my yakking, linked above, will be
much appreciated and enjoyed....  (All points of view will be accepted
openly, of course: although I am hosting this new list, my goal is not
to have a list of discussions mirroring my own view, but rather to
have a list I can learn something from.)

Yours,
Ben Goertzel

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]

