RE: [agi] Complexity of Evolving an AGI

2004-03-02 Thread Ben Goertzel


But the different trials need not be independent --- we can save the
trajectory of each AI's development continuously, and then restart a new
branch of any recorded AI x from any recorded time point y.

Also, we can intentionally form composite AI's by taking portions of AI x's
mind and portions of AI y's mind and fusing them together into a new AI z...

So we don't need to follow a strict process of evolutionary trial and error,
which may accelerate things considerably...  particularly if, as
experimentation progresses, we are able to learn abstract theories about
what makes some AI's smarter or stabler or friendlier than others.
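As an aside, for concreteness: a minimal sketch, in Python, of what such a
checkpoint-branch-fuse scheme might look like. Everything here (the AIState
class, the snapshot/branch/fuse helpers, the genome-splicing rule) is a
hypothetical illustration, not an actual Novamente mechanism:

import copy
import random

class AIState:
    """Hypothetical stand-in for one AI's complete mental state."""
    def __init__(self, genome):
        self.genome = genome      # parameters defining the mind (illustrative)
        self.age = 0              # developmental time so far

    def develop(self):
        """One tick of development/learning (placeholder dynamics)."""
        i = random.randrange(len(self.genome))
        self.genome[i] += random.gauss(0, 0.01)
        self.age += 1

def snapshot(ai):
    """Record the full trajectory point so a branch can restart from it."""
    return copy.deepcopy(ai)

def branch(checkpoint):
    """Restart a new, independently developing branch of a recorded AI."""
    return copy.deepcopy(checkpoint)

def fuse(x, y, split=0.5):
    """Form a composite AI z from portions of AI x's and AI y's minds."""
    k = int(len(x.genome) * split)
    return AIState(x.genome[:k] + y.genome[k:])

# Run one trial, recording checkpoints so later trials need not start
# from scratch -- the trials are no longer independent.
ai = AIState([random.random() for _ in range(10)])
checkpoints = []
for t in range(100):
    ai.develop()
    if t % 10 == 9:
        checkpoints.append(snapshot(ai))

retry = branch(checkpoints[3])                     # resume AI x from recorded time y
other = AIState([random.random() for _ in range(10)])
z = fuse(ai, other)                                # composite of two lineages

The point of the sketch: a failed branch costs only the development time
since its checkpoint, and fusion lets two promising lineages be combined
rather than re-evolved from scratch.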

-- Ben G

 My guess is that the bottleneck would be the time
 required to test an AI. First, the AI to be tested
 needs to develop common sense based on its interaction
 with the environment. Then it has to learn natural
 language. Only after that can we test its cognitive
 abilities. The time it takes for each trial will
 severely limit the number of instances we can test.

 YKY


 


RE: [agi] Dr. Turing, I presume?

2004-03-02 Thread Ben Goertzel

Brad wrote:

  I see your point, but I'm not so sure you're correct.

 If you're devoting resources specifically to getting some attention, you
 may indeed speed up the process.  I wish you luck.

Well, I'm not devoting resources to getting widespread attention for AGI
right now -- because the time is not right.

The time to devote resources to getting widespread attention for AGI will be
after the baby mind is done being engineered and we've started teaching
it...  I'm a bit annoyed that it's taking so long to get to that stage, but
such is life -- and, more to the point, such is the progress of highly
ambitious and complex science/engineering projects, particularly those
carried out on a part-time basis.  (Though a number of us are working on
Novamente-based projects full-time, the AGI aspects are being done part-time,
while a lot of focus is on short-term narrow-AI apps of the codebase that
can generate revenue right now.)

Getting widespread attention for AGI right now would probably be possible
via a well-coordinated publicity effort -- but it would be foolish.  The
attention would not stick well enough because of excessive skepticism on the
part of the conventional academic community, and because of the lack of a
continuous stream of ongoing exciting results.  The attention would likely
fade before we get to the teaching-baby-mind phase... and then it would be
more difficult to get the attention back.

On the other hand, a publicity storm when the baby mind starts being taught
will create attention that will stick far better -- because the criticism by
conventional academics will be more muted (assuming the baby mind has been
described in publications in the right journals, which is easy enough), and
because the baby mind's continual intelligence improvements will provide an
ongoing stream of novel fodder for the media.

Attracting and sustaining media attention is not easy, but unlike creating
AGI, it's a known science ;-)

 However even if you do get such attention, it will still take quite a
 while for the repercussions to percolate through society.

Yes, of course...

 Mike seemed to
 be implying a technological rapture with very rapid changes at
 all levels of society.

 I think that people at all levels will be slow to react, while a small
 percentage of early adopters grab hold and start creating a market.
 This belief is based on historical precedent.

Hmmm... well, I think that once the population at large becomes AGI-aware,
then the collective mind of the first-world business and scientific
community will start thinking of all sorts of AGI applications and working
really hard to make them happen.

And the speed of dissemination of AGI-awareness through society will depend
a lot on the mode of dissemination.

For example, suppose one launched an AGI in the context of a popular online
multiplayer game, say the next Everquest (whatever that may be)...  Then
a big sector of the population will get what the AGI is like very quickly.
The game's popularity will grow because the AGI is involved with the game,
and then a huge percentage of the teenage boys in the world will be highly
AGI-savvy...

What if an AGI scientist with rudimentary English conversation skills
makes a significant discovery?... and the AGI is interviewed on every
popular TV talk show (together with its dubiously photogenic creator ;)?  It
doesn't even have to be a world-shattering discovery, just something
moderately original and important, but conceived by a software system that
can talk in rudimentary English about what it discovered and why.  (Bear in
mind that some kinds of scientific discovery will in a sense be easy for
AGI's, compared to a lot of everyday tasks that seem easier to humans.)

These are just two examples of how broad AGI awareness may be quickly
raised -- there are many more...

-- Ben G




RE: [agi] AGI's and emotions

2004-03-02 Thread Gus Constan

Emotion is not sensory data but rather a product of it. From the machine
point of view, emotion is another reasoning faculty, invoked from archetypal
imprints adjusting to a sensory-cognitive pattern (the resolution process).
Emotion is the steering heuristic encapsulating the resolution domain. One
may say it's the seed of reason, or at least the path it traces.
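To make the "steering heuristic" reading concrete: a small Python sketch of
a best-first resolution loop in which an affect-like score, computed from
stored imprints, steers which candidate is pursued next. The names and the
scoring rule are illustrative assumptions only, not anyone's actual design:

import heapq

def steering_score(state, imprints):
    """Affect-like score: how strongly stored 'imprints' resonate with a state."""
    return sum(w for feature, w in imprints.items() if feature in state)

def resolve(start, goal, expand, imprints, limit=10000):
    """Best-first resolution loop; the emotional score steers which
    candidate is pursued next -- the path reason traces."""
    frontier = [(-steering_score(start, imprints), start)]
    seen = {start}
    while frontier and limit > 0:
        limit -= 1
        _, state = heapq.heappop(frontier)
        if state == goal:
            return state
        for nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-steering_score(nxt, imprints), nxt))
    return None

# Toy usage: states are strings; imprints bias exploration toward 'g' and 'o'.
imprints = {"g": 5.0, "o": 1.0}
expand = lambda s: [s + c for c in "goal"] if len(s) < 4 else []
print(resolve("", "goal", expand, imprints))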



Gus

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ben Goertzel
Sent: Wednesday, February 25, 2004 11:25 AM
To: [EMAIL PROTECTED]
Subject: RE: [agi] AGI's and emotions

Mike,

Regarding your definition of emotion: I almost agree with what you say --
BUT, I think you're missing a basic point. Emotions do involve data coming
into the cognitive centers, vaguely similarly to how perceptual data comes
into the cognitive centers. And, as with perception, emotions involve
processing that goes on in areas of the brain that are mostly opaque to the
cognitive centers. But in the case of emotion, the data comes in from a
broadly distributed set of physiological and kinesthetic indicators -- AND
from parts of the brain that are concerned with reaction to stimuli and
goal-achievement rather than just perceiving. This is qualitatively
different than data feeding in from sensors...  Emotions are more similar
to unconscious reflex actions than to sensation per se -- but they last
longer and are more broadly based than simple reflex actions...
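A rough Python sketch of the distinction being drawn here -- two input
channels, with the affective one aggregated from many bodily and
goal-related indicators and decaying slowly rather than arriving as a
discrete percept. The class and signal names are purely illustrative
assumptions, not a model of the brain or of Novamente:

class CognitiveCore:
    """Toy agent whose cognition receives two qualitatively different feeds."""

    def __init__(self, decay=0.9):
        self.percepts = []    # exteroceptive data: arrives, is processed, moves on
        self.affect = 0.0     # interoceptive signal: broadly based, slow to fade
        self.decay = decay    # emotions outlast the events that triggered them

    def sense(self, stimulus):
        # Perception: data from external sensors feeding the cognitive centers.
        self.percepts.append(stimulus)

    def feel(self, physiological_signals, goal_progress):
        # Emotion: aggregated from distributed bodily indicators AND from
        # goal-achievement machinery, not from any single sensor.
        self.affect += sum(physiological_signals) + goal_progress

    def tick(self):
        # Unlike a momentary reflex, the affective signal decays slowly.
        self.affect *= self.decay

agent = CognitiveCore()
agent.sense("red light")
agent.feel(physiological_signals=[0.2, 0.1], goal_progress=-0.5)
for _ in range(5):
    agent.tick()
print(agent.affect)   # still nonzero: the emotion lingers after the stimulus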

ben g

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of deering
Sent: Wednesday, February 25, 2004 2:19 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] AGI's and emotions



Bill, I agree with you that emotions are tied to motivation of behavior
in humans. Humans prefer the experience of some emotions and avoid the
experience of others, and therefore generate their behavior to maximize
these goals. I think this is a peculiarly biological situation and need
not be replicated in AI's. I think in AI's we have the design option to
base the motivation of behavior on more rational grounds.


Ben, I don't know if my personal definition of emotions will be of much
help, as it may not be shared by a very large community. But for what
it's worth, here it is.

MIKE DEERING'S PERSONAL DEFINITION OF EMOTIONS: Emotions are a kind of
sensory data. The sensory organ that perceives this data is the conscious
mind alone. The physical reality which generates this raw data is selected
concentrations of neurotransmitters in the brain. Their effects vary with
different types of neurons in different locations. Some types of neurons
produce more of certain kinds of neurotransmitter than other types of
neurons. Those that generate the neurotransmitters are not necessarily the
same as those that are more affected. They are also affected by other
chemicals produced by glands. It's complicated. These neurochemical
phenomena are by evolutionary design causally linked to environmental
circumstances and divided into positive and negative types. They are used,
by evolutionary design, to positively and negatively reinforce behaviors so
as to maximize and minimize the related circumstances. Emotions are not
products of cognitive processes but are rather perceptions of neurochemical
states and states of activation of selected regions of the brain. Because
of the complicated feedback arrangements in the generation of
neurotransmitters and hormones, and the neurons' role in this feedback,
some limited conscious influence can be exercised in the management of
emotions. Emotions can be generated artificially by the introduction of
various chemicals to the brain, the direct electrical stimulation of
certain neuron clusters, or direct control of environmental circumstances.
Certain physical bodily sensations are closely related to emotions: pain
to sadness, pleasure to happiness.
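Read computationally, this definition casts emotion as an internally
perceived, valenced signal used to reinforce behavior. A minimal Python
sketch of that reading follows; the state variables, update rule, and toy
environment are all illustrative assumptions, not claims about the brain or
any AI design:

import random

class NeurochemicalState:
    """The valenced internal 'raw data' that emotions are perceptions of."""
    def __init__(self):
        self.level = 0.0      # positive = pleasant, negative = unpleasant

    def react(self, circumstance_value):
        # Environmental circumstances shift the chemistry, 'by design'.
        self.level = 0.8 * self.level + circumstance_value

class Agent:
    def __init__(self, actions):
        self.chem = NeurochemicalState()
        self.value = {a: 0.0 for a in actions}   # learned worth of each action

    def perceive_emotion(self):
        # The conscious mind senses only the internal state, nothing external.
        return self.chem.level

    def act_and_learn(self, environment, lr=0.1):
        action = max(self.value, key=lambda a: self.value[a] + random.gauss(0, 0.1))
        self.chem.react(environment(action))
        # The felt valence reinforces or suppresses the behavior that produced it.
        self.value[action] += lr * self.perceive_emotion()
        return action

# Toy environment: eating tends to produce pleasant chemistry, fleeing doesn't.
env = lambda a: 1.0 if a == "eat" else -0.5
agent = Agent(["eat", "flee"])
for _ in range(50):
    agent.act_and_learn(env)
print(agent.value)    # 'eat' should end up valued above 'flee'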
