Charles,
We're still a few million miles apart :). But perhaps we can focus on
something constructive here. While, yes, I'm talking about extremely
sophisticated behaviour in essay-writing, it has generalizable features
that characterise all life. (And I think, BTW, that a dog is still
extremely sophisticated in its motivations and behaviour - your idea there
strikes me as evolutionarily naive.)
Even if a student has an extremely dictatorial instructor, following his
instructions slavishly will be, when you analyse it, a highly problematic,
open-ended affair, and no slavish matter at all - i.e. how is he to apply
some general, say, deconstructionist criticism instructions and principles
and translate them into a very complex essay?
In fact, it immediately strikes me that such essay-writing - and all
essay-writing, and most human and animal activities - will be a matter of
hierarchical goals: of, off the cuff, something very crudely like "write an
essay on Hamlet" - "decide general approach" - "use deconstructionist
approach" - "find contradictory values in Hamlet to deconstruct" - etc.
But all life, I guess, must be organized along those lines - the simplest
worm must start with something crudely like: "find food to eat" - "decide
where food may be located" - "decide approach to food location" - etc.
(which in turn will almost always conflict with opposed
emotions/motivations/goals like "get some more sleep" or "stay cuddled up
in burrow").
And even, pace Koestler and others, very simple actions, like reaching out
for food in a kitchen, can be a hierarchical affair, with only the general
direction and goal decided to begin with, and the more specific targeting
of the arm and shaping of the hand only specified at later stages of the
action.
Hierarchical goals are surely fundamental to general intelligence.
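To make the idea concrete, here is a minimal sketch (in Python; all class
and function names are my own invention, not taken from any actual AI
system) of the two ingredients described above: a goal that decomposes into
subgoals, and a crude arbitration step between competing top-level
motivations like the worm's hunger versus the pull of the burrow:

```python
# Illustrative sketch only: hierarchical goals plus arbitration
# between competing top-level motivations.

class Goal:
    def __init__(self, name, subgoals=None, desirability=1.0):
        self.name = name
        self.subgoals = subgoals or []    # ordered child goals
        self.desirability = desirability  # used only for top-level arbitration

    def plan(self):
        """Flatten the hierarchy into an ordered list of atomic steps."""
        if not self.subgoals:
            return [self.name]            # atomic goal: one concrete step
        steps = []
        for sub in self.subgoals:
            steps.extend(sub.plan())
        return steps

def arbitrate(motivations):
    """Pick the most desirable of several competing top-level goals."""
    return max(motivations, key=lambda g: g.desirability)

# The essay example, decomposed roughly as in the text above.
essay = Goal("write an essay on Hamlet", [
    Goal("decide general approach", [
        Goal("use deconstructionist approach"),
    ]),
    Goal("find contradictory values in Hamlet to deconstruct"),
])

# The worm's conflict: hunger against the comfort of the burrow.
find_food = Goal("find food to eat", [
    Goal("decide where food may be located"),
    Goal("decide approach to food location"),
], desirability=0.8)
stay_home = Goal("stay cuddled up in burrow", desirability=0.3)

print(arbitrate([find_food, stay_home]).name)  # hunger wins
print(essay.plan())
```

Of course a real mind re-plans and re-arbitrates continuously rather than
flattening the tree once up front, but the point is how little machinery
the basic hierarchical structure requires.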
Interestingly, when I Google "hierarchical goals" and AI, I get very
little - except from our immediate friends, the gamers - including this
from "Programming Game AI by Example" by Mat Buckland:
"Chapter 9: Hierarchical Goal Based Agents
This chapter introduces agents that are motivated by hierarchical goals.
This type of architecture is far more flexible than the one described in
Chapter 2 allowing AI programmers to easily imbue game characters with the
brains necessary to do all sorts of funky stuff.
Discussion, code and demos of: atomic goals, composite goals, goal
arbitration, creating goal evaluation functions, implementation in Raven,
using goal evaluations to create personalities, goals and agent memory,
automatic resuming of interrupted activities, negotiating special path
obstacles such as elevators, doors or moving platforms, command queuing,
scripting behavior."
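For anyone who wants a feel for what Buckland's goal-based agents look like
in practice: his book is in C++, but the core pattern - composite goals
that work through a list of subgoals, atomic goals that do the actual work,
and automatic resumption of interrupted activity - can be sketched in a few
lines of Python. The names and details here are mine, not Buckland's code:

```python
# Sketch of an atomic/composite goal architecture in the spirit of
# Buckland's Chapter 9. Illustrative names, not the book's actual code.

ACTIVE, COMPLETED = "active", "completed"

class AtomicGoal:
    """A leaf goal: performs one unit of work and completes."""
    def __init__(self, name):
        self.name = name
    def process(self, log):
        log.append(self.name)
        return COMPLETED

class CompositeGoal:
    """Processes its subgoals front-to-back; stays active until all finish,
    so an interrupted activity resumes where it left off."""
    def __init__(self, name, subgoals):
        self.name = name
        self.subgoals = list(subgoals)
    def process(self, log):
        while self.subgoals:
            status = self.subgoals[0].process(log)
            if status != COMPLETED:
                return ACTIVE       # subgoal interrupted: try again later
            self.subgoals.pop(0)    # subgoal done: move to the next
        return COMPLETED

log = []
patrol = CompositeGoal("patrol", [
    AtomicGoal("walk to waypoint"),
    CompositeGoal("negotiate door", [AtomicGoal("open door"),
                                     AtomicGoal("step through")]),
    AtomicGoal("walk to next waypoint"),
])
patrol.process(log)
print(log)
```

Buckland's real architecture adds goal arbitration - desirability-scoring
functions that decide which top-level goal to pursue - which is exactly the
conflicting-motivations problem the worm example above raises.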
Anyone care to comment about using hierarchical goals in AGI or elsewhere?
Charles: Flaws in Hamlet: I don't think of this as involving general
intelligence. Specialized intelligence, yes, but if you see general
intelligence at work there you'll need to be more explicit for me to
understand what you mean. Now determining whether a particular
deviation from iambic pentameter was a flaw would require a deep human
intelligence, but I don't feel that understanding of how human emotions
are structured is a part of general intelligence except at a very
strongly superhuman level - the level where the AI's theory of your mind
is on a par with, or better than, your own.
Charles,
My flabber is so ghasted, I don't quite know what to say. Sorry, I've
never come across any remarks quite so divorced from psychological
reality. There are millions of essays out there on Hamlet, each one of
them different. Why don't you look at a few?:
http://www.123helpme.com/search.asp?text=hamlet
I've looked at a few (though not those). In college I formed the definite
impression that essays on the meaning of literature were exercises in
determining what the instructor wanted. This isn't something that I
consider a part of general intelligence (except as mentioned above).
...
The reason over 70 per cent of students procrastinate when writing essays
like this about Hamlet (and the other 30-odd per cent also procrastinate
but don't tell the surveys) is in part that it is difficult to know which
of the many available approaches to take, which of the odd thousand lines
of text to use as support, and which of innumerable critics to read. And
people don't have a neat structure for essay-writing to follow. (And
people are inevitably and correctly afraid that it will all take, if not
forever, then far, far too long.)
This isn't a problem of general intelligence except at a moderately
superhuman level. Human tastes aren't reasonable ingredients for an
entry-level general intelligence. Making them a requirement merely ensures
that one will never be developed - at least not one whose development
attends to your theories of what's required.
...
In short, essay writing is an excellent example of an AGI in action - a
mind freely crossing different domains to approach a given subject from
many fundamentally different angles. (If any subject tends towards
narrow AI, it is normal as opposed to creative maths).
I can see story construction as a reasonable goal for an AGI, but at the
entry level they are going to need to be extremely simple stories.
Remember that the goal structures of the AI won't match yours, so only
places where the overlap is maximal are reasonable grounds for story
construction. Otherwise this is an area for specialized AIs, which isn't
what we are after.
Essay writing also epitomises the NORMAL operation of the human mind.
When was the last time you tried to - or succeeded in - concentrating for
any length of time?
I have frequently written essays and other similar works. My goal
structures, however, are not generalized, but rather are human. I have
built into me many special purpose functions for dealing with things like
plot structure, family relationships, relative stages of growth, etc.
As William James wrote of the normal stream of consciousness:
"Instead of thoughts of concrete things patiently following one another
in a beaten track of habitual suggestion, we have the most abrupt
cross-cuts and transitions from one idea to another, the most rarefied
abstractions and discriminations, the most unheard-of combinations of
elements, the subtlest associations of analogy; in a word, we seem
suddenly introduced into a seething caldron of ideas, where everything is
fizzling and bobbing about in a state of bewildering activity, where
partnerships can be joined or loosened in an instant, treadmill routine
is unknown, and the unexpected seems the only law."
Ditto:
The normal condition of the mind is one of informational disorder:
random thoughts chase one another instead of lining up in logical causal
sequences.
Mihaly Csikszentmihalyi
Ditto the Dhammapada, "Hard to control, unstable is the mind, ever in
quest of delight,"
When you have a mechanical mind that can a) write essays or tell stories
or hold conversations [which all present the same basic difficulties],
b) has a fraction of the difficulty concentrating that the brain does, and
therefore c) has a fraction of the flexibility in crossing domains, then
you might have something that actually is an AGI.
You seem to be placing an extremely high bar in place before you will
consider something an AGI. Accepting all that you have said, for an AGI
to react as a human would react would require that the AGI be strongly
superhuman.
More to the point, I wouldn't DARE create an AGI which had motivations
similar to those that I see clearly exposed in many people that I
encounter. It needs to be willing to defend itself, in a weak sense of
the term, but not in a strong sense of the term. If it becomes the driver
of a vehicle, it must be willing to allow itself to be killed via its own
action before it chooses to cause harm to a human. This isn't a human
goal structure (except in a very few non-representative cases that I don't
understand well enough to model).
I'm hoping for a goal structure similar to that of a pet dog, but a bit
less aggressive. (Unfortunately, I also expect it will be a lot less
intelligent. I'm going to need to depend on people to read a lot more
intelligence into it than is actually present. Fortunately people are
good at that.) The trick will be getting people to interact with it
without it having a body. This will, I hope, be an AGI because it is able
to learn to deal with new things. The emphasis here is on the general
rather than on the intelligence, as there won't be enough computer cycles
for a lot of actual intelligence. And writing an essay would be totally
out of the question. A simple sentence-based conversation is the most I
can hope for.
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
http://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com