Mike,
I took a few things from your response - even enjoyed it. One point
that was passed over too quickly was the question of "How knowable is
the world?" I take this to be a rhetorical question meant to suggest
that we need all of it in order to be considered intelligent. This
suggestion seems to be
echoed in the statement "Which brings us to HOW MANY KINDS OF
REPRESENTATIONS OF A SUBJECT.. do we need to form a comprehensive
representation of the man - "
If the implication is that we need it all, the bar is too high -
unnecessarily high.
1. We are not building God. The AGI does not need to grasp everything
there is to Ben. It doesn't have to conquer the stock market or
dominate the institutions of man. It need not have full recall of all
important historical events. It need not prefer a philosophy or know the
relative value of all things.
2. We are building a machine that performs - intelligent behavior. And
because it is to have "general" intelligent behavior, it must grow. The
growth will be done by adopting new xyz? (whatever it finds useful...)
For example, as Steve Reed increases the conversational ability of a
machine, he will be giving more capability to the unit. With this new
capability the unit will have more "choices." It will be able to
function in an environment where intelligence can be developed /
harvested and tested. Who says it won't grow from the advice of those
it converses with?
3. Confusion is amplified when there is no distinction between what it
is to be intelligent and what it is to be super intelligent. Why make it
more difficult than it already is? Why ask the fledgling performer to
do what is way beyond its capacity at inception?
4. If the big question is "will an AGI ever use images?" - we know
that it will. If the question is whether it can have human comprehension
of images, it isn't too much of a stretch to say "yeah, probably." As humans we
have rich comprehension of many things. Then again, I know many people
who think a snake is a snake and that's all they need to know.
5. A minimal system is a target. In the sense of a minimal system, I view
AGI as a narrow problem. What is the essence of intelligence? - what's
required to see intelligent behavior? (qualified to include the broader
sense of general intelligence, that is including the growth factors.)
Mike, are you saying that there is no such thing as a minimal system?
6. The problem of THIS minimal system is that it is complicated. A few
techniques and methods won't do - else such a system exhibiting general
intelligent behavior would exist and be growing today.
My point - there will continue to be *misunderstanding* if intelligence
is viewed without distinguishing "mature" from "fledgling."
I'm interested in the "minimal" system. I consider it my good fortune
to have a good seat to observe historic events - I appreciate the
project, this list, and its contributors.
Mike Tintner wrote:
Matthias: a state description could be:....
..I am in a kitchen. The door is open. It has two windows. There is a
sink. And three cupboards. Two chairs. A fly is on
the right window. The sun is shining. The color of the chair is... etc.
etc.....
..........................................................................................................
I think studying the limitations of human intelligence - or better, the
predefined innate knowledge in our algorithms - is essential to create
what you call AGI, because only with this knowledge can you avoid the
problem of huge state spaces.
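Matthias's point about huge state spaces can be made concrete with a toy sketch. The attribute names and domains below are illustrative inventions, not from any actual system: even the small kitchen description above, reduced to nine attributes with tiny finite domains, already yields thousands of distinct states, and the count multiplies with every attribute added.

```python
# A minimal sketch of why even a toy "kitchen" state description
# explodes combinatorially. Each attribute is given a small finite
# domain; the full state space is the product of the domain sizes.
from math import prod

# Hypothetical attribute domains for the kitchen scene described above.
domains = {
    "door": ["open", "closed"],
    "window_1": ["fly_present", "empty"],
    "window_2": ["fly_present", "empty"],
    "sun": ["shining", "overcast"],
    "chair_1_color": ["red", "blue", "green", "white"],
    "chair_2_color": ["red", "blue", "green", "white"],
    "cupboard_1": ["open", "closed"],
    "cupboard_2": ["open", "closed"],
    "cupboard_3": ["open", "closed"],
}

# Total number of distinct states this tiny description can be in.
state_space_size = prod(len(v) for v in domains.values())
print(state_space_size)  # 2**7 * 4**2 = 2048
```

Nine coarse attributes already give 2048 states; a realistic scene with hundreds of continuous-valued attributes is intractable to enumerate, which is the motivation for innate constraints on what an agent even tries to represent.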
You did something v. interesting, which is that you started to ground the
discussion of general intelligence. These discussions are normally
almost totally ungrounded as to the SUBJECTS/OBJECTS of intelligence.
Essentially, the underlying perspectives of discussions of GI in this
field are computational and mathematical re the MEDIUM of intelligence.
People basically think along the lines of: how much information can a
computer hold, and how can it manipulate that information? But that -
the equivalent would be something like: how much can a human brain hold
and manipulate? - is not all there is to intelligence.
What is totally missing is a philosophical and semiotic perspective. A
philosopher looks at things v. differently and asks essentially : how
much information can we get about a given subject (and the world
generally)? A semioticist asks: how much and what kinds of information
about any given subject (or the world generally) can different forms of
representation give us? (A verbal description, photo, movie, statue
will all give us different forms of info and show different dimensions
of a subject).
The AI-er asks how much information (about the world) can I and my
machine handle? The philosopher: how much information about the world
can we actually *get*? - How knowable is the world? And what do we have
to do to get and present knowledge about the world?
If you are truly serious here, I suggest, you have to look at
intelligence from both perspectives.
You took a kitchen as a possible subject to ground the discussion. Why
not take something easier to think about - to consider the difficulties
of getting to know the world - a human being. Take one at random:
http://lifeboat.com/board/ben.goertzel.jpg
What does anyone, any society or any intelligence need in order to be a)
intelligent about - or, ultimately, b) omniscient about - this man?
How many disciplines of knowledge, studying how many LEVELS OF THE
SUBJECT - levels of this man and his body, behaviour and relationships -
do we need to bring in? Presumably we need somewhere between something
and everything our culture has to offer - every branch of science -
psychology, social psychology, biopsychology, social anthropology,
behavioural economics, cognitive science, neuroscience, down to
cardiology, gastroenterology, immunology ... down to biochemistry,
molecular biology, genetics - focussing on every part of his behaviour,
and every part or subsystem of his body.
(Would you want a total systems-science view which would attempt to
integrate all their views into one totally integrated model of the man?
Our culture doesn't offer such a thing, only a piecemeal view, but maybe
you'd like to attempt one?)
And those are just the generalists. Then we really ought to bring in
somewhere between some and all kinds of the arts - they specialise in
individual portraits. Novelists, painters, sculptors, moviemakers,
cartoonists etc. A Scorsese at least to do justice to his titanic
struggles. They can all show us different dimensions of this man.
Which brings us to HOW MANY KINDS OF REPRESENTATIONS OF A SUBJECT... do
we need to form a comprehensive representation of the man - textual,
references on Google, mathematical, photographic, drawing, cartoon,
movies, statues, 3-d molecular models, holograms, tax returns, bank
statements ... and how many scientific representations - mammogram,
cardiogram, urine samples, skin samples, biopsies, blood tests...
And then how much PERSONAL INTERACTION WITH THE SUBJECT is needed?
Should you have interviewed him, worked with him, partied with him, had
sex with him? - And the SUBJECT'S RELATIONS ... should you know his
family, friends etc.?
How extensive should REPRESENTATIONS OF THE SUBJECT'S ENVIRONMENT be
... his home, office, car, beat-up chair, clothes etc. ... local
neighbourhood, town, etc.?
And what DEGREE OF EMBODIMENT should you, the knower, - or your computer
- have? Because, obviously, you can only identify with any given subject
to the extent that you have a similar/the same body. Hence philosophy's
"what's it like to be a bat?" and "how can you know *my* qualia?"
obsessions. Even God, according to some religions, had to become flesh
to know humans.
Ultimately, I suggest, PERFECTION ...near godlike knowledge and
intelligence would involve having a PERFECT REPLICA OF THE SUBJECT AND
HIS ENVIRONMENT... with total powers of investigation and vision - the
ability to look inside any part of his body or head or personal world
and find out just what was going on.
Everything short of that should be considered as DEGREES OF
INTELLIGENCE... perhaps degrees of reality.
Once you think like this - philosophically - you become more realistic
about the problems of developing intelligence of any kind. Now my
experience is that AGI-ers won't do this, because they're only prepared
to think within the disciplines they're familiar and comfortable with -
computational and mathematical, mainly. But if you're truly interested
in general intelligence, you can't be culturally insular - that should
actually be regarded as a cardinal sin - you have to have an overview of
our culture, all forms of human intelligence, and the world at large, as
well as computers - the "to-be-known" as well as the "means-of-knowing".
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
http://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com