Stan

I'm putting together a detailed paper on this, so overall it will be best to wait for that.

My posts today give the barest beginning to my thinking, which is that you start to understand the semiotic requirements for a general intelligence by thinking about the *things* that it must know about, and then look at the dimensions of things that different sign systems - maths/logic/language/schemas/still images/dynamic images - *allow* you to see.

AGI-ers and indeed most of our culture still think pre-semiotically, and aren't aware that every sign system we use is like a different set of spectacles, and focusses on certain dimensions and problems of things, but totally excludes others.

My focus is not so much on the different stages of general intelligence - an evolutionary perspective - although I do think about that. Ironically, it is people who want their AGIs to converse straight away, or variously handle language, who are actually starting, in a sense, at the godlike, superhuman end.

It actually takes human intelligence many developmental steps to proceed from being able to process simple, highly specific, concrete, here-and-now this-and-that words for objects and people in immediate scenes, to being able to think in language about vast superclasses of things and creatures spread out over zillions of scenes and billions of years past and future. The idea that you can process, say, "the history of the universe is a hard subject to think about", with the same single- or simple-level processing as "it's hard to see where the key is in this room" is an absurd illusion. Similarly, the idea that you can process all numbers and mathematical entities with the same ease is absurd. It took a long time historically for mathematicians to even dare to think about "infinity" - a taboo subject until the printing press made it something that could be in part concretely imagined.

I'll try, then, to put out a paper shortly on the semiotics - the bare minimum requirements in terms of sign systems - that I think essential to solve the main problems of AGI, and why.

But re your underlying question - I don't know how tough it will all be. My personal preference is that, as someone else just suggested, you guys should link up with some of the roboticists - you seem to need and complement each other in some ways. But however tough it is, one thing's for sure - it won't do any good to pretend it's easier. AGI will just keep banging its head into brick walls, like it has done for over 50 years. The shorter your cuts, the longer it will actually take.



----- Original Message ----- From: "Stan Nilsen" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Tuesday, April 29, 2008 6:52 PM
Subject: Re: [agi] How general can be and should be AGI?


Mike,

I derived a few things from your response - even enjoyed it. One point passed over too quickly was the question of "How knowable is the world?"

I take this to be a rhetorical question meant to suggest that we need all of it to be considered intelligent. This suggestion seems to be echoed in the statement "Which brings us to HOW MANY KINDS OF REPRESENTATIONS OF A SUBJECT.. do we need to form a comprehensive representation of the man - "

If the implication is that we need it all, the bar is too high - unnecessarily high.

1. We are not building God. The AGI does not need to grasp everything there is to Ben. It doesn't have to conquer the stock market or dominate the institutions of man. It need not have full recall of all important historical events. It need not prefer a philosophy or know the relative value of all things.

2. We are building a machine that performs - intelligent behavior. And because it is to have "general" intelligent behavior, it must grow. The growth will be done by adopting new xyz? (whatever it finds useful...) For example, as Steve Reed increases the conversational ability of a machine, he will be giving more capability to the unit. With this new capability the unit will have more "choices." It will be able to function in an environment where intelligence can be developed / harvested and tested. Who says it won't grow from the advice of those it converses with?

3. Confusion is amplified when there is no distinction between what it is to be intelligent and what it is to be super intelligent. Why make it more difficult than it already is? Why ask the fledgling performer to do what is way beyond its capacity at inception?

4. If the big deal is "will an AGI ever use images?", we know that they will. If the question is whether they can have human comprehension of images, it isn't too much of a stretch to say "yeah, probably." As humans we have rich comprehension of many things. Then again, I know many people who think a snake is a snake and that's all they need to know.

5. Minimal system is a target. In the sense of minimal system, I view AGI as a narrow problem. What is the essence of intelligence? - what's required to see intelligent behavior? (qualified to include the broader sense of general intelligence, that is including the growth factors.)

Mike, are you saying that there is no such thing as a minimal system?

6. The problem of THIS minimal system is that it is complicated. A few techniques and methods won't do - else such a system exhibiting general intelligent behavior would exist and be growing today.

My point - there will continue to be *misunderstanding* if intelligence is viewed without distinguishing "mature" from fledgling.

I'm interested in the "minimal" system. I consider it my good fortune to have a good seat to observe historic events - I appreciate the project, this list, and its contributors.




Mike Tintner wrote:
Matthias: a state description could be:....
..I am in a kitchen. The door is open. It has two windows. There is a sink. And three cupboards. Two chairs. A fly is on the right window. The sun is shining. The color of the chair is... etc. etc.....
..........
I think studying the limitations of human intelligence, or better to say the predefined innate knowledge in our algorithms, is essential to create what you call AGI. Because only with this knowledge can you avoid the problem of huge state spaces.



You did something v. interesting, which is you started to ground the discussion about general intelligence. These discussions are normally almost totally ungrounded as to the SUBJECTS/OBJECTS of intelligence.

Essentially, the underlying perspectives of discussions of GI in this field are computational and mathematical re the MEDIUM of intelligence. People basically think along the lines of: how much information can a computer hold, and how can it manipulate that information? But that - the equivalent would be something like: how much can a human brain hold and manipulate? - is not all there is to intelligence.

What is totally missing is a philosophical and semiotic perspective. A philosopher looks at things v. differently and asks essentially : how much information can we get about a given subject (and the world generally)? A semioticist asks: how much and what kinds of information about any given subject (or the world generally) can different forms of representation give us? (A verbal description, photo, movie, statue will all give us different forms of info and show different dimensions of a subject).


The AI-er asks: how much information (about the world) can I and my machine handle? The philosopher: how much information about the world can we actually *get*? - How knowable is the world? And what do we have to do to get and present knowledge about the world?

If you are truly serious here, I suggest, you have to look at intelligence from both perspectives.

You took a kitchen as a possible subject to ground the discussion. Why not take something easier to think about - to consider the difficulties of getting to know the world - a human being. Take one at random:

http://lifeboat.com/board/ben.goertzel.jpg

What does anyone, any society or any intelligence need to be a) intelligent about - or, ultimately, b) omniscient about - this man?

How many disciplines of knowledge studying how many LEVELS OF THE SUBJECT - levels of this man and his body, behaviour and relationships do we need to bring
in? Presumably we need somewhere between something and everything our
culture has to offer - every branch of science - psychology, social
psychology, biopsychology, social anthropology, behavioural economics,
cognitive science, neuroscience, down to cardiology, gastroenterology,
immunology ... down to biochemistry, molecular biology, genetics - focussing
on every part of his behaviour, and every part or subsystem of his body.
(Would you want a total systems science view which would attempt to integrate all their views into one totally integrated model of the man? Our culture doesn't offer such a thing, only a piecemeal view, but maybe you'd like to attempt one?)

And those are just the generalists. Then we really ought to bring in somewhere between some and all kinds of the arts - they specialise in individual portraits. Novelists, painters, sculptors, moviemakers, cartoonists etc. A Scorsese at least to do justice to his titanic struggles. They can all show us different dimensions of this man.

Which brings us to HOW MANY KINDS OF REPRESENTATIONS OF A SUBJECT.. do we need to form a comprehensive representation of the man - textual, references on Google, mathematical, photographic, drawing, cartoon, movies, statues, 3-d molecular models, holograms, tax returns, bank statements...

how many scientific representations - mammogram, cardiogram, urine samples,
skin samples, biopsies, blood tests...

And then how much PERSONAL INTERACTION WITH THE SUBJECT is needed. Should you have interviewed, worked with him, partied with him, had sex with him? - And the SUBJECT'S RELATIONS ... should you know his family, friends etc.?

How extensive REPRESENTATIONS OF THE SUBJECT'S ENVIRONMENT... his home, office, car, beat-up chair, clothes etc...local neighbourhood, town, etc..

And what DEGREE OF EMBODIMENT should you, the knower, - or your computer - have? Because, obviously, you can only identify with any given subject to the extent that you have a similar/the same body. Hence philosophy's "what's it like to be a bat?" and "how can you know *my* qualia?" obsessions. Even God, according to some religions, had to become flesh to know humans.

Ultimately, I suggest, PERFECTION ...near godlike knowledge and intelligence would involve having a PERFECT REPLICA OF THE SUBJECT AND HIS ENVIRONMENT... with total powers of investigation and vision - the ability to look inside any part of his body or head or personal world and find out just what was going on.

Everything short of that, should be considered as DEGREES OF INTELLIGENCE... perhaps degrees of reality.

Once you think like this - philosophically - you become more realistic about the problems of developing intelligence of any kind. Now my experience is that AGI-ers won't do this, because they're only prepared to think within the disciplines they're familiar and comfortable with - computational and mathematical, mainly. But if you're truly interested in general intelligence, you can't be culturally insular - that should actually be regarded as a cardinal sin - you have to have an overview of our culture, all forms of human intelligence, and the world at large, as well as computers - the "to-be-known" as well as the "means-of-knowing".






-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com









