Terren,

You may be right - in the sense that I would have to just butt out of certain conversations, to go away & educate myself.

There's just one thing here though - and again this is a central philosophical difference this time concerning the creative process.

Can you tell me which kind of programming is necessary for which end-problem[s] that general intelligence must solve? Which kind of programming, IOW, can you *guarantee* me will definitely not be a waste of my time (other than by way of general education)? Which kind are you *sure* will help solve which unsolved problem of AGI?

P.S. OTOH the idea that, in the kind of general community I'm espousing (and which is beginning to crop up in other areas), everyone must be proficient in everyone else's speciality is actually a non-starter, Terren. It defeats the object of the division of labour that is central to all parts of the economy. If you had to spend as much time thinking about those end-problems as I have, I suggest you'd have to drop everything. Let's just share expertise instead?


Terren: Good summary. I think your point of view is valuable in the sense of helping engineers in AGI to see what they may be missing. And your call for technical AI folks to take up the mantle of more artistic modes of intelligence is also important.

But it's empty, for you've demonstrated no willingness to cross over to engage in technical arguments beyond a certain, quite limited, depth. Admitting your ignorance is one thing, and it's laudable, but it only goes so far. I think if you're serious about getting folks (like Pei Wang) to take you seriously, then you need to also demonstrate your willingness to get your hands dirty and do some programming, or in some other way abolish your ignorance about technical subjects - exactly what you're asking others to do.

Otherwise, you have to admit the folly of trying to compel any such folks to move from their hard-earned perspectives, if you're not willing to do that yourself.

Terren


--- On Sun, 9/7/08, Mike Tintner <[EMAIL PROTECTED]> wrote:

From: Mike Tintner <[EMAIL PROTECTED]>
Subject: [agi] Philosophy of General Intelligence
To: agi@v2.listbox.com
Date: Sunday, September 7, 2008, 6:26 PM
Jiri: Mike,

If you think your AGI know-how is superior to the know-how of those who already built testable thinking machines then why don't you try to build one yourself?

Jiri,

I don't think I know much at all about machines or software & never claim to. I think I know certain, only certain, things about the psychological and philosophical aspects of general intelligence - especially, BTW, about the things you guys almost never discuss: the kinds of problems that a general intelligence must solve.

You may think that your objections to me are entirely personal - about my manner. I suggest that there is also a v. deep difference of philosophy involved here.

I believe that GI really is about *general* intelligence - a GI (and the only serious example we have is human) is, crucially, and must be, able to cross domains - ANY domain. That means the whole of our culture and society. It means every kind of representation, not just mathematical and logical and linguistic, but everything - visual, aural, solid, models, embodied etc etc. There is a vast range. That means also every subject domain - artistic, historical, scientific, philosophical, technological, politics, business etc. Yes, you have to start somewhere, but there should be no limit to how you progress.

And the subject of general intelligence is therefore in no way just the property of a small community of programmers, or roboticists - it's the property of all the sciences, incl. neuroscience, psychology, semiology, developmental psychology, AND the arts and philosophy etc. etc. And it can only be a collaborative effort. Some robotics disciplines, I believe, do think somewhat along those lines and align themselves with certain sciences. Some AI-ers also align themselves broadly with scientists and philosophers.

By definition, too, general intelligence should embrace every kind of problem that humans have to deal with - again artistic, practical, technological, political, marketing etc. etc.

The idea that general intelligence really could be anything else but truly general is, I suggest, if you really think about it, absurd. It's like preaching universal brotherhood, and a global society, and then practising severe racism.

But that's exactly what's happening in current AGI. You're actually practising a highly specialised approach to AGI - only certain kinds of representation, only certain kinds of problems are considered - basically the ones you were taught and are comfortable with - a very, very narrow range (to a great extent in line with the v. narrow definition of intelligence involved in the IQ test).

When I raised other kinds of problems, Pei considered it not "constructive." When I recently suggested an in fact brilliant game for producing creative metaphors, DZ considered it "childish," because it was visual and imaginative, and you guys don't do those things, or barely. (Far from being childish, that game produced a rich series of visual/verbal metaphors, where AGI has produced nothing.)

If you aren't prepared to use your imagination and recognize the other half of the brain, you are, frankly, completely buggered as far as AGI is concerned. In over 2000 years, logic and mathematics haven't produced a single metaphor or analogy or crossed any domains. They're not meant to; that's expressly forbidden. But the arts produce metaphors and analogies on a daily basis by the thousands. The grand irony here is that creativity really is - from a strictly technical pov - largely what our culture has always said it is: imaginative/artistic and not rational. (Many rational thinkers are creative - but by using their imagination.) AGI will in fact only work if sciences and arts align.

Here, then, is basically why I think you're getting upset over and over by me. I'm saying, in many different ways, that general intelligence really should be general, and embrace the whole of culture and intelligence, not just the very narrow sections you guys espouse. And yes, I think you should be delighted to defer to, and learn from, "outsiders" (if they deserve it), just as I'm delighted to learn from you. But you're not - you resent outsiders like me telling you about "your" subject.

I think you should also be prepared to admit your ignorance - and most of you, frankly, don't have much of a clue about imaginative/visual/artistic intelligence and vast swathes of problemsolving (just as I don't have much of a clue re your technology and many kinds of problemsolving... etc). But there is v. little willingness to admit ignorance, or to acknowledge the value of other disciplines.

In the final analysis, I suggest, that's just sheer cultural prejudice. It doesn't belong in the new millennium, when the defining paradigm is global (and general) as opposed to the local (and specialist) mentality of the old one - recognizing the value and interdependence of ALL parts of society and culture. And it doesn't belong in a true field of *General* Intelligence. I think you need to change your central philosophy in a major way and be culturally open-minded. (And then just possibly you won't find me quite so upsetting.)



