In response to the below post from Mike Tintner of 10/4/2007 12:33 PM:

You talk about the Cohen article I quoted as perhaps leading to a major
paradigm shift, but actually much of its central thrust is similar to
ideas that have been around for decades.  Cohen's gists are surprisingly
similar to the scripts Schank was talking about circa 1980.

Again, I think the major paradigm shift needed for AGI is not so much some
new idea that blows everything away, but rather a realization that most of
the basic problems in AI have actually been solved at a conceptual level;
an appreciation of the power of the concepts we already have; an
understanding of what they could do if put together and run on brain-level
hardware with human-level world knowledge; and a focus on learning how to
pick the right components from all these ideas and get them to work well
together automatically on such really powerful hardware.

As Goertzel points out in his articles on Novamente -- and as anyone who
has thought about the problem understands -- even with brain-level hardware
you have to come up with good, context-appropriate schemes for distributing
the computational power you have to where it is most effective.  This is
because no matter how great your computational power is, it will always be
infinitesimal compared to the massively combinatorial space of possible
inferences and computations.  There are lots of possible schemes for how
to do this, including sophisticated probabilistic inference and
context-specific importance weighting.  But until I see results from actual
world-knowledge-size systems running with various such algorithms, I can't
begin to judge how hard a problem it is to get things to work well.
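One simple way to picture such context-specific importance weighting is as
budgeted allocation: score each candidate inference by its
context-weighted importance, then spend a fixed computational budget
greedily on the most important ones.  Here is a minimal sketch in Python
(the function name, task tuples, and numbers are all hypothetical
illustrations, not from any actual system):

```python
import heapq

def allocate_inference(tasks, budget):
    """Greedy sketch: spend a fixed computational budget on the
    candidate inferences with the highest context-weighted importance.

    tasks  -- list of (importance, cost, name) tuples
    budget -- total computation available
    """
    # Max-heap on importance (heapq is a min-heap, so negate).
    heap = [(-imp, cost, name) for imp, cost, name in tasks]
    heapq.heapify(heap)
    chosen, spent = [], 0
    while heap:
        neg_imp, cost, name = heapq.heappop(heap)
        if spent + cost <= budget:  # skip tasks that would bust the budget
            chosen.append(name)
            spent += cost
    return chosen

# With a budget of 9, the two most important inferences fit but the
# third does not:
print(allocate_inference([(0.9, 5, "a"), (0.5, 3, "b"), (0.8, 4, "c")], 9))
```

Real systems would of course re-estimate importance as context shifts,
rather than scoring once up front; this only illustrates the budget
constraint itself.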

Regarding your disappointment that Cohen's schemas operate at something
close to a predicate-logic level -- far removed from the actual sensations
from which one would think they would be derived -- I expressed a similar
sentiment in the response of mine to which you are now replying.  A good
human-level system should have much more visual grounding, and much more
sophisticated grounding at that.  But that is not meant as a criticism of
Cohen's work, because he is trying to get stuff done on relatively small
hardware.

At the risk of repeating myself, check out the visual grounding in Thomas
Serre's great article about a visual recognition system
(http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf).
I have cited this article multiple times in the last week, but it blows me
away.  It does not pretend to explain the whole story of visual
recognition, but it explains a lot of it.  It gives a very good picture of
the hierarchical memory that Jeff Hawkins and many others are heralding as
the solution to much of the non-literal match problem (previously one of
the major problems in AI); it gives a pretty good feel for the types of
grounding our brains actually use; it demonstrates, through simulation,
the importance of computer power in understanding the brain; and it
describes a damn powerful little system.  To the extent that there are new
paradigms, this article captures a few of them.

You will note that the type of hierarchical representation used in Serre's
paper would not normally compare views of similar objects at the pixel
level, but at levels higher up in its hierarchical memory scheme that are
derived, separately for each view, from pixel-level mappings.  So schemas
of the type Cohen talks about, if operating at a semantic level on top of
a hierarchical representation like that used by Serre, would not be
operating at anything close to the pixel level, but they could be quickly
mapped to, or from, the pixel level and the intermediate levels in
between.  Implications from such intermediate representations could be
combined with those from the semantic level to improve semantic inference
from visual information.  They could also be used to imagine, from such
intermediate representations, semantically relevant information, such as
generalizations about how, or whether, a context-appropriate view of an
object would fit in a given context.  (I don't think Serre focuses much on
top-down processing, except mainly for inhibition of less relevant upward
flow, but there has been much work on top-down information flows, so it is
not hard to imagine how they could be mapped into his system.)
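To make "comparison above the pixel level" concrete, here is a toy 1-D
sketch of the kind of alternating template-matching and max-pooling layers
used in hierarchies of this sort (the function names, templates, and toy
images are mine, not Serre's).  Two views that differ pixel by pixel end
up with identical pooled signatures, so matching happens higher up the
hierarchy:

```python
def local_responses(image, templates):
    """S-layer sketch: response of each small template at every position
    of a 1-D 'image' (similarity = negative squared distance)."""
    responses = []
    for t in templates:
        row = []
        for i in range(len(image) - len(t) + 1):
            patch = image[i:i + len(t)]
            row.append(-sum((p - q) ** 2 for p, q in zip(patch, t)))
        responses.append(row)
    return responses

def pooled_signature(image, templates):
    """C-layer sketch: max-pool each template's responses over all
    positions, giving a position-invariant signature higher up the
    hierarchy."""
    return [max(row) for row in local_responses(image, templates)]

def similarity(sig_a, sig_b):
    """Compare two views at the pooled level, not the pixel level."""
    return -sum((a - b) ** 2 for a, b in zip(sig_a, sig_b))

# Two shifted views of the same pattern disagree pixel by pixel,
# yet their pooled signatures match exactly:
templates = [[1, 2], [2, 0]]
view_a = [0, 1, 2, 0, 0]
view_b = [0, 0, 1, 2, 0]
print(pooled_signature(view_a, templates) ==
      pooled_signature(view_b, templates))
```

The real model stacks several such S/C pairs and learns its templates,
but the invariance mechanism -- pool locally, compare globally -- is the
same idea.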

As the Serre article shows, grounding semantic representations in the
pixel level -- and, more importantly, in the many levels between the
semantic and the pixel level -- is possible with today's hardware in
limited domains.  It should be fully possible across all sensory domains
with the much more powerful hardware that the Serres of the future will be
working on.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 12:33 PM
To: [email protected]
Subject: Re: [agi] breaking the small hardware mindset


Edward P: I skimmed "LGIST: Learning Generalized Image Schemas for
Transfer Thrust D Architecture Report", by Carole Beal and Paul Cohen at
the USC Information Sciences Institute.  It was one of the PDFs listed on
the web link you sent me (at
http://eksl.isi.edu/files/papers/cohen_2006_1160084799.pdf).  It was
interesting and valuable.  I found its initial few pages a good statement
of some solid AI ideas.  Its idea of splitting states based on entropy is
a good one, one that I myself have considered as a guide for when and
where in semantic space to split models and how to segment temporal
representations.

Thanks for pointing this out.  My v. quick impression is that it is a
step, at least, toward a major paradigm shift (although all your
criticisms may be valid). [JWJohnston - I only saw this site literally
today after I had posted]

However - correct me - their "image schemas" are symbolically represented.

They are not true image schemas in my or Lakoff/Mark Johnson's terms.

I believe one of the central sources of the brain's adaptive power is the
ability to represent, manipulate and compare visual and other kinds of
graphics/"image schemas" directly. IOW it can represent "an agent goes to
a place" as (speaking very roughly):

outline graphics of - a circle or similar (for "agent") - an arrow (for
"goes to") - another circle or square (for "place").

(AFAICT this is consistent with Lakoff & Johnson's thinking).

If I ask you or any human to tell me a story about "an agent going to a
place", you will, of course, be able to tell me a virtually infinite
number of stories - a prime example of the brain's adaptive power and
ability to draw analogies.

That ability, I believe, derives from being able to directly, visually
transform a circle or similar into almost any object or creature. Thus you
will be able to tell me a story about a human/man/woman/rabbit/snake etc.
for your "agent." That ability can also visually transform an arrow or
similar into any form of object movement - into, say, a human
running/walking/driving/riding a bus etc. for "goes to" - and can
transform a square into a skyscraper/town/shop/forest etc. for "place."

(One obvious piece of evidence for this is the brain's ability to see any
objects, whatever their shape, as balls on an abacus - it's the foundation
of our ability to count objects and do maths.)

But all this - as I understand - is beyond digital computers. They can't
handle visual shapes directly - no "imagination." And that is just one of
the absolute brick walls AGI faces, which no amount of tweaking will
overcome.

P.S. I have to say I wasn't that impressed by the other 2 papers of Cohen
linked by JWJohnston. But thanks also for pointing them out.




-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;