Mike,

Copycat and Shruti are small systems, and thus limited.  (Although as of
several years ago there was an implementation of Shruti on a Connection
Machine with, I think, over 100K relation nodes.)

But when I read papers I try to focus on the aspects that teach me
something valuable, rather than on their limitations.  Copycat helped
clarify my thinking on how AI can best find analogies, particularly with
its notion of coordinated context-specific slippage, and I found its
codelet-based control scheme very interesting and very parallelizable.
Shruti helped clarify for me the concept of passing bindings through
implications.  I also liked the way it indicated how binding might operate
through synchrony in the human mind, and I found its concept of reflexive
thinking interesting.  Both have been valuable in showing me the path
forward.

From what I know of Goertzel's work I am very impressed.  I think drawing
analogies should be child's play for Novamente.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Friday, October 12, 2007 8:32 PM
To: [email protected]
Subject: Re: [agi] Do the inference rules.. P.S.


Edward,

Thanks again for a detailed response (I really do appreciate it).

Your interesting examples of systems confirm my casual impressions of what
can actually be done -  and my reluctance to shell out money on "Fluid
Concepts." Inferences like MARY DOES OWN A BOOK and those from "IF I
CHANGED “ABC TO ABD”, HOW WOULD YOU (THE COPYCAT) MAKE AN ANALOGOUS CHANGE
TO “MRRJJJ”" strike me as fairly trivial, though by no means useless, and
not really AGI. (Yes, those are what I mean by "purely symbolic" systems,
although I take your point that there are no absolute boundaries between
different kinds of signs and particularly sign systems - even networks of
symbols are used in complex ways that are not just symbolic).

Inferences like those you mention in:

IF YOU ASKED SUCH A SYSTEM WHAT LOVE BETWEEN A MAN AND A WOMAN WAS, IT
WOULD BE ABLE TO GIVE YOU ALL SORTS OF MEANINGFUL GENERALIZATIONS ABOUT
WHAT LOVE WAS, BASED ON ALL THE DESCRIPTIONS OF LOVE AND HOW IT MAKES
CHARACTERS ACT IN THE BOOKS IT HAS READ.

might be v. productive and into AGI territory, but I note that you are
talking hypothetically, not about real systems.

Ben, if you followed our exchange, has claimed a v. definite form of true
AGI analogy - his system inferring, from being able to "fetch", how to
play hide-and-seek. I would like more explanation and evidence, though.
But that's the sort of inference/analogy I think we should all be talking
about.

Vis-a-vis neuroscience & what it tells us about what information is laid
down in the brain, & in what form, I would be vastly more cautious than
you. For instance, we see images as properly shaped, right? But the images
on the retina have a severely distorted form - see Hawkins's photo in On
Intelligence. So where in the brain or in the world is the properly shaped
image? (The main point of that question is simply: hey, there's still
masses we don't know - although if you have an answer, I'd be v.
interested).

P.S. I hope you receive my privately emailed post with the definitions you
requested.

----- Original Message -----
From: Edward W.  <mailto:[EMAIL PROTECTED]> Porter
To: [email protected]
Sent: Friday, October 12, 2007 5:49 PM
Subject: Re: [agi] Do the inference rules.. P.S.


IN RESPONSE TO MIKE TINTNER’S Thu 10/11/2007 11:47 PM POST.  AGAIN MY
RESPONSE IS IN BLUE ALL CAPS.
=================================================

Edward,

Thanks for interesting info - but if I may press you once more. You talk
of different systems, but you don't give one specific example of the kind
of useful (& significant for AGI) inferences any of them can produce -as I
do with my cat example. I'd especially like to hear of one or more from
Novamente, or Copycat.

TO THE BEST OF MY KNOWLEDGE THERE IS NO AGI THAT CURRENTLY DOES ANYTHING
CLOSE TO HUMAN LEVEL INFERENCING, IF THAT IS WHAT YOU MEAN.  NOVAMENTE IS
THE CLOSEST THING I KNOW OF.  BUT, UNFORTUNATELY, AS OF THIS WRITING, I
DON’T KNOW ENOUGH ABOUT IT TO KNOW EXACTLY HOW POWERFUL ITS CURRENT
CAPABILITIES ARE.

BAYESIAN NETS ARE CURRENTLY USED TO DO A TON OF USEFUL INFERENCING OF A
GENERAL TYPE THAT COULD BE VALUABLE TO AGI.  FOR EXAMPLE, A LOT OF
COMPUTER-BASED DIAGNOSTIC SYSTEMS USE THEM.  BAYESIAN NETS HAVE SOME
LIMITS, BUT THE LIMITS ARE BEING LOOSENED BY BRIGHT PEOPLE LIKE DAPHNE
KOLLER (REALLY BRIGHT!).  I ATTENDED A LECTURE SHE GAVE AT MIT ABOUT A
YEAR AND A HALF AGO IN WHICH SHE TALKED ABOUT HER GROUP'S WORK ON GETTING
BAYESIAN NETS TO HANDLE RELATIONAL REASONING, SOMETHING THAT WOULD
SUBSTANTIALLY INCREASE THEIR POWER.  SHE HAS ALSO DONE WORK ON INTRODUCING
BAYESIAN INFERENCING INTO FORMAL LOGICS.
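To make the diagnostic flavor concrete, here is a minimal sketch of the kind of calculation a single node in such a diagnostic net performs: just Bayes' rule on invented numbers. It is nothing like Koller's relational models; every probability below is made up for illustration.

```python
# Toy diagnostic inference via Bayes' rule (all numbers invented).
p_disease = 0.01              # prior P(disease)
p_sym_given_d = 0.9           # P(symptom | disease)
p_sym_given_not_d = 0.05      # P(symptom | no disease)

# Marginal probability of observing the symptom at all.
p_sym = p_sym_given_d * p_disease + p_sym_given_not_d * (1 - p_disease)

# Posterior: how likely is the disease once the symptom is seen?
p_d_given_sym = p_sym_given_d * p_disease / p_sym
print(round(p_d_given_sym, 3))   # 0.154
```

Even with a 90%-sensitive symptom, the low prior keeps the posterior modest, which is exactly the sort of correction these systems automate.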


SHRUTI (DESCRIBED IN “ADVANCES IN SHRUTI -- A NEURALLY MOTIVATED MODEL
..’, BY LOKENDRA SHASTRI (A REALLY GREAT PIECE OF WORK)) PROVIDES A
SYSTEM IN WHICH INFORMATION IS REPRESENTED IN A PREDICATE LOGIC FORM, IN
GENERALIZATION HIERARCHIES, AND WITH A FORM OF PROBABILISTIC IMPLICATION.
THIS SYSTEM ANSWERS A QUESTION, SUCH AS “DOES MARY OWN A BOOK,” BY
APPROPRIATELY REMEMBERING (THROUGH PROBABILISTIC SPREADING ACTIVATION)
THAT “JOHN GAVE MARY BOOK 17”, THAT IF A DONOR GIVES A RECIPIENT
SOMETHING, THE RECIPIENT WILL NORMALLY THEN OWN THAT SOMETHING, AND, THUS,
THAT MARY OWNS BOOK 17, AND THAT BOOK 17 IS A BOOK, AND, THUS, FINALLY,
THAT MARY DOES OWN A BOOK.  SHRUTI HAS LIMITS, BUT IT IS CAPABLE OF
PERFORMING INTERESTING INFERENCES.
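The inference chain just described can be sketched in a few lines of symbolic forward chaining. This is only a toy illustration of passing bindings through an implication, not Shastri's actual spreading-activation architecture; the facts and names come from the example above.

```python
# Toy binding-propagation sketch (not Shruti's real architecture).
# Fact: give(John, Mary, Book17); type link: Book17 isa book.
facts = {("give", "John", "Mary", "Book17")}
isa = {"Book17": "book"}

# Rule: give(donor, recipient, x) -> own(recipient, x).
# The recipient/x bindings flow through the implication.
derived = {("own", r, x) for (pred, d, r, x) in facts if pred == "give"}

def owns_a(person, kind):
    # Does `person` own anything whose category is `kind`?
    return any(p == person and isa.get(x) == kind
               for (_, p, x) in derived)

print(owns_a("Mary", "book"))   # True
```

The generalization hierarchy (Book 17 is a book) is what turns "Mary owns Book 17" into the requested "Mary does own a book."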

Can you think of a single analogy or metaphor, in addition, that is purely
symbolic?

LIKE BEN, I DON’T KNOW HOW YOU ARE USING YOUR TERMS.  FOR EXAMPLE, WHAT DO
YOU MEAN BY “PURELY SYMBOLIC.”  (A QUESTION I HAVE ASKED BEFORE.)

COPYCAT MADE ANALOGIES BETWEEN STRINGS OF CHARACTERS, WHICH ARE SYMBOLIC.
AN EXAMPLE OF THE TYPE OF PROBLEM IT HANDLED IS:  IF I CHANGED “ABC TO
ABD”, HOW WOULD YOU (THE COPYCAT) MAKE AN ANALOGOUS CHANGE TO “MRRJJJ”.
THE COMPUTER PROGRAM CAME UP WITH MULTIPLE DIFFERENT ANSWERS.  ONE OF THEM
WAS “MRRJJJ TO MRRJJJJ”.  THE DESCRIPTION OF THE SYSTEM I KNOW IS IN
CHAPTER 5 OF DOUGLAS HOFSTADTER’S FLUID CONCEPTS AND CREATIVE ANALOGIES,
PUBLISHED IN 1995 BY BASIC BOOKS.

THIS SYSTEM IS ARGUABLY “PURELY SYMBOLIC,” BUT ITS ALGORITHM DOES EMBODY
SOME LIMITED FORM OF SEMANTICS, SUCH AS THE RELATIONS OF INDIVIDUAL
LETTERS TO ALPHABETICAL ORDER, NOTIONS OF INDIVIDUAL SIMPLE INTEGERS AND
THEIR ORDERING, THE NOTION OF ORDERING IN GENERAL, THE NOTION OF BEFORE
AND AFTER IN AN ORDERING, THE NOTION OF NEXT-TO IN AN ORDERING INDEPENDENT
OF DIRECTION, ETC.  THIS KNOWLEDGE IS REPRESENTED IN A SLIPNET, WHICH
VARIABLY DEFINES MEASURES OF SIMILARITY BETWEEN ITS CONCEPTS.
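For contrast with the real system, the most literal reading of the problem above fits in a few lines: "abc -> abd" taken as "replace the last letter with its alphabetic successor." The actual Copycat explores many competing readings in parallel via its slipnet and codelets, including the "lengthen the last group" reading that yields the MRRJJJJ answer quoted above; this sketch captures only the single literal one.

```python
# Toy sketch of the most literal Copycat reading (not the real system).
def successor(c):
    # Next letter in the alphabet ("a" -> "b", "j" -> "k", ...).
    return chr(ord(c) + 1)

def literal_analogy(target):
    # Apply the rule "change the last letter to its successor".
    return target[:-1] + successor(target[-1])

print(literal_analogy("mrrjjj"))   # mrrjjk
```

What makes Copycat interesting is precisely that it is not satisfied with this answer: slippage in the slipnet lets "last letter" compete with "last group" and "successor letter" with "successor length."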

DOES THAT MATCH YOUR DEFINITION OF PURELY SYMBOLIC?

I ASSUME NOVAMENTE WOULD BE QUITE CAPABLE OF MAKING ANALOGIES.  I ALSO
ASSUME THERE HAS BEEN MUCH MORE WORK ON ANALOGY, BUT OFF THE TOP OF MY
HEAD I DON’T KNOW OF IT, OTHER THAN NARS AND NOVAMENTE.


I don't, and didn't, deny that logical thought is important. But it's only
a small part, I'm arguing, of most productive, AGI-type reasoning.


IT DEPENDS ON WHAT YOU MEAN BY LOGICAL THOUGHT.  IF ALL REASONED INFERENCE
IS LOGIC, THEN LOGIC WOULD BE QUITE IMPORTANT TO AGI-TYPE REASONING. EVEN
REASONED INFERENCES FROM IMAGES COULD BE VIEWED AS LOGICAL THOUGHT.

A POWERFUL AGI WOULD PRESUMABLY INVOLVE MASSIVELY PARALLEL LOGICAL
INFERENCING (AS INDICATED BY THE TERM “PROBABILISTIC LOGIC NETWORKS” FOR
NOVAMENTE’S MAJOR INFERENCING MECHANISM) AND WOULD USE IT NOT ONLY FOR
THINGS TRADITIONALLY CONSIDERED LOGICAL REASONING, BUT ALSO FOR MUCH MORE
SUBTLE THINGS LIKE USING CONTEXT TO CHANGE THE PROBABLE INTERPRETATION OF
A SENTENCE, SUBCONSCIOUS THOUGHT, AND INTUITIVE FEELINGS OR INSIGHTS.
THUS, LOGICAL INFERENCE IN AN AGI CAN BE USED FOR MUCH MORE THAN WHAT HAS
TRADITIONALLY BEEN CONSIDERED “LOGIC” OR EVEN “REASONING”, AND WOULD
LIKELY BE A VITAL PART OF THE THINKING OF ANY POWERFUL AGI.

Nor, BTW, am I arguing at all against symbols (you might care to look
at the "Picture Tree" thread I started a few months ago to better
understand my thinking here) - the brain (and any true AGI, I believe)
uses symbols, outline graphics [or Johnson's image schemata] and images in
parallel, interdependently and continuously, to reason about the world.
(Note: "continuously." You seem to think that some occasional sensory
grounding of an AGI system here and there will do. No, I'm arguing, it has
to be, and is, continuous and applied to all information and subjects.)


I THINK IT IS OBVIOUS THAT CERTAIN TYPES OF LEARNING REQUIRE REASONABLY
HIGH TEMPORAL RESOLUTION IN THE REPRESENTATIONS THEY LEARN FROM.  A PRIME
EXAMPLE WOULD BE HUMAN MOTOR CONTROL.

What I am arguing against, rather than symbols, is what you might call
the "bookroom illusion" - which you saw graphically illustrated in John
Rose's post - the illusion that you can "learn about the world just from
books" - or, to be precise, that you can learn, and think about and build
models of the world with symbols/text alone. It's an understandable
illusion given that we often spend hours apparently doing nothing but
reading text. But it is an illusion. The brain does, and has to,
continuously make sense (in images) of everything we read. And if it
can't, then that text won't make sense - & it's a case of "I can't see
what you are talking about."

(1) WITH REGARD TO WHETHER AN AGI HAS TO MAKE SENSE USING IMAGES -- I
DON’T KNOW HOW YOU ARE DEFINING IMAGES.

IT IS NOT CLEAR TO ME THAT ALL POWERFUL, CREATIVE, ADAPTIVE, THINKING,
INTUITIVE AGI’S REQUIRE 2- OR 3-D SPATIAL MODELS REPRESENTING SENSATIONS
OBTAINED BY VISION, TOUCH, OR STEREOPHONIC SPATIAL SENSING.

AGAIN, FOR EXAMPLE, I REFER TO THE TYPE OF AGI REFERRED TO IN MY Thu
10/11/2007 8:14 AM POST, AN AGI WHOSE WORLD IS LIMITED TO A PROGRAMMING
LANGUAGE, PROGRAMS IT HAS CREATED IN THAT LANGUAGE, THE RESULTING OUTPUT
IN ITS WORKSPACE FROM THAT PROGRAM, ITS OBSERVATIONS OF THE CHANGES TO THE
WORKSPACE MADE BY ITS VARIOUS PROGRAMS, AND HOW WELL THE CHANGES SATISFY
ITS GOALS AND VALUES.

I SPEND MUCH OF MY THOUGHT ABOUT AGI THINKING ABOUT REPRESENTATION IN WHAT
I CALL SEMANTIC HYPERSPACE.  THIS IS THE SPACE OF PATTERNS AND
RELATIONSHIPS BETWEEN PATTERNS, INCLUDING TEMPORAL RELATIONS.  IT IS A
HYPERSPACE BECAUSE IT CAN CONTAIN MILLIONS OR BILLIONS OF PATTERNS, EACH
OF WHICH IS A POTENTIAL DIMENSION, AND EACH OF ITS SUCCESSION OF
ACTIVATION STATES HAS A POTENTIAL COMBINATORIAL EXPLOSION OF POSSIBLE
STATES, AND THE SPACE DEFINED BY A SEQUENCE OF JUST TWO SUCH STATES IS A
COMBINATORIAL EXPLOSION TIMES A COMBINATORIAL EXPLOSION.  SO WE ARE NOT
TALKING ABOUT ANYTHING EVEN CLOSE TO A 2- OR 3-DIMENSIONAL SPACE, NOR
ANYTHING EVEN CLOSE TO A 2-D IMAGE.
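The combinatorics here are easy to check. Assuming, purely for illustration, a million binary pattern nodes, the number of possible activation states is 2 to the millionth power; counting its decimal digits is enough to make the point without ever writing the number down.

```python
import math

N = 1_000_000  # a million binary pattern nodes (illustrative figure)

# Decimal digits of 2**N, computed without materializing 2**N itself.
digits_one_state = math.floor(N * math.log10(2)) + 1

# A sequence of just two states squares the count, doubling the digits.
digits_two_states = math.floor(2 * N * math.log10(2)) + 1

print(digits_one_state)    # 301030
print(digits_two_states)   # 602060
```

A single activation state already needs a number over 300,000 digits long to count its possibilities; nothing remotely like a 2-D or 3-D space.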

EVEN SHRUTI (CITED ABOVE), WHEN SOLVING THE “DOES MARY OWN A BOOK”
PROBLEM, CREATES A TEMPORARY NETWORKED ACTIVATION STATE IN A SIMPLE
SEMANTIC SPACE.  I AM ATTACHING A PDF OF A MARK-UP I MADE FROM THE ABOVE
QUOTED ARTICLE ABOUT SHRUTI.  THE MARK-UP BETTER ILLUSTRATES THE
ACTIVATION NET IT CREATES IN SOLVING THIS PROBLEM.

WOULD THE NETWORKED REPRESENTATIONS AN AGI MAKES IN SUCH A SEMANTIC
HYPERSPACE BE “IMAGES” OF THE TYPE YOU ARE ADVOCATING?  IF SO, IMAGES ARE
IMPORTANT TO MY VISION OF AGI.  BUT SUCH IMAGES CAN BE CREATED AND
REPRESENTED IN “LOGICAL” SYSTEMS, SUCH AS SHRUTI.

(2)  WITH REGARD TO BOOKWORLD -- IF ALL THE WORLD’S BOOKS WERE IN
ELECTRONIC FORM AND YOU HAD A MASSIVE AMOUNT OF AGI HARDWARE TO READ THEM
ALL, I THINK YOU WOULD BE ABLE TO GAIN A TREMENDOUS AMOUNT OF WORLD
KNOWLEDGE FROM THEM, AND THAT SUCH WORLD KNOWLEDGE WOULD PROVIDE A
SURPRISING AMOUNT OF GROUNDING AND BE QUITE USEFUL.

BUT, AS I SAID IN MY 10/11/2007 7:33 PM  POSTING, I THINK THERE WOULD
PROBABLY BE SOME PRETTY BIG GAPS IN ITS UNDERSTANDING. SUCH A SYSTEM WOULD
PROBABLY BE BRILLIANT IN SOME WAYS AND REALLY DUMB IN OTHERS.

WHAT YOU SEEM TO FAIL TO UNDERSTAND IS THAT A REALLY POWERFUL BRAIN HAS
AN ABILITY TO CREATE A SENSE OF REALITY OUT OF BITS AND BYTES, IF THOSE
BITS AND BYTES ARE PROPERLY GROUNDED IN A SET OF COHERENT RELATIONSHIPS.
WE ARE NOT CONSCIOUS OF THE WORLD DIRECTLY; INSTEAD WE ARE CONSCIOUS OF
MENTAL CONSTRUCTS THAT HAVE BEEN CREATED BY GENERALIZATIONS OUT OF SENSORY
DATA.

FOR EXAMPLE, BECAUSE OF FOVEATION AND CONSTANT SACCADES, OUR EYES AND V1
SEE WITH SUCH DISCONTINUOUS, FISHEYED VISION THAT IF WE WERE FORCED TO
WATCH A TV IMAGE OF WHAT WAS PROJECTED ON V1, NOT ONLY WOULD WE PROBABLY
NOT BE ABLE TO RECOGNIZE MOST OF IT, BUT IF IT WERE ON A LARGE-SCREEN TV
WE MIGHT PUKE.  YET BECAUSE OF THE CORRELATIONS BETWEEN ALL OF THE
DIFFERENTLY SHAPED PATTERNS CREATED BY A GIVEN SHAPE WHEN SEEN IN
DIFFERENT PARTS OF THE FOVEATED FIELD OF VIEW, OUR BRAIN HAS LEARNED A
RELATIVELY INVARIANT REPRESENTATION OF THAT SHAPE, AND THAT INVARIANT
SHAPE IS WHAT WE THINK WE “SEE” EVEN THOUGH THAT IS NOT WHAT IS BEING
PROJECTED ON V1.  SIMILARLY, OUR BRAIN STITCHES TOGETHER THE MULTIPLE
VIEWS CREATED BY OUR RAPIDLY SACCADING EYES INTO A SENSE OF A VISUALLY
CONTINUOUS SPACE (A TRICK MADE DOUBLY HARD BY FOVEATION).

THUS, OUR SENSE OF REALITY INCLUDES ALL SORTS OF MENTAL FABRICATIONS,
CREATED AS REASONABLE REPRESENTATIONS OF RELATIONSHIPS CONTAINED IN THE
SENSORY DATA WE RECEIVE.  WE HAVE A NOTION THAT THE TOP OF A TABLE IS
CONTINUOUS AND SOLID, YET FROM CHEMISTRY AND PHYSICS WE KNOW IT IS NOT.

IN FACT, CURRENT BRAIN SCIENCE INDICATES WE DON’T STORE PICTURES IN
ANYTHING LIKE THE FORM OF A PHOTOGRAPH OR A LINE DRAWING.  INSTEAD WE
NORMALLY STORE A NETWORK OF ONE OR MORE NODES FROM A GEN/COMP HIERARCHY,
EACH OF WHICH MAPS TO MULTIPLE POSSIBLE LOWER-LEVEL REPRESENTATIONS UNTIL
YOU GET DOWN TO THE EQUIVALENT OF THE PIXEL LEVEL.  IT IS GENERALLY
BELIEVED THERE IS NO ONE NODE THAT STORES A PARTICULAR IMAGE.

SO EVEN OUR MEMORIES OF THE PICTURES YOU CONSIDER SO IMPORTANT ARE
SYMBOLIC, IN THAT THEY ARE MADE UP OF NODES THAT SYMBOLIZE PATTERNS OF
OTHER NODES.
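The nodes-symbolizing-patterns-of-other-nodes idea can be sketched as a simple dictionary hierarchy. Everything here is invented for illustration: a real gen/comp hierarchy would map each node to multiple alternative lower-level decompositions rather than one fixed list, and its primitives would be far richer than these labels.

```python
# Toy gen/comp hierarchy: no single node stores an image; each node
# just symbolizes a pattern of lower-level nodes, bottoming out at
# "pixel"-level primitives.  All names are invented for illustration.
hierarchy = {
    "face":  ["eye", "eye", "nose", "mouth"],
    "eye":   ["edge_arc", "dark_blob"],
    "nose":  ["edge_line", "edge_line"],
    "mouth": ["edge_arc"],
}

def expand(node):
    # Recursively unfold a node into its primitive constituents.
    if node not in hierarchy:
        return [node]              # a pixel-level primitive
    out = []
    for child in hierarchy[node]:
        out.extend(expand(child))
    return out

print(expand("face"))
# ['edge_arc', 'dark_blob', 'edge_arc', 'dark_blob',
#  'edge_line', 'edge_line', 'edge_arc']
```

Note that "face" appears nowhere as stored picture data; it exists only as a pattern over other nodes, which is the sense in which even remembered pictures are symbolic.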

SO, GETTING BACK TO BOOKWORLD, WHAT I AM TRYING TO SAY IS THAT JUST AS
OUR MINDS FABRICATE CONCEPTS OF “PHYSICAL REALITY” BASED ON CORRELATIONS
AND RELATIONS WITHIN A HUGE AMOUNT OF DATA, AN EXTREMELY POWERFUL AGI THAT
HAD A REASONABLY DEEP STRUCTURE REPRESENTATION OF ALL CURRENTLY EXISTING
BOOKS WOULD SIMILARLY HAVE FABRICATED CONCEPTS OF A “BOOK-WORLD REALITY”,
AND SUCH CONCEPTS WOULD BE WELL GROUNDED IN THE SENSE THAT THEY WOULD BE
CONNECTED BY MANY RELATIONSHIPS, ORDERINGS, GENERALIZATIONS, AND
BEHAVIORS.

I DON’T REALLY KNOW EXACTLY HOW MUCH KNOWLEDGE COULD BE EXTRACTED FROM
BOOKWORLD.  I KNOW THAT LINGUISTS PLAYING WITH ROUGHLY 1-GIGAWORD TEXT
CORPORA BITCH ABOUT HOW SPARSE THE DATA IS.  BUT MY HUNCH IS THAT IF YOU
READ, SAY, 30 MILLION BOOKS AND THE WEB WITH A GOOD AGI, YOU WOULD BE
ABLE TO LEARN A LOT.

IF YOU ASKED THE BOOKWORLD AGI WHAT HAPPENS WHEN A PERSON DROPS SOMETHING,
IT WOULD PROBABLY BE ABLE TO GUESS THAT IT OFTEN FALLS TO THE GROUND, AND
THAT IF IT IS MADE OF GLASS IT MIGHT BREAK.

IF YOU ASKED SUCH A SYSTEM WHAT LOVE BETWEEN A MAN AND A WOMAN WAS, IT
WOULD BE ABLE TO GIVE YOU ALL SORTS OF MEANINGFUL GENERALIZATIONS ABOUT
WHAT LOVE WAS, BASED ON ALL THE DESCRIPTIONS OF LOVE AND HOW IT MAKES
CHARACTERS ACT IN THE BOOKS IT HAS READ.  I WOULD NOT BE SURPRISED IF
SUCH A SYSTEM, UPON READING A ROMANTIC NOVEL, HAD ABOUT AS GOOD A CHANCE
AS THE AVERAGE HUMAN READER OF PREDICTING WHETHER THE TWO LOVERS WILL OR
WILL NOT BE TOGETHER AT THE END OF THE NOVEL.

IF YOU ASKED IT ABOUT HOW PEOPLE MENTALLY ADJUST TO GROWING OLD, IT WOULD
PROBABLY BE ABLE TO GENERATE A MORE THOUGHTFUL ANSWER THAN MOST YOUNG
HUMAN BEINGS.

IN SHORT, IT IS MY HUNCH THAT A POWERFUL BOOKWORLD AGI COULD BE EXTREMELY
VALUABLE.  AND AS I SAID IN MY Thu 10/11/2007 7:33 PM POST, THERE IS NO
REASON WHY KNOWLEDGE LEARNED FROM BOOKWORLD COULD NOT BE COMBINED WITH
KNOWLEDGE LEARNED BY OTHER MEANS, INCLUDING THE IMAGE SEQUENCES YOU ARE SO
FOND OF.




Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

  _____

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?




