Dear Edward, may I ask why you regularly choose to type in all-caps? Do you
have a broken keyboard? Otherwise, please refrain from doing so, since (1)
many people associate it with shouting and (2) mixed-case text is easier to
read...

Kind regards,
Durk Kingma

On 10/12/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
>  This is in response to Mike Tintner's 10/11/2007 7:53 PM post.  My
> response is in all-caps.
>
> Vladimir: ..and also why can't a 3D world model be just described
> abstractly, by
> > presenting the intelligent agent with a bunch of objects with attached
> > properties and relations between them that preserve certain
> > invariants? The spatial part of the world model doesn't seem to be more
> > complex than the general problem of knowledge arrangement, when you have
> > to keep track of all kinds of properties that should (and shouldn't)
> > be derived for a given scene.
> >
> Vladimir and Edward,
>
> I didn't properly address this idea, which is essentially common to you
> both.
>
> The idea is that a network or framework of symbols/symbolic concepts can
> somehow be used to reason usefully and derive new knowledge about the
> world - a network of classes and subclasses and relations between them,
> all expressed symbolically. Cyc and NARS are examples.
>
> OK, let's try and set up a rough test of how fruitful such networks/models
> can be.
>
> Take your Cyc or similar symbolic model, which presumably will have
> something like "animal - mammals - humans - primates - cats etc." and
> various relations to "move - jump - sit - stand" and then "jump - on -
> objects" etc. etc. A vast hierarchy and network of symbolic concepts,
> which among other things tell us something about various animals and the
> kinds of movements they can make.
>
> Now ask that model in effect: "OK, you know that the cat can sit and jump
> on a mat. Now tell me what other items in a domestic room a cat can sit
> and jump on. And create a scenario of a cat moving around a room."
>
> I suspect that you will find that any purely symbolic system like Cyc
> will be extremely limited in its capacity to deduce further knowledge
> about cats or other animals and their movements with relation to a
> domestic room - and may well have no power at all to create scenarios.
>
> IT DEPENDS WHAT YOU MEAN BY "PURELY SYMBOLIC".  IN THE PAST "SYMBOLIC"
> GENERALLY REFERRED TO SYSTEMS WITHOUT MUCH GROUNDING, SO THE SYMBOLS HAD
> RELATIVELY LITTLE SEMANTIC "MEANING."  OFTEN SUCH SYSTEMS RELIED ON
> RELATIVELY BRITTLE DEFINITIONS AND RULES OF INFERENCE.
>
> THAT IS NOT THE APPROACH I ADVOCATE.  (AND PLEASE DON'T HOLD CYC UP AS A
> GOOD EXAMPLE OF THE APPROACH I ADVOCATE.  THERE IS A WORLD OF DIFFERENCE
> BETWEEN A RELATIVELY OLD-FASHIONED AI SYSTEM LIKE CYC AND A
> STATE-OF-THE-ART AGI SYSTEM LIKE NOVAMENTE.)
>
> THE STATE-OF-THE-ART AGI APPROACH I FAVOR IS BASED ON (1) MASSIVE AMOUNTS
> OF EXPERIENCE OF SOME SORT TO PROVIDE GROUNDING TO SYMBOLS AND (2) FLEXIBLE
> RULES FOR MATCHING, INSTANTIATION, GENERALIZATION, AND INFERENCE IN A
> CONTEXT-SPECIFIC WAY FROM SUCH MASSIVE EXPERIENCE, SO AS TO ENABLE SOMETHING
> APPROACHING -- AND ULTIMATELY SURPASSING -- HUMAN-LEVEL INTELLIGENCE.
>
> BUT SUCH SYSTEMS WOULD BE COMPOSED ALMOST ENTIRELY OF SYMBOLS.  *EVEN THE
> TYPES OF SYSTEMS YOU SEEM TO BE FAVORING WOULD BE COMPOSED OF SYMBOLS.*
> BITS AND BYTES ARE, AFTER ALL, SYMBOLS.  SO PLEASE LET'S STOP KNOCKING
> SYMBOLS, PER SE.
>
> THE DISTINCTION SHOULD BE BETWEEN RELATIVELY NAKED SYMBOLS AND SYMBOLS
> GROUNDED IN NETWORKS OF MEANING – I.E., NETWORKS OF RELATIONSHIPS SUCH AS
> SENSORY PATTERNS (YES, I, LIKE YOU, THINK SENSORY EXPERIENCE IS GENERALLY
> IMPORTANT), ASSOCIATIONS, CONDITIONAL PROBABILITIES, TEMPORAL RELATIONS,
> CAUSES AND EFFECTS, ATTRIBUTES, FUNCTIONS, GOALS, VALUES, IMPORTANCE
> WEIGHTINGS, GENERALIZATIONS, SPECIALIZATIONS, AND BEHAVIORAL SCHEMAS, ALL
> IN THE CONTEXT OF POWERFUL INFERENCING AND AUTOMATIC LEARNING.
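>
> AS A MINIMAL SKETCH OF WHAT I MEAN BY GROUNDING IN A NETWORK OF
> RELATIONSHIPS -- A TOY OF MY OWN DEVISING, WITH MADE-UP NAMES AND
> NUMBERS, NOT ANY ACTUAL SYSTEM'S FORMAT -- A SYMBOL'S MEANING CAN BE
> TREATED AS THE SET OF WEIGHTED RELATIONS IT PARTICIPATES IN:
>
>     from collections import defaultdict
>
>     class GroundedSymbol:
>         """A symbol whose meaning is the weighted network of
>         relations it sits in, rather than a bare definition."""
>         def __init__(self, name):
>             self.name = name
>             # relation type -> {target symbol: strength in [0, 1]}
>             self.relations = defaultdict(dict)
>
>         def relate(self, rel, target, strength):
>             self.relations[rel][target] = strength
>
>     cat = GroundedSymbol("cat")
>     cat.relate("is_a", "mammal", 0.99)               # generalization
>     cat.relate("can", "jump_onto_low_surface", 0.9)  # behavioral schema
>     cat.relate("seen_near", "mat", 0.7)              # sensory association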
>
> OF COURSE AS I SAID IN A VERY RECENT POST, GROUNDING COMES IN ALL SORTS OF
> DIFFERENT TYPES AND DEGREES.   SO DIFFERENT TYPES AND DEGREES OF
> INTELLIGENCE CAN BE DERIVED WITH DIFFERENT TYPES OF GROUNDING.  EVEN IN A
> SYSTEM LIKE CYC OR WORDNET A CONCEPT WOULD NORMALLY HAVE SOME DEGREE OF
> GROUNDING.
>
> But you or I, with a visual/sensory model of that cat and that room, will
> be able to infer with reasonable success whether it can or can't jump,
> sit and stand on every single object in that room - sofa, chair, bottle,
> radio, cupboard etc. etc. And we will also be able to make very complex
> assessments about which parts of the objects it can or can't jump or
> stand on - which parts of the sofa, for example - and assessments about
> which states of objects (well, it couldn't jump or stand on a large Coke
> bottle if erect, but maybe if the bottle were on its side, and almost
> certainly if it were a jeroboam on its side). And I think you'll find
> that our capacity to draw inferences - from our visual and sensory model
> - about cats and their movements is virtually infinite.
>
> And we will also be able to create a virtually infinite set of scenarios
> of a cat moving in various ways from point to point around the room.
>
> Reality check: what you guys are essentially advocating is logical
> systems and logical reasoning for AGIs - now how many kinds of problems
> in the real human world is logic actually used to solve? Not that many.
> Oh, it's an important part of much problem-solving, but only a part. How
> much scientific problem-solving depends seriously on logic? Is logic
> going to help you understand and have ideas about genetics, or how cells
> work, or how the brain works, or how and why wars start? Is it going to
> be much use for design problems? Does it help in telling stories? ...
> keep on going through the vast range of human and animal problem-solving
> (all of which, remember, are the ONLY forms of [A]GI that actually work).
>
> That's why I asked you: give me some examples of useful new knowledge or
> analogies [especially analogies] that have been derived from logical
> systems or logic, period (except about logic itself).
>
> THIS TIME THE ANSWER DEPENDS ON WHAT YOU MEAN BY "LOGICAL."  WIKIPEDIA'S
> BROAD DEFINITION OF "LOGIC" IS: "THE STUDY OF THE PRINCIPLES AND CRITERIA OF
> VALID INFERENCE AND DEMONSTRATION."  THUS, THE TERM IS MUCH BROADER THAN
> THE BRITTLE FORMAL LOGICS THAT MUCH OF AI WAS HUNG UP ON FOR YEARS.
>
> I AM NOT A BIG FAN OF TRADITIONAL FORMAL LOGIC.  SINCE THE EARLY '70S I
> HAVE SAID "FORMAL LOGIC IS TO HUMAN THOUGHT WHAT DRESSAGE IS TO THE MOTION
> OF HORSES -- EXCEPT IN ITS SIMPLEST FORMS IT IS TOTALLY UNNATURAL."  COMMON
> SENSE NOTIONS, SUCH AS "THE EXCEPTION THAT PROVES THE RULE," INDICATE THAT
> REASONING WITH BINARY TRUTH VALUES IS BRAIN-DEAD IN MANY DOMAINS.
>
> BUT MANY FORMS OF LOGICAL REASONING ARE MUCH MORE FLEXIBLE.  BAYESIAN
> INFERENCING, FOR EXAMPLE, IS A TYPE OF LOGIC BECAUSE IT IS A TYPE OF
> REASONING THAT, DESPITE ITS LIMITATIONS, HAS SHOWN ITSELF TO BE EXTREMELY
> VALUABLE.  IT IS USED IN MANY SUCCESSFUL COMMERCIAL PRODUCTS.  BAYESIAN
> CLASSIFIERS, FOR EXAMPLE, HAVE BEEN USED TO MAKE NEW SCIENTIFIC DISCOVERIES
> FROM VAST AMOUNTS OF SENSOR DATA.  *SO IN FACT, SOME TYPES OF LOGIC ARE
> EXTREMELY VALUABLE AND DO HELP SCIENTISTS SOLVE PROBLEMS.*
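>
> FOR CONCRETENESS, HERE IS A MINIMAL SKETCH OF THE KIND OF BAYESIAN UPDATE
> SUCH SYSTEMS PERFORM.  THE SCENARIO AND ALL NUMBERS ARE MADE UP FOR
> ILLUSTRATION, NOT TAKEN FROM ANY PARTICULAR PRODUCT:
>
>     # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
>     def posterior(prior, likelihood, false_alarm):
>         """Belief in hypothesis H after observing evidence E once."""
>         p_e = likelihood * prior + false_alarm * (1 - prior)
>         return likelihood * prior / p_e
>
>     # Toy example: is a message spam, given it contains "jackpot"?
>     # prior       = P(spam), likelihood = P("jackpot" | spam),
>     # false_alarm = P("jackpot" | not spam).
>     print(posterior(prior=0.2, likelihood=0.6, false_alarm=0.01))  # ~0.94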
>
> FURTHERMORE, IF YOU HAVE READ ABOUT DOUG HOFSTADTER'S COPYCAT, WHICH MAKES
> CONTEXT-SPECIFIC ANALOGIES, YOU REALIZE IT USES A FLEXIBLE SIMILARITY
> SYSTEM, CALLED SLIPNET, THAT CAUSES SIMILARITY MEASURES TO BE TIGHTENED OR
> LOOSENED IN A CONTEXT-DEPENDENT WAY.  THIS ALLOWS COPYCAT TO HANDLE THE
> DISSIMILARITIES IN THE CORRESPONDING THINGS THAT ARE BEING COMPARED TO MAKE
> AN ANALOGY.
>
> NARS OR A NARS-LIKE SYSTEM COULD EASILY BE USED TO REPLACE HOFSTADTER'S
> SLIPNET, AND COULD ARGUABLY HAVE SIGNIFICANT ADVANTAGES OVER SLIPNET, SUCH
> AS MAKING COPYCAT'S ANALOGY-DRAWING PROGRAM MORE GENERALLY APPLICABLE TO A
> WORLD KNOWLEDGE BASE.  *SO "LOGIC" OF THE TYPE FOUND IN NARS COULD
> ACTUALLY BE USEFUL IN THE VERY FIELD OF DRAWING ANALOGIES THAT THE ABOVE
> TEXT IMPLIES IT IS USELESS FOR.*
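>
> TO ILLUSTRATE THE SLIPNET IDEA IN A DELIBERATELY CRUDE WAY -- THE NAMES,
> NUMBERS, AND MECHANISM BELOW ARE MY OWN TOY ASSUMPTIONS, NOT HOFSTADTER'S
> ACTUAL CODE -- CONCEPTUAL "SLIPPAGE" BETWEEN TWO CONCEPTS BECOMES EASIER
> AS CONTEXTUAL PRESSURE (TEMPERATURE) RISES:
>
>     # Conceptual distance between pairs of concepts; shorter links
>     # represent concepts that slip into each other more readily.
>     LINK_LENGTH = {
>         ("successor", "predecessor"): 0.6,  # conceptually close
>         ("letter", "number"): 0.9,          # conceptually far
>     }
>
>     def can_slip(a, b, temperature):
>         """Higher temperature (more pressure, less certainty) permits
>         longer conceptual slippages when building an analogy."""
>         return LINK_LENGTH.get((a, b), 1.0) <= temperature
>
>     print(can_slip("successor", "predecessor", 0.7))  # True
>     print(can_slip("letter", "number", 0.7))          # False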
>
> IN RECENT YEARS THERE HAS BEEN A LOT OF WORK IN DESIGNING SYSTEMS THAT
> AUTOMATICALLY LEARN APPROPRIATE PROBABILISTIC LOGICS.  ONE OF THESE IS
> NOVAMENTE'S PROBABILISTIC LOGIC NETWORKS, OR PLN, WHICH BEN GOERTZEL
> REFERRED TO IN HIS POST OF 10/10/2007 4:45 AM ON THIS LIST.  I DON'T YET
> KNOW HOW WELL ANY OF THESE SYSTEMS WORK, BUT THEY HOLD THE PROMISE OF
> ALLOWING LOGIC TO DELIVER ALL OF THE VERY THINGS YOU SAY LOGIC CANNOT
> DELIVER IN LARGE WORLD-KNOWLEDGE-COMPUTING AGI'S.
>
> SO, PLEASE LET'S STOP KNOCKING "LOGIC."
>
> New knowledge - especially new science - comes primarily from new
> observation of the world, not from logically working through old
> knowledge. Artificial general intelligence - the ability to develop new,
> unprogrammed solutions to problems - depends on sensory models and
> observations.
>
> Let me be brutally challenging here: the reason you guys are attached to
> purely symbolic models of the world is not because you have any real
> evidence of their being productive (for AGI), but because they're what
> you know how to do. Hence Vlad's "why can't a 3D world model be just
> described abstractly..." He doesn't know - he just hopes - that it can.
> Logically. What you need here is not logic but - ahem - evidence [sensory
> stuff].
>
> BRUTAL CHALLENGE ACCEPTED.
>
> (AGAIN, "PURELY SYMBOLIC" COVERS ANY DIGITAL SYSTEM, EVEN THE TYPE YOU
> SEEM TO FAVOR.)
>
> ACTUALLY, EVER SINCE I DID MY READING LIST UNDER MINSKY IN 1969-70, MY
> GUIDING PHILOSOPHY HAS BEEN THE GIST OF K-LINE THEORY – I.E.,  THAT ONE
> REASONS ABOUT NEW SITUATIONS BY EVOKING MEMORIES OF PAST SIMILAR
> SITUATIONS.  SO I HAVE BEEN IN FAVOR OF "EXPERIENTIAL REASONING" FOR OVER 37
> YEARS.  AND I HAVE NEVER BEEN A BIG FAN OF FORMAL LOGIC FOR THE REASONS
> STATED ABOVE.
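>
> THE GIST, IN A DELIBERATELY CRUDE SKETCH OF MY OWN (NOT MINSKY'S
> FORMULATION, AND WITH MADE-UP EXAMPLES): STORE EXPERIENCES AS FEATURE
> SETS, AND REASON ABOUT A NEW SITUATION BY RE-EVOKING THE MOST SIMILAR
> STORED ONES:
>
>     def jaccard(a, b):
>         """Overlap between two situations described as feature sets."""
>         return len(a & b) / len(a | b)
>
>     memories = [
>         ({"cat", "mat", "sitting"},   "cats settle on soft, flat things"),
>         ({"cat", "shelf", "jumping"}, "cats jump onto stable surfaces"),
>         ({"dog", "yard", "digging"},  "dogs dig where the soil is loose"),
>     ]
>
>     def recall(situation, k=2):
>         """Evoke the k most similar past situations (a k-line, crudely)."""
>         return sorted(memories,
>                       key=lambda m: -jaccard(situation, m[0]))[:k]
>
>     for features, lesson in recall({"cat", "sofa", "jumping"}):
>         print(lesson)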
>
> BUT I SEEK TO AVOID BEING NARROW-MINDED.  I THINK THERE ARE MANY DIFFERENT
> POSSIBLE TYPES AND DEGREES OF EXPERIENCE, AND MANY DIFFERENT WAYS IT CAN
> BE REPRESENTED, ALTHOUGH SOME REPRESENTATIONS ARE MUCH MORE CAPABLE AND
> EFFICIENT THAN OTHERS.  THERE ARE MANY DIFFERENT DEGREES AND TYPES OF
> INTELLIGENCE.  NOT ALL AGI'S NEED VISUAL MODELS, OR EVEN SENSORY MODELS OF
> PHYSICAL REALITY.  AGI'S USED FOR SOME LIMITED DOMAINS MAY NOT EVEN NEED
> MODELS OF 3-DIMENSIONAL PHYSICAL SPACE -- SUCH AS THE HYPOTHETICAL
> PROGRAM-LEARNING AGI IN MY EARLIER POST OF TODAY.  (ALTHOUGH IT WOULD
> ALMOST CERTAINLY DEVELOP OR START WITH A GENERAL MODEL OF N-DIMENSIONAL
> SPACES.)
>
> I BELIEVE THE CONCEPT OF TURING EQUIVALENCE SHOULD OPEN OUR MINDS TO THE
> FACT THAT MOST THINGS IN COMPUTATION CAN BE DONE MANY DIFFERENT WAYS,
> ALTHOUGH SOME WAYS ARE SO MUCH LESS EFFICIENT THAN OTHERS AS TO BE
> PRACTICALLY USELESS, AND ALTHOUGH SOME WAYS MAY LACK ESSENTIAL
> CHARACTERISTICS THAT LIMIT EVEN THEIR THEORETICAL CAPABILITIES.
>
> AS MUCH AS YOU MAY KNOCK OLD-FASHIONED AI SYSTEMS, THEY ACCOMPLISHED A
> HELL OF A LOT WITH FLY-BRAIN-LEVEL HARDWARE.  THUS, RATHER THAN DISMISS THE
> TYPES OF REPRESENTATIONS AND REASONING THEY USED AS USELESS, I WOULD SEEK
> TO UNDERSTAND BOTH THEIR STRENGTHS AND WEAKNESSES.  BEN GOERTZEL'S
> NOVAMENTE EMBRACES THE EFFICIENCY OF SOME NARROWER FORMS OF AI IN DOMAINS
> OR TASKS WHERE THEY ARE MORE EFFICIENT (SUCH AS LOW-LEVEL VISION, OR FOR
> DIFFERENT TYPES OF MENTAL FUNCTIONS), BUT HE SEEKS TO HAVE SUCH DIFFERENT
> AI'S RELATIVELY TIGHTLY INTEGRATED, SUCH AS BY GIVING THE SYSTEM
> SELF-AWARENESS OF THEIR INDIVIDUAL CHARACTERISTICS.  WITH SUCH
> SELF-AWARENESS AN INTELLIGENT AGI MIGHT WELL OPTIMIZE REPRESENTATIONS FOR
> DIFFERENT DOMAINS OR DIFFERENT LEVELS OF ACCESS.
>
> LIKE NOVAMENTE, I HAVE FAVORED A FORM OF REPRESENTATION WHICH IS MORE LIKE
> A SEMANTIC NET.  BUT ONE CAN REPRESENT A SET OF LOGICAL STATEMENTS IN
> SEMANTIC NET FORM.  I THINK THAT, WITH ENOUGH LOGICAL STATEMENTS IN A
> GENERAL, FLEXIBLE, PROBABILISTIC LOGIC, ONE SHOULD THEORETICALLY BE ABLE
> TO REPRESENT MOST FORMS OF EXPERIENCE THAT ARE RELEVANT TO AN AGI --
> INCLUDING THE VERY TYPE OF VISUAL SENSORY MODELING YOU SEEM TO BE
> ADVOCATING.
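>
> A MINIMAL SKETCH OF WHAT I MEAN -- A TOY ENCODING OF MY OWN, NOT
> NOVAMENTE'S OR NARS'S ACTUAL FORMAT -- IN WHICH EACH LOGICAL STATEMENT
> BECOMES A LABELED EDGE IN A SEMANTIC NET CARRYING A (FREQUENCY,
> CONFIDENCE) TRUTH VALUE, LOOSELY IN THE SPIRIT OF NARS:
>
>     # Statements are (subject, relation, object) edges with truth values.
>     net = {}
>
>     def assert_edge(subj, rel, obj, freq, conf):
>         net[(subj, rel, obj)] = (freq, conf)
>
>     assert_edge("cat",   "can_jump_on", "chair",       0.95, 0.9)
>     assert_edge("cat",   "can_jump_on", "coke_bottle", 0.05, 0.8)
>     assert_edge("chair", "found_in",    "living_room", 0.90, 0.7)
>
>     def query(subj, rel):
>         """All objects related to subj by rel, with their truth values."""
>         return {o: tv for (s, r, o), tv in net.items()
>                 if s == subj and r == rel}
>
>     print(query("cat", "can_jump_on"))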
>
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=52759625-cf48d6
