William,

Re the Epimenides paradox, Eliezer Yudkowsky had some interesting comments
in "Levels of Organization in General Intelligence," Section 2.7.1, "From
Thoughts to Deliberation," which I quote below:

-In the universe of bad TV shows, speaking the Epimenides Paradox "This
sentence is false" to an artificial mind causes that mind to scream in
horror and collapse into a heap of smoldering parts.  This is based on a
stereotype of thought processes that cannot divert, cannot halt, and possess
no bottom-up ability to notice regularities across an extended thought
sequence.  Given how deliberation emerges from the thought level, it is
possible to imagine a sufficiently sophisticated, sufficiently reflective AI
that could naturally surmount the Epimenides Paradox.  Encountering the
paradox "This sentence is false" would probably indeed lead to a looping
thought sequence at first, but this would not cause the AI to become
permanently stuck; it would instead lead to categorization across repeated
thoughts (like a human noticing the paradox after a few cycles), which
categorization would then become salient and could be pondered in its own
right by other sequiturs.  If the AI is sufficiently competent at deductive
reasoning and introspective generalization, it could generalize across the
specific instances of "If the statement is true, it must be false" and "If
the statement is false, it must be true" as two general classes of thoughts
produced by the paradox, and show that reasoning from a thought of one class
leads to a thought of the other class; if so the AI could deduce - not just
inductively notice, but deductively confirm - that the thought process is an
eternal loop.  Of course, we won't know whether it really works this way
until we try it. 
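
To make the loop-noticing behavior concrete, here is a minimal Python
sketch.  The representation of thoughts as hashable canonical forms, and
the names deliberate/canonicalize/next_thought, are my own illustrative
assumptions, not anything from Yudkowsky's paper.

def deliberate(seed, next_thought, canonicalize, max_steps=1000):
    """Run a thought sequence, halting when a repeated thought-category
    is noticed (the analogue of categorization across repeated thoughts)."""
    seen = {}                  # category -> step at which it first appeared
    thought = seed
    for step in range(max_steps):
        category = canonicalize(thought)
        if category in seen:
            # The repetition itself becomes salient: the AI can now reason
            # *about* the loop instead of inside it.
            return ("loop-detected", category, step - seen[category])
        seen[category] = step
        thought = next_thought(thought)
    return ("no-loop-noticed", thought, max_steps)

# The Epimenides Paradox as a two-state thought loop:
def liar_step(t):
    return {"assume-true": "assume-false", "assume-false": "assume-true"}[t]

print(deliberate("assume-true", liar_step, lambda t: t))
# -> ('loop-detected', 'assume-true', 2): a loop of period 2, matching the
#    two classes "if true, then false" and "if false, then true".

The deductive confirmation Yudkowsky describes, as opposed to this
inductive noticing, would amount to proving that liar_step maps each of
the two classes to the other.
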
-The use of a blackboard sequitur model is not automatically sufficient for
deep reflectivity; an AI that possessed a limited repertoire of sequiturs,
no reflectivity, no ability to employ reflective categorization, and no
ability to notice when a train of thought hasn't yielded anything useful for
a while, might still loop eternally through the paradox as the emergent but
useless product of the sequitur repertoire.  Transcending the Epimenides
Paradox requires the ability to perform inductive generalization and
deductive reasoning on introspective experiences.  But it also requires
bottom-up organization in deliberation, so that a spontaneous introspective
generalization can capture the focus of attention.  Deliberation must emerge
from thoughts, not just use thoughts to implement rigid algorithms.
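
By contrast, here is a sketch of that failure mode and its remedy: a toy
blackboard loop with a bottom-up monitor that notices stagnation and lets
it capture the focus of attention.  The design here (run_blackboard, the
patience threshold) is my own illustrative assumption, not an actual
blackboard architecture.

def run_blackboard(seed, sequiturs, budget=50, patience=4):
    board = [seed]
    novel_at = 0               # last step that produced a new posting
    for step in range(1, budget):
        posting = sequiturs[step % len(sequiturs)](board[-1])
        if posting not in board:
            novel_at = step
        board.append(posting)
        # Bottom-up monitor: after `patience` steps with nothing new, the
        # stagnation itself becomes the next object of deliberation.
        if step - novel_at >= patience:
            return board, "stalled: reflect on the repetition itself"
    return board, "budget exhausted"

liar = [lambda t: "false" if t == "true" else "true"]
trace, verdict = run_blackboard("true", liar)
print(verdict)   # -> stalled: ... rather than looping for the whole budget

Without the monitor (set patience to the full budget, say), the same
sequitur repertoire loops eternally through the paradox, exactly as
described above.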

-----Original Message-----
From: William Pearson [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 21, 2008 5:42 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent
input and responses

On 21/04/2008, Ed Porter <[EMAIL PROTECTED]> wrote:
>  So when people are given a sentence such as the one you quoted about
>  verbs, pronouns, and nouns, presuming they have some knowledge of most
>  of the words in the sentence, they will understand the concept that
>  verbs "are doing words."  This is because of the groupings of words
>  that tend to occur in certain syntactic linguistic contexts: the word
>  senses most associated with the types of experiences the mind
>  associates with "doing" would largely be verbs, which in the mind's
>  experience and learned patterns are most often preceded by nouns or
>  pronouns.  So all this stuff falls out of the magic of spreading
>  activation in a Novamente-like hierarchical experiential memory (with
>  the help of a considerable control structure such as that envisioned
>  for Novamente).
>
>  Declarative information learned by NL gets projected into the same
>  type of activations in the hierarchical memory

How does this happen?  What happens when you try to project "This
sentence is false" into the activations of the hierarchical memory?
And consider that the whole of the English understanding is likely to
be in the hierarchical memory; that is, the projection must be learnt.
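
For concreteness, here is a minimal Python sketch of the spreading
activation being claimed above.  The graph, weights, and node names are
purely illustrative assumptions on my part, not Novamente's actual
representation.

def spread(graph, sources, decay=0.5, rounds=2):
    """Propagate activation outward from the source nodes."""
    activation = {n: 1.0 for n in sources}
    for _ in range(rounds):
        delta = {}
        for node, act in activation.items():
            for neighbor, weight in graph.get(node, []):
                delta[neighbor] = delta.get(neighbor, 0.0) + act * weight * decay
        for n, a in delta.items():
            activation[n] = activation.get(n, 0.0) + a
    return activation

graph = {
    "doing": [("run(verb)", 0.9), ("jump(verb)", 0.8), ("dog(noun)", 0.1)],
    "word":  [("run(verb)", 0.3), ("dog(noun)", 0.4)],
}
acts = spread(graph, ["doing", "word"])
recalled = {n: a for n, a in acts.items() if n not in ("doing", "word")}
print(sorted(recalled.items(), key=lambda kv: -kv[1]))
# -> the verb senses outrank dog(noun): "verbs are doing words" falls out
#    of which senses co-activate most strongly, with no explicit grammar
#    rule anywhere in the system.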

>  as would actual experiences that teach the same thing; but, at least
>  as episodes, and in some patterns generalized from episodes, such
>  declarative information would remain linked to the experience of
>  having been learned from reading or hearing from other humans.
>
>  So in summary, a Novamente-like system should be able to handle this
>  alleged problem, and at the moment it does not appear to pose a major
>  unanswered conceptual problem.

My conversation with Ben about a similar subject (words acting on the
knowledge of words) didn't get anywhere.

The conversation starting here ->
http://www.mail-archive.com/agi@v2.listbox.com/msg09485.html

And I consider him the authority on Novamente-like systems, for now at
least.

Will

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com
