RE: [agi] Early Apps.

2002-12-28 Thread Ben Goertzel

Gary Miller wrote:
***
I guess I'm still having trouble with the concept of grounding.  If I
teach/encode a bot with 99% of the knowledge about hydrogen using facts
and information available in books and on the web, it is now an idiot
savant: it knows all about hydrogen and nothing about anything else, and
it is not grounded.  But if I then examine the knowledge learned about
hydrogen for other mentioned topics like gases, elements, water, atoms,
etc., and teach/encode 99% of the knowledge on these topics to the bot,
then the bot is still an idiot savant, but less so; isn't it better
grounded?  A certain amount of grounding, I think, has occurred by
providing knowledge of related concepts.

If we repeat this process again, we may say the program is an idiot
savant in chemistry.

...

I will agree that today's bots are not grounded because they are idiot
savants and lack the broad-based, high-level knowledge with which to
ground any given fact or concept.  But if I am correct in my thinking,
this is the same problem Helen Keller's teacher faced: teaching Helen
one concept at a time until she had enough simple information and
knowledge to build more complex knowledge and concepts upon.
***

What you're describing is the Expert System approach to AI, closely
related to the common sense approach to AI.

Cycorp takes this point of view, and so have a whole lot of other AI
projects in the last few decades...

I certainly believe there's some truth to it.  If you encoded a chemistry
textbook in formal logic, fed it into an AI system, and let the AI system do
a lot of probabilistic reasoning and associating on the information, then
you'd have a lot of speculative uncertain intuitive knowledge generated in
the system, complementing the hard knowledge that was explicitly encoded.
If you encoded a physics textbook and a bio textbook as well, you could have
the system generate uncertain, intuitive cross-domain knowledge in the same
way.
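
As a rough illustration of what this kind of probabilistic reasoning and
associating over encoded facts could look like, here is a minimal toy sketch
in Python (my own invented example, not Novamente's actual representation or
inference rules): facts from different textbooks carry confidence values, and
a simple chaining rule derives new, more uncertain cross-domain conclusions
that were never explicitly encoded.

# Toy sketch: probabilistic chaining over formally encoded facts.
# The facts, relations, and confidence-combination rule are illustrative
# assumptions only, not Novamente's actual machinery.

facts = {
    # (subject, relation, object): confidence
    ("hydrogen", "is_a", "element"): 0.99,         # from a chemistry text
    ("element", "composed_of", "atoms"): 0.98,     # from a chemistry text
    ("atoms", "obey", "quantum_mechanics"): 0.95,  # from a physics text
}

def deduce(facts):
    """Chain A -r1-> B and B -r2-> C into a weaker A -(r1.r2)-> C conclusion."""
    new = {}
    for (a, r1, b), c1 in facts.items():
        for (b2, r2, c), c2 in facts.items():
            if b == b2:
                # Derived conclusions inherit a discounted confidence.
                new[(a, r1 + "." + r2, c)] = round(c1 * c2 * 0.9, 3)
    return new

for triple, conf in deduce(facts).items():
    print(triple, conf)
# e.g. ('hydrogen', 'is_a.composed_of', 'atoms') 0.873 -- an uncertain,
# cross-domain conclusion that was not explicitly in either "textbook".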

In fact, we are doing something like this in Novamente now, for a
bioinformatics application.  We're feeding in information from a dozen
different bio databases and letting the system reason on the integrated
knowledge ... right now we're at the feeding-in stage.

Unlike some anti-symbolic-AI extremists, I think this sort of thing can be
*useful* for AGI.  But I think it can only be a part of the picture,
whereas experience-based learning is a lot more essential...

I don't think that a pragmatically-achievable amount of formally-encoded
knowledge is going to be enough to allow a computer system to think deeply
and creatively about any domain -- even a technical domain about science.
What's missing, among other things, is the intricate interlinking between
declarative and procedural knowledge.  When humans learn a domain, we learn
not only facts, we learn techniques for thinking and problem-solving and
experimenting and information-presentation ... and we learn these in such a
way that they're all mixed up with the facts.  In theory, I believe, all
this stuff could be formalized -- but the formalization isn't pragmatically
possible to do, because we humans don't explicitly know the techniques we
use for thinking, problem-solving, etc. etc.  In large part, we do them
tacitly, and we learn them tacitly...

When we learn a new domain declaratively, we start off by transferring some
of our tacit knowledge from other domains to that new domain.  Then, we
gradually develop new tacit knowledge of that domain, based on experience
working in the domain...

I think that this tacit knowledge (lots of uncertain knowledge, mixing
declarative and procedural) has got to be there as a foundation, for a system
to really deploy factual knowledge in a creative and fluent way...


***
 I think we cut and paste what we are trying to
say into what we think is the correct template and then read it back to
ourselves to see if it sounds like other things we have heard and seems
to make sense.
***

I think this is a good description of one among many processes involved in
language generation...

I also think there's some more complex unconscious inference going on than
is implied by your statement.  It's not a matter of cutting and pasting
into a template; it's a matter of recursively applying a bunch of syntactic
rules that build up complex linguistic forms from simpler ones.  The
syntactic buildup process has parallels to the thought-buildup process, and
the two sometimes proceed in synchrony, which is one of the reasons
formulating thoughts in language can help clarify them.
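
For concreteness, here is a minimal sketch of the kind of recursive buildup I
mean, as a toy Python grammar expanded top-down.  The grammar and vocabulary
are invented purely for illustration; this is not a claim about how human
syntax actually works.

import random

# Toy recursive grammar: complex forms (S, NP, VP) are built by
# recursively expanding simpler ones until only words remain.
grammar = {
    "S":   [["NP", "VP"]],
    "NP":  [["the", "N"], ["the", "ADJ", "N"]],
    "VP":  [["V", "NP"]],
    "N":   [["bot"], ["idea"], ["sentence"]],
    "ADJ": [["simple"], ["complex"]],
    "V":   [["builds"], ["clarifies"]],
}

def expand(symbol):
    if symbol not in grammar:      # terminal word
        return [symbol]
    production = random.choice(grammar[symbol])
    words = []
    for part in production:        # recursive application of syntactic rules
        words.extend(expand(part))
    return words

print(" ".join(expand("S")))  # e.g. "the simple bot clarifies the idea"

The point of the sketch is only the recursion: complex linguistic forms are
assembled from simpler ones by repeatedly applying the same small set of rules.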

I dealt with some of these issues -- on a conceptual, not an
implementational, level -- in a chapter in my book From Complexity to
Creativity, entitled "Fractals and Sentence Production":

http://www.goertzel.org/books/complex/ch9.html

If I were to rewrite that chapter now, it would have a lot of stuff on
probabilistic inference and unification grammars -- richer and better details,
enhanced by the particular 

RE: [agi] Early Apps.

2002-12-28 Thread Gary Miller
Ben Goertzel wrote:

I don't think that a pragmatically-achievable amount of formally-encoded
knowledge is going to be enough to allow a computer system to think
deeply and creatively about any domain -- even a technical domain about
science. What's missing, among other things, is the intricate
interlinking between declarative and procedural knowledge.  When humans
learn a domain, we learn not only facts, we learn techniques for
thinking and problem-solving and experimenting and
information-presentation ... and we learn these in such a way that
they're all mixed up with the facts.

What you're describing is the Expert System approach to AI, closely
related to the common sense approach to AI.
 
...

I agree that as humans we bring a lot of general knowledge with us when
we learn a new domain.  That is why I started off with the general
conversational domain and am now branching into science, philosophy,
mathematics and history.  And of course the AI cannot make all the
connections without being extensively interviewed on a subject and
having a human help clarify its areas of confusion, just as a parent
answers questions for a child or a teacher for a student.  I am not in
fact trying to take the exhaustive, one-domain-at-a-time approach, but
rather to teach it the most commonly known and requested information
first.  My last email just used that description to identify my
thoughts on grounding.  I am hoping that by doing this and repeating
the interviewing process in an iterative development cycle, the bot
will eventually be able to discuss many different subjects at a
somewhat superficial level, much the same as most humans are capable
of.  This is a lot different from the exhaustive definition that Cyc
provides for each concept.

I view what I am doing as distinct from expert systems because I do not
yet use either a backward or forward inference engine to satisfy a limited
number of goal states.  The knowledge base is not in the form of rules
but rather many matched patterns and encoded factoids of knowledge, many
of which are transitory in nature and track the context of the
conversation.  Each pattern may trigger a request for additional
information, like an expert system.  But the bot does not have a
particular goal state in mind other than learning new information, unless
a specific request is made of it by the user.  I also differ from Cyc in
that, realizing the importance of English as a user interface from the
beginning, all internal thoughts and goal states occur as an internal
dialog in English.  This eliminates the requirement to translate an
internal knowledge representation to an external natural language, other
than providing one or more response patterns for specific input patterns.
It also makes it easy to monitor what the bot is learning and whether it
is making proper inferences, because its internal thought process is
displayed in English while in debug mode.  The templates which generate
the responses in some cases do have conditional logic to determine which
output template is the appropriate response, based on the AI's
personality variables and the context of the current conversation.
Variables are also set conditionally to maintain metadata for context.
If the AI references a male in its response, [He] and [Him] get set,
vs. [Her] and [She] if a female is referenced.  [CurrentTopic], [It],
[There] and [They] are all set to maintain backward contextual references.
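
To make that mechanism concrete, here is a minimal sketch of a
template-and-context-variable scheme of the kind described above.  The
variable names ([He], [CurrentTopic], etc.) follow the description in this
email; the pattern syntax and code structure are my own guesses in Python,
not the actual implementation.

import re

# Conversation context: metadata variables set as patterns fire.
context = {"[He]": "", "[Her]": "", "[It]": "", "[CurrentTopic]": ""}

# (input pattern, output template) pairs; templates may reference context
# variables that were set by earlier exchanges in the conversation.
patterns = [
    (r"tell me about (?P<topic>\w+)",
     "[CurrentTopic] is an interesting subject. "
     "What would you like to know about [CurrentTopic]?"),
    (r"what else",
     "We were discussing [CurrentTopic]. "
     "I can say more about [CurrentTopic] if you like."),
]

def respond(user_input):
    for pattern, template in patterns:
        match = re.search(pattern, user_input.lower())
        if match:
            groups = match.groupdict()
            if "topic" in groups:
                # Set backward contextual references for later turns.
                context["[CurrentTopic]"] = groups["topic"]
                context["[It]"] = groups["topic"]
            reply = template
            for var, value in context.items():
                reply = reply.replace(var, value)
            return reply
    return "Tell me more."

print(respond("Tell me about hydrogen"))   # sets [CurrentTopic] = hydrogen
print(respond("What else can you say?"))   # reply refers back to hydrogen

A real version would of course add conditional logic in the templates (for
personality variables, gender of the referent, and so on); the sketch only
shows how matched patterns can both set and consume context variables.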

I was able to find a few references to the Common Sense approach to AI
on Google, and some of the difficulties in achieving it.  And I must
admit I have not yet implemented non-monotonic reasoning or probabilistic
reasoning.  I am not under the illusion that I am necessarily
inventing or implementing anything that has not been conceived of
before.  As Newton said, if I achieve great heights it will be because I
have stood on the shoulders of giants.  I just see the current state of
the art and think that it can be made much better.  I do not actually
know how far I can take it while staying self-funded, but hopefully by
the time my money runs out it will demonstrate enough utility and
potential to be of value to someone.  I think I like the sound of the
Common Sense Approach to AI though.  I can't remember the last time
anyone accused me of having common sense, but I like the sound of it!

I don't think AI lacks sufficient theory, just sufficient execution.
I feel like the Cyc Project's heart was in the right place and the level
of effort was certainly great, but perhaps the purity of their vision
took priority over usability of the end result.  Is any company actually
using Cyc as anything other than a search engine yet?

That being said, other than Cyc I am at a loss to name any serious AI
efforts which are over a few years in duration and have more than 5
man-years' worth of effort (not counting promotional and fundraising).

The Open Source efforts are interesting and have some utility but are

[agi] Thinking may be overrated.

2002-12-28 Thread Kevin Copple
Perhaps thinking is overrated.  It sometimes seems the way progress is made,
and lessons learned, is predominantly by trial and error.  Thomas Edison's
light bulb is a good example, especially since it is the very symbol of an
idea.  From what I know, Edison's contribution was his desire to make the
light bulb, previously invented by others, into a commercially successful
product.  His approach was to try this and try that until he finally
succeeded.

Benjamin Franklin invented the rocking chair.  Why had no one invented it
before?  Surely ancient Chinese, Egyptian, and Sumerian civilizations would
have loved this bit of easy low-tech entertainment.  Perhaps we think a
little too highly of our intellectual ability.  Native Americans did not
discover the three-finger (index, middle, ring) method of archery, even
though they spent dozens of generations developing their archery skills.
The more natural thumb and index finger method reduces the effective range
by a factor of three.  Lucky thing for the Pilgrims I guess.

Random evolution resulted in our fantastic technology-using brains.  No
planned design using calculus or any other type of logic seems to have been
needed.  Nervous systems developed for one purpose randomly morphed to
perform others.  Some of the more complex organisms had evolutionary
advantages that allowed them to propagate.  But evolution largely failed to
take advantage of basic technologies like fire, wheels, and metallurgy.  It
is ironic that we have succeeded in developing a lot of the technology the
evolutionary computer failed to develop, but we are struggling to duplicate
much of the technology it did.

Thinking in humans, much like genetic evolution, seems to involve
predominantly trial and error.  Even the logic we like to use is more
often than not faulty, but it can lead us to try something different.  An
example of popular logic that is invariably faulty is reasoning by analogy.
It is attractive, but always breaks down on close examination.  Still, this
type of reasoning will lead to a trial that may succeed, possibly because of
the attractive similarities, but more likely in spite of them.

When the Wright brothers made the first airplane, they used a lot of
different technologies.  There was no single silver bullet, except for a
determination to accomplish their goal.  Like any technological advancement,
the road to AGI will be paved with a variety of techniques, technologies,
trials, and errors.  This seems doubly true since thinking as we know it is
apparently a hodgepodge of methods.
Catch you all later . . . Kevin C.
