Re: [agi] Seeking Is-a Functionality

2010-07-20 Thread Steve Richfield
Arthur,

Your call for an AGI roadmap is well targeted. I suspect that others here
have their own, somewhat different roadmaps. These should all be merged,
like decks of cards being shuffled together, maybe with percentages
attached, so that people could announce that, say, "I am 31% of the way to
having an AGI." At least this would provide SOME metric for progress.
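
For instance (with milestone names and weights invented purely for
illustration, not taken from anyone's actual roadmap), such a
merged-checklist metric might be computed like this:

    # Hypothetical merged-roadmap metric; every milestone name and
    # weight below is invented for illustration.
    milestones = {
        "natural-language parsing": (0.20, True),   # (weight, done?)
        "knowledge representation": (0.15, True),
        "self-referential thought": (0.25, False),
        "inference":                (0.25, False),
        "goal-directed behavior":   (0.15, False),
    }

    done  = sum(w for w, finished in milestones.values() if finished)
    total = sum(w for w, _ in milestones.values())
    print("I am %.0f%% of the way to having an AGI." % (100 * done / total))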

This would apparently place Ben in an awkward position, because on the one
hand he is somewhat resistant to precisely defining his efforts, while on
the other hand he desperately needs to be able to demonstrate some progress
as he works toward something that is useful/salable.

"Is a" is too vague. E.g., in "A robot is a machine," it is unclear whether
robots and machines are simply two different words for the same thing, or
whether robots are members of the class known as machines. There are also
other, more perverse potential meanings, e.g. that a single robot is a
machine, but that multiple robots are something different, e.g. a junk pile.
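
To make the distinction concrete, here is a minimal Python sketch
(my own illustration, not code from Dr. Eliza or MindForth) that
stores the three readings as separate relations, since the bare
words "is a" never say which relation is meant:

    # Three relations that English "is a" can express; all names
    # and facts are invented for illustration.
    synonyms    = {("robot", "automaton")}    # two words, one concept
    instance_of = {("Andru", "robot")}        # individual -> class
    subclass_of = {("robot", "machine"),      # class -> superclass
                   ("robin", "bird")}

    def is_a(x, y):
        """True if x 'is a' y under any of the three readings."""
        if (x, y) in synonyms or (y, x) in synonyms:
            return True
        # Start from x plus any classes x is an instance of, then
        # follow subclass links upward until y appears or we run out.
        frontier = {x} | {c for i, c in instance_of if i == x}
        seen = set()
        while frontier:
            if y in frontier:
                return True
            seen |= frontier
            frontier = {b for a, b in subclass_of if a in frontier} - seen
        return False

    print(is_a("Andru", "machine"))    # True: instance + subclass chain
    print(is_a("robot", "automaton"))  # True: synonym reading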

In Dr. Eliza, I (attempt to) deal with ambiguous statements by having the
final parser demand an unambiguous statement, and by using my "idiom
resolver" to recognize common ambiguous statements and fill in the gaps
with clearer words. Hence, simple unambiguous statements and common gapping
work, but less common gapping fails, as do complex statements that can't be
split into two or more simple statements.
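
In outline, the approach looks something like the following Python
sketch (the pattern and its rewording are invented; the real
Dr. Eliza differs in detail):

    # Two-stage disambiguation: an idiom table rewrites common
    # ambiguous phrasings, and the final parser rejects whatever
    # remains ambiguous rather than guessing.
    import re

    IDIOMS = [
        (re.compile(r"\ba (\w+) is a (\w+)\b", re.IGNORECASE),
         r"every \1 belongs to the class of \2s"),
    ]

    def resolve_idioms(statement):
        for pattern, replacement in IDIOMS:
            statement = pattern.sub(replacement, statement)
        return statement

    def final_parse(statement):
        # Demand an unambiguous statement; fail loudly otherwise.
        if " is a " in statement:
            raise ValueError("still ambiguous: " + statement)
        return statement

    print(final_parse(resolve_idioms("A robot is a machine")))
    # -> every robot belongs to the class of machines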

I suspect that you may be heading toward the common brick wall of paradigm
limitation, where you initially adopt an oversimplified paradigm to get
something to work, and then run into the limitations of that oversimplified
paradigm. For example, Dr. Eliza is up against its own paradigm limitations
that we have discussed here. Hence, it may be time for some paradigm
overhaul if your efforts are to continue smoothly ahead.

I hope this helps.

Steve
=
On Tue, Jul 20, 2010 at 7:20 AM, A. T. Murray menti...@scn.org wrote:

 Tues.20.JUL.2010 -- Seeking Is-a Functionality

 Recently our overall goal in coding MindForth
 has been to build up an ability for the AI to
 engage in self-referential thought. In fact,
 SelfReferentialThought is the Milestone
 next to be achieved on the RoadMap of the
 Google Code MindForth project. However, we are
 jumping ahead a little when we allow ourselves
 to take up the enticing challenge of coding
 Is-a functionality when we have work left over
 to perform on fleshing out question-word queries
 and pronominal gender assignments. Such tasks
 are the loathsome scutwork of coding an AI Mind,
 so we reinvigorate our sense of AI ambition by
 breaking new ground and by leaving old ground to
 be conquered more thoroughly as time goes by.

 We simply want our budding AI mind to think
 thoughts like the following.

 A robin is a bird.
 Birds have wings.

 Andru is a robot.
 A robot is a machine.

 We are not aiming directly at inference or
 logical thinking here. We want rather to
 increase the scope of self-referential AI
 conversations, so that the AI can discuss
 classes and categories of entities in the
 world. If people ask the AI what it is,
 and it responds that it is a robot and
 that a robot is a machine, we want the
 conversation to flow unimpeded and
 naturally in any direction that occurs
 to man or machine.
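
 Nothing elaborate is needed for such talk. Even
 a flat store of class links and property links
 supports it, as in this minimal Python sketch
 (purely illustrative; our actual code is Forth):

     # Flat class-membership and property store;
     # invented for illustration only.
     isa  = {"Andru": "robot", "robot": "machine", "robin": "bird"}
     have = {"bird": "wings"}

     def noun_phrase(word):
         # Crude: capitalized proper names take no article.
         return word if word[0].isupper() else "a " + word

     def what_is(thing):
         """Answer 'what is X?' by following is-a links upward."""
         answers = []
         while thing in isa:
             parent = isa[thing]
             answers.append("%s is %s." % (noun_phrase(thing).capitalize(),
                                           noun_phrase(parent)))
             if parent in have:
                 answers.append("%ss have %s." % (parent.capitalize(),
                                                  have[parent]))
             thing = parent
         return " ".join(answers) or "I do not know what %s is." % thing

     print(what_is("Andru"))  # Andru is a robot. A robot is a machine.
     print(what_is("robin"))  # A robin is a bird. Birds have wings.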

 We have already built in the underlying
 capabilities such as the usage of articles
 like "a" or "the", and the usage of verbs
 of being. Teaching the AI how to use "am"
 or "is" or "are" was a major problem that
 we worried about solving during quite a
 few years of anticipation of encountering
 an impassable or at least difficult roadblock
 on our AGI Roadmap. Now we regard introducing
 Is-a functionality not so much as an
 insurmountable ordeal as an enjoyable
 challenge that will vastly expand the
 self-referential wherewithal of the
 incipient AI.
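
 By way of illustration (again Python, not our
 actual Forth), that agreement problem reduces
 to a lookup on person and number:

     # Person-and-number lookup for the verb of being; a sketch only.
     COPULA = {
         (1, "sg"): "am",   (1, "pl"): "are",   # I am / we are
         (2, "sg"): "are",  (2, "pl"): "are",   # you are
         (3, "sg"): "is",   (3, "pl"): "are",   # it is / they are
     }

     def be(person, number):
         return COPULA[(person, number)]

     print("I %s a robot." % be(1, "sg"))           # I am a robot.
     print("A robot %s a machine." % be(3, "sg"))   # A robot is a machine.
     print("Robots %s machines." % be(3, "pl"))     # Robots are machines.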

 Arthur
 --
 http://robots.net/person/AI4U/diary/22.html


Re: [agi] Seeking Is-a Functionality

2010-07-20 Thread Jan Klauck
Steve Richfield wrote

 maybe with percentages
 attached, so that people could announce that, say, "I am 31% of the
 way to having an AGI."

Not useful. AGI is still a hypothetical state, and its true composition
remains unknown. At best you can measure how much of an AGI plan is
completed, but that's not necessarily the same as actually having an AGI.

Of course, you could use a human brain as an upper bound, but that's
still questionable because, as I see it, most AGI designs aren't
intended to be isomorphic to the brain, and I don't know whether the
brain is understood well enough today to serve as an invariant measure.

cu Jan

