I was just asking Google and Bing questions and I was surprised at how well
they did.  Granted, they could not answer more complicated questions or
refine their searches when I did not know how to express my simple
questions with better-chosen key words, but compared to the search engines
of 10 years ago they are amazing.  In some cases my question was echoed
as a key phrase in the website that was indexed, but that was not always
the case.  So this convinces me that contemporary search engine
technology is not just narrow AI, although it is not general AI either.

http://www.google.com/insidesearch/features/search/knowledge.html

A knowledge graph is pretty much what I was getting at when I mentioned
index branching.  I realize now that "graph" was the better word.  I
would use a knowledge graph whose nodes can contain
distributed 'conceptual information' related to other nodes, or index
information that shows how some group of 'concepts' are related.  I only
used the terms branching and tree to emphasize that shaping how this graph
of interrelated concepts is used might isolate certain information when the
search conditions seemed to merit it.  This would avoid the combinatorial
explosion in some cases.  So each node of the knowledge graph might only
contain index information, but it might contain more than one kind of
index.  Or perhaps there might be some system that governs the use of the
graph, so that different kinds of searches could use different kinds of
methods to search the data.

So I would use a knowledge index graph which could be governed by different
methods of using it to search for information.
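To make the idea concrete, here is a minimal Python sketch of what I mean: nodes that each carry more than one kind of index, and a graph that is passive in itself but is "governed" by whatever search method is handed to it.  All of the class names, relation kinds, and example concepts below are hypothetical illustrations, not a serious design.

```python
from collections import defaultdict

class Node:
    """A node holding 'conceptual information' plus one or more indexes
    relating it to other nodes (the structure here is hypothetical)."""
    def __init__(self, concept):
        self.concept = concept
        # More than one kind of index per node, e.g. 'is_a', 'related'.
        self.indexes = defaultdict(set)   # index kind -> set of concepts

class KnowledgeIndexGraph:
    def __init__(self):
        self.nodes = {}                   # concept name -> Node

    def add(self, concept):
        self.nodes.setdefault(concept, Node(concept))

    def link(self, a, kind, b):
        self.add(a); self.add(b)
        self.nodes[a].indexes[kind].add(b)

    def search(self, start, method):
        """The graph itself is passive; a 'method' governs how it is used."""
        return method(self, start)

# One governing method: follow only one kind of index, which isolates a
# branch of the graph instead of expanding along every relation at once.
def follow_kind(kind, max_depth=3):
    def method(graph, start):
        seen, frontier = {start}, [start]
        for _ in range(max_depth):
            nxt = []
            for name in frontier:
                for other in graph.nodes[name].indexes.get(kind, ()):
                    if other not in seen:
                        seen.add(other); nxt.append(other)
            frontier = nxt
        return seen
    return method

g = KnowledgeIndexGraph()
g.link("dog", "is_a", "mammal")
g.link("mammal", "is_a", "animal")
g.link("dog", "related", "leash")
# Restricted to the 'is_a' index, so 'leash' is never pulled in.
print(g.search("dog", follow_kind("is_a")))
```

A different governing method (following a different index kind, or mixing kinds under some condition) could be passed to the same graph, which is the sense in which different kinds of searches could use different methods on the same data.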

But that doesn't explain *what* would use it.  People use search engines,
but when we go on to imagining how an AGI program would use a knowledge
index graph, we have to conclude either that the algor-unculus is separate
from the knowledge graph or that its actions can be derived from the
knowledge graph.  While the subprogram (the agent that I am imagining
would 'use' the knowledge graph) would be programmed with some default
values, I see it as also being able to learn.  It is this ability, the
ability to truly learn something and use that knowledge as the basis for
judgement, that would make the program act as if it were capable of
understanding.  So the knowledge index graph would include insights about
using the knowledge index graph when making certain kinds of searches, so
that the algor-unculus would be able to recognize that some information
could be translated into (algorithmic) actions that it could take in
trying to 'understand' a problem.
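One way to picture that last step is an agent whose choice of search action is itself looked up in the knowledge it holds, and can be revised.  The sketch below is a toy 'algor-unculus' under loudly stated assumptions: the "meta-knowledge" is just a table mapping problem kinds to named actions, the "facts" are simple triples, and "learning" is nothing more than updating that table.

```python
# A toy 'algor-unculus': an agent whose choice of search action is itself
# stored as knowledge it can consult and revise.  Everything here is a
# hypothetical illustration of the idea, not a serious design.

class Agent:
    def __init__(self):
        # Meta-knowledge: which search action suits which kind of problem.
        # The agent starts with defaults but can revise them (i.e. learn).
        self.strategy = {"taxonomy": "follow_is_a",
                         "association": "follow_related"}
        # Named algorithmic actions the meta-knowledge can be translated into.
        self.actions = {
            "follow_is_a": lambda facts, term: {b for a, k, b in facts
                                                if a == term and k == "is_a"},
            "follow_related": lambda facts, term: {b for a, k, b in facts
                                                   if a == term and k == "related"},
        }

    def answer(self, facts, problem_kind, term):
        # Translate stored meta-knowledge into an algorithmic action.
        action = self.actions[self.strategy[problem_kind]]
        return action(facts, term)

    def learn(self, problem_kind, better_action):
        # 'Learning' here is just revising which action a problem maps to.
        self.strategy[problem_kind] = better_action

facts = [("dog", "is_a", "mammal"), ("dog", "related", "leash")]
agent = Agent()
print(agent.answer(facts, "taxonomy", "dog"))   # the default action
agent.learn("taxonomy", "follow_related")
print(agent.answer(facts, "taxonomy", "dog"))   # the revised action
```

The point of the sketch is only the shape of the mechanism: the insights about how to search are data the agent reads, not code baked into the agent, so in principle they could themselves live in the knowledge index graph.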

Although I believe that the complexity problem is too great for me to
solve, I do believe that I could demonstrate what I am talking about with
a simplistic idea-world, which could stand as evidence that this is a
feasible model (given more computing power).  So this is a statement of an
experimental test that can be evaluated.  Most of us would be able to
recognize whether or not I (or someone else) was able to use these ideas
with a simple data world.
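For what it's worth, even the evaluation criterion can be sketched in a few lines.  The invented facts below stand in for such a simplistic idea-world, and the test anyone could judge is whether a governed search keeps its result focused instead of sprawling across the whole world.

```python
# A simplistic 'idea-world' of the kind proposed as a test: a handful of
# hand-made facts, all invented purely for illustration.
world = [
    ("fire", "causes", "smoke"),
    ("smoke", "indicates", "fire"),
    ("fire", "related", "camping"),
    ("camping", "related", "tents"),
    ("tents", "related", "poles"),
]

def governed_search(facts, term, allowed_kinds):
    """Only follow the kinds of relation the governing method permits."""
    return {b for a, k, b in facts if a == term and k in allowed_kinds}

# Restricting the relation kinds isolates the relevant information;
# widening them lets unrelated associations leak into the answer.
focused = governed_search(world, "fire", {"causes"})
sprawling = governed_search(world, "fire", {"causes", "related"})
print(focused)
print(sprawling)
```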
Jim Bromer



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now