I like Aaron's solution.  

In fact, all the blue boxes in the PAM-P2 architecture diagram
(http://piagetmodeler.tumblr.com) represent areas in the knowledge base, and all
the other boxes represent specialized agent programs serving as "functions" over
the KB.
I think that's a trending pattern these days: agents acting upon a knowledge
base or database.
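That pattern can be sketched minimally in Python. The KB contents and agent names below are invented for illustration, not taken from PAM-P2:

```python
# Minimal sketch of the "agents over a knowledge base" pattern: the KB is a
# shared dict, and each agent is a function that reads from and writes to it.
kb = {"facts": [("sky", "is", "blue"), ("grass", "is", "green")]}

def tagger(kb):
    """Agent: derive a color index from the raw facts."""
    kb["colors"] = {subj: obj for subj, _, obj in kb["facts"]}

def counter(kb):
    """Agent: summarize how many facts are stored."""
    kb["fact_count"] = len(kb["facts"])

# Each agent acts as a "function" over the KB, adding to the shared state.
for agent in (tagger, counter):
    agent(kb)

print(kb["colors"]["sky"])   # blue
print(kb["fact_count"])      # 2
```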
My $0.02
~PM 

Date: Tue, 11 Dec 2012 21:17:04 -0600
Subject: Re: [agi] Internal Representation
From: [email protected]
To: [email protected]

But that doesn't explain *what* would use it.  People use search engines, but
when we try to imagine how an AGI program would use a knowledge index graph, we
have to conclude that either the algor-unculus is separate from the knowledge
graph or that its actions can be derived from the knowledge graph.  While the
subprogram (the agent that I am imagining would 'use' the knowledge graph)
would be programmed with some default values, I see it as also being able to
learn.  It is this ability, the ability to truly learn something and use that
knowledge as the basis for judgement, that would make the program act as if it
were capable of understanding.  So the knowledge index graph would include
insights about using the knowledge index graph when making certain kinds of
searches, so that the algor-unculus would be able to recognize that some
information could be translated into (algorithmic) actions that it could take
in trying to 'understand' a problem.

I would lean more towards the idea of having multiple cooperative agents or 
inference rules, each storing its state not internally, but in the shared 
"knowledge index graph" (a.k.a. semantic net) and being triggered by relevant 
changes made by the others. A set of such agents or rules could be evolved or 
otherwise learned automatically by comparing the nodes/vertices or 
connections/edges they create against those produced from direct observation 
(or other agents/rules) to determine their efficacy at modeling the real world 
consistently. The meta-algorithm which is used to produce, evaluate, modify, 
and cull these agents/rules would be where the intelligence comes from 
(learning), but the agents/rules would collectively make up the actual 
intelligence of the system (competence).
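The cooperative-agents scheme could be sketched roughly like this, blackboard-style: rules keep no internal state, propose edges on the shared graph, and are scored by how well their proposals match observation. The rule, the graph contents, and the scoring here are all made up for illustration:

```python
# Shared "knowledge index graph" as a set of (node, relation, node) edges.
observed = {("socrates", "is", "mortal")}           # edges from direct observation
graph = {("socrates", "isa", "man"), ("man", "implies", "mortal")}

def syllogism_rule(graph):
    """Rule: propose x-is-z whenever x-isa-y and y-implies-z are present."""
    new = set()
    for a, r1, b in graph:
        for c, r2, d in graph:
            if r1 == "isa" and r2 == "implies" and b == c:
                new.add((a, "is", d))
    return new

def score(rule, graph, observed):
    """Efficacy: fraction of a rule's novel proposals confirmed by observation.
    A meta-algorithm could use this to evolve, keep, or cull rules."""
    proposed = rule(graph) - graph
    if not proposed:
        return 0.0
    return len(proposed & observed) / len(proposed)

s = score(syllogism_rule, graph, observed)
print(s)                       # 1.0 -- every proposal matched observation
graph |= syllogism_rule(graph) # the rule's "state" lives in the shared graph
```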


On Tue, Dec 11, 2012 at 2:11 PM, Jim Bromer <[email protected]> wrote:

I was just asking Google and Bing questions and I was surprised at how well
they did.  No, they were not able to answer more complicated questions or
refine their searches when I did not know how to express my simple questions
with more refined keywords, but compared to the search engines of 10 years ago
they are amazing.  In some cases my question was echoed as a key phrase in the
website that was indexed, but that was not always the case.  So this convinces
me that contemporary search engine technology is not just narrow AI, although
it is not general AI either.


A knowledge graph
(http://www.google.com/insidesearch/features/search/knowledge.html) is pretty
much what I was getting at when I mentioned index branching.  I realize now
that "graph" was the better word.  I would use a knowledge graph where the
nodes can contain distributed 'conceptual information' related to other nodes,
or index information that shows how some group of 'concepts' is related.  I
only used the terms branching and tree to emphasize that shaping how this
graph of interrelated concepts is used might isolate certain information if
the search conditions seemed to merit it.  This would avoid the combinatorial
explosion in some cases.  So each node of the knowledge graph might only
contain index information, but a node might contain more than one kind of
index.  Or perhaps there might be some system that governed the use of the
graph, so that different kinds of searches could use different methods to
search the data.
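A node carrying more than one kind of index might look something like the sketch below; the field names and search method are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    concept: str
    # index 1: related concepts, for associative search
    related: set = field(default_factory=set)
    # index 2: category labels, for taxonomic narrowing
    categories: set = field(default_factory=set)

nodes = {
    "dog": Node("dog", related={"bone", "bark"}, categories={"animal", "pet"}),
    "cat": Node("cat", related={"mouse", "purr"}, categories={"animal", "pet"}),
    "car": Node("car", related={"road", "engine"}, categories={"machine"}),
}

def search_by_category(nodes, category):
    """One governing method: narrow by the taxonomic index, isolating the
    relevant nodes instead of scanning every association."""
    return {name for name, n in nodes.items() if category in n.categories}

print(sorted(search_by_category(nodes, "animal")))  # ['cat', 'dog']
```

A different search method could consult the `related` index instead; the point is that the governing system picks which index to use per query.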

So I would use a knowledge index graph which could be governed by different
methods of using it to search for information.  But that doesn't explain
*what* would use it.  People use search engines, but when we try to imagine
how an AGI program would use a knowledge index graph, we have to conclude that
either the algor-unculus is separate from the knowledge graph or that its
actions can be derived from the knowledge graph.  While the subprogram (the
agent that I am imagining would 'use' the knowledge graph) would be programmed
with some default values, I see it as also being able to learn.  It is this
ability, the ability to truly learn something and use that knowledge as the
basis for judgement, that would make the program act as if it were capable of
understanding.  So the knowledge index graph would include insights about
using the knowledge index graph when making certain kinds of searches, so
that the algor-unculus would be able to recognize that some information could
be translated into (algorithmic) actions that it could take in trying to
'understand' a problem.
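The idea that the graph stores insights about using itself could be sketched like this: alongside ordinary facts, the graph holds strategy entries, and the searching agent (the "algor-unculus") reads those entries to pick its method. The strategy vocabulary and dispatch here are entirely invented:

```python
graph = {
    "facts": {"paris": {"capital_of": "france"}},
    # meta-knowledge stored in the graph itself: which search method
    # suits which kind of query
    "strategies": {"capital_of": "direct_lookup"},
}

def direct_lookup(graph, subject, relation):
    """One concrete search method over the fact store."""
    return graph["facts"].get(subject, {}).get(relation)

methods = {"direct_lookup": direct_lookup}

def algor_unculus(graph, subject, relation):
    """Consult the graph's own strategy entries, then act on them --
    translating stored information into an algorithmic action."""
    strategy = graph["strategies"].get(relation, "direct_lookup")
    return methods[strategy](graph, subject, relation)

print(algor_unculus(graph, "paris", "capital_of"))  # france
```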

Although I believe that the complexity problem is too great a problem for me
to solve, I do believe that I could demonstrate what I am talking about with a
simplistic idea-world that could stand as evidence that this is a feasible
model (given more computing power).  So this is a statement of an experimental
test that can be evaluated.  Most of us would be able to recognize whether or
not I (or someone) was able to use these ideas with a simple data world.

Jim Bromer
 



  
    
      

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
