Here is an example of a generic ontology, which is quantum-ready and supports 
single-step mutation. 

It could be "flavoured" for AGI purposes via content. 
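
For anyone who cannot open the WMF attachment, a rough Python sketch of the kind of structure I have in mind follows; the class, the relation names and the mutation rule are illustrative assumptions of mine rather than a transcription of the diagram, and the quantum-ready aspect is not modelled here.

# Hypothetical sketch only: a generic ontology as a set of labelled triples
# whose state changes by exactly one atomic edit per step (single-step mutation).
import random

class Ontology:
    def __init__(self):
        # Triples of (subject, relation, object), e.g. ("Dog", "is_a", "Animal").
        self.triples = set()

    def assert_(self, s, r, o):
        self.triples.add((s, r, o))

    def retract(self, s, r, o):
        self.triples.discard((s, r, o))

    def mutate_once(self, rng=random):
        """Apply exactly one atomic change and return a description of it.

        Single-step mutation here means the ontology never jumps between
        states: every new state differs from the previous one by one triple.
        """
        if self.triples and rng.random() < 0.5:
            victim = rng.choice(sorted(self.triples))
            self.retract(*victim)
            return ("retracted", victim)
        nodes = sorted({t[0] for t in self.triples} | {t[2] for t in self.triples}) or ["Thing"]
        new = (rng.choice(nodes), "related_to", rng.choice(nodes))
        self.assert_(*new)
        return ("asserted", new)

# Example: grow a toy ontology one mutation at a time.
o = Ontology()
o.assert_("Dog", "is_a", "Animal")
for _ in range(3):
    print(o.mutate_once())

Flavouring something like this for AGI would then be a matter of the content loaded into the triples rather than of the structure itself.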


From: a...@listbox.com
To: a...@listbox.com
Subject: RE: [agi] Knowm - Machine Learning Coprocessor
Date: Sun, 15 Mar 2015 22:31:38 +0200




There seems to be a semantic difference here with Cyc's OWL. They do have an 
ontology, but what they publish is more of an ontological taxonomy: the 
classification sub-component of a general ontological component referring to 
Convention. This implies that somewhere there must be a whole ontology, which 
unfortunately they have elected not to share. Given this example, there is no 
way of knowing how the ontology would deal with single-step mutation, if at 
all. Therefore, in its absence, it cannot be considered a candidate AGI 
ontology. 

For AGI, below is more of a systems example of what I am talking about. 
Ontology is the philosophical study of the nature of being, becoming, 
existence, or reality, as well as the basic categories of being and their 
relations. Traditionally listed as a part of the major branch of philosophy 
known as metaphysics, ontology deals with questions concerning what entities 
exist or can be said to exist, and how such entities can be grouped, related 
within a hierarchy, and subdivided according to similarities and differences.

From: a...@listbox.com
To: a...@listbox.com
Subject: RE: [agi] Knowm - Machine Learning Coprocessor
Date: Sun, 15 Mar 2015 12:28:43 -0700




Try this...
http://www.cycfoundation.org/
There are links for the concept browser, etc.
 I think you can download the ontology here:
http://datahub.io/dataset/opencyc
You'll have to figure out how it handles mutation.
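
If it helps, below is a minimal, hypothetical sketch of how one might start poking at an OpenCyc RDF/OWL export with Python's rdflib, assuming the dump has already been downloaded from the datahub page; the filename is a placeholder and the real dump is large, so expect a slow parse.

# Assumes: pip install rdflib, and an OpenCyc RDF/OWL export saved locally.
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("opencyc-latest.owl", format="xml")  # placeholder filename

# Print a few concepts with their human-readable labels.
for subject, _, label in list(g.triples((None, RDFS.label, None)))[:10]:
    print(subject, "->", label)

# Count subclass links as a crude feel for the size of the taxonomy.
query = "SELECT (COUNT(*) AS ?n) WHERE { ?a rdfs:subClassOf ?b }"
for row in g.query(query, initNs={"rdfs": RDFS}):
    print("subClassOf links:", row.n)

Whether anything in there amounts to handling mutation is, as you say, left to figure out.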
Cheers,
Michael.

From: a...@listbox.com
To: a...@listbox.com
Subject: RE: [agi] Knowm - Machine Learning Coprocessor
Date: Sun, 15 Mar 2015 21:00:02 +0200




PM,

Thanks for the heads-up. 

I'd appreciate a link to a logical system's model of the whole ontology. 

How does it handle mutation (in the narrowest sense) as an adaptive construct? 

From: a...@listbox.com
To: a...@listbox.com
Subject: RE: [agi] Knowm - Machine Learning Coprocessor
Date: Sun, 15 Mar 2015 11:03:14 -0700




Why not use Cyc's ontology then?  It's been over 30 years in the making.
~PM

From: a...@listbox.com
To: a...@listbox.com
Subject: RE: [agi] Knowm - Machine Learning Coprocessor
Date: Sun, 15 Mar 2015 19:10:51 +0200




Further to the previous comment on ontology, herewith my thoughts... 

One simply cannot put the cart before the horse, not if the horse is expected 
to pull the cart.

The above returns us to more than just the ontological and complexity 
questions. It seemingly returns us to a specific vision of AGI, reflected as a 
paradigm of reasoning and an associated mindset, from which a natural AGI 
ontology should flow. Suppose such an AGI ontology were viewed as a system, in 
compliance with systems theory. It would then follow that, unless it specified 
an approach for effectively addressing definite AGI research objectives, it 
would ultimately fail. 

Does the previous statement imply there might be more than one ontology that 
would effectively relate to AGI? In this context, yes and no. No, on condition 
that the AGI ontology was specified at an appropriate AGI level (the AGI bar 
everyone is talking about). Yes, on condition that the AGI bar was set high 
enough to embrace emerging and existing ontological value within its system 
boundary. For AGI, I contend this point is of critical value. As such, if an 
AGI ontology were designed at the highest possible AGI level, then its 
followers could rely on it to become the holistic container for whatever form, 
function, association, entanglement, context or content may be invoked within 
its system boundaries. In short, it would prove valid and reliable as a 
scientific basis for governing AGI. 

Therefore, one of the starting points of any ontology should be the 
philosophical sub-model or collective consciousness best suited to a future 
construct of AGI. For example, we might start by returning to the fathers of 
AI and gathering their thoughts on the philosophical approach, and so on. From 
such an eventual collection of entangled thoughts, as a philosophical 
foundation model, would emerge the way (in the sense of a path of 
enlightenment) towards an adaptive AGI model. 

I'm adding the term "adaptive" because we must surely entangle the philosophy 
of adaptiveness within a standard model of AGI. This has to be done from the 
outset, first to prevent followers of the path from losing their AGI way, and 
further to inform all analysis, design and development decisions along the 
way, in every instantiation of the AGI way, in any capacity of a selected AGI 
SDLC.  

In the case of a term such as "adaptive", the AGI ontology would then be 
further developed to reflect adaptiveness, as a meta-model of adaptiveness, as 
inherent, economic value. Further, once the ontology is in place, the principle 
of adaptiveness would naturally become a meta-standard - or meme - for an 
emerging culture of AGI. This sits within the notion of an AGI ontology 
itself, though not conclusively so; it merely presents as a possible scenario.

Assuming such an ontology were completed as a standard AGI model, then when 
applying it to a specific AGI problem of adaptiveness (by following the AGI 
way), the model would systemically invoke the principle of adaptiveness across 
all its contextual components and, in doing so, satisfy the inherent 
meta-requirements, letting the value emerge as a possible solution to the 
problem in a form of functional (reductionist) adaptiveness for AGI.
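
To make that less abstract, here is a very loose, purely illustrative Python reading of what "systemically invoking the principle across all contextual components" could mean; the class names and the feedback mechanism are my own assumptions, not part of any proposed standard.

# Illustrative only: adaptiveness as a meta-requirement that every contextual
# component inside the ontology's system boundary must satisfy, so that
# adapting the whole systemically adapts the parts.
from abc import ABC, abstractmethod

class Adaptive(ABC):
    """Meta-requirement: anything inside the system boundary can adapt."""
    @abstractmethod
    def adapt(self, feedback):
        ...

class ContextualComponent(Adaptive):
    def __init__(self, name):
        self.name = name
        self.state = {}

    def adapt(self, feedback):
        # Functional (reductionist) adaptiveness: fold feedback into local state.
        self.state.update(feedback.get(self.name, {}))

class AGIOntology(Adaptive):
    def __init__(self, components):
        self.components = components

    def adapt(self, feedback):
        # The holistic container invokes the principle across every component.
        for component in self.components:
            component.adapt(feedback)

ontology = AGIOntology([ContextualComponent("form"), ContextualComponent("context")])
ontology.adapt({"form": {"revision": 1}, "context": {"domain": "navigation"}})
print(ontology.components[0].state, ontology.components[1].state)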

I think many of the circular debates around coding and languages, although 
most useful in their own right, and intelligent and relevant to the functional 
application of adaptiveness, would unfortunately all meet their respective 
culs-de-sac. I'm saying this purely as an argument in favour of the definition 
and adoption of an appropriately specified AGI ontology, as a way of AGI.

 



From: a...@listbox.com
Date: Sun, 15 Mar 2015 16:19:51 +0100
Subject: Re: [agi] Knowm - Machine Learning Coprocessor
To: a...@listbox.com


On Sat, Mar 14, 2015 at 10:02 PM, Steve Richfield via AGI <a...@listbox.com> 
wrote:
IMHO the future of ALL AGI approaches lies in the careful design of the APIs 
and other interfaces

As much as I like to dabble in all the freely available designs and codebases, 
and with the obvious "commoditization" that widely available APIs will bring, I 
don't see how this would solve the ontological and complexity barriers of AGI. 
More specifically, in my own designs, maintaining a well-grounded ontology at 
all times is a sine qua non, as are recursive agent interactions ("game trees" 
if you will), with agents of course well grounded and maintained across the 
different scenarios (an inevitable combinatorial explosion with probabilistic 
agents). No, I do not expect any API magic! But yes, since no "learning" 
problem can be assumed to be of one type or another, a generic hypothesis 
engine or, should that prove too tricky, a large collection of machine 
learning APIs and automation is also a vital AGI component.
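
As a toy illustration of why I do not expect API magic here, consider how quickly recursive agent interactions blow up even in a trivial Python sketch (my own toy example, not a description of my actual designs):

# Recursive interactions between two agents expanded as a game tree.
# With m joint moves per ply, the scenario count grows as m ** depth:
# the combinatorial explosion mentioned above.
from itertools import product

AGENT_MOVES = {"A": ["cooperate", "defect"], "B": ["cooperate", "defect", "wait"]}

def expand(history, depth):
    """Recursively enumerate joint-move scenarios down to the given depth."""
    if depth == 0:
        return [history]
    scenarios = []
    for joint_move in product(*AGENT_MOVES.values()):
        scenarios.extend(expand(history + [joint_move], depth - 1))
    return scenarios

for depth in range(1, 5):
    print(depth, "plies ->", len(expand([], depth)), "scenarios")
# 1 -> 6, 2 -> 36, 3 -> 216, 4 -> 1296 scenarios, before any probabilities
# are even attached to the agents' moves.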

AT




  
    
      
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com

Attachment: STEP10.wmf
Description: windows/metafile
