At this point, I'm for less Gedankenexperiment and more coding.
Cheers,
~PM

Date: Mon, 19 May 2014 15:51:44 -0400
Subject: Re: [agi] The Parts Knowledge Can be Used to Make Many Generalizations
From: [email protected]
To: [email protected]; [email protected]

If George is right, his program will (eventually) work. Refining this, you get a 
slightly better statement: if and only if George is right will his program 
eventually work. But which is the better statement? They both have some merit. 
It is unlikely that George will get his program to work if he is wrong, but it 
is a possibility, so both statements have some value. In fact there are quite a 
few useful statements that could touch on those conceptual objects.
 Now look at this: if Jim is right, George will not be able to get his project to 
work. Now suppose that some time goes by and George, against his own judgment, 
figures out what Jim is saying and begins incorporating some of the necessary 
sense of reality that would (according to Jim) make his project feasible. Now 
all the logical statements listed previously are irrelevant. They can't be 
validated - even if George gets his project to work - because the presumptions 
have changed slightly.
 You can attempt to assign 'blame' for the failure of the logical statements to 
retain their relevance: new information was introduced to the system, and that 
new information made the older logical statements irrelevant. While 
understanding why a theory failed is an important part of learning, the reason 
the theories failed to sustain their description-of-reality value does not 
matter for the purpose of invalidating elaborate logical systems of thought. 
All that matters is the recognition that they do not even have local 
consistency - unless you define local consistency as the assertion that the 
logical system applies to the world case. Once an application error becomes 
apparent or, as in this example, comes to pass, the logical system remains 
useful only in the sense that it might one day be used to analyze some other 
situation.
 So the sense that the methods would be locally consistent even when they 
aren't globally consistent isn't really reasonable - unless you define a local 
system as an assertion of its premises together with the premises of its 
application.
 Jim Bromer


On Mon, May 19, 2014 at 3:13 PM, Jim Bromer <[email protected]> wrote:

OK, that's cool, but the logical framework that Michalski is talking about is a 
representation system, not a true logical system. It can be used to represent 
some interesting relationships of thought-stuff. 

Jim Bromer


On Mon, May 19, 2014 at 3:00 PM, Piaget Modeler via AGI <[email protected]> 
wrote:





The forward and backward confidence parameters are adjusted by the type of 
knowledge transmutation performed over the knowledge base.
My opinion is that global consistency is not important for an AGI system.  

Cyc handles consistency by using microtheories: collections of propositions 
and inference rules. Each microtheory is consistent, but taken altogether, 
there will be global inconsistencies across microtheories.  

In PAM-P2 we take a similar approach. We have viewpoints, which are similar to 
Lenat's microtheories, but we also don't really care if premises are 
inconsistent. We embrace inconsistency and rely more on activation to sort 
things out. (PAM-P2 is still in process, so we'll let you know how things turn 
out, and whether or not we modify our position on this point.)
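To make the microtheory idea concrete, here is a minimal sketch in Python. This is my own toy illustration, not Cyc's or PAM-P2's actual representation: the class name, the string-literal encoding of propositions, and the consistency checks are all assumptions. The point it demonstrates is only the structural one above: each microtheory can be locally consistent while their union is globally inconsistent.

```python
# Toy sketch: a "microtheory" is a bag of propositions (plain strings,
# with negation written as a "not " prefix). Consistency is enforced
# only locally, within one microtheory - never across the whole base.

class Microtheory:
    def __init__(self, name):
        self.name = name
        self.props = set()

    def assert_prop(self, literal):
        # Reject only *local* contradictions inside this microtheory.
        negation = literal[4:] if literal.startswith("not ") else "not " + literal
        if negation in self.props:
            raise ValueError(f"{self.name}: locally inconsistent: {literal}")
        self.props.add(literal)

physics = Microtheory("naive-physics")
physics.assert_prop("the sun rises in the east")

astronomy = Microtheory("astronomy")
astronomy.assert_prop("not the sun rises in the east")  # the earth rotates

# Each microtheory accepted its assertion, but the union contradicts itself:
union = physics.props | astronomy.props
globally_consistent = not any(
    ("not " + p) in union for p in union if not p.startswith("not ")
)
```

Run as written, `globally_consistent` comes out false even though neither microtheory ever raised a local-inconsistency error, which is the property PM is describing.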
But I think Michalski's introduction of merit parameters and probability into 
his logical framework has merit, no pun intended. 
~PM
Date: Mon, 19 May 2014 14:42:31 -0400
Subject: Re: [agi] The Parts Knowledge Can be Used to Make Many Generalizations
From: [email protected]


To: [email protected]

But what was the basis for the forward and backward confidence? The problem is 
that this is still a logically inconsistent system posing as a logically 
consistent system. I can't create logically consistent AGI systems, but maybe I 
am just more honest about it.



The consequence of this is that his logical system is merely a representational 
system. I've known guys who tried to talk about ideas and then thought they 
could emphasize them with pseudo-formalization (or maybe 
partial-formalization).  Nothing wrong with that - unless they thought that 
they were actually formalizing their various conjectures. But they were only 
simplifying the representation of very narrow ideas by using formal symbols and 
stuff.



So the formalization for these kinds of things is not a truly consistent 
abstract system that can be used clearly as the programmatic basis for 
computer programs. It is a notation for an informal system that has limited 
applications. Nothing wrong with that, but let's be honest about it.


Jim Bromer


On Mon, May 19, 2014 at 1:09 PM, Piaget Modeler <[email protected]> 
wrote:







Michalski injected probability into his system with the notion of merit 
parameters, for forward and backward confidence in statements, implying that a 
purely logical system might be insufficient to handle real world phenomena.
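A rough sketch of what rules with forward and backward confidences might look like, in Python. The field names, the multiplicative combination, and the smoke/fire example are my assumptions for illustration, not Michalski's actual formulation of merit parameters:

```python
# Hypothetical sketch: a rule carries a forward confidence (how strongly
# the premise supports the conclusion) and a backward confidence (how
# strongly observing the conclusion suggests the premise).

from dataclasses import dataclass

@dataclass
class Rule:
    premise: str
    conclusion: str
    forward: float   # confidence in premise => conclusion
    backward: float  # confidence in conclusion => premise

def deduce(rule, belief_in_premise):
    # Forward chaining: belief in the conclusion is weakened by the
    # rule's forward confidence.
    return belief_in_premise * rule.forward

def abduce(rule, belief_in_conclusion):
    # Backward (abductive) step uses the backward confidence instead.
    return belief_in_conclusion * rule.backward

smoke = Rule("smoke", "fire", forward=0.9, backward=0.7)
print(deduce(smoke, 1.0))   # 0.9
print(abduce(smoke, 1.0))   # 0.7
```

The asymmetry between the two parameters is the point: "smoke implies fire" and "fire suggests smoke" need not carry the same weight, which is exactly where a purely logical (bidirectionally certain) system falls short.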



~PM. 
Date: Mon, 19 May 2014 12:25:22 -0400
Subject: Re: [agi] The Parts Knowledge Can be Used to Make Many Generalizations



From: [email protected]
To: [email protected]; [email protected]




Since false assertions can be mixed in with good assertions, the potential 
complexity of an idea (a reference) cannot be neatly or easily categorized. 
Michalski did mention that some logical relations are truth-preserving and some 
are not, but the whole idea of an underlying logical system is that some 
important relations may be derived based on the abstractions. (Just as new 
mathematical theories are discovered.) The important abstract relations would 
typically be discovered by a close study of the applications of these ideas to 
real-world situations (or to the situations that the mind can consider). But 
since references will contain hidden combinations of other references, since 
false assertions will tend to be embedded along with good assertions, and since 
the reasons that would support the insights would also be based on similar 
combinations of information, my conclusion is that the potential benefit the 
elaborated logical system might provide may well be compromised, and even 
fatally flawed, by inappropriate assertions and assumptions.




So while I would use logic in arbitrarily constrained systems, I feel strongly 
that the underlying 'logic' of an AGI system has to be composed of descriptions 
of how the relationships of the references are constructed. In other words, it 
is a dynamic descriptive system that must tend to limit the assumption that the 
systems are based on broad underlying generalizations. The generalizations that 
I have in mind will tend to be specialized (even though I do suppose that 
similar methods can be used with them when the methods are fit to the 
application through trial and error).




I really don't have a solid idea what verification will consist of, but I am 
supposing that systems of insight that can lead to reliable interactions will 
have some value.
Jim Bromer


On Mon, May 19, 2014 at 8:59 AM, Jim Bromer <[email protected]> wrote:




Thanks for the reference to Inferential Theories of Learning. I found something 
on the Internet. http://www.mli.gmu.edu/papers/91-95/MSL4-ITL.pdf








I am glad to see that someone has been interested in looking at learning as the 
ability to see how different kinds of inferences may lead to useful knowledge. 
I have written (in these groups) about how I believe that conceptual projection 
and the integration of different kinds of knowledge is very important to AGI. 
So these can reasonably be considered as different kinds of inferences similar 
to Michalski's definition.







 My feeling is that the emphasis on the formal - or general - processes that the 
author likes to rely on may be a misrepresentation error. Some of his ideas are 
good, and the examples are interesting. However, in detailing some fundamental 
abstractions (programming abstractions) he is in effect declaring these as 
special fundamental abstraction-to-generalization methods. Maybe I should say 
it is a fundamental attribution error.





The problem is that the combination will certainly, and the individual 
application will probably, lead to contradictions of the theory. To avoid this, 
one would have to create fundamental application definitions which assert the 
kind of rule that is being applied to an actual problem.





In other words, the attempt to rely on a fundamental abstraction or general 
rule won't work. I realize that Michalski is aware of this, at least at some 
level, but in his assertion that there is some kind of competency test, (I 
forget what the test was based on) he is implying that false assertions can be 
eliminated. They can't be.





Sure, I will be using some kind of logic in my model. But the underlying 
principles in my model do not consist of an abstraction of logic but simply 
an abstraction of construction that will describe, to some extent, how the 
relations of a concept were formed.







 Jim Bromer


On Sun, May 18, 2014 at 1:15 PM, Piaget Modeler via AGI <[email protected]> 
wrote:











You may want to read The Inferential Theory of Learning by Ryszard Michalski. 
He and Gheorghe Tecuci of GMU did some very good work in Reasoning.
It may be helpful in your thinking about this topic. 








~PM

Date: Sun, 18 May 2014 12:51:40 -0400
Subject: [agi] The Parts Knowledge Can be Used to Make Many Generalizations
From: [email protected]








To: [email protected]

In order to make detailed insights feasible, they need to be generalized. I bet 
that almost everyone who reads this in 2014 will at first misunderstand what I 
mean. I don't mean that many pieces of knowledge should be generalized into 
one idea, but that the parts of many individual pieces of knowledge can be 
generalized into many individualized generalizations. I am sure that this is 
being implemented in some NLP systems, but only at a very rudimentary level.








 The possible abstractions and combinations are uncountable. This process then 
would have the capacity for immense individualization. But it is not as simple 
as it might seem because computer programs that can keep track of, refer to and 
wisely use an immense number of possible combinations are not simple.
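One toy way to picture "parts of a piece of knowledge yielding many generalizations" is below; the tuple representation and the wildcard abstraction are my own illustrative assumptions, not Jim's proposal. Each fact produces a whole family of generalizations rather than being folded into one summary idea, and the count grows combinatorially with the number of parts, which is the bookkeeping problem the paragraph above points at.

```python
# Toy sketch: a piece of knowledge is a tuple of parts, and abstracting
# any nonempty subset of its parts to a wildcard '?' yields a distinct
# generalization - so one fact gives many individualized generalizations.

from itertools import combinations

def generalize(fact):
    """Yield every generalization of `fact` obtained by abstracting
    one or more of its parts to the wildcard '?'."""
    n = len(fact)
    for k in range(1, n + 1):
        for slots in combinations(range(n), k):
            yield tuple("?" if i in slots else part
                        for i, part in enumerate(fact))

fact = ("robin", "has", "wings")
gens = list(generalize(fact))
# A three-part fact yields 2**3 - 1 = 7 generalizations; a program that
# must track, index, and reuse these across many facts quickly faces an
# immense space of combinations.
```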








Jim Bromer




  
    
      
      AGI | Archives | Modify Your Subscription





      
    
  

                                          







  
    
      





      
    
  






                                          

                                          




  
    
      


      
    
  






                                          


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
