Thanks for the reference to the Inferential Theory of Learning. I found
something on the Internet: http://www.mli.gmu.edu/papers/91-95/MSL4-ITL.pdf
I am glad to see that someone has been interested in looking at learning
as the ability to see how different kinds of inference may lead to useful
knowledge. I have written (in these groups) about how I believe that
conceptual projection and the integration of different kinds of knowledge
are very important to AGI. These can reasonably be considered different
kinds of inference, similar to Michalski's definition.

My feeling is that the emphasis on the formal, or general, processes that
the author likes to rely on may be a misrepresentation error. Some of his
ideas are good, and the examples are interesting. However, in detailing
some fundamental abstractions (programming abstractions) he is in effect
declaring them to be special, fundamental abstraction-to-generalization
methods. Maybe I should say it is a fundamental attribution error.

The problem is that the combination of such methods will certainly, and
any individual application will probably, lead to contradictions of the
theory. To avoid this, one would have to create fundamental application
definitions that assert the kind of rule being applied to an actual
problem.
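To make the idea of an "application definition" concrete, here is a minimal sketch. Everything in it is hypothetical and illustrative; it is not part of Michalski's theory, only one way of asserting the kind of rule being applied so that a later contradiction can be traced to its source.

```python
# Hypothetical sketch: each rule is tagged with the kind of inference
# it represents, and every application records that kind explicitly.
# All names and rule contents here are illustrative only.

rules = [
    {"rule": "birds fly", "kind": "generalization from instances"},
    {"rule": "penguins do not fly", "kind": "exception to a generalization"},
]

def apply_rule(rule, problem):
    # The application records which kind of rule was used, so that a
    # contradiction found later can be traced back to its source.
    return {"problem": problem, "used": rule["rule"], "kind": rule["kind"]}

result = apply_rule(rules[1], "can a penguin fly?")
print(result["kind"])  # exception to a generalization
```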

In other words, the attempt to rely on a fundamental abstraction or general
rule won't work. I realize that Michalski is aware of this, at least at
some level, but in his assertion that there is some kind of competency
test (I forget what the test was based on), he is implying that false
assertions can be eliminated. They can't be.

Sure, I will be using some kind of logic in my model. But the underlying
principles in my model do not consist of an abstraction of logic, but
simply an abstraction of construction that will describe, to some extent,
how the relations of a concept were formed.
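A minimal sketch of what I mean by an abstraction of construction, with all names hypothetical: a concept carries not just its relations but a record of the construction step that produced each one.

```python
# Hypothetical sketch: a concept that records how each of its relations
# was formed, rather than deriving them from a single abstract logic.
# All names here are illustrative only.

class Concept:
    def __init__(self, name):
        self.name = name
        self.relations = []  # list of (relation, other, construction note)

    def relate(self, relation, other, formed_by):
        # Store the relation together with a note describing the
        # construction step that produced it.
        self.relations.append((relation, other, formed_by))

    def history(self):
        # Describe, to some extent, how the relations were formed.
        return [f"{self.name} {rel} {other} (formed by: {how})"
                for rel, other, how in self.relations]

bird = Concept("bird")
bird.relate("has-part", "wings", "observed in examples")
bird.relate("can", "fly", "generalized from observed instances")
for line in bird.history():
    print(line)
```

The point of the sketch is only that the construction note travels with each relation, so the model can later ask how a relation came to be, not just whether it holds.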

Jim Bromer


On Sun, May 18, 2014 at 1:15 PM, Piaget Modeler via AGI <[email protected]> wrote:

> You may want to read *The Inferential Theory of Learning* by Ryszard
> Michalski.
>
> He and Gheorghe Tecuci of GMU did some very good work in Reasoning.
>
> It may be helpful in your thinking about this topic.
>
> ~PM
>
> ------------------------------
> Date: Sun, 18 May 2014 12:51:40 -0400
> Subject: [agi] The Parts Knowledge Can be Used to Make Many Generalizations
> From: [email protected]
> To: [email protected]
>
>
> In order to make detailed insights feasible, they need to be generalized.
> I bet that almost everyone who reads this in 2014 will at first
> misunderstand what I mean. I don't mean that many pieces of knowledge
> should be generalized into one idea, but that the parts of many individual
> pieces of knowledge can be generalized into many individualized
> generalizations. I am sure that this is being implemented in some NLP, but
> only at a very rudimentary level.
>
> The possible abstractions and combinations are uncountable. This process
> would then have the capacity for immense individualization. But it is not
> as simple as it might seem, because computer programs that can keep track
> of, refer to, and wisely use an immense number of possible combinations
> are not simple.
> Jim Bromer



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
