I need to understand your design better to talk about the details, and
the discussion is getting too technical for this list. I will hold my
doubts and wait for you to go further.

Pei

On 1/27/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:



On 1/25/07, Pei Wang <[EMAIL PROTECTED]> wrote:

> > Suppose I have a set of *deductive* facts/rules in FOPL.  You can
> > actually use this data in your AGI to support other forms of inference
> > such as induction and abduction.  In this sense the facts/rules
> > collection does not dictate the form of inference engine we use.
>
> No, you cannot do that without twisting some definitions. You are
> right that now many people define "induction" and "abduction" in the
> language of FOPL, but what they actually do is to omit important
> aspects in the process, such as uncertainty. To me that is cheating. I
> addressed this issue in
> http://nars.wang.googlepages.com/wang.syllogism.ps . In
> http://www.springer.com/west/home/computer/artificial?SGWID=4-147-22-173659733-0
> I explained in detail (especially in Ch. 9 and 10) why the language of
> FOPL is improper for AI.


OK, there is some confusion here too.  You're talking about "standard" FOPL,
a version that is described in textbooks of mathematical logic.  My logic is
"based on" standard FOPL, but there are some significant differences.  First
of all, it can be extended with uncertainty values (eg according to your
theory of <f,c>).  Secondly, it does not use Frege-style quantifiers.
Thirdly, it does not make a strict distinction between predicates and
arguments (eg I can say Loves(john,mary) and Is_Blind(love)).
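To make the third difference concrete, here is a minimal sketch (hypothetical names and data structures; the <f, c> values shown are placeholders, not values from either theory) of a statement representation in which predicate symbols and argument terms share one namespace, so a relation like "love" can itself appear in an argument position:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Statement:
    """An atomic statement: a predicate applied to terms.

    Predicates and terms share one namespace, so a predicate
    symbol may itself appear in an argument position -- something
    first-order syntax forbids.
    """
    predicate: str
    args: Tuple[str, ...]
    f: float = 1.0  # frequency component of the <f, c> uncertainty value
    c: float = 0.9  # confidence component

# Loves(john, mary) with an attached <f, c> uncertainty value
s1 = Statement("Loves", ("john", "mary"), f=1.0, c=0.9)

# Is_Blind(love): the relation "love" used as an ordinary argument
s2 = Statement("Is_Blind", ("love",), f=0.8, c=0.7)
```

In standard FOPL the second statement would require second-order machinery; here it is just another first-class term.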

Given these differences, the 3 objections to FOPL in your book may be
answered.  In the end, your NARS logic and my logic may be very similar in
both expressivity and semantics.  If you're interested we may consider a
collaboration or merging of theories.

One issue I have not yet formed an opinion about is the universality of the
inheritance relation in NARS.  We can discuss that later...
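For readers following along, a toy sketch of syllogistic deduction over NARS-style inheritance statements; the truth function below is the commonly stated NARS deduction rule, but treat the exact formula as my reading rather than an authoritative definition:

```python
def deduction(f1, c1, f2, c2):
    """NARS-style deduction truth function (as commonly stated):
    from S -> M <f1, c1> and M -> P <f2, c2>, derive S -> P <f, c>."""
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return f, c

# raven -> bird <1.0, 0.9>  and  bird -> flyer <0.9, 0.9>
f, c = deduction(1.0, 0.9, 0.9, 0.9)
# derived: raven -> flyer, with lower confidence than either premise
```

Note how the confidence of the conclusion is strictly below that of either premise, which is the kind of uncertainty propagation that untyped FOPL rule application omits.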

> > That's why my top priority is to build an inference engine for
> > deduction.  Inductive learning will be added later in the form of
> > data mining, which is very computation-intensive.
>
> I'm afraid it is not going to work --- many people have tried to
> extend FOPL to cover a wider range, and run into all kinds of
> problems. To restart from scratch is actually easier than to maintain
> consistency among many ad hoc patches and hacks.
>
> To me, one of the biggest mistakes of mainstream AI is to treat
> "learning" as independent of "working", something that can be added in
> later. Seeing AI in that way versus putting learning into the
> foundation will produce very different systems. In NARS, "learning"
> and "reasoning", as well as some other "cognitive facilities", are
> different aspects of the same underlying process, and cannot be
> handled separately.


Inductive learning under FOPL is a vast topic, and is still under
development (eg the field of inductive logic programming).  It is still too
early to say that it won't work.  Also, many methods in "data mining" are
forms of inductive learning, and I believe these techniques can be borrowed
for AGI.  I guess Ben uses pattern mining techniques in Novamente too.

There is not a clear reason why "reasoning" and "learning" must be unified.
Can you elaborate on the advantages of such an approach?

The "learning" problem in AGI is difficult partly because GOFAI knowledge
representation schemes are usually very cumbersome (with frames,
microtheories, modal operators for temporal / epistemological aspects, etc).
My logic is very minimalistic, almost structureless.  This makes learning
easier, since learning is a search through the hypothesis space, and a
simpler representation yields a smaller space to search.
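The point about representation size can be made with back-of-the-envelope arithmetic (illustrative only; the rule template and counts are hypothetical): even for rules as simple as one body predicate implying one head predicate, the candidate space grows multiplicatively with the vocabulary, so every extra frame, operator, or modality the representation admits inflates the search:

```python
def num_candidate_rules(num_predicates):
    """Count candidate rules of the form  body(X, Y) => head(X, Y):
    one head and one body predicate, chosen independently."""
    return num_predicates * num_predicates

# minimal vocabulary vs. a frame/modal-heavy one
small = num_candidate_rules(10)
large = num_candidate_rules(100)
```

A hundredfold increase in symbols here means a ten-thousandfold increase in this (already simplest possible) rule space.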


YKY
 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
