Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-21 Thread Vladimir Nesov
Pei,

Sorry for the delayed reply. I answer point-by-point below.

On 10/11/07, Pei Wang [EMAIL PROTECTED] wrote:


  Basic rule for evidence-based
  estimation of implication in NARS seems to be roughly along the lines
  of term construction in my framework (I think there's much freedom in
  its choice; do you have other variants of it, or a justification for the
  current choice over other possibilities that is not concerned with
  applicability to the derivation of rules for abduction/induction/etc.?),

 There is some justification behind the design of every inference rule
 (and its truth value function), not only abduction/induction. You can
 find most in the book, and many are also in my other publications.


I meant the basic rule of evidence measuring that considers extension and
intension sets. There certainly is a justification for it, but there
obviously are alternatives, so my question is about the choice of this
extension/intension measuring over other options.


 but I'm not sure about how you handle variations of structures (that
  is, how the system represents two structures which are similar in
  some sense and how it extracts the common part from them). It's
  difficult to see from the basic rules if it's not addressed directly.

 The basic rules (deduction/abduction/induction/revision) ignore the
 internal structure of compound terms. There are special inference
 rules that handle the composition/decomposition of various compound
 structures. Again, they are mostly given in the book.


I didn't mean the structure of compound terms, but the structure of
experience representation, which consists of a set of individual statements
and terms that describe that experience.


 For
  example, how will it see similarities and differences between
  111222333 and 111122223333? Would it enable simple slippage between
  them? How will it learn these representations?

 Yes, the two can be recognized as similar, so the analogy rule can use
 one as the other in certain situations.


It'd be interesting to get an idea of how such things can be translated
into an internal representation that implements these operations.


 The basic rule seems to require the presence of terms at the same
  time, which for example can't be made neurologically plausible unless
  the semantics of terms is time-dependent (because a neuron only knows
  that the other neurons from which it received input fired some time in
  the past, and the feature/term it represents, if it chooses to fire, is
  a statement about features represented by those other neurons' firing
  in the past).

 It depends on what you mean by "presence of terms at the same time".
 In NARS, all inference happens within a concept (because every
 inference rule requires two premises sharing a term), so as long as two
 beliefs are recalled at the same time, the basic rules can be applied.


I mean the difference between the experience of a term in the present and
the experience of the same term (from an I/O POV) in the past. If
these notions are represented by separate terms, how are they connected? I'm
sorry if I'm asking about something that's already addressed in your book; I
don't have a copy.


 Why do you need so many rules?

 I didn't expect so many rules myself at the beginning. I add new rules
 only when the existing ones are not enough for a situation. It will be
 great if someone can find a simpler design.


I feel that some of the complexity comes from the modeling of natural
language statements. Do you agree?


-- 
Vladimir Nesov [EMAIL PROTECTED]


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-21 Thread Benjamin Goertzel
About NARS...  Nesov/Wang dialogued:

 Why do you need so many rules?
 
  I didn't expect so many rules myself at the beginning. I add new rules
  only when the existing ones are not enough for a situation. It will be
  great if someone can find a simpler design.


 I feel that some of the complexity comes from the modeling of natural
 language statements. Do you agree?



I think the complexity comes from the particular logical/algebraic formalism
underlying NARS...

In PLN, which is similar to NARS in some respects but with a probabilistic
foundation, there are fewer rules because the underlying algebra is more
powerful, allowing more cases in which rules may be derived from other
rules.  E.g. in NARS, induction and abduction are primary rules, whereas in
PLN they are derived via combining Bayes rule with deduction in different
(simple) ways.  And in NARS, higher-order inference rules are posited
separately from first-order inference rules, whereas in PLN most of the
higher-order rules are derived directly from corresponding first-order
rules.  [Note that in PLN and NARS, the terms "first-order" and "higher-order"
have different meanings than the ones often seen.  First-order term logic
is the pure logic of inheritance with no explicit variables or quantifiers;
higher-order term logic introduces quantified variables.]

-- Ben G


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-21 Thread Pei Wang
On 10/21/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Pei,

 Sorry for the delayed reply. I answer point-by-point below.

 On 10/11/07, Pei Wang  [EMAIL PROTECTED] wrote:
 
   Basic rule for evidence-based
   estimation of implication in NARS seems to be roughly along the lines
   of term construction in my framework (I think there's much freedom in
   its choice; do you have other variants of it, or a justification for the
   current choice over other possibilities that is not concerned with
   applicability to the derivation of rules for abduction/induction/etc.?),
 
  There is some justification behind the design of every inference rule
  (and its truth value function), not only abduction/induction. You can
  find most in the book, and many are also in my other publications.

 I meant the basic rule of evidence measuring that considers extension and
 intension sets. There certainly is a justification for it, but there
 obviously are alternatives, so my question is about the choice of this
 extension/intension measuring over other options.

Sorry, I still don't quite get your question. If you mean (1) why
extension and intension are measured in a mixed manner, not separately,
then I have a whole section (7.2) devoted to this issue in my book,
and the summary is that such a unified treatment is necessary for
intelligence. If you mean (2) why the amount of evidence is defined
as the size of the extension and intension of the related terms, then
the answer directly follows from the definition of evidence, as given
in many of my publications --- if what is defined as evidence only
exists in those sets, then it is natural to use the size of the sets
as the amount of evidence.
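
For readers who want the arithmetic spelled out, here is a minimal sketch
of that definition as I understand it from the publications (the set names
and the evidential horizon k are illustrative; NARS uses k = 1 by default):

    # Sketch of NARS evidence counting for an inheritance statement S -> P.
    # ext_X / int_X stand for the extension and intension of term X,
    # given here as plain Python sets.
    def truth_value(ext_S, int_S, ext_P, int_P, k=1.0):
        # Positive evidence: S's extension shared with P's extension,
        # plus P's intension shared with S's intension.
        w_plus = len(ext_S & ext_P) + len(int_P & int_S)
        # Total evidence: all of S's extension plus all of P's intension.
        w = len(ext_S) + len(int_P)
        f = w_plus / w      # frequency
        c = w / (w + k)     # confidence
        return f, c

The revision rule then simply pools evidence from independent sources:
w+ = w1+ + w2+ and w = w1 + w2, with f and c recomputed as above.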

  but I'm not sure about how you handle variations of structures (that
   is, how the system represents two structures which are similar in
   some sense and how it extracts the common part from them). It's
   difficult to see from the basic rules if it's not addressed directly.
 
  The basic rules (deduction/abduction/induction/revision) ignore the
  internal structure of compound terms. There are special inference
  rules that handle the composition/decomposition of various compound
  structures. Again, they are mostly given in the book.

 I didn't mean the structure of compound terms, but the structure of
 experience representation, which consists of a set of individual statements
 and terms that describe that experience.

Experience is formally defined as a stream (not a set) of incoming
tasks, each of which can be (1) new knowledge (a statement with a
truth value), (2) a question (a statement without a truth value), or (3) a
goal (a statement with a desire value).
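
As a toy illustration of that definition (the type names here are mine,
not from any NARS implementation):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Task:
        statement: str                  # e.g. "raven -> black-thing"
        truth: Optional[float] = None   # set for new knowledge
        desire: Optional[float] = None  # set for goals
        # if both fields are None, the task is a question

    experience = [                      # a stream, consumed in order
        Task("raven -> black-thing", truth=0.9),  # (1) new knowledge
        Task("swan -> white-thing"),              # (2) question
        Task("self -> fed-thing", desire=1.0),    # (3) goal
    ]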

   For
   example, how will it see similarities and differences between
   111222333 and 111122223333? Would it enable simple slippage between
   them? How will it learn these representations?
 
  Yes, the two can be recognized as similar, so the analogy rule can use
  one as the other in certain situations.

 It'd be interesting to get an idea of how such things can be translated
 into an internal representation that implements these operations.

It's a long story, and there are many possibilities, but basically, it
is about the positive and negative evidence of the following
similarity statement:
(* (* 1 1 1) (* 2 2 2) (* 3 3 3)) <-> (* (* 1 1 1 1) (* 2 2 2 2) (* 3 3 3 3))
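
A rough sketch of how such compound terms might be compared (this naive
run-length reading is my own illustration, not the actual NARS mechanism):

    # Product terms as nested tuples, mirroring the statement above.
    a = ('*', ('*', 1, 1, 1), ('*', 2, 2, 2), ('*', 3, 3, 3))           # 111222333
    b = ('*', ('*', 1, 1, 1, 1), ('*', 2, 2, 2, 2), ('*', 3, 3, 3, 3))  # 111122223333

    def runs(term):
        # Read each component as (symbol, repetition count).
        return [(comp[1], len(comp) - 1) for comp in term[1:]]

    # runs(a) == [(1, 3), (2, 3), (3, 3)]; runs(b) == [(1, 4), (2, 4), (3, 4)].
    # The shared digit sequence counts toward positive evidence for the
    # similarity statement; the differing run lengths count as negative.
    same_symbols = [x[0] for x, y in zip(runs(a), runs(b)) if x[0] == y[0]]
    diff_lengths = [(x[1], y[1]) for x, y in zip(runs(a), runs(b)) if x[1] != y[1]]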

   The basic rule seems to require the presence of terms at the same
   time, which for example can't be made neurologically plausible unless
   the semantics of terms is time-dependent (because a neuron only knows
   that the other neurons from which it received input fired some time in
   the past, and the feature/term it represents, if it chooses to fire, is
   a statement about features represented by those other neurons' firing
   in the past).
 
  It depends on what you mean by "presence of terms at the same time".
  In NARS, all inference happens within a concept (because every
  inference rule requires two premises sharing a term), so as long as two
  beliefs are recalled at the same time, the basic rules can be applied.

 I mean the difference between the experience of a term in the present and
 the experience of the same term (from an I/O POV) in the past. If
 these notions are represented by separate terms, how are they connected?

Well, if past experience and current experience involve the same
concept, they will use the same term. You may want to see the actual
examples in http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt
and http://nars.wang.googlepages.com/NARS-Examples-MultiSteps.txt

 I'm
 sorry if I'm asking about something that's already addressed in your book; I
 don't have a copy.

I'm sorry to say that if you are seriously interested in NARS, you do
need to read the book. If your library doesn't have it, it may be
obtained through inter-library loan. If you have absolutely no way to
get it, send me a private email and I'll arrange something.

   Why do you need so many rules?
 
  I didn't expect so many rules myself 

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-21 Thread Pei Wang
The difference between NARS and PLN has much more to do with their
different semantics than with their different logical/algebraic
formalisms.

For example, according to the semantics of NARS, Bayes rule, with all
of its variants, is deduction. Therefore it is impossible to use it for
induction/abduction/...

Also, in NARS the higher-order inference rules are mostly isomorphic
to the first-order inference rules, in the sense that they use the same
truth value functions, and there are one-to-one mappings between them
--- see http://nars.wang.googlepages.com/wang.abduction.pdf for people
who don't have the book.

Pei

On 10/21/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


 About NARS...  Nesov/Wang dialogued:

 
 
Why do you need so many rules?
  
   I didn't expect so many rules myself at the beginning. I add new rules
   only when the existing ones are not enough for a situation. It will be
   great if someone can find a simpler design.
 
 
  I feel that some of the complexity comes from the modeling of natural language
 statements. Do you agree?


 I think the complexity comes from the particular logical/algebraic formalism
 underlying NARS...

 In PLN, which is similar to NARS in some respects but with a probabilistic
 foundation, there are fewer rules because the underlying algebra is more
 powerful, allowing more cases in which rules may be derived from other
 rules.  E.g. in NARS, induction and abduction are primary rules, whereas in
 PLN they are derived via combining Bayes rule with deduction in different
 (simple) ways.  And in NARS, higher-order inference rules are posited
 separately from first-order inference rules, whereas in PLN most of the
 higher-order rules are derived directly from corresponding first-order
 rules.  [Note that in PLN and NARS, the terms "first-order" and "higher-order"
 have different meanings than the ones often seen.  First-order term logic
 is the pure logic of inheritance with no explicit variables or quantifiers;
 higher-order term logic introduces quantified variables.]

 -- Ben G
  



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-21 Thread Benjamin Goertzel
On 10/21/07, Pei Wang [EMAIL PROTECTED] wrote:

 The difference between NARS and PLN has much more to do with their
 different semantics, than with their different logical/algebraic
 formalism.


Sure; in both cases, the algebraic structure of the rules and the
truth-value formulas follow from the semantics.


For example, according to the semantics of NARS, Bayes rule, with all
 of its variants, is deduction. Therefore it is impossible to use it for
 induction/abduction/...


For the benefit of others besides Pei ...
what I meant was that inferences like

A -> B
A -> C
|-
B -> C

and

A -> C
B -> C
|-
A -> B

are handled in PLN via a combination of Bayes rule
and deduction, whereas in NARS they are handled
by special induction and abduction truth value
formulas...
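
To spell that out for the same two schemas, here is a rough sketch of the
independence-based strength formulas as I understand them from the PLN
material (the term probabilities p_a, p_b, p_c are assumed given, and the
function names are mine; treat this as illustrative, not as the definitive
PLN implementation):

    # Deduction under the independence assumption:
    # given s_ab = P(B|A) and s_bc = P(C|B), estimate P(C|A).
    def deduction(s_ab, s_bc, p_b, p_c):
        return s_ab * s_bc + (1 - s_ab) * (p_c - p_b * s_bc) / (1 - p_b)

    # Bayes inversion: from P(B|A) to P(A|B).
    def inversion(s_ab, p_a, p_b):
        return s_ab * p_a / p_b

    # Induction (A -> B, A -> C |- B -> C): invert the first premise
    # to B -> A, then deduce along B -> A -> C.
    def induction(s_ab, s_ac, p_a, p_b, p_c):
        return deduction(inversion(s_ab, p_a, p_b), s_ac, p_a, p_c)

    # Abduction (A -> C, B -> C |- A -> B): invert the second premise
    # to C -> B, then deduce along A -> C -> B.
    def abduction(s_ac, s_bc, p_b, p_c):
        return deduction(s_ac, inversion(s_bc, p_b, p_c), p_c, p_b)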

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=56156686-9d128e

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Mike Tintner wrote:

Charles H: as I understand it, this still wouldn't be an AGI, but merely a
categorizer.

That's my understanding too. There does seem to be a general problem 
in the field of AGI, distinguishing AGI from narrow AI - 
philosophically. In fact, I don't think I've seen any definition of 
AGI or intelligence that does.



But *do* notice that the terminal nodes are uninterpreted.  This means 
that they could be assigned, e.g., procedural values.
Because of this, even though the current design (as I understand it) of 
NARS is purely a categorizer, it's not limited in what its extensions 
and embedding environment can be.  It would be a trivial extension to 
allow terminal nodes to have a type, so that what is done when a 
terminal node is generated could depend upon that type.


(There's a paper called wang.roadmap.pdf that I *must* get around to 
reading!)


P.S.: In the paper on computations it seems to me that items of high 
durability should not be dropped from the processing queue even if it 
becomes full of higher-priority tasks.  There should probably be a 
"postponed tasks" location where things like garbage collection and 
database sanity checking and repair can be saved, to be done during 
future idle times.
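
A minimal sketch of that suggestion (the capacity and durability threshold
here are made-up parameters, and this is one reading of the idea, not the
actual NARS task-queue design):

    import heapq

    class TaskQueue:
        def __init__(self, capacity, durability_floor=0.9):
            self.capacity = capacity
            self.durability_floor = durability_floor
            self.heap = []        # min-heap keyed on priority
            self.postponed = []   # durable overflow, handled at idle time

        def push(self, priority, durability, task):
            heapq.heappush(self.heap, (priority, durability, task))
            if len(self.heap) > self.capacity:
                # Overflow evicts the lowest-priority item...
                _, d, evicted = heapq.heappop(self.heap)
                # ...but durable items are postponed rather than dropped.
                if d >= self.durability_floor:
                    self.postponed.append(evicted)

        def idle_step(self):
            # During idle time, postponed maintenance work gets its turn.
            return self.postponed.pop(0) if self.postponed else None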





Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Linas Vepstas wrote:

On Sun, Oct 07, 2007 at 12:36:10PM -0700, Charles D Hixson wrote:
  

Edward W. Porter wrote:


   Fred is a human
   Fred is an animal
  
You REALLY can't do good reasoning using formal logic in natural 
language...at least in English.  That's why the invention of symbolic 
logic was so important.



I suppose this was pounded to death in the rest of the thread, 
(which I haven't read) but still: syllogistic reasoning does occur 
in hypothesis formation, and thus, learning:


-- maybe humans are animals?  What evidence do I have to support this?
-- maybe animals are human? Can that be?

If Fred has an artificial heart, then perhaps he isn't simply 
a special case of an animal.


If some pig has human organs in it, then perhaps it's an animal that
is human.
 
Neither syllogistic deduction is purely false in the real world; 
there is an "it depends" aspect to it.  A learning AI would chalk it
up as a "maybe", and see if this reasoning leads anywhere. I believe
Pei Wang's NARS system tries to do this; it seems more structured 
than the fuzzy logic type approaches that antedate it.


--linas
  
For me the sticking point was that we were informed that we didn't know 
anything about anything outside of the framework presented.  We didn't 
know what a Fred was, or what a human was, or what an animal was.  A 
Fred could be an audio frequency of 440 Hz for all we knew. And telling 
us that he was a human didn't rule that out, because we didn't know what 
a human was either.


Your extension questions make sense if we aren't dealing with a tabula 
rasa.  But we were explicitly told that we were, so the answers to your 
questions would have been "???" and "none" and "???" and "no evidence".


Your hypothetical extensions are also only considerable in the context 
of extensive knowledge that was specified as unknown.


OTOH, the context was really about NARS.  (I feel that my objections 
still apply, but not as strongly.  If I had understood what was being 
discussed as well then as I do now, I would have commented less strongly.)




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Mark Waser wrote:
Thus, as I understand it, one can view all inheritance statements as 
indicating the evidence that one instance or category belongs to, and 
thus is “a child of” another category, which includes, and thus can be 
viewed as “a parent” of the other.
Yes, that is inheritance as Pei uses it. But are you comfortable with 
the fact that "I am allowed to drink alcohol" is normally both the 
parent and the child of "I am an adult" (and vice versa)? How about 
the fact that "most ravens are black" is both the parent and child of 
"this raven is white" (and vice versa)?
Since inheritance relations are transitive, the resulting hierarchy of 
categories involves nodes that can be considered ancestors (i.e., 
parents, parents of parents, etc.) of others and nodes that can be 
viewed as descendents (children, children of children, etc.) of others.
And how often do you really want to do this with concepts like the 
above -- or when the evidence is substantially less than unity?

And loops and transitivity are really ugly . . . .
NARS really isn't your father's inheritance.
A definite point, and one that argues against my model of a prototype-based 
computer language. I prefer to think in lattice structures rather 
than in directed graphs. Another problem is the matter of probability 
and stability values being attached to the links. I definitely need a 
better model.


To continue your point, just because A -> B at one point in time doesn't 
ensure that it will also be true (with a probability above any 
particular threshold) at a later point. Links, especially low-stability 
links, get re-evaluated, whereas prototype descendants maintain their 
ancestry.




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Vladimir Nesov
Pei,

(Sorry for a long list of questions; maybe I'm trying to see NARS as
what it isn't, through the lens of my own approach.)

Do you have a high-level description of how statements evolve during
learning of complex descriptions, including creation of new
subsymbolic terms (compound terms)? The basic rule for evidence-based
estimation of implication in NARS seems to be roughly along the lines
of term construction in my framework (I think there's much freedom in
its choice; do you have other variants of it, or a justification for the
current choice over other possibilities that is not concerned with
applicability to the derivation of rules for abduction/induction/etc.?),
but I'm not sure about how you handle variations of structures (that
is, how the system represents two structures which are similar in
some sense and how it extracts the common part from them). It's
difficult to see from the basic rules if it's not addressed directly. For
example, how will it see similarities and differences between
111222333 and 111122223333? Would it enable simple slippage between
them? How will it learn these representations?

Do you address temporal activation of terms, where a term's being active
is a temporal statement expressed relative to the current moment, and
learning of structure results from prolonged co-occurrence of its
components? The basic rule seems to require the presence of terms at the
same time, which for example can't be made neurologically plausible unless
the semantics of terms is time-dependent (because a neuron only knows that
the other neurons from which it received input fired some time in the
past, and the feature/term it represents, if it chooses to fire, is a
statement about features represented by those other neurons' firing in
the past).

Why do you need so many rules? Ultimately all you need are rules for
term formation (for which intersection as a starting point seems to be
enough) and term activation given currently active terms (fluid
inference). Is there a basic set which is theoretically sufficient,
although it probably requires too many indirect support structures (I
assume that input/output experience is presented as a flat conjunction
of active terms)? Why do you need to separately regard operations on
terms and statements (and why do statements have any significance in
themselves, other than as a specific interpretation of the underlying
term activation rule)?

On 10/10/07, Pei Wang [EMAIL PROTECTED] wrote:
 In NARS, the Deduction/Induction/Abduction trio has (at least) three
 different-though-isomorphic forms, one on inheritance, one on
 implication, and one mixed.

 For people who don't have access to the book, see
 http://nars.wang.googlepages.com/wang.abduction.pdf , though the
 symbols used in that paper are slightly different from the current
 form.

 Pei


-- 
Vladimir Nesov [EMAIL PROTECTED]



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Mike Tintner

Charles,

I don't see - no doubt being too stupid - how what you are saying is going 
to make a categorizer into more than that - into a system that can, say, go 
on to learn various logics, or how to build a house or other structures or 
tell a story - that can be a *general* intelligence.


What struck me about the overall discussion of NARS' logical capabilities, 
firstly, was that they all depended -  I think you may have made this 
point - on everyone's *common sense* interpretations of inheritance and 
other relations and the logic generally. In other words, any logic is - and 
always will be - a very *secondary*  sign system for both representing and 
reasoning about the world. It is a highly evolved derivative of more basic, 
common sense systems in the brain - and, like language itself, has 
continually to be made sense of by the brain. (That's why I would suspect 
that all of you, however versed in logic you are, will, while looking at 
those logical propositions, go fuzzy from time to time - when your brain 
can't for a while literally make sense of them).


A hierarchy of abstract/ concrete sign systems, grounded in the senses, is - 
I believe - essential for any AGI and general learning - and, NARS,  AFAICT, 
lacks that.


Secondly, I don't see how what you are saying will give NARS the ability to 
*create* new rules and strategies for its activities, (that are not derived 
from existing rules). AFAICT it simply applies logic and follows rules, even 
though they include rules for modifying rules. It cannot, like Pei or Bayes 
have done, create or fundamentally extend logics. If so, it is still narrow 
AI, not AGI.


(There is, I repeat, a major need for a philosophical distinction between AI 
and AGI  - in talking about the area of the last paragraph, I think we all 
flounder and grope for terms).




Mike Tintner wrote:

Charles H: as I understand it, this still wouldn't be an AGI, but merely a
categorizer.

That's my understanding too. There does seem to be a general problem in 
the field of AGI, distinguishing AGI from narrow AI - philosophically. In 
fact, I don't think I've seen any definition of AGI or intelligence that 
does.


But *do* notice that the terminal nodes are uninterpreted.  This means 
that they could be assigned, e.g., procedural values.
Because of this, even though the current design (as I understand it) of 
NARS is purely a categorizer, it's not limited in what its extensions and 
embedding environment can be.  It would be a trivial extension to allow 
terminal nodes to have a type, so that what is done when a terminal node 
is generated could depend upon that type.


(There's a paper called wang.roadmap.pdf that I *must* get around to 
reading!)


P.S.: In the paper on computations it seems to me that items of high 
durability should not be dropped from the processing queue even if it 
becomes full of higher-priority tasks.  There should probably be a 
"postponed tasks" location where things like garbage collection and 
database sanity checking and repair can be saved, to be done during future 
idle times.











Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Pei Wang
On 10/10/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Pei,

 (Sorry for a long list of questions; maybe I'm trying to see NARS as
 what it isn't, through lens of my own approach.)

 Do you have a high-level description of how statements evolve during
 learning of complex descriptions, including creation of new
 subsymbolic terms (compound terms)?

Nothing too detailed, but you can start from the paper I co-authored
with Hofstadter, especially the discussion on "Compositionality"
and "Categorical dynamics".

I don't call the compound terms "subsymbolic".

 Basic rule for evidence-based
 estimation of implication in NARS seems to be roughly along the lines
 of term construction in my framework (I think there's much freedom in
 its choice; do you have other variants of it, or a justification for the
 current choice over other possibilities that is not concerned with
 applicability to the derivation of rules for abduction/induction/etc.?),

There is some justification behind the design of every inference rule
(and its truth value function), not only abduction/induction. You can
find most in the book, and many are also in my other publications.

 but I'm not sure about how you handle variations of structures (that
 is, how the system represents two structures which are similar in
 some sense and how it extracts the common part from them). It's
 difficult to see from the basic rules if it's not addressed directly.

The basic rules (deduction/abduction/induction/revision) ignore the
internal structure of compound terms. There are special inference
rules that handle the composition/decomposition of various compound
structures. Again, they are mostly given in the book.

 For
 example, how will it see similarities and differences between
 111222333 and 111122223333? Would it enable simple slippage between
 them? How will it learn these representations?

Yes, the two can be recognized as similar, so the analogy rule can use
one as the other in certain situations.

 Do you address temporal activation of terms, where a term's being active
 is a temporal statement expressed relative to the current moment, and
 learning of structure results from prolonged co-occurrence of its
 components?

Yes, to a degree, though not in the same way as a neural network. I'm
sorry that I don't have the time to give a detailed explanation on
this topic.

 The basic rule seems to require the presence of terms at the same
 time, which for example can't be made neurologically plausible unless
 the semantics of terms is time-dependent (because a neuron only knows
 that the other neurons from which it received input fired some time in
 the past, and the feature/term it represents, if it chooses to fire, is
 a statement about features represented by those other neurons' firing
 in the past).

It depends on what you mean by "presence of terms at the same time".
In NARS, all inference happens within a concept (because every
inference rule requires two premises sharing a term), so as long as two
beliefs are recalled at the same time, the basic rules can be applied.

Whether NARS rules are neurologically plausible is not a major
consideration for me. NARS is not a brain model.
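
As a toy sketch of the locality constraint above ("all inference happens
within a concept") -- this is my own illustration, not Pei's
implementation, and `rule` stands for any two-premise NARS rule:

    from collections import defaultdict

    class Memory:
        def __init__(self):
            # Each concept (term) indexes the beliefs that mention it.
            self.concepts = defaultdict(list)

        def add_belief(self, subject, predicate, truth):
            belief = (subject, predicate, truth)
            self.concepts[subject].append(belief)
            self.concepts[predicate].append(belief)

        def step(self, term, rule):
            # All inference is local: pick two beliefs that share `term`
            # and hand them to a two-premise rule.
            beliefs = self.concepts[term]
            for i, b1 in enumerate(beliefs):
                for b2 in beliefs[i + 1:]:
                    yield rule(b1, b2)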

 Why do you need so many rules?

I didn't expect so many rules myself at the beginning. I add new rules
only when the existing ones are not enough for a situation. It will be
great if someone can find a simpler design.

 Ultimately all you need are rules for
 term formation (for which intersection as a starting point seems to be
 enough) and term activation given currently active terms (fluid
 inference). Is there a basic set which is theoretically sufficient,
 although it probably requires too many indirect support structures (I
 assume that input/output experience is presented as a flat conjunction
 of active terms)?

Maybe, but until I see a concrete design, I cannot be sure.

 Why do you need to separately regard operations on
 terms and statements (and why do statements have any significance in
 themselves, other than as a specific interpretation of the underlying
 term activation rule)?

Not fully separate. Statements in many cases are treated just like
other terms. However, since statements are terms with truth values,
they do need special treatment here and there, which doesn't make sense
for other (non-statement) terms.

Pei

 On 10/10/07, Pei Wang [EMAIL PROTECTED] wrote:
  In NARS, the Deduction/Induction/Abduction trio has (at least) three
  different-though-isomorphic forms, one on inheritance, one on
  implication, and one mixed.
 
  For people who don't have access to the book, see
  http://nars.wang.googlepages.com/wang.abduction.pdf , though the
  symbols used in that paper are slightly different from the current
  form.
 
  Pei


 --
 Vladimir Nesov [EMAIL PROTECTED]




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Linas Vepstas
On Wed, Oct 10, 2007 at 01:06:35PM -0700, Charles D Hixson wrote:
 For me the sticking point was that we were informed that we didn't know 
 anything about anything outside of the framework presented.  We didn't 
 know what a Fred was, or what a human was, or what an animal was.  

?? Well, no. In NARS, you actually know a lot more; you know 
the relative position of each statement in the lattice of posets, 
and that is actually a very powerful bit of knowledge. From this, 
you can compute a truth value, and evidence, for the statements.

NARS tells you how to combine the truth values. So, while you
might not explicitly know what Fred is, you do have to compute
a truth value for "Fred is an animal" and "Fred is a human".
NARS then tells you what the corresponding evidence is for
"an animal is a human" and "a human is an animal" (presumably
the evidence is weak and strong, respectively, depending on the relation
of these posets within the universe).

In measure-theoretic terms, the truth value is the measure of 
the size of the poset relative to the size of the universe.  NARS
denotes this by the absolute-value symbol. The syllogism rules
suggest how the measures of the various intersections and unions
of the posets need to be combined.

I presume that maybe there is some theorem that shows that 
the NARS system assigns evidence values that are consistent
with the axioms of measure theory. Seems reasonable to me;
I haven't thought it through, and I haven't read more in that
direction.

--linas



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Mike Tintner wrote:

Charles,

I don't see - no doubt being too stupid - how what you are saying is 
going to make a categorizer into more than that - into a system that 
can, say, go on to learn various logics, or how to build a house or 
other structures or tell a story - that can be a *general* intelligence.
I wouldn't say you were being stupid.  Nobody knows how to build an AGI 
yet.  And I'm envisioning the current system of NARS as only a 
component, albeit an important component.  (I don't know how Pei Wang is 
envisioning it.)


But if you study the input system from the eye (overview... I have no 
detailed knowledge), you discover that the initial sensory stimuli are 
split into several streams that are processed separately (possibly 
categorized) and then recombined.  Sometimes something very important 
will jump out of the system, however, and cause rapid reactions that the 
consciousness never becomes aware of noticing before acting on.  (N.B.: 
this "being aware of before acting on" is often-to-usually a 
hallucination.) 

Clearly some categorizer has noticed that something was VERY important.  
As such, apparently some kind of categorizer is very important.  My 
suspicion is that most categorizers work with small databases in 
restricted domains, acting as black-box functions... though "function" 
isn't the right word for something that can return multiple results.


What struck me about the overall discussion of NARS' logical 
capabilities, firstly, was that they all depended -  I think you may 
have made this point - on everyone's *common sense* interpretations of 
inheritance and other relations and the logic generally. In other 
words, any logic is - and always will be - a very *secondary*  sign 
system for both representing and reasoning about the world. It is a 
highly evolved derivative of more basic, common sense systems in the 
brain - and, like language itself, has continually to be made sense 
of by the brain. (That's why I would suspect that all of you, however 
versed in logic you are, will, while looking at those logical 
propositions, go fuzzy from time to time - when your brain can't for a 
while literally make sense of them).


A hierarchy of abstract/ concrete sign systems, grounded in the 
senses, is - I believe - essential for any AGI and general learning - 
and, NARS,  AFAICT, lacks that.


Secondly, I don't see how what you are saying will give NARS the 
ability to *create* new rules and strategies for its activities, (that 
are not derived from existing rules). AFAICT it simply applies logic 
and follows rules, even though they include rules for modifying rules. 
It cannot, like Pei or Bayes have done, create or fundamentally extend 
logics. If so, it is still narrow AI, not AGI.


(There is, I repeat, a major need for a philosophical distinction 
between AI and AGI  - in talking about the area of the last paragraph, 
I think we all flounder and grope for terms).




Mike Tintner wrote:
Charles H: as I understand it, this still wouldn't be an AGI, but 
merely a

categorizer.

That's my understanding too. There does seem to be a general problem 
in the field of AGI, distinguishing AGI from narrow AI - 
philosophically. In fact, I don't think I've seen any definition of 
AGI or intelligence that does.


But *do* notice that the terminal nodes are uninterpreted.  This 
means that they could be assigned, e.g., procedural values.
Because of this, even though the current design (as I understand it) 
of NARS is purely a categorizer, it's not limited in what its 
extensions and embedding environment can be.  It would be a trivial 
extension to allow terminal nodes to have a type, so that what is 
done when a terminal node is generated could depend upon that type.


(There's a paper called wang.roadmap.pdf that I *must* get around 
to reading!)


P.S.: In the paper on computations it seems to me that items of high 
durability should not be dropped from the processing queue even if it 
becomes full of higher-priority tasks.  There should probably be a 
"postponed tasks" location where things like garbage collection and 
database sanity checking and repair can be saved, to be done during 
future idle times.







Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Generally, yes, you know more.
In this particular instance we were told the example was all that was known.

Linas Vepstas wrote:

On Wed, Oct 10, 2007 at 01:06:35PM -0700, Charles D Hixson wrote:
  
For me the sticking point was that we were informed that we didn't know 
anything about anything outside of the framework presented.  We didn't 
know what a Fred was, or what a human was, or what an animal was.  



?? Well, no. In NARS, you actually know a lot more; you know 
the relative position of each statement in the lattice of posets, 
and that is actually a very powerful bit of knowledge. From this, 
you can compute a truth value, and evidence, for the statements.


NARS tells you how to combine the truth values. So, while you
might not explicitly know what Fred is, you do have to compute
a truth value for "Fred is an animal" and "Fred is a human".
NARS then tells you what the corresponding evidence is for
"an animal is a human" and "a human is an animal" (presumably
the evidence is weak and strong, respectively, depending on the relation
of these posets within the universe).


In measure-theoretic terms, the truth value is the measure of 
the size of the poset relative to the size of the universe.  NARS
denotes this by the absolute-value symbol. The syllogism rules
suggest how the measures of the various intersections and unions
of the posets need to be combined.

I presume that maybe there is some theorem that shows that 
the NARS system assigns evidence values that are consistent

with the axioms of measure theory. Seems reasonable to me;
I haven't thought it through, and I haven't read more in that
direction.

--linas


  




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Mark Waser
It looks to me as if NARS can be modeled by a prototype-based language 
with operators for "is an ancestor of" and "is a descendant of".


   I don't believe that this is the case at all.  NARS correctly handles 
cases where entities co-occur or where one entity implies another only due 
to other entities/factors.  "Is an ancestor of" and "is a descendant of" have 
nothing to do with this.


To me a model can well be dynamic and experience-based.  In fact I 
wouldn't consider a model very intelligent if it didn't either itself 
adapt to experience, or weren't embedded in a matrix which 
adapted it to experiences.  (This doesn't seem to be quite the same 
meaning that you use for "model".  Your separation of the rules of 
inference, the rational faculty, and the model as a fixed and unchanging 
condition doesn't match my use of the term.


And your use of the term is better than his use of the term because . . . . 
?:-)


By "model", he means a model of cognition.  For him (and all of us), cognition 
is dynamic and experience-based, but the underlying process is relatively 
static and the same from individual to individual.


I still find that I am forced to interpret the inheritance relationship as 
an "is a child of" relationship.


Which is why you're having problems understanding NARS.  If you can't get 
past this, you're not going to get it.


And I find the idea of continually calculating the powerset of inheritance 
relationships unappealing.  There may not be a better way, but if there 
isn't, then AGI can't move forward without vastly more powerful machines.


This I agree with.  My personal (hopefully somewhat informed) opinion is 
that NARS (and Novamente) are doing more than absolutely needs to be done 
for AGI.  Time will tell.


I do feel that the limited sensory modality of the environment (i.e., 
reading the keyboard) makes AGI unlikely to be feasible.  It seems to me 
that one of the necessary components of true intelligence is integrating 
multi-modal sensory experience.


Why?



- Original Message - 
From: Charles D Hixson [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 08, 2007 5:50 PM
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


OK.  I've read the paper, and don't see where I've made any errors.  It 
looks to me as if NARS can be modeled by a prototype-based language with 
operators for "is an ancestor of" and "is a descendant of".  I do have 
trouble with the language terms that you use, though admittedly they 
appear to be standard for logicians (to the extent that I'm familiar with 
their dialect).  That might well not be a good implementation, but it 
appears to be a reasonable model.


To me a model can well be dynamic and experience-based.  In fact I 
wouldn't consider a model very intelligent if it didn't either adapt 
itself to experience, or weren't embedded in a matrix which 
adapted it to experiences.  (This doesn't seem to be quite the same 
meaning that you use for "model".  Your separation of the rules of 
inference, the rational faculty, and the model as a fixed and unchanging 
condition doesn't match my use of the term.  I might pull out the rules of 
inference as separate pieces and stick them into a datafile, but 
datafiles can be changed, if anything, more readily than programs... and 
programs are readily changeable.  To me it appears clear that much of the 
language would need to be interpretive rather than compiled.  One should 
pre-compile what one can for the sake of efficiency, but with the 
knowledge that this sacrifices flexibility for speed.)


I still find that I am forced to interpret the inheritance relationship as 
an "is a child of" relationship.  And I find the idea of continually 
calculating the powerset of inheritance relationships unappealing.  There 
may not be a better way, but if there isn't, then AGI can't move forward 
without vastly more powerful machines.  Probably, however, the 
calculations could be shortcut by increasing the local storage a bit.  If 
each node maintained a list of parents and children, and a count of 
descendants and ancestors, it might suffice.  This would increase storage 
requirements, but drastically cut calculation and still enable the 
calculation of confidence.  Updating the counts could be saved for 
"dreamtime".  This would imply that during the early part of learning, 
sleep would be a frequent necessity... but it should become less necessary 
as the ratio of extant knowledge to new knowledge learned increased.  (Note 
that in this case the amount of new knowledge would be a measured quantity, 
not an arbitrary constant.)
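
A minimal sketch of that shortcut, under the stated assumptions (the node 
structure and the "dreamtime" refresh below are one reading of the 
proposal, not anything taken from NARS itself):

    class Node:
        def __init__(self, name):
            self.name = name
            self.parents = set()
            self.children = set()
            # Cached counts, refreshed off-line instead of per-query.
            self.n_ancestors = 0
            self.n_descendants = 0

    def transitive(node, attr):
        # Collect the transitive closure over 'parents' or 'children'.
        seen, stack = set(), list(getattr(node, attr))
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(getattr(n, attr))
        return seen

    def dreamtime_refresh(nodes):
        # Batch-update the cached counts during idle time, so that
        # confidence calculations at query time only read two integers.
        for node in nodes:
            node.n_ancestors = len(transitive(node, 'parents'))
            node.n_descendants = len(transitive(node, 'children'))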


I do feel that the limited sensory modality of the environment (i.e., 
reading the keyboard) makes AGI unlikely to be feasible.  It seems to me 
that one of the necessary components of true intelligence is integrating 
multi-modal sensory experience.  This doesn't necessarily mean vision and 
touch, but SOMETHING.  As such I can see NARS (or some

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Lukasz Stafiniak
When looking at it through a crisp glass, the relation is a
preorder, not a (partial) order. And priming is essential. For
example, in certain contexts, we think that "an animal is a human"
(anthropomorphism).

On 10/9/07, Mark Waser [EMAIL PROTECTED] wrote:

 Ack!  Let me rephrase.  Despite the fact that Pei always uses the words of
 inheritance (and is technically correct), what he means is quite different
 from what most people assume that he means.  You are stuck on the common
 meanings of the terms "is an ancestor of" and "is a descendant of", and it's
 impeding your understanding.




RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Edward W. Porter
 properties or
elements).

Although I understand there is an important equivalence between down in
the comp hierarchy and up in the gen hierarchy, and that the two could
be viewed as one hierarchy, I have preferred to think of them as different
hierarchies, because the type of gens one gets by going up in the gen
hierarchy tends to be different from the type of gens one gets by going
down in the comp hierarchy.

Each possible set in the powerset (the set of all subsets) of elements
(eles), relationships (rels), attributes (atts), and contextual patterns
(contextual pats) could be considered a possible generalization.  I have
assumed, as does Goertzel’s Novamente, that there is a competitive
ecosystem for representational resources, in which only the fittest pats
and gens -- as determined by some measure of usefulness to the system --
survive.  There are several major uses of gens, such as aiding in
perception, providing inheritance of significant implication, providing
an appropriate level of representation for learning, and providing invariant
representations in higher-level comps.  Although temporary gens will be
generated at a relatively high frequency, somewhat like the inductive
implications in NARS, the number of gens that survive and get incorporated
into a lot of comps and episodic reps will be an infinitesimal fraction
of the powerset of eles, rels, atts, and contextual features stored in the
system.  Pats in the up direction in the gen hierarchy will tend to be
ones that have been selected for their usefulness as generalizations.  They
will often have a reasonable number of features that correspond to those of
their species node, but with some of them more broadly defined.  The gens
found by going down in the comp hierarchy are ones that have been selected
for their representational value in a comp, and many of them would not
normally be that valuable as what we normally think of as generalizations.

In the type of system I have been thinking of, I have assumed there will be
substantially less multiple inheritance in the up direction in the gen
hierarchy than in the down direction in the comp hierarchy (in which there
would be potential inheritance from every ele, rel, att, and contextual
feature in a comp’s descendant nodes at multiple levels in the comp
hierarchy below it).  Thus, for spreading-activation control purposes, I
think it is valuable to distinguish between generalization and
compositional hierarchies, although I understand they have an important
equivalence that should not be ignored.

I wonder if NARS makes such a distinction.

These are only initial thoughts.  I hope to become part of a team that
gets an early world-knowledge computing AGI up and running.  Perhaps when
I do feedback from reality will change my mind.

I would welcome comments, not only from Mark, but also from other readers.


Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 9:46 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


 I don't believe that this is the case at all.  NARS correctly handles
 cases where entities co-occur or where one entity implies another only
 due to other entities/factors.  "Is an ancestor of" and "is a descendant
 of" have nothing to do with this.

Ack!  Let me rephrase.  Despite the fact that Pei always uses the words of
inheritance (and is technically correct), what he means is quite different
from what most people assume that he means.  You are stuck on the common
meanings of the terms "is an ancestor of" and "is a descendant of", and
it's impeding your understanding.




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Mark Waser
Thus, as I 
understand it, one can view all inheritance statements as indicating the 
evidence that one instance or category belongs to, and thus is "a child of", 
another category, which includes, and thus can be viewed as "a parent" of, the 
other. 

Yes, that is inheritance as Pei uses it.  But are you comfortable with the fact 
that "I am allowed to drink alcohol" is normally both the parent and the child 
of "I am an adult" (and vice versa)?  How about the fact that "most ravens are 
black" is both the parent and child of "this raven is white" (and vice versa)?

Since inheritance relations are transitive, the resulting hierarchy of 
categories involves nodes that can be considered ancestors (i.e., parents, 
parents of parents, etc.) of others and nodes that can be viewed as descendants 
(children, children of children, etc.) of others.  

And how often do you really want to do this with concepts like the above -- or 
when the evidence is substantially less than unity?

And loops and transitivity are really ugly . . . . 

NARS really isn't your father's inheritance.

  - Original Message - 
  From: Edward W. Porter 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 09, 2007 12:24 PM
  Subject: RE: [agi] Do the inference rules of categorical logic make sense?


  RE: (1) THE VALUE OF "CHILD OF" AND "PARENT OF" RELATIONS; (2) DISCUSSION 
OF POSSIBLE VALUE IN DISTINGUISHING BETWEEN GENERALIZATIONAL AND COMPOSITIONAL 
INHERITANCE HIERARCHIES.

  Re Mark Waser's 10/9/2007 9:46 AM post: Perhaps Mark understands something I 
don't. 

  I think relations that can be viewed as "child of" and "parent of" in a 
hierarchy of categories are extremely important (for reasons set forth in more 
detail below), and it is not clear to me that Pei meant something other than 
this.

  If Mark or anyone else has reason to believe that "what [Pei] means is quite 
different" than such "child of" and "parent of" relations, I would appreciate 
being illuminated by what that different meaning is.




  My understanding of NARS is that it is concerned with inheritance relations, 
which, as I understand it, indicate the truth value of the assumption that one 
category falls within another category, where category is broadly defined to 
include not only what we normally think of as categories, but also 
relationships, slots in relationships, and categories defined by sets of one 
or more properties, attributes, elements, relationships, or slots in 
relationships.  Thus, as I understand it, one can view all inheritance 
statements as indicating the evidence that one instance or category belongs to, 
and thus is "a child of", another category, which includes, and thus can be 
viewed as "a parent" of, the other.  Since inheritance relations are transitive, 
the resulting hierarchy of categories involves nodes that can be considered 
ancestors (i.e., parents, parents of parents, etc.) of others and nodes that 
can be viewed as descendants (children, children of children, etc.) of others.  

  I tend to think of similarity as a sibling relationship under a shared hidden 
parent category -- based on similar aspects of the siblings' extensions and/or 
intensions.

  In much of my own thinking I have thought of such categorization relations as 
generalization, in which the parent is the genus and the child is the 
species.  Generalization is important for many reasons.  First, perception is 
trying to figure out into which category or generalization of things, actions, or 
situations various parts of a current set of sensory information might fit.  
Secondly, generalization is important because it is necessary for implication.  
All those Bayesian probabilities we are used to thinking about, such as 
P(A|B,C), are totally useless unless we have some way of knowing the 
probability that the situation being considered contains a B or C.  To do that you 
have to have categories that help you determine the extent to which a B or a C 
is present.  To understand the implication of P(A|B,C) you have to have some 
meaning for the category A.  Generalization is important for behavior because 
one uses generalizations learned from past experiences to develop plans for how 
to achieve goals, and because most action schemas are generalizations 
that have to be instantiated in a context-specific way.

  One of the key problems in AI has been non-literal matching.  That is why 
representation schemes that have a flexibility something like that of NARS are 
necessary for any intelligence capable of operating well in anything other than 
limited domains.  That is why so-called invariant or hierarchical memory 
representations are so valuable.  This is indicated in the writings of Jeff 
Hawkins, Thomas Serre ("Learning a Dictionary of Shape-Components in Visual 
Cortex: Comparison with Neurons, Humans and Machines", the 
google-able article I have cited so many times), and many others

RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Edward W. Porter
Mark,

Thank you for your reply.  I just ate a lunch with too much fat (luckily
largely olive oil) in it, so my brain is a little sleepy.  If it is not
too much trouble, could you please map out the inheritance relationships
from which one derives how "I am allowed to drink alcohol" is both a
parent and the child of "I am an adult".  And could you please do the same
with how "most ravens are black" is both parent and child of "this raven
is white".

Most of the discussion I read in Pei's article related to inheritance
relations between terms that operated as subjects and predicates in
sentences that are inheritance statements, rather than between entire
statements, unless the statement was a subject or a predicate of a
higher-order inheritance statement.  So what you are referring to appears
to be beyond what I have read.

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 12:47 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


Thus, as I understand it, one can view all inheritance statements as
indicating the evidence that one instance or category belongs to, and thus
is “a child of” another category, which includes, and thus can be viewed
as “a parent” of the other.

Yes, that is inheritance as Pei uses it.  But are you comfortable with the
fact that "I am allowed to drink alcohol" is normally both the parent and
the child of "I am an adult" (and vice versa)?  How about the fact that
"most ravens are black" is both the parent and child of "this raven is
white" (and vice versa)?

Since inheritance relations are transitive, the resulting hierarchy of
categories involves nodes that can be considered ancestors (i.e., parents,
parents of parents, etc.) of others and nodes that can be viewed as
descendants (children, children of children, etc.) of others.

And how often do you really want to do this with concepts like the above
-- or when the evidence is substantially less than unity?

And loops and transitivity are really ugly . . . .

NARS really isn't your father's inheritance.


- Original Message -
From: Edward W. Porter
To: agi@v2.listbox.com
Sent: Tuesday, October 09, 2007 12:24 PM
Subject: RE: [agi] Do the inference rules of categorical logic make sense?


RE: (1) THE VALUE OF “CHILD OF” AND “PARENT OF” RELATIONS; (2)
DISCUSSION OF POSSIBLE VALUE IN DISTINGUISHING BETWEEN GENERALIZATIONAL
AND COMPOSITIONAL INHERITANCE HIERARCHIES.

Re Mark Waser’s 10/9/2007 9:46 AM post: Perhaps Mark understands something
I don’t.

I think relations that can be viewed as “child of” and “parent of” in a
hierarchy of categories are extremely important (for reasons set forth in
more detail below) and it is not clear to me that Pei meant something
other than this.

If Mark or anyone else has reason to believe that “what [Pei] means is
quite different” than such “child of” and “parent of” relations, I would
appreciate being illuminated by what that different meaning is.



My understanding of NARS is that it is concerned with inheritance
relations, which, as I understand it, indicate the truth value of the
assumption that one category falls within another category, where category
is broadly defined to include not only what we normally think of as
categories, but also relationships, slots in relationships, and categories
defined by sets of one or more properties, attributes, elements,
relationships, or slots in relationships.  Thus, as I understand it, one
can view all inheritance statements as indicating the evidence that one
instance or category belongs to, and thus is “a child of” another
category, which includes, and thus can be viewed as “a parent” of, the
other.  Since inheritance relations are transitive, the resulting
hierarchy of categories involves nodes that can be considered ancestors
(i.e., parents, parents of parents, etc.) of others and nodes that can be
viewed as descendants (children, children of children, etc.) of others.
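
(As a toy illustration of that transitivity -- my own sketch in Python, not
NARS code -- the ancestors fall out as the transitive closure of the direct
child-to-parent links:)

from collections import defaultdict

parents = defaultdict(set)   # direct "child --> parent" inheritance links

def add_inheritance(child, parent):
    parents[child].add(parent)

def ancestors(term, seen=None):
    # Transitive closure: parents, parents of parents, etc.
    seen = set() if seen is None else seen
    for p in parents[term]:
        if p not in seen:
            seen.add(p)
            ancestors(p, seen)
    return seen

add_inheritance("robin", "bird")
add_inheritance("bird", "animal")
print(ancestors("robin"))   # {'bird', 'animal'}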

I tend to think of similarity as a sibling relationship under a shared
hidden parent category -- based on similar aspects of the siblings’
extensions and/or intensions.

In much of my own thinking I have thought of such categorization relations
as generalization, in which the parent is the genus and the child is
the species.  Generalization is important for many reasons.  First,
perception is trying to figure out which category or generalization of
things, actions, or situations various parts of a current set of sensory
information might fit.  Second, generalization is important because it
is necessary for implication.  All those Bayesian probabilities we are
used to thinking about, such as P(A|B,C), are totally useless unless we
have some way of knowing the probability that the situation being
considered contains a B or a C.  To do that you

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Mark Waser
Most of the discussion I read in Pei's article related to inheritance 
relations between terms, which operated as subjects and predicates in sentences 
that are inheritance statements, rather than between entire statements, unless 
the statement was a subject or a predicate of a higher order inheritance 
statement.  So what you are referring to appears to be beyond what I have read.

Label the statement "I am allowed to drink alcohol" as P and the statement "I 
am an adult" as Q.  P implies Q and Q implies P (assume that age 21 equals 
adult) --OR-- P is the parent of Q and Q is the parent of P.

Label the statement that "most ravens are black" as R and the statement that 
"this raven is white" as S.  R affects the probability of S and, to a lesser 
extent, S affects the probability of R (both in a negative direction) --OR-- R 
is the parent of S and S is the parent of R (although, realistically, the 
probability change is so minuscule that you really could argue that this isn't 
true).

NARS's inheritance is the inheritance of influence on the probability values.

- Original Message - 
  From: Edward W. Porter 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 09, 2007 1:12 PM
  Subject: RE: [agi] Do the inference rules of categorical logic make sense?


  Mark, 

  Thank you for your reply.  I just ate a lunch with too much fat (luckily 
largely olive oil) in it, so my brain is a little sleepy.  If it is not too 
much trouble, could you please map out the inheritance relationships from which 
one derives how "I am allowed to drink alcohol" is both a parent and the child 
of "I am an adult".  And could you please do the same with how "most ravens are 
black" is both parent and child of "this raven is white".  

  Most of the discussion I read in Pei's article related to inheritance 
relations between terms, which operated as subjects and predicates in sentences 
that are inheritance statements, rather than between entire statements, unless 
the statement was a subject or a predicate of a higher order inheritance 
statement.  So what you are referring to appears to be beyond what I have read.

  Edward W. Porter
  Porter  Associates
  24 String Bridge S12
  Exeter, NH 03833
  (617) 494-1722
  Fax (617) 494-1822
  [EMAIL PROTECTED]


-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 09, 2007 12:47 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


Thus, as I understand it, one can view all inheritance statements as 
indicating the evidence that one instance or category belongs to, and thus is 
"a child of" another category, which includes, and thus can be viewed as "a 
parent" of the other. 

Yes, that is inheritance as Pei uses it.  But are you comfortable with the 
fact that "I am allowed to drink alcohol" is normally both the parent and the 
child of "I am an adult" (and vice versa)?  How about the fact that "most 
ravens are black" is both the parent and child of "this raven is white" (and 
vice versa)?

Since inheritance relations are transitive, the resulting hierarchy of 
categories involves nodes that can be considered ancestors (i.e., parents, 
parents of parents, etc.) of others and nodes that can be viewed as descendants 
(children, children of children, etc.) of others.  

And how often do you really want to do this with concepts like the above -- 
or when the evidence is substantially less than unity?

And loops and transitivity are really ugly . . . . 

NARS really isn't your father's inheritance.

  - Original Message - 
  From: Edward W. Porter 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 09, 2007 12:24 PM
  Subject: RE: [agi] Do the inference rules of categorical logic make sense?


  RE: (1) THE VALUE OF "CHILD OF" AND "PARENT OF" RELATIONS; (2) 
DISCUSSION OF POSSIBLE VALUE IN DISTINGUISHING BETWEEN GENERALIZATIONAL AND 
COMPOSITIONAL INHERITANCE HIERARCHIES.

  Re Mark Waser's 10/9/2007 9:46 AM post: Perhaps Mark understands 
something I don't. 

  I think relations that can be viewed as "child of" and "parent of" in a 
hierarchy of categories are extremely important (for reasons set forth in more 
detail below) and it is not clear to me that Pei meant something other than 
this.

  If Mark or anyone else has reason to believe that "what [Pei] means is 
quite different" than such "child of" and "parent of" relations, I would 
appreciate being illuminated by what that different meaning is.




  My understanding of NARS is that it is concerned with inheritance 
relations, which, as I understand it, indicate the truth value of the assumption 
that one category falls within another category, where category is broadly 
defined to include not only what we normally think of as categories, but also 
relationships, slots in relationships, and categories defined by sets of one 
or more properties, attributes, elements

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Linas Vepstas
On Sun, Oct 07, 2007 at 12:36:10PM -0700, Charles D Hixson wrote:
 Edward W. Porter wrote:
 
 Fred is a human
 Fred is an animal
 
 You REALLY can't do good reasoning using formal logic in natural 
 language...at least in English.  That's why the invention of symbolic 
 logic was so important.

I suppose this was pounded to death in the rest of the thread, 
(which I haven't read) but still: syllogistic reasoning does occur 
in hypothesis formation, and thus, learning:

-- maybe humans are animals?  What evidence do I have to support this?
-- maybe animals are human? Can that be?

If Fred has an artificial heart, then perhaps he isn't simply 
a special case of an animal.

If some pig has human organs in it, then perhaps it's an animal that
is human.

Neither syllogistic deduction is purely false in the real world; 
there is an "it depends" aspect to it.  A learning AI would chalk it
up as a maybe, and see if this reasoning leads anywhere. I believe
Pei Wang's NARS system tries to do this; it seems more structured 
than the fuzzy-logic-type approaches that antedate it.

--linas



RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Edward W. Porter
Mark,



The basic inference rules in NARS that would support an implication of the
form "S is a child of P" are of the form:



DEDUCTION INFERENCE RULE:
 Given S --> M and M --> P, this implies S --> P

ABDUCTION INFERENCE RULE:
 Given S --> M and P --> M, this implies S --> P to some degree

INDUCTION INFERENCE RULE:
 Given M --> S and M --> P, this implies S --> P to some degree



where --> is the inheritance relation.
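
(A structural Python sketch of those three patterns -- my own paraphrase for
illustration only; the truth-value functions, which are the real substance,
are deliberately omitted since they are given in Pei's papers.  The point is
just that each rule fires when the two premises share the middle term M:)

def deduction(p1, p2):
    # (S --> M), (M --> P)  yields  (S --> P)
    (s, m1), (m2, p) = p1, p2
    return (s, p) if m1 == m2 else None

def abduction(p1, p2):
    # (S --> M), (P --> M)  yields  (S --> P), to some degree
    (s, m1), (p, m2) = p1, p2
    return (s, p) if m1 == m2 else None

def induction(p1, p2):
    # (M --> S), (M --> P)  yields  (S --> P), to some degree
    (m1, s), (m2, p) = p1, p2
    return (s, p) if m1 == m2 else None

print(deduction(("Fred", "human"), ("human", "animal")))  # ('Fred', 'animal')
print(induction(("Fred", "human"), ("Fred", "animal")))   # ('human', 'animal')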



Your arguments are of the very different form:

Given P and Q, this implies Q --> P and P --> Q



And



Given S and R, this implies S --> R and R --> S



 In the argument regarding drinking and being an adult, you do not
appear to use any of these NARS inference rules to show that P inherits
from Q or vice versa (unless, perhaps, one assumes multiple other NARS
sentences or terms that might help the inference along, such as an uber
category like the “category of all categories,” from which one could use
the abduction rule to imply both of the inheritances mentioned -- which one
would assume the system would have learned over time was such a weak
source of implication as to be normally useless).



But in that example, just from common sense reasoning, including knowledge
of the relevant subject matter (absent any knowledge of NARS), it appears
reasonable to imply P from Q and Q from P.  So if NARS did the same it
would be behaving in a common sense way.  Loops in transitivity might be
really ugly, but it seems any human-level AGI has to have the same ability
to deal with them as human common sense.



To be honest, I do not yet understand how implication is derived from the
inheritance relations in NARS.  Assuming truth values of one for the child
and the child/parent inheritance statement, I would guess a child implies its
parent with a truth value of one.  I would assume a parent with a truth
value of one implies a given child with a lesser value that decreases the
more often the parent is mapped against other children.



The argument claiming NARS says that R ("most ravens are black") is both
the parent and child of S ("this raven is white") (and vice versa),
similarly does not appear to be derivable from only the statements given
using the NARS inference rules.



Nor does my common sense reasoning help me understand why “most ravens are
black” is both the parent and child of “this raven is white.”  (Although
my common sense does tell me that “this raven is black” would provide
common sense inductive evidence for “most ravens are black” and that “this
raven” that is black would be a child of the category of “most ravens”
that are black.)



But I do understand that each of these two statements would tend to have
probabilistic effects on the other, as you suggested, assuming that the
fact a raven is black has implications on whether or not it is white.  But
such two-way probabilistic relationships are at the core of Bayesian
inference, so there is no reason why they should not be part of an AGI.


Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 2:28 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?



Most of the discussion I read in Pei's article related to inheritance
relations between terms, that operated as subject and predicates in
sentences that are inheritance statements, rather than between entire
statements, unless the statement was a subject or a predicate of a higher
order inheritance statement.  So what you are referring to appears to be
beyond what I have read.

Label the statement "I am allowed to drink alcohol" as P and the statement
"I am an adult" as Q.  P implies Q and Q implies P (assume that age 21
equals adult) --OR-- P is the parent of Q and Q is the parent of P.

Label the statement that "most ravens are black" as R and the statement
that "this raven is white" as S.  R affects the probability of S and, to a
lesser extent, S affects the probability of R (both in a negative
direction) --OR-- R is the parent of S and S is the parent of R (although,
realistically, the probability change is so minuscule that you really
could argue that this isn't true).

NARS's inheritance is the inheritance of influence on the probability
values.

- Original Message -

From: Edward W. Porter
To: agi@v2.listbox.com
Sent: Tuesday, October 09, 2007 1:12 PM
Subject: RE: [agi] Do the inference rules of categorical logic make sense?

Mark,

Thank you for your reply.  I just ate a lunch with too much fat (luckily
largely olive oil) in it, so my brain is a little sleepy.  If it is not
too much trouble, could you please map out the inheritance relationships
from which one derives how "I am allowed to drink alcohol" is both a
parent and the child of "I am an adult".  And could you please do the same
with how "most ravens are black"

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Pei Wang
In NARS, the Deduction/Induction/Abduction trio has (at least) three
different-though-isomorphic forms, one on inheritance, one on
implication, and one mixed.

For people who don't have access to the book, see
http://nars.wang.googlepages.com/wang.abduction.pdf , though the
symbols used in that paper are slightly different from the current
form.

Pei

On 10/9/07, Edward W. Porter [EMAIL PROTECTED] wrote:




 Mark,



 The basic inference rules in NARS that would support an implication of the
 form "S is a child of P" are of the form:



 DEDUCTION INFERENCE RULE:
  Given S --> M and M --> P, this implies S --> P

 ABDUCTION INFERENCE RULE:
  Given S --> M and P --> M, this implies S --> P to some degree

 INDUCTION INFERENCE RULE:
  Given M --> S and M --> P, this implies S --> P to some degree



 where --> is the inheritance relation.



 Your arguments are of the very different form:

 Given P and Q, this implies Q --> P and P --> Q



 And



 Given S and R, this implies S --> R and R --> S



  In the argument regarding drinking and being an adult, you do not
 appear to use any of these NARS inference rules to show that P inherits from
 Q or vice versa (unless, perhaps, one assumes multiple other NARS sentences
 or terms that might help the inference along, such as an uber category like
 the "category of all categories," from which one could use the abduction
 rule to imply both of the inheritances mentioned -- which one would assume the
 system would have learned over time was such a weak source of implication as
 to be normally useless).



 But in that example, just from common sense reasoning, including knowledge
 of the relevant subject matter, (absent any knowledge of NARS) it appears
 reasonable to imply P from Q and Q from P.  So if NARS did the same it would
 be behaving in a common sense way.  Loops in transitivity might be really
 ugly, but it seems any human-level AGI has to have the same ability to deal
 with them as human common sense.



 To be honest, I do not yet understand how implication is derived from the
 inheritance relations in NARS.  Assuming truth values of one for the child
 and the child/parent inheritance statement, I would guess a child implies its
 parent with a truth value of one.  I would assume a parent with a truth
 value of one implies a given child with a lesser value that decreases the
 more often the parent is mapped against other children.



 The argument claiming NARS says that R ("most ravens are black") is both the
 parent and child of S ("this raven is white") (and vice versa), similarly
 does not appear to be derivable from only the statements given using the
 NARS inference rules.



 Nor does my common sense reasoning help me understand why "most ravens are
 black" is both the parent and child of "this raven is white."  (Although
 my common sense does tell me that "this raven is black" would provide common
 sense inductive evidence for "most ravens are black" and that "this raven"
 that is black would be a child of the category of "most ravens" that are
 black.)



 But I do understand that each of these two statements would tend to have
 probabilistic effects on the other, as you suggested, assuming that the
 fact a raven is black has implications on whether or not it is white.  But
 such two-way probabilistic relationships are at the core of Bayesian
 inference, so there is no reason why they should not be part of an AGI.

 Edward W. Porter
 Porter  Associates
 24 String Bridge S12
 Exeter, NH 03833
 (617) 494-1722
 Fax (617) 494-1822
 [EMAIL PROTECTED]




 -Original Message-
 From: Mark Waser [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 09, 2007 2:28 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Do the inference rules of categorical logic make sense?



 Most of the discussion I read in Pei's article related to inheritance
 relations between terms, that operated as subject and predicates in
 sentences that are inheritance statements, rather than between entire
 statements, unless the statement was a subject or a predicate of a higher
 order inheritance statement.  So what you are referring to appears to be
 beyond what I have read.

 Label the statement I am allowed to drink alcohol as P and the statement
 I am an adult as Q.  P implies Q and Q implies P (assume that age 21
 equals adult) --OR-- P is the parent of Q and Q is the parent of P.

 Label the statement that most ravens are black as R and the statement that
 this raven is white as S.  R affects the probability of S and, to a lesser
 extent, S affects the probability of R (both in a negative direction) --OR--
 R is the parent of S and S is the parent of R (although, realistically, the
 probability change is so miniscule that you really could argue that this
 isn't true).

 NARS's inheritance is the inheritance of influence on the probability
 values.

 - Original Message -

 From: Edward W. Porter
 To: agi@v2.listbox.com
 Sent: Tuesday, October 09, 2007 1:12 PM
 Subject: RE: [agi

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Pei Wang
, Thomas Serre ("Learning a Dictionary of
 Shape-Components in Visual Cortex: Comparison with Neurons, Humans and
 Machines," the google-able article I have cited so many
 times), and many others.  Such hierarchical representations achieve their
 flexibility through a composition/generalization hierarchy which presumably
 maps easily into NARS.

 Another key problem in AI is context sensitivity.  A hierarchical
 representation scheme that is capable of computing measures of similarity,
 fit, and implications throughout multiple levels in such a hierarchical
 representation scheme of multiple aspects of a situation in real time can be
 capable of sophisticated real time context sensitivity.  In fact, the
 ability to perform relatively extensive real time matching and implication
 across multiple levels of compositional and generalization hierarchies has
 been a key feature of the types of systems I have been thinking of for
 years.

 That is one of the major reasons why I have argued for BREAKING THE SMALL
 HARDWARE MINDSET.

 I understand NARS's inheritance (or categorization) as being equivalent to
 both of what I have considered two of the major dimensions in an AGI's
 self-organizing memory: (1) generalization/similarity and (2) composition.  I
 was, however, aware that down in the compositional (comp) hierarchy can be
 viewed as up in the generalization (gen) hierarchy, since the set of things
 having one or more properties or elements of a composition can be viewed as
 a generalization of that composition (i.e., the generalization covering the
 category of things having that one or more properties or elements).

 Although I understand there is an important equivalence between down in the
 comp hierarchy and up in the gen hierarchy, and that the two could be
 viewed as one hierarchy, I have preferred to think of them as different
 hierarchies, because the type of gens one gets by going up in the gen
 hierarchy tends to be different from the type of gens one gets by going down
 in the comp hierarchy.

 Each possible set in the powerset (the set of all subsets) of elements
 (eles), relationships (rels), attributes (atts) and contextual patterns
 (contextual pats) could be considered a possible generalization.  I have
 assumed, as does Goertzel's Novamente, that there is a competitive ecosystem
 for representational resources, in which only the fittest pats and gens --
 as determined by some measure of usefulness to the system -- survive.  There
 are several major uses of gens, such as aiding in perception, providing
 inheritance of significant implication, providing an appropriate level of
 representation for learning, and providing invariant representation in
 higher level comps.  Although temporary gens will be generated at a
 relatively high frequency, somewhat like the inductive implications in NARS,
 the number of gens that survive and get incorporated into a lot of comps and
 episodic reps will be an infinitesimal fraction of the powerset of eles,
 rels, atts, and contextual features stored in the system.  Pats in the up
 direction in the gen hierarchy will tend to be ones that have been selected
 for their usefulness as generalizations.  They will often have a reasonable
 number of features that correspond to those of their species node, but with
 some of them more broadly defined.  The gens found by going down in the comp
 hierarchy are ones that have been selected for their representational value
 in a comp, and many of them would not normally be that valuable as what we
 normally think of as generalizations.

 In the type of system I have been thinking of, I have assumed there will be
 substantially less multiple inheritance in the up direction in the gen
 hierarchy than in the down direction in the comp hierarchy (in which there
 would be potential inheritance from every ele, rel, att, and contextual
 feature in a comp's descendant nodes at multiple levels in the comp
 hierarchy below it).  Thus, for spreading activation control purposes, I
 think it is valuable to distinguish between generalization and compositional
 hierarchies, although I understand they have an important equivalence that
 should not be ignored.

 I wonder if NARS makes such a distinction.

 These are only initial thoughts.  I hope to become part of a team that gets
 an early world-knowledge computing AGI up and running.  Perhaps when I do,
 feedback from reality will change my mind.

 I would welcome comments, not only from Mark, but also from other readers.


 Edward W. Porter
 Porter  Associates
 24 String Bridge S12
 Exeter, NH 03833
 (617) 494-1722
 Fax (617) 494-1822
 [EMAIL PROTECTED]



 -Original Message-
 From: Mark Waser [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 09, 2007 9:46 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Do the inference rules of categorical logic make sense?



 I don't believe that this is the case at all.  NARS correctly handles
 cases where entities co-occur

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Pei Wang
In NARS, "If A then B" is represented as an Implication statement "P
==> Q", whose truth value serves a similar role as P(B|A) in a
Bayesian network, though the two have subtle and important
differences. For detailed discussion, see
http://nars.wang.googlepages.com/wang.bayesianism.pdf and
http://nars.wang.googlepages.com/wang.confidence.pdf
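
(As I read wang.confidence.pdf, the contrast can be sketched in a few lines
of Python; the constant k and the evidence counts below are illustrative.
Frequency plays a role similar to a conditional probability, while confidence
records how much evidence backs it -- something a single probability value
cannot carry:)

K = 1.0  # evidential horizon; NARS uses a small constant here

def truth_value(w_plus, w):
    # w_plus = amount of positive evidence, w = total evidence
    frequency = w_plus / w     # role similar to P(B|A), estimated from experience
    confidence = w / (w + K)   # approaches 1 as evidence accumulates
    return frequency, confidence

print(truth_value(90, 100))  # (0.9, ~0.99): same ratio, lots of evidence
print(truth_value(9, 10))    # (0.9, ~0.91): same ratio, less evidence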

The Implication relation is isomorphic to the Inheritance relation,
but the two are not the same, and cannot exchange with each other. I
don't have a short explanation on this topic, so you'd have to read
the book, or at least
http://nars.wang.googlepages.com/wang.abduction.pdf

Pei

On 10/9/07, Edward W. Porter [EMAIL PROTECTED] wrote:




 It wasn't a question to you in particular, but to the list.



 You had suggested that the terms "parent" and "child" were awkward and
 misleading for probabilistic implication.  I was interested in seeing how
 inheritance statements would represent the types of probabilistic implication
 most of us are used to thinking in terms of.



 Bayesian probabilities provide a valuable tool for representation and
 inference.  If one has a probability statement such as p(A|B,C) I understand
 how NARS's inheritance rules are useful in determining whether you have a B
 and/or a C, and if you had an A, much of what that would entail.  I also
 understand how they could be used to determine when it would be appropriate
 for a given perceived or conceived pattern or set of patterns to inherit
 inferences from other patterns or categories.



 What I was asking is how categorical logic actually represents the rules of
 Bayesian inference, and how it derives them from inheritance statements.  I
 was also interested in how the truth values for the existence of B and C, if
 either or both were less than one, in the above examples, would be blended
 with the conditional probability of A that p(A|B,C) would imply if the truth
 values of B and C were one.



 I might be able to figure this out on my own, but I assume others could do
 it faster than I, and if somebody has already done it, rather than spending
 time trying to re-invent the wheel, it would be easier to just read it.



 I know Novamente has a Probabilistic Term Logic based on both inference from
 inheritance rules and Bayesian analysis, and I am looking forward to
 learning more about it, but until that day, perhaps somebody else, such as
 Pei, has already come up with a mapping between categorical logic and
 Bayesian probabilities.

 Ed Porter




 -Original Message-
 From: Mark Waser [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 09, 2007 5:32 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Do the inference rules of categorical logic make sense?


 I'm sorry . . . . but I don't understand the question . . . .

 - Original Message -
 From: Edward W. Porter
 To: agi@v2.listbox.com
 Sent: Tuesday, October 09, 2007 4:57 PM
 Subject: RE: [agi] Do the inference rules of categorical logic make sense?


 Mark Waser,

 With regard to your statement in the below post that

 "my point was meant to be that using the terms parent and child for
 probabilistic implication is very awkward and misleading,"

 perhaps someone could point out how categorical logic maps into and
 represents Bayesian probabilities (other than the vital role it could play
 in determining if you have terms corresponding to those in a given Bayesian
 probability statement---the role Pei was referring to when he said
 "Inference/reasoning is not about to find/prove the absolute truth, but
 to treat one thing (e.g., a novel object/situation) as another (which is
 better known in experience)").

 Edward W. Porter
 Porter  Associates
 24 String Bridge S12
 Exeter, NH 03833
 (617) 494-1722
 Fax (617) 494-1822
 [EMAIL PROTECTED]




 -Original Message-
 From: Mark Waser [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 09, 2007 4:25 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Do the inference rules of categorical logic make sense?





 To be honest, I do not yet understand how implication is derived from the
 inheritance relations in NARS.

 Implication is a form of inheritance.

 Assuming truth values of one for the child and child/parent inheritance
 statement, I would guess a child implies its parent with a truth value of
 one.  I would assume a parent with a truth value of one implies a given
 child with a lesser value that decrease the more often the parent is mapped
 against other children.

 A child implies its parent with the frequency of the implication statement.



 Your arguments are of the very different form:

 Given P and Q, this implies Q --> P and P --> Q

 My apologies.  I wasn't even talking about inference rules yet and was
 unclear.

 I assumed that you recognized the equivalence of adult and drinking age
 (i.e., P <=> Q) and realized that equivalence is exactly the same as two
 implication statements (P ==> Q and Q ==> P).  My point was meant to be that
 using the terms "parent" and "child"

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Pei Wang
Charles,

I fully understand your response --- it is typical when people
interpret NARS according to their ideas about how a formal logic
should be understood.

But NARS is VERY different. Especially, it uses a special semantics,
which defines truth and meaning in a way that is fundamentally
different from model-theoretic semantics (which is implicitly assumed
in your comments everywhere), and which I believe is closer to how truth
and meaning are treated in natural languages (so you may end up liking
it).

As Mark suggested, you may want to do some reading first (such as
http://nars.wang.googlepages.com/wang.semantics.pdf), and after that
the discussion will be much more fruitful and efficient. I'm sorry
that I don't have a shorter explanation to the related issues.

Pei

On 10/8/07, Charles D Hixson [EMAIL PROTECTED] wrote:
 Pei Wang wrote:
  Charles,
 
  What you said is correct for most formal logics formulating binary
  deduction, using model-theoretic semantics. However, Edward was
  talking about the categorical logic of NARS, though he put the
  statements in English, and omitted the truth values, which may have caused
  some misunderstanding.
 
  Pei
 
  On 10/7/07, Charles D Hixson [EMAIL PROTECTED] wrote:
 
  Edward W. Porter wrote:
 
  So is the following understanding correct?
 
  If you have two statements
 
  Fred is a human
  Fred is an animal
 
  And assuming you know nothing more about any of the three
  terms in both these statements, then each of the following
  would be an appropriate induction
 
  A human is an animal
  An animal is a human
  A human and an animal are similar
 
  It would only then be from further information that you
  would find the first of these two inductions has a larger
  truth value than the second and that the third probably
  has a larger truth value than the second.
 
  Edward W. Porter
  Porter  Associates
  24 String Bridge S12
  Exeter, NH 03833
  (617) 494-1722
  Fax (617) 494-1822
  [EMAIL PROTECTED]
 
 
  Actually, you know less than you have implied.
  You know that there exists an entity referred to as Fred, and that this
  entity is a member of both the set human and the set animal.  You aren't
  justified in concluding that any other member of the set human is also a
  member of the set animal.  And conversely.  And the only argument for
  similarity is that the intersection isn't empty.
 
  E.g.:
  Fred is a possessor of purple hair.   (He dyed his hair)
  Fred is a possessor of jellyfish DNA. (He was a subject in a molecular
  biology experiment.  His skin would glow green under proper stimulation.)
 
  Now admittedly these sentences would usually be said in a different form
  (i.e., Fred has green hair), but they are reasonable translations of
  an equivalent sentence (Fred is a member of the set of people with
  green hair).
 
  You REALLY can't do good reasoning using formal logic in natural
  language...at least in English.  That's why the invention of symbolic
  logic was so important.
 
  If you want to use the old form of syllogism, then at least one of the
  sentences needs to have either an existential or universal quantifier.
  Otherwise it isn't a syllogism, but just a pair of statements.  And all
  that you can conclude from them is that they have been asserted.  (If
  they're directly contradictory, then you may question the reliability of
  the asserter...but that's tricky, as often things that appear to be
  contradictions actually aren't.)
 
  Of course, what this really means is that logic is unsuited for
  conversation... but it also implies that you shouldn't program your
  rule-sets in natural language.  You'll almost certainly either get them
  wrong or be ambiguous.  (Ambiguity is more common, but it's not
  exclusive of wrong.)
 
 Well, truth values would allow one to assign probabilities to the
 various statements (i.e., the proffered values plus some uncertainty),
 but he specifically said we didn't know anything else about the terms,
 so I don't see how one can go any further.  If you don't know what a
 human is, then knowing that Fred is one doesn't tell you anything about
 his other characteristics.

 So when you have two statements about Fred, you know the two
 statements, but you don't know anything about the relationship between
 them except that their intersection is non-empty.  Since it was
 specified that we didn't know anything about them, Fred could be a line,
 and human could be vertical lines and animal could be named entities.

 For fancier forms of logic (induction, deduction, etc.) you need to have
 more information.  Most forms require that there be at least a partial
 ordering available, if not several.  Many modes of reasoning require
 that a complete ordering be available.  (It doesn't need to be an
 

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Charles D Hixson
OK.  I've read the paper, and don't see where I've made any errors.  It 
looks to me as if NARS can be modeled by a prototype-based language with 
operators for "is an ancestor of" and "is a descendant of".  I do have 
trouble with the language terms that you use, though admittedly they 
appear to be standard for logicians (to the extent that I'm familiar 
with their dialect).  That might well not be a good implementation, but 
it appears to be a reasonable model.


To me a model can well be dynamic and experience based.  In fact I 
wouldn't consider a model very intelligent if it didn't either itself 
adapt itself to experience, or weren't embedded in a matrix which 
adapted it to experiences.  (This doesn't seem to be quite the same 
meaning that you use for model.  Your separation of the rules of 
inference, the rational faculty, and the model as a fixed and unchanging 
condition doesn't match my use of the term.)  I might pull out the rules 
of inference as separate pieces and stick them into a datafile, but 
datafiles can be changed, if anything, more readily than programs...and 
programs are readily changeable.  To me it appears clear that much of 
the language would need to be interpretive rather than compiled.  One 
should pre-compile what one can for the sake of efficiency, but with the 
knowledge that this sacrifices flexibility for speed.


I still find that I am forced to interpret the inheritance relationship 
as an "is a child of" relationship.  And I find the idea of continually 
calculating the powerset of inheritance relationships unappealing.  
There may not be a better way, but if there isn't, then AGI can't move 
forward without vastly more powerful machines.  Probably, however, the 
calculations could be shortcut by increasing the local storage a bit.  
If each node maintained a list of parents and children, and a count of 
descendants and ancestors, it might suffice.  This would increase storage 
requirements, but drastically cut calculation and still enable the 
calculation of confidence.  Updating the counts could be saved for 
dreamtime.  This would imply that during the early part of learning, 
sleep would be a frequent necessity...but it should become less 
necessary as the ratio of extant knowledge to new knowledge learned 
increased.  (Note that in this case the amount of new knowledge would be 
a measured quantity, not an arbitrary constant.)
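
(For concreteness, the bookkeeping described above might look roughly like
the following Python -- an illustration of the idea, not working NARS code:
links stay cheap, and the expensive recounts are deferred to an offline
"dreamtime" pass:)

class Node:
    def __init__(self, name):
        self.name = name
        self.parents, self.children = set(), set()
        self.ancestor_count = 0     # cached; may be stale between passes
        self.descendant_count = 0

def link(child, parent):
    # Cheap online update; cached counts are left stale until dreamtime.
    child.parents.add(parent)
    parent.children.add(child)

def dreamtime(nodes):
    # Offline pass: recompute all cached counts in one sweep.
    def closure(node, attr):
        seen, stack = set(), list(getattr(node, attr))
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(getattr(n, attr))
        return seen
    for n in nodes:
        n.ancestor_count = len(closure(n, "parents"))
        n.descendant_count = len(closure(n, "children"))

a, b, c = Node("animal"), Node("bird"), Node("robin")
link(c, b); link(b, a)
dreamtime([a, b, c])
print(c.ancestor_count, a.descendant_count)   # 2 2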


I do feel that the limited sensory modality of the environment (i.e., 
reading the keyboard) makes AGI unlikely to be feasible.  It seems to me 
that one of the necessary components of true intelligence is integrating 
multi-modal sensory experience.  This doesn't necessarily mean vision 
and touch, but SOMETHING.  As such I can see NARS (or some similar 
system) as a component of an AGI, but not as a core component (if such 
exists).  OTOH, it might develop into something that would exhibit 
consciousness.  But note that consciousness appears to be primarily an 
evaluative function rather than a decision making component.  It logs 
and evaluates decisions that have been made, and maintains a delusion 
that it made them, but they are actually made by other processes, whose 
nature is less obvious.  (It may not actually evaluate them, but I 
haven't heard of any evidence to justify denying that, and it's 
certainly a good delusion.  Still, were I to wager, I'd wager that it 
was basically a logging function, and that the evaluations were also 
made by other processes.)  Consciousness appears to have developed to 
handle those functions that required serialization...and when language 
came along, it appeared in consciousness, because the limited bandwidth 
available necessitated serial conversion.



Pei Wang wrote:

Charles,

I fully understand your response --- it is typical when people
interpret NARS according to their ideas about how a formal logic
should be understood.

But NARS is VERY different. Especially, it uses a special semantics,
which defines truth and meaning in a way that is fundamentally
different from model-theoretic semantics (which is implicitly assumed
in your comments everywhere), and which I believe is closer to how truth
and meaning are treated in natural languages (so you may end up liking
it).

As Mark suggested, you may want to do some reading first (such as
http://nars.wang.googlepages.com/wang.semantics.pdf), and after that
the discussion will be much more fruitful and efficient. I'm sorry
that I don't have a shorter explanation to the related issues.

Pei

On 10/8/07, Charles D Hixson [EMAIL PROTECTED] wrote:
  

Pei Wang wrote:


Charles,

What you said is correct for most formal logics formulating binary
deduction, using model-theoretic semantics. However, Edward was
talking about the categorical logic of NARS, though he put the
statements in English, and omitted the truth values, which may have caused
some misunderstanding.

Pei

On 10/7/07, Charles D Hixson [EMAIL PROTECTED] wrote:

  


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Vladimir Nesov
Charles,

In experience-based learning there are two main problems relating to
knowledge acquisition: you have to come up with hypotheses, and you
have to assess their plausibility. Theoretically, you could consider all
hypotheses, but you can't actually do it explicitly because of
combinatorial explosion. Instead you create them based on various
heuristics. Assessment of plausibility also can't be based on proof
most of the time, as new knowledge isn't analytic: it asserts
something about the future even though the future hasn't happened yet. So,
various assessments of plausibility based on usefulness or support by
evidence need to be kept track of. As those 'theories' are not limited
to explicit language-level statements, they can cumulatively provide
all needed facets of meaning.
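
(A toy Python rendering of these two halves -- hypothesis proposal by a cheap
heuristic, here simple co-occurrence, plus an evidence-based plausibility
score; both the heuristic and the scoring are illustrative, not any
particular system's:)

from collections import Counter
from itertools import combinations

cooccur = Counter()

def observe(terms):
    # One experience: a set of terms that occurred together.
    for a, b in combinations(sorted(terms), 2):
        cooccur[(a, b)] += 1

def hypotheses(min_support=2):
    # Heuristic proposal: only pairs seen together often enough are kept,
    # each with a toy plausibility score that grows with evidence.
    return {pair: n / (n + 1.0)
            for pair, n in cooccur.items() if n >= min_support}

observe({"raven", "black"})
observe({"raven", "black"})
observe({"raven", "loud"})
print(hypotheses())   # {('black', 'raven'): 0.666...}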

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Pei Wang
Charles,

To be concrete, let me summarize the assumptions in your previous
comments, and briefly explain why they don't apply to NARS.

*. The meaning of "Fred" is an entity referred to by the term --- in
NARS, the meaning of a term is its relations with other terms
(according to the system's experience), not an outside entity.

*. The meanings of "human" and "animal" are sets of entities --- in
NARS, once again the meaning of these terms is determined by their
experienced relations with other terms, not sets of outside entities.

*. The "is a" relation (as in "Fred is a human") is represented as a
membership relation in set theory --- in NARS, it is an inheritance
relation with an experience-based truth value.

*. The truth value of a statement measures whether, or how much, the
statement matches the corresponding fact (you didn't say so
explicitly, but it is implied by your comments about INDUCTION and
ABDUCTION) --- in NARS, as you have read in my paper, truth value
measures evidential support, that is, how much a statement matches
what the system knows, not the world as it is.

Now let's see Edward's example for induction: from "Fred is a human"
and "Fred is an animal" to derive "A human is an animal" and "An
animal is a human" (truth values omitted). You said

 Actually, you know less than you have implied.
 You know that there exists an entity referred to as Fred, and that this
 entity is a member of both the set human and the set animal.  You aren't
 justified in concluding that any other member of the set human is also a
 member of the set animal.  And conversely.

which is correct deduction according to a model-theoretic
interpretation of the statements. However, under the
experience-grounded semantics, the NARS conclusions don't state that
the two sets "human" and "animal", as we know them, include each
other --- that cannot be derived, even in a probabilistic sense.
Instead, they state that the two concepts, "human" and "animal", as
the system knows them, can substitute for each other, in a certain way and
to a certain extent. An intelligent system will use this kind of inference
to predict the future (such as to expect that the next time "human" is used
as a predicate term, it can be replaced by "animal"), so as to go
beyond the scope of binary deduction. Such predictions can turn out to
be wrong, but I believe this is how adaptation/intelligence works.
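
(To make the contrast concrete, a toy Python sketch of experience-grounded
meaning -- entirely illustrative: a term's meaning is nothing but the
experienced statements it occurs in, not a pointer to an outside entity or
set:)

from collections import defaultdict

experience = defaultdict(set)   # term -> statements it has occurred in

def record(subject, copula, predicate):
    stmt = (subject, copula, predicate)
    experience[subject].add(stmt)
    experience[predicate].add(stmt)

def meaning(term):
    # Not an outside entity or set: just the term's experienced relations.
    return experience[term]

record("Fred", "-->", "human")
record("Fred", "-->", "animal")
print(meaning("human"))   # {('Fred', '-->', 'human')} -- all the system knows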

For now I won't comment on the other issues in your following message
--- there are too many of them. Instead, I hope to make myself clear
on the basic topics first.

Pei

On 10/8/07, Charles D Hixson [EMAIL PROTECTED] wrote:
 OK.  I've read the paper, and don't see where I've made any errors.  It
 looks to me as if NARS can be modeled by a prototype-based language with
 operators for "is an ancestor of" and "is a descendant of".  I do have
 trouble with the language terms that you use, though admittedly they
 appear to be standard for logicians (to the extent that I'm familiar
 with their dialect).  That might well not be a good implementation, but
 it appears to be a reasonable model.

 To me a model can well be dynamic and experience based.  In fact I
 wouldn't consider a model very intelligent if it didn't either itself
 adapt itself to experience, or it weren't embedded in a matrix which
 adapted it to experiences.  (This doesn't seem to be quite the same
 meaning that you use for model.  Your separation of the rules of
 inference, the rational faculty, and the model as a fixed and unchanging
 condition doesn't match my use of the term.)  I might pull out the rules
 of inference as separate pieces and stick them into a datafile, but
 datafiles can be changed, if anything, more readily than programs...and
 programs are readily changeable.  To me it appears clear that much of
 the language would need to be interpretive rather than compiled.  One
 should pre-compile what one can for the sake of efficiency, but with the
 knowledge that this sacrifices flexibility for speed.

 I still find that I am forced to interpret the inheritance relationship
 as an "is a child of" relationship.  And I find the idea of continually
 calculating the powerset of inheritance relationships unappealing.
 There may not be a better way, but if there isn't, then AGI can't move
 forward without vastly more powerful machines.  Probably, however, the
 calculations could be shortcut by increasing the local storage a bit.
 If each node maintained a list of parents and children, and a count of
 descendants and ancestors it might suffice.  This would increase storage
 requirements, but drastically cut calculation and still enable the
 calculation of confidence.  Updating the counts could be saved for
 dreamtime.  This would imply that during the early part of learning
 sleep would be a frequent necessity...but it should become less
 necessary as the ratio of extant knowledge to new knowledge learned
 increased.  (Note that in this case the amount of new knowledge would be
 a measured quantity, not an arbitrary constant.)

 I 

RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Edward W. Porter
Charles D. Hixson’s post of 10/8/2007 5:50 PM was quite impressive as a
first reaction upon reading about NARS.

After I first read Pei Wang’s “A Logic of Categorization”, it took me
quite a while to know what I thought of it.  It was not until I got
answers to some of my basic questions from Pei through postings under the
current thread title that I was able to start to understand it reasonably
well.  Since then I have been coming to understand that it is quite
similar to some of my own previous thinking, and if it were used in a
certain way, it would seem to have tremendous potential.

But I still have some questions about it, such as the following (PEI, IF
YOU ARE READING THIS I WOULD BE INTERESTED IN HEARING YOUR ANSWERS):

--(1) How are episodes represented in NARS?
--(2) How are complex patterns and sets of patterns with many interrelated
elements represented in NARS?  (I.e., how would NARS represent an auto
mechanic’s understanding of automobiles?  Would it be in terms of many
thousands of sentences containing relational inheritance statements such
as those shown on page 197 of “A Logic of Categorization”?)
--(3) How are time and temporal patterns represented?
--(4) How are specific mappings between the elements of a pattern and what
they map to represented in NARS?
--(5) How does NARS learn behaviors?
--(6) Finally, this is a much larger question.  Is it really optimal to
limit your representational scheme to a language in which all sentences
are based on the inheritance relation?

With regard to Question (6):

Categorization is essential.  I don’t question that.  I believe the
pattern is the essential source of intelligence.  It is essential to
implication and reasoning from experiences.  NARS’s categorization relates
to patterns and relationships between patterns.  Its patterns are
represented in a generalization hierarchy (where a property or set of
properties can be viewed as a generalization), with a higher level pattern
(i.e., category) being able to represent different species of itself in
the different contexts where those different species are appropriate,
thus helping to solve two of the major problems in AI, that of
non-literal matching and context appropriateness.

All this is well and good.  But without having had a chance to fully
consider the subject it seems to me that there might be other aspects of
reality and representation that -- even if they might all be reducible to
representation in terms of categorization -- could perhaps be more easily
thought of by us poor humans in terms of concepts other than
categorization.

For example, Novamente bases its inference and much of its learning on
PTL, Probabilistic Term Logic, which is based on inheritance relations,
much as is NARS.  But both of Ben’s articles on Novamente spend a lot of
time describing things in terms like “hypergraph”, “maps”, “attractors”,
“logical unification”, “PredicateNodes”, “genetic programming”, and
“associative links”.  Yes, perhaps all these things could be thought of as
categories, inheritance statements, and things derived from them of the
type described in your paper “A Logic of Categorization”, and such thoughts
might provide valuable insights, but is that the most efficient way for us
mortals to think of them and for a machine to represent them?

I would be interested in hearing your answer to all these questions.


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Pei Wang
On 10/8/07, Edward W. Porter [EMAIL PROTECTED] wrote:

 --(1) How are episodes represented in NARS?

As events --- see http://nars.wang.googlepages.com/wang.roadmap.pdf
, pages 7-8

 --(2) How are complex pattern and sets of patterns with many interrelated
 elements represented in NARS?  (I.e., how would NARS represents an auto
 mechanic's understanding of automobiles?  Would it be in terms of many
 thousands of sentences containing relational inheritance statements such as
 those shown on page 197 of A Logic of Categorization?)

Not necessarily inheritance statements, but Narsese statements in general.

 --(3) How are time and temporal patterns represented?

As events or operations --- again, see
http://nars.wang.googlepages.com/wang.roadmap.pdf , pages 7-8

 --(4) How are specific mappings between the elements of a pattern and what
 they map to represented in NARS?

As various types of relation, which are special type of term.

 --(5) How does NARS learn behaviors?

Mainly through procedural reasoning --- the above paper has a brief
description, and the book has a more detailed description, though I'm
still working on the details.

 --(6) Finally, this is a much larger question.  Is it really optimal to
 limit your representational scheme to a language in which all sentences are
 based on the inheritance relation?

Well, it indeed deserves a longer answer.

First, NARS doesn't use the inheritance relation for all sentences ---
in the current implementation, there are four relations in the memory:
inheritance, similarity, implication, and equivalence. Though the
latter three are derived from inheritance conceptually, they are
processed on their own.

Second, to say the memory contains four basic relation types doesn't
prevent the system from representing and processing other
user-defined relations --- see the above paper, page 5, "Products
and Images". It is just that only the four basic types have fixed
meaning, while the meaning of the other relations is learned from
experience.
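
(For readers following along, a small Python sketch of a Narsese-style
statement with the four basic copulas listed above -- the copula symbols
follow the NARS papers, but the class itself is only my illustration, not
NARS code:)

from dataclasses import dataclass

COPULAS = {
    "-->": "inheritance",
    "<->": "similarity",
    "==>": "implication",
    "<=>": "equivalence",
}

@dataclass
class Statement:
    subject: str
    copula: str        # one of the four basic copulas above
    predicate: str
    frequency: float   # experience-grounded truth value
    confidence: float

s = Statement("raven", "-->", "bird", frequency=0.98, confidence=0.90)
print(f"<{s.subject} {s.copula} {s.predicate}>. %{s.frequency};{s.confidence}%")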

 With regard to Question (6):

 Categorization is essential.  I don't question that.  I believe the pattern
 is the essential source of intelligence.  It is essential to implication and
 reasoning from experiences.  NARS's categorization relates to patterns and
 relationships between patterns.  Its patterns are represented in a
 generalization hierarchy (where a property or set of properties can be
 viewed as a generalization), with a higher level pattern (i.e., category)
 being able to represent different species of itself in the different
 contexts where those different species are appropriate, thus helping to
 solve two of the major problems in AI, that of non-literal matching and
 context appropriateness.

 All this is well and good.  But without having had a chance to fully
 consider the subject it seems to me that there might be other aspects of
 reality and representation that -- even if they might all be reducible to
 representation in terms of categorization -- could perhaps be more easily
 thought of by us poor humans in terms of concepts other than categorization.

NARS doesn't rule out problem-specific and domain-specific
representation, though they are handled at a different level. Narsese
is like the native language of NARS, though based on it the system
can learn various types of second/foreign languages (including
natural languages). However, this is different from merging those
languages into Narsese. See the above paper, pages 9-10, Natural
languages, for a brief explanation.

 For example, Novamente bases its inference and much of its learning on PTL,
 Probabilistic Term Logic, which is based on inheritance relations, much as
 is NARS.  But both of Ben's articles on Novamente spend a lot of time
 describing things in terms like hypergraph, maps, attractors, logical
 unification, PredicateNodes, genetic programming, and associative
 links.  Yes, perhaps all these things could be thought of as categories,
 inheritance statements, and things derived from them of the type described
 in your paper "A Logic of Categorization", and such thoughts might provide
 valuable insights, but is that the most efficient way for us mortals to
 think of them and for a machine to represent them?

NARS and Novamente surely still have some family resemblance left
--- for a family story, read
http://www.goertzel.org/benzine/WakingUpFromTheEconomyOfDreams.htm

These two systems have many similarities, as well as important
differences, on which I and  Ben have debated for years. It is too big
a topic to be addressed here.

Pei



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Charles D Hixson

Mike Tintner wrote:
Vladimir:  In experience-based learning there are two main problems 
relating to

knowledge acquisition: you have to come up with hypotheses and you
have to assess their plausibility. ...you create them based on various
heuristics.


How is this different from narrow AI? It seems like narrow AI - does 
Nars have the ability to learn unprogrammed, or invent, totally new 
kinds of logic? Or kinds of algebra?


In fact, the definitions of NARS:

"NARS is intelligent in the sense that it is adaptive, and works with 
insufficient knowledge and resources.

"By adaptive, we mean that NARS uses its experience (i.e., the history of 
its interaction with the environment) as the guidance of its inference 
activities.

"For each question, it looks for an answer that is most consistent with its 
experience (under the restriction of available resources)."

define narrow AI systems - which are also intelligent, adaptive, 
work with insufficient knowledge and resources and learn from 
experience.  There seems to be nothing in those definitions which is 
distinctive to AGI.


With a sufficient knowledge base, which would require learning, NARS 
looks as if it could categorize that which it knows about, and make 
guesses as to how certain pieces of information are related to other 
pieces of information.


An extended version should be adaptive in the patterns that it recognizes.

OTOH, I don't recognize any features that would enable it to take 
independent action, so I suspect that it would be but one module of a 
more complex system. 
N.B.:  I'm definitely no expert at NARS; I've only read two of the 
papers and a few arguments.  Features that I didn't notice could well be 
present.  And they could certainly be in the planning stage.


I'm a bit hesitant about the theoretical framework, as it appears 
computationally expensive.  Still, implementation doesn't necessarily 
follow theory, and theory can jump over the gnarly bits, leaving them 
for implementation.  It's possible that lazy evaluation and postponed 
stability calculations could make things a LOT more efficient.  These 
probably aren't practical until the database grows to a reasonable size, 
however.


But as I understand it, this still wouldn't be an AGI, but merely a 
categorizer.  (OTOH, I only read two of the papers.  These could just be 
the papers that cover the categorizer.  Plausibly other papers cover 
other aspects.)


N.B.:  The current version of NARS, as described, only parses a 
specialized language covering topics of inheritance of characteristics.  
As such, that's all that was covered by the paper I most recently read.  
This doesn't appear to be an inherent limitation, as the terminal nodes 
are primitive text and, as such, could, in principle, invoke other 
routines, or refer to the contents of an image.  The program would 
neither know nor care.
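
To make Charles's closing point concrete, here is a toy sketch, mine rather
than anything from the NARS papers, with every name invented: the reasoning
side treats terms as opaque tokens, while a separate grounding table may
bind some tokens to routines or data that the reasoner never inspects.

# Toy sketch of the point above; not NARS code, all names hypothetical.
# The inference side sees terms only as opaque tokens; a separate
# grounding table can bind some tokens to routines or data, and the
# reasoner neither knows nor cares.

inheritance = {("Fred", "human"), ("human", "animal")}

grounding = {
    "Fred": lambda: "pixels-of-fred.jpg",       # could load an image instead
    "human": lambda: "call-a-classifier-here",  # could invoke another routine
}

def deducible(subject, predicate):
    """Follow inheritance links transitively, never looking inside a term."""
    frontier, seen = {subject}, set()
    while frontier:
        term = frontier.pop()
        if term == predicate:
            return True
        seen.add(term)
        frontier |= {p for (s, p) in inheritance if s == term and p not in seen}
    return False

print(deducible("Fred", "animal"))  # True, no grounding is ever evaluated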





Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Pei Wang
Charles,

The "computational complexity" or "resource expense" of NARS is
another aspect in which this system is fundamentally different from
existing systems. I understand that from the inference rules alone,
people will think it is too expensive to be actually implemented,
simply because there are so many possible ways to make inferences. You
may want to read http://nars.wang.googlepages.com/wang.computation.pdf
to see how the inference processes are controlled in the system.
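
For readers who don't follow the link, the flavor of that control can be
pictured roughly as follows. This is a toy sketch under my own assumptions,
not the actual NARS mechanism: tasks wait in a pool with priorities, each
step serves one task chosen with priority-biased probability, and a served
task's priority decays, so no single line of inference monopolizes the
insufficient resources.

# Toy sketch of resource-bounded inference control; my assumptions,
# not NARS internals. Higher-priority tasks are served more often,
# but service decays priority, so everything eventually gets a turn.

import random

class TaskPool:
    def __init__(self, decay=0.9):
        self.priority = {}   # task description -> priority in (0, 1]
        self.decay = decay

    def put(self, task, priority):
        self.priority[task] = priority

    def step(self):
        """Serve one task, chosen with probability proportional to priority."""
        if not self.priority:
            return None
        tasks = list(self.priority)
        weights = [self.priority[t] for t in tasks]
        task = random.choices(tasks, weights=weights)[0]
        self.priority[task] *= self.decay   # decay so others get their turn
        return task

pool = TaskPool()
pool.put("judgment: human --> animal", 0.8)
pool.put("question: Fred {-- human?", 0.5)
for _ in range(4):
    print(pool.step())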

I've commented on the perceived limitation of the inheritance-based
language in my reply to Edward.

Pei

On 10/8/07, Charles D Hixson [EMAIL PROTECTED] wrote:
 Mike Tintner wrote:
  Vladimir: "In experience-based learning there are two main problems
  relating to knowledge acquisition: you have to come up with hypotheses
  and you have to assess their plausibility. ...you create them based on
  various heuristics."
 
  How is this different from narrow AI? It seems like narrow AI - does
  NARS have the ability to learn unprogrammed, or invent, totally new
  kinds of logic? Or kinds of algebra?
 
  In fact, the definitions of NARS:
 
  "NARS is intelligent in the sense that it is adaptive, and works with
  insufficient knowledge and resources."
 
  "By adaptive, we mean that NARS uses its experience (i.e., the history
  of its interaction with the environment) as the guidance of its
  inference activities."
 
  "For each question, it looks for an answer that is most consistent with
  its experience (under the restriction of available resources)."
 
  equally define narrow AI systems - which are also intelligent, adaptive,
  work with insufficient knowledge and resources and learn from
  experience.  There seems to be nothing in those definitions which is
  distinctive to AGI.
 
 With a sufficient knowledge base, which would require learning, NARS
 looks as if it could categorize that which it knows about, and make
 guesses as to how certain pieces of information are related to other
 pieces of information.

 An extended version should be adaptive in the patterns that it recognizes.

 OTOH, I don't recognize any features that would enable it to take
 independent action, so I suspect that it would be but one module of a
 more complex system.
 N.B.:  I'm definitely no expert at NARS; I've only read two of the
 papers and a few arguments.  Features that I didn't notice could well be
 present.  And they could certainly be in the planning stage.

 I'm a bit hesitant about the theoretical framework, as it appears
 computationally expensive.  Still, implementation doesn't necessarily
 follow theory, and theory can jump over the gnarly bits, leaving them
 for implementation.  It's possible that lazy evaluation and postponed
 stability calculations could make things a LOT more efficient.  These
 probably aren't practical until the database grows to a reasonable size,
 however.

 But as I understand it, this still wouldn't be an AGI, but merely a
 categorizer.  (OTOH, I only read two of the papers.  These could just be
 the papers that cover the categorizer.  Plausibly other papers cover
 other aspects.)

 N.B.:  The current version of NARS, as described, only parses a
 specialized language covering topics of inheritance of characteristics.
 As such, that's all that was covered by the paper I most recently read.
 This doesn't appear to be an inherent limitation, as the terminal nodes
 are primitive text and, as such, could, in principle, invoke other
 routines, or refer to the contents of an image.  The program would
 neither know nor care.




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-07 Thread Charles D Hixson

Edward W. Porter wrote:


So is the following understanding correct?

If you have two statements

Fred is a human
Fred is an animal

And assuming you know nothing more about any of the three
terms in both these statements, then each of the following
would be an appropriate induction

A human is an animal
An animal is a human
A human and an animal are similar

It would only then be from further information that you
would find the first of these two inductions has a larger
truth value than the second and that the third probably
has a larger truth value than the second.

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

Actually, you know less than you have implied. 
You know that there exists an entity referred to as Fred, and that this 
entity is a member of both the set human and the set animal.  You aren't 
justified in concluding that any other member of the set human is also a 
member of the set animal.  And conversely.  And the only argument for 
similarity is that the intersection isn't empty.


E.g.:
Fred is a possessor of purple hair.   (He dyed his hair)
Fred is a possessor of jellyfish DNA. (He was a subject in a molecular 
biology experiment.  His skin would glow green under proper stimulation.)


Now admittedly these sentences would usually be said in a different form 
(i.e., Fred has purple hair), but they are reasonable translations of 
an equivalent sentence (Fred is a member of the set of people with 
purple hair).
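
In plain set terms, Charles's intersection point above looks like this; a
bare illustration, not NARS semantics:

# Plain set-theoretic illustration, not NARS semantics: one shared
# member makes the intersection non-empty, but licenses neither
# inclusion between the two sets.

human = {"Fred", "Sue"}
animal = {"Fred", "Rex"}

print(bool(human & animal))  # True: the only argument for similarity
print(human <= animal)       # False: Fred alone does not justify this
print(animal <= human)       # False: and conversely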


You REALLY can't do good reasoning using formal logic in natural 
language...at least in English.  That's why the invention of symbolic 
logic was so important.


If you want to use the old form of syllogism, then at least one of the 
sentences needs to have either an existential or universal quantifier.  
Otherwise it isn't a syllogism, but just a pair of statements.  And all 
that you can conclude from them is that they have been asserted.  (If 
they're directly contradictory, then you may question the reliability of 
the asserter...but that's tricky, as often things that appear to be 
contradictions actually aren't.)


Of course, what this really means is that logic is unsuited for 
conversation... but it also implies that you shouldn't program your 
rule-sets in natural language.  You'll almost certainly either get them 
wrong or be ambiguous.  (Ambiguity is more common, but it's not 
exclusive of wrong.)





Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-07 Thread Mike Dougherty
On 10/7/07, Charles D Hixson [EMAIL PROTECTED] wrote:
 ... logic is unsuited for conversation...

what a great quote



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-07 Thread Pei Wang
Charles,

What you said is correct for most formal logics formulating binary
deduction, using model-theoretic semantics. However, Edward was
talking about the categorical logic of NARS, though he put the
statements in English and omitted the truth values, which may have
caused some misunderstanding.

Pei

On 10/7/07, Charles D Hixson [EMAIL PROTECTED] wrote:
 Edward W. Porter wrote:
 
  So is the following understanding correct?
 
  If you have two statements
 
  Fred is a human
  Fred is an animal
 
  And assuming you know nothing more about any of the three
  terms in both these statements, then each of the following
  would be an appropriate induction
 
  A human is an animal
  An animal is a human
  A human and an animal are similar
 
  It would only then be from further information that you
  would find the first of these two inductions has a larger
  truth value than the second and that the third probably
  has a larger truth value than the second.
 
  Edward W. Porter
  Porter  Associates
  24 String Bridge S12
  Exeter, NH 03833
  (617) 494-1722
  Fax (617) 494-1822
  [EMAIL PROTECTED]
 
 Actually, you know less than you have implied.
 You know that there exists an entity referred to as Fred, and that this
 entity is a member of both the set human and the set animal.  You aren't
 justified in concluding that any other member of the set human is also a
 member of the set animal.  And conversely.  And the only argument for
 similarity is that the intersection isn't empty.

 E.g.:
 Fred is a possessor of purple hair.   (He dyed his hair)
 Fred is a possessor of jellyfish DNA. (He was a subject in a molecular
 biology experiment.  His skin would glow green under proper stimulation.)

 Now admittedly these sentences would usually be said in a different form
 (i.e., Fred has purple hair), but they are reasonable translations of
 an equivalent sentence (Fred is a member of the set of people with
 purple hair).

 You REALLY can't do good reasoning using formal logic in natural
 language...at least in English.  That's why the invention of symbolic
 logic was so important.

 If you want to use the old form of syllogism, then at least one of the
 sentences needs to have either an existential or universal quantifier.
 Otherwise it isn't a syllogism, but just a pair of statements.  And all
 that you can conclude from them is that they have been asserted.  (If
 they're directly contradictory, then you may question the reliability of
 the asserter...but that's tricky, as often things that appear to be
 contradictions actually aren't.)

 Of course, what this really means is that logic is unsuited for
 conversation... but it also implies that you shouldn't program your
 rule-sets in natural language.  You'll almost certainly either get them
 wrong or be ambiguous.  (Ambiguity is more common, but it's not
 exclusive of wrong.)




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Lukasz Stafiniak
Major premise and minor premise in a syllogism are not
interchangeable. Read the derivation of truth tables for abduction and
induction from the semantics of NAL to learn that different orderings
of premises result in different truth values. Thus, while both
orderings are applicable, one will usually give a more confident
result, which will dominate the other.

On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:


 But I don't understand the rules for induction and abduction which are as
 following:

 ABDUCTION INFERENCE RULE:
  Given S --> M and P --> M, this implies S --> P to some degree

 INDUCTION INFERENCE RULE:
  Given M --> S and M --> P, this implies S --> P to some degree

 The problem I have is that in both the abduction and induction rule --
 unlike in the deduction rule -- the roles of S and P appear to be
 semantically identical, i.e., they could be switched in the two premises
 with no apparent change in meaning, and yet in the conclusion switching S
 and P would change in meaning.  Thus, it appears that from premises which
 appear to make no distinctions between S and P a conclusion is drawn that
 does make such a distinction.  At least to me, with my current limited
 knowledge of the subject, this seems illogical.



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Pei Wang
Right. See concrete examples in
http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt

In induction and abduction, S-->P and P-->S are usually (though not
always) produced as a pair, though usually (though not always) with
different truth values, unless the two premises have the same
truth-value --- as Edward said, it would be illogical to produce
difference from sameness. ;-)

Especially, positive evidence equally supports both conclusions, while
negative evidence only denies one of the two --- see the "Induction and
Revision" example in
http://nars.wang.googlepages.com/NARS-Examples-MultiSteps.txt
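
A quick way to see the asymmetry Pei describes, under a deliberately
simplified binary reading of extensional evidence (my simplification, not
NARS code): an instance in both S and P is positive evidence for S --> P
and for P --> S alike, while an instance in S but not in P is negative
evidence for S --> P only, since it says nothing about whether P's are S's.

# Simplified, binary evidence counting for induction; my own
# illustration, not NARS code.

def evidence(instances, s, p):
    """Return (positive, negative) evidence counts for s --> p."""
    pos = neg = 0
    for terms in instances.values():
        if s in terms:
            if p in terms:
                pos += 1   # in both: supports s --> p (and p --> s)
            else:
                neg += 1   # in s but not p: denies only s --> p
    return pos, neg

instances = {
    "Fred": {"human", "animal"},  # positive evidence for both directions
    "Rex":  {"animal"},           # negative evidence for animal --> human only
}

print(evidence(instances, "human", "animal"))  # (1, 0)
print(evidence(instances, "animal", "human"))  # (1, 1)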

For a more focused discussion on induction in NARS, see
http://www.cogsci.indiana.edu/pub/wang.induction.ps

The situation for S<->P is similar --- see comparison in
http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt

Pei

On 10/6/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 Major premise and minor premise in a syllogism are not
 interchangeable. Read the derivation of truth tables for abduction and
 induction from the semantics of NAL to learn that different ordering
 of premises results in different truth values. Thus while both
 orderings are applicable, one will usually give more confident result
 which will dominate the other.

 On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 
 
  But I don't understand the rules for induction and abduction which are as
  following:
 
  ABDUCTION INFERENCE RULE:
   Given S --> M and P --> M, this implies S --> P to some degree
 
  INDUCTION INFERENCE RULE:
   Given M --> S and M --> P, this implies S --> P to some degree
 
  The problem I have is that in both the abduction and induction rule --
  unlike in the deduction rule -- the roles of S and P appear to be
  semantically identical, i.e., they could be switched in the two premises
  with no apparent change in meaning, and yet in the conclusion switching S
  and P would change in meaning.  Thus, it appears that from premises which
  appear to make no distinctions between S and P a conclusion is drawn that
  does make such a distinction.  At least to me, with my current limited
  knowledge of the subject, this seems illogical.



RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Edward W. Porter
If you are a machine reasoning from pieces of information you receive in
no particular order, how do you know which is the major and which is the
minor premise?

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 4:30 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


Major premise and minor premise in a syllogism are not interchangeable.
Read the derivation of truth tables for abduction and induction from the
semantics of NAL to learn that different ordering of premises results in
different truth values. Thus while both orderings are applicable, one will
usually give more confident result which will dominate the other.

On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:


 But I don't understand the rules for induction and abduction which are
 as
 following:

 ABDUCTION INFERENCE RULE:
  Given S --> M and P --> M, this implies S --> P to some degree

 INDUCTION INFERENCE RULE:
  Given M --> S and M --> P, this implies S --> P to some degree

 The problem I have is that in both the abduction and induction rule --
 unlike in the deduction rule -- the roles of S and P appear to be
 semantically identical, i.e., they could be switched in the two
 premises with no apparent change in meaning, and yet in the conclusion
 switching S and P would change in meaning.  Thus, it appears that from
 premises which appear to make no distinctions between S and P a
 conclusion is drawn that does make such a distinction.  At least to
 me, with my current limited knowledge of the subject, this seems
 illogical.



RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Edward W. Porter
So is the following understanding correct?

If you have two statements

Fred is a human
Fred is an animal

And assuming you know nothing more about any of the three
terms in both these statements, then each of the following would be an
appropriate induction

A human is an animal
An animal is a human
A human and an animal are similar

It would only then be from further information that you
would find the first of these two inductions has a larger truth value than
the second and that the third probably has a larger truth value than the
second.

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 7:03 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


Right. See concrete examples in
http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt

In induction and abduction, S-->P and P-->S are usually (though not
always) produced in pair, though usually (though not always) with
different truth values, unless the two premises have the same truth-value
--- as Edward said, it would be illogical to produce difference from
sameness. ;-)

Especially, positive evidence equally support both conclusions, while
negative evidence only deny one of the two --- see the Induction and
Revision example in
http://nars.wang.googlepages.com/NARS-Examples-MultiSteps.txt

For a more focused discussion on induction in NARS, see
http://www.cogsci.indiana.edu/pub/wang.induction.ps

The situation for S<->P is similar --- see comparison in
http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt

Pei

On 10/6/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 Major premise and minor premise in a syllogism are not
 interchangeable. Read the derivation of truth tables for abduction and
 induction from the semantics of NAL to learn that different ordering
 of premises results in different truth values. Thus while both
 orderings are applicable, one will usually give more confident result
 which will dominate the other.

 On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 
 
  But I don't understand the rules for induction and abduction which
  are as
  following:
 
  ABDUCTION INFERENCE RULE:
    Given S --> M and P --> M, this implies S --> P to some degree
 
  INDUCTION INFERENCE RULE:
    Given M --> S and M --> P, this implies S --> P to some degree
 
  The problem I have is that in both the abduction and induction rule
  -- unlike in the deduction rule -- the roles of S and P appear to be
  semantically identical, i.e., they could be switched in the two
  premises with no apparent change in meaning, and yet in the
  conclusion switching S and P would change in meaning.  Thus, it
  appears that from premises which appear to make no distinctions
  between S and P a conclusion is drawn that does make such a
  distinction.  At least to me, with my current limited knowledge of
  the subject, this seems illogical.


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Pei Wang
The order here isn't the incoming order of the premises. From
M-->S(t1) and M-->P(t2), where t1 and t2 are truth values, the rule
produces two symmetric conclusions, and which truth function is called
depends on the subject/predicate order in the conclusion. That is,
S-->P will use a function f(t1,t2), while P-->S will use the symmetric
function f(t2,t1).
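
To make this concrete, here is an unofficial sketch of such a pair of truth
functions. The formulas follow my reading of the published NAL material,
with the evidential horizon k assumed to be 1, and should be checked
against the actual definitions.

# Unofficial sketch of an NAL-style induction truth function; formulas
# are my reading of the published material, k assumed to be 1.
# Swapping the premises swaps t1 and t2, which is the f(t1,t2) versus
# f(t2,t1) asymmetry described above.

K = 1.0

def induction(t1, t2):
    """From M-->S <f1,c1> (t1) and M-->P <f2,c2> (t2), derive S-->P <f,c>."""
    (f1, c1), (f2, c2) = t1, t2
    w_pos = f1 * f2 * c1 * c2   # positive evidence for S-->P
    w = f1 * c1 * c2            # total evidence for S-->P
    f = w_pos / w if w > 0 else 0.0
    return f, w / (w + K)       # confidence grows with amount of evidence

t1, t2 = (1.0, 0.9), (0.8, 0.9)
print(induction(t1, t2))  # S-->P
print(induction(t2, t1))  # P-->S: a different truth value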

Pei

On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 If you are a machine reasoning from pieces of information you receive in
 no particular order how do you know which is the major and which is the
 minor premise?

 Edward W. Porter
 Porter  Associates
 24 String Bridge S12
 Exeter, NH 03833
 (617) 494-1722
 Fax (617) 494-1822
 [EMAIL PROTECTED]



 -Original Message-
 From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
 Sent: Saturday, October 06, 2007 4:30 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Do the inference rules of categorical logic make sense?


 Major premise and minor premise in a syllogism are not interchangeable.
 Read the derivation of truth tables for abduction and induction from the
 semantics of NAL to learn that different ordering of premises results in
 different truth values. Thus while both orderings are applicable, one will
 usually give more confident result which will dominate the other.

 On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 
 
  But I don't understand the rules for induction and abduction which are
  as
  following:
 
  ABDUCTION INFERENCE RULE:
   Given S --> M and P --> M, this implies S --> P to some degree
 
  INDUCTION INFERENCE RULE:
   Given M --> S and M --> P, this implies S --> P to some degree
 
  The problem I have is that in both the abduction and induction rule --
  unlike in the deduction rule -- the roles of S and P appear to be
  semantically identical, i.e., they could be switched in the two
  premises with no apparent change in meaning, and yet in the conclusion
  switching S and P would change in meaning.  Thus, it appears that from
  premises which appear to make no distinctions between S and P a
  conclusion is drawn that does make such a distinction.  At least to
  me, with my current limited knowledge of the subject, this seems
  illogical.



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Pei Wang
On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:



 So is the following understanding correct?


 If you have two statements


 Fred is a human
 Fred is an animal

 And assuming you know nothing more about any of the three terms in both
 these statements, then each of the following would be an appropriate
 induction


 A human is an animal
 An animal is a human
 A human and an animal are similar

Correct, though for technical reasons I don't call the last one
"induction" but "comparison".

 It would only then be from further information that you would find the first
 of these two inductions has a larger truth value than the second and that
 the third probably has a larger truth value than the second.

Right, though the rules immediately assign truth values to the
conclusions, based on the evidence provided by the current premises.
The role of further information is to revise the previous truth
values. In this way, the system can always form a belief (rather than
waiting for "enough information"), though the initial beliefs will
have low confidence.

Pei
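
The revision step mentioned above can be sketched in the same unofficial
way (my reading of the NAL revision rule, with k assumed to be 1): each
belief's confidence is converted back into an evidence weight, the weights
add, and the merged confidence grows toward 1 as evidence accumulates.

# Unofficial sketch of NAL-style revision; my reading of the published
# material, k assumed to be 1. A belief formed at once from little
# evidence starts with low confidence and is revised as more arrives.

K = 1.0

def revise(t1, t2):
    """Merge two truth values for one statement from distinct evidence."""
    (f1, c1), (f2, c2) = t1, t2
    w1 = K * c1 / (1.0 - c1)     # confidence back to evidence weight
    w2 = K * c2 / (1.0 - c2)
    w = w1 + w2
    f = (w1 * f1 + w2 * f2) / w  # evidence-weighted frequency
    return f, w / (w + K)

belief = (1.0, 0.45)                  # initial low-confidence induction
belief = revise(belief, (1.0, 0.45))  # further information arrives
print(belief)                         # same frequency, higher confidence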

 Edward W. Porter
 Porter  Associates
 24 String Bridge S12
 Exeter, NH 03833
 (617) 494-1722
 Fax (617) 494-1822
 [EMAIL PROTECTED]



 -Original Message-
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 Sent: Saturday, October 06, 2007 7:03 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Do the inference rules of categorical logic make sense?



 Right. See concrete examples in
 http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt

 In induction and abduction, S-->P and P-->S are usually (though not
 always) produced in pair, though usually (though not always) with different
 truth values, unless the two premises have the same truth-value --- as
 Edward said, it would be illogical to produce difference from sameness. ;-)

 Especially, positive evidence equally support both conclusions, while
 negative evidence only deny one of the two --- see the Induction and
 Revision example in
 http://nars.wang.googlepages.com/NARS-Examples-MultiSteps.txt

 For a more focused discussion on induction in NARS, see
 http://www.cogsci.indiana.edu/pub/wang.induction.ps

 The situation for S<->P is similar --- see comparison in
 http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt

 Pei

 On 10/6/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
  Major premise and minor premise in a syllogism are not
  interchangeable. Read the derivation of truth tables for abduction and
  induction from the semantics of NAL to learn that different ordering
  of premises results in different truth values. Thus while both
  orderings are applicable, one will usually give more confident result
  which will dominate the other.
 
  On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
  
  
   But I don't understand the rules for induction and abduction which
   are as
   following:
  
   ABDUCTION INFERENCE RULE:
Given S --> M and P --> M, this implies S --> P to some degree
  
   INDUCTION INFERENCE RULE:
Given M --> S and M --> P, this implies S --> P to some degree
  
   The problem I have is that in both the abduction and induction rule
   -- unlike in the deduction rule -- the roles of S and P appear to be
   semantically identical, i.e., they could be switched in the two
   premises with no apparent change in meaning, and yet in the
   conclusion switching S and P would change in meaning.  Thus, it
   appears that from premises which appear to make no distinctions
   between S and P a conclusion is drawn that does make such a
   distinction.  At least to me, with my current limited knowledge of
   the subject, this seems illogical.
 


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Pei Wang
On 10/6/07, Pei Wang [EMAIL PROTECTED] wrote:
 On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 
  So is the following understanding correct?
 
  If you have two statements
 
  Fred is a human
  Fred is an animal
 
  And assuming you know nothing more about any of the three terms in both
  these statements, then each of the following would be an appropriate
  induction
 
  A human is an animal
  An animal is a human
  A human and an animal are similar

 Correct, though for technical reasons I don't call the last one
 induction but comparison.

BTW, in the future you can easily try it yourself, if you want:

(1) start the NARS demo by clicking http://nars.wang.googlepages.com/NARS.html
(2) open the inference log window by selecting "View/Inference Log" from
the main window
(3) copy/paste the following two lines into the input window:

Fred {-- human.
Fred {-- animal.

then click "OK".
(4) click "Walk" in the main window a few times. For this example,
in the 5th step the three conclusions you mentioned will be produced,
with a bunch of others.

There is a User's Guide for the demo at
http://nars.wang.googlepages.com/NARS-Guide.html

Pei



RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Edward W. Porter
Thanks.

So as I understand it, whether a premise is major or minor is defined by
the role of its terms relative to a given conclusion.  But the same
premise could play a major role relative to one conclusion and a minor
role relative to another.

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 8:20 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


The order here isn't the incoming order of the premises. From
M-->S(t1) and M-->P(t2), where t1 and t2 are truth values, the rule
produces two symmetric conclusions, and which truth function is called
depends on the subject/predicate order in the conclusion. That is,
S-->P will use a function f(t1,t2), while P-->S will use the symmetric
function f(t2,t1).

Pei

On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 If you are a machine reasoning from pieces of information you receive
 in no particular order how do you know which is the major and which is
 the minor premise?

 Edward W. Porter
 Porter  Associates
 24 String Bridge S12
 Exeter, NH 03833
 (617) 494-1722
 Fax (617) 494-1822
 [EMAIL PROTECTED]



 -Original Message-
 From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
 Sent: Saturday, October 06, 2007 4:30 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Do the inference rules of categorical logic make
 sense?


 Major premise and minor premise in a syllogism are not
 interchangeable. Read the derivation of truth tables for abduction and
 induction from the semantics of NAL to learn that different ordering
 of premises results in different truth values. Thus while both
 orderings are applicable, one will usually give more confident result
 which will dominate the other.

 On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 
 
  But I don't understand the rules for induction and abduction which
  are as
  following:
 
  ABDUCTION INFERENCE RULE:
   Given S --> M and P --> M, this implies S --> P to some degree
 
  INDUCTION INFERENCE RULE:
   Given M --> S and M --> P, this implies S --> P to some degree
 
  The problem I have is that in both the abduction and induction rule
  -- unlike in the deduction rule -- the roles of S and P appear to be
  semantically identical, i.e., they could be switched in the two
  premises with no apparent change in meaning, and yet in the
  conclusion switching S and P would change in meaning.  Thus, it
  appears that from premises which appear to make no distinctions
  between S and P a conclusion is drawn that does make such a
  distinction.  At least to me, with my current limited knowledge of
  the subject, this seems illogical.



RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Edward W. Porter
Great,  I look forward to trying this when I get back from a brief
vacation for the holiday weekend.

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 8:51 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


On 10/6/07, Pei Wang [EMAIL PROTECTED] wrote:
 On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 
  So is the following understanding correct?
 
  If you have two statements
 
  Fred is a human
  Fred is an animal
 
  And assuming you know nothing more about any of the three terms in
  both these statements, then each of the following would be an
  appropriate induction
 
  A human is an animal
  An animal is a human
  A human and an animal are similar

 Correct, though for technical reasons I don't call the last one
 induction but comparison.

BTW, in the future you can easily try it yourself, if you want:

(1) start the NARS demo by clicking
http://nars.wang.googlepages.com/NARS.html
(2) open the inference log window by select View/Inference Log from the
main window
(3) copy/paste the following two lines into the input window:

Fred {-- human.
Fred {-- animal.

then click OK.
(4) click Walk in the main window for a few times. For this example, in
the 5th step the three conclusions you mentioned will be produced, with a
bunch of others.

There is a User's Guide for the demo at
http://nars.wang.googlepages.com/NARS-Guide.html

Pei



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Pei Wang
On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 Thanks.

 So as I understand it, whether a premise is major or minor is defined by
 the role of its terms relative to a given conclusion.  But the same
 premise could play a major role relative to one conclusion and a minor
 role relative to another.

Exactly (though I usually don't use the terms "major" and "minor").

Furthermore, the same belief can be used as a premise in various types
of inference (deduction, induction, abduction, comparison, analogy,
revision, ...), and plays different roles in each of them.

Pei

 Edward W. Porter
 Porter  Associates
 24 String Bridge S12
 Exeter, NH 03833
 (617) 494-1722
 Fax (617) 494-1822
 [EMAIL PROTECTED]



 -Original Message-
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 Sent: Saturday, October 06, 2007 8:20 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Do the inference rules of categorical logic make sense?


 The order here isn't the incoming order of the premises. From
 M-->S(t1) and M-->P(t2), where t1 and t2 are truth values, the rule
 produces two symmetric conclusions, and which truth function is called
 depends on the subject/predicate order in the conclusion. That is,
 S-->P will use a function f(t1,t2), while P-->S will use the symmetric
 function f(t2,t1).

 Pei

 On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
  If you are a machine reasoning from pieces of information you receive
  in no particular order how do you know which is the major and which is
  the minor premise?
 
  Edward W. Porter
  Porter  Associates
  24 String Bridge S12
  Exeter, NH 03833
  (617) 494-1722
  Fax (617) 494-1822
  [EMAIL PROTECTED]
 
 
 
  -Original Message-
  From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
  Sent: Saturday, October 06, 2007 4:30 AM
  To: agi@v2.listbox.com
  Subject: Re: [agi] Do the inference rules of categorical logic make
  sense?
 
 
  Major premise and minor premise in a syllogism are not
  interchangeable. Read the derivation of truth tables for abduction and
  induction from the semantics of NAL to learn that different ordering
  of premises results in different truth values. Thus while both
  orderings are applicable, one will usually give more confident result
  which will dominate the other.
 
  On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:
  
  
   But I don't understand the rules for induction and abduction which
   are as
   following:
  
   ABDUCTION INFERENCE RULE:
Given S --> M and P --> M, this implies S --> P to some degree
  
   INDUCTION INFERENCE RULE:
Given M --> S and M --> P, this implies S --> P to some degree
  
   The problem I have is that in both the abduction and induction rule
   -- unlike in the deduction rule -- the roles of S and P appear to be
   semantically identical, i.e., they could be switched in the two
   premises with no apparent change in meaning, and yet in the
   conclusion switching S and P would change in meaning.  Thus, it
   appears that from premises which appear to make no distinctions
   between S and P a conclusion is drawn that does make such a
   distinction.  At least to me, with my current limited knowledge of
   the subject, this seems illogical.
 


[agi] Do the inference rules of categorical logic make sense?

2007-10-05 Thread Edward W. Porter

I am trying to understand categorical logic from reading Pei Wang’s very
interesting paper, “A Logic of Categorization.”  Since I am a total
newbie to the field I have some probably dumb questions.  But at the risk
of making a fool of myself let me ask them to members of the list.

Let’s use “-->” as the arrow symbol commonly used to represent an
inheritance relation of the type used in categorical logic, where A --> B
roughly means category A is a species (or instance) of category B.
Category B, in addition to what we might normally think of as a
generalization, can also be a property (meaning B’s category would be that
of concepts having property B).

I understand how the deduction inference rule works.

DEDUCTION INFERENCE RULE:
 Given S --> M and M --> P, this implies S --> P

This makes total sense.  If S is a type of M, and M is a type of P, S is a
type of P.

But I don’t understand the rules for induction and abduction which are as
following:

ABDUCTION INFERENCE RULE:
 Given S --> M and P --> M, this implies S --> P to some degree

INDUCTION INFERENCE RULE:
 Given M --> S and M --> P, this implies S --> P to some degree

The problem I have is that in both the abduction and induction rule --
unlike in the deduction rule -- the roles of S and P appear to be
semantically identical, i.e., they could be switched in the two premises
with no apparent change in meaning, and yet in the conclusion switching S
and P would change in meaning.  Thus, it appears that from premises which
appear to make no distinctions between S and P a conclusion is drawn that
does make such a distinction.  At least to me, with my current limited
knowledge of the subject, this seems illogical.

It would appear to me that both the Abduction and Induction inference
rules should imply each of the following, each with some degree of
evidentiary value:
 S --> P
 P --> S, and
 S <-> P, where “<->” represents a similarity relation.

Since these rules have been around for years, I assume the rules are right
and my understanding is wrong.

I would appreciate it if someone on the list with more knowledge of the
subject than I could point out my presumed error.
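
As the replies earlier in this thread confirm, NAL does produce all three:
induction yields S --> P and P --> S with usually different truth values,
and a separate comparison rule yields the similarity statement. For
completeness, a rough, unofficial sketch of a comparison truth function,
following my reading of the published material with the evidential horizon
k assumed to be 1:

# Rough, unofficial sketch of an NAL-style comparison truth function;
# my reading of the published material, k assumed to be 1. From
# M-->S <f1,c1> and M-->P <f2,c2> it derives the similarity S<->P.

K = 1.0

def comparison(t1, t2):
    (f1, c1), (f2, c2) = t1, t2
    f_any = f1 + f2 - f1 * f2    # M counts as evidence if it is in S or in P
    w = f_any * c1 * c2          # total evidence for S<->P
    f = (f1 * f2 / f_any) if f_any > 0 else 0.0
    return f, w / (w + K)

# Symmetric in its premises, as a similarity statement should be:
print(comparison((1.0, 0.9), (0.8, 0.9)))
print(comparison((0.8, 0.9), (1.0, 0.9)))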

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]
