I don't think you can solve the problem in such a general way. It seems to me that this depends much more on the context. Let's consider this example (each sentence is annotated with strength and confidence):

1. Mark is a worker of XYZ (1; 0.7)
2. Workers of XYZ are honest (0.95; 0.8)
3. An honest person would not steal $10 (1; 1)
However, Mark stole $10. From this fact it would be rational to assume that maybe not all workers of XYZ are honest, so the strength or confidence of sentence 2 is decreased. Sentence 3 should not change, if you consider honesty an absolute category (then, by definition, an honest person would never steal). We are not sure that Mark really works at XYZ, but that is not crucial: people from other companies are probably just as honest. However, when "worker of XYZ" is replaced with "policeman" or "judge", then it is the probability of the first sentence that would be decreased, as more honesty is expected from these professions. If "honest" is replaced with "good-earning", then the baseline confidence of sentence 3 is smaller and it should be decreased further by the new fact. I also don't see why the order of the sentences in the reasoning should matter during credit assignment.

--Jan

On Monday, September 5, 2016 at 5:08:26 AM UTC+2, Ben Goertzel wrote:
> It would be interesting to compare PLN with NARS on these simple
> credit-assignment-ish examples....
>
> ---------- Forwarded message ----------
> From: "Pei Wang" <[email protected]>
> Date: Sep 5, 2016 2:59 AM
> Subject: [open-nars] credit assignment
> To: "open-nars" <[email protected]>
>
> In AI, "credit assignment" is the problem of distributing the overall
> credit (or blame) among the involved steps. Back-prop in ANNs addresses a
> similar problem -- adjusting the weights on a path to get a desired
> overall result. I'm trying to use a simple example to show how it is
> handled in NARS.
>
> Here is the situation: from <a --> b>, <b --> c>, and <c --> d>, the
> system derives <a --> d> (as well as some other conclusions). If the
> system is now informed that <a --> d> is false, it will surely change its
> belief on this statement. Now the problem is: how much should it change
> its beliefs on <a --> b>, <b --> c>, and <c --> d>, and by what process?
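The forward part of this derivation can be sketched in a few lines, assuming the standard NAL deduction truth function (f = f1·f2, c = f1·f2·c1·c2) and the default input truth-value (f = 1.0, c = 0.9). Exact defaults vary between NARS versions, so treat this as a sketch rather than the numbers from Pei's spreadsheet:

```python
def deduction(t1, t2):
    """NAL deduction truth function: combine the (frequency, confidence)
    pairs of two chained premises into the truth-value of the conclusion."""
    (f1, c1), (f2, c2) = t1, t2
    return (f1 * f2, f1 * f2 * c1 * c2)

# Default input truth-value (f = 1.0, c = 0.9) on all three premises.
ab = bc = cd = (1.0, 0.9)   # <a --> b>, <b --> c>, <c --> d>

ac = deduction(ab, bc)      # <a --> c>: f = 1.0, c ≈ 0.81
ad = deduction(ac, cd)      # <a --> d>: f = 1.0, c ≈ 0.729
```

Note how confidence decays multiplicatively with each step, which is why the derived <a --> d> is held much more weakly than any of its premises.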
> In the attached text file, I worked out the example step by step, using
> the default truth-value for the inputs. In the attached spreadsheet, the
> whole process is coded, so you can change the input values (in green) and
> see how the other values change accordingly. In particular, you should
> try (1) giving different confidence values to <a --> b>, <b --> c>, and
> <c --> d>, and (2) giving a confirming observation on <a --> d>.
>
> In the spreadsheet, there are two places where a conclusion can be
> derived along two different paths, and the truth-values may differ. I
> have listed both results; in the system, the choice rule will pick the
> one with the higher confidence.
>
> This example can be extended to more than three steps. One interesting
> result is that the beliefs at the ends (<a --> b> and <c --> d>) are
> adjusted more than the one in the middle (<b --> c>), which I think can
> be justified.
>
> This result can be used to compare NARS with other models, such as deep
> learning or non-classical logic systems (non-monotonic, para-consistent,
> probabilistic, etc.).
>
> Comments, issues, and additions?
>
> Regards,
>
> Pei
>
> --
> You received this message because you are subscribed to the Google Groups
> "open-nars" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at https://groups.google.com/group/open-nars.
> For more options, visit https://groups.google.com/d/optout.
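The step where the system absorbs "<a --> d> is false" can be sketched with the NAL revision rule (pool the evidence behind the two truth-values) and the choice rule Pei mentions. I am assuming an evidential horizon of k = 1 here, which is a common default but my own assumption:

```python
K = 1.0  # evidential horizon (assumed default)

def to_evidence(t):
    """Convert a (frequency, confidence) pair into
    (positive evidence, total evidence)."""
    f, c = t
    w = K * c / (1.0 - c)
    return (f * w, w)

def from_evidence(w_plus, w):
    """Convert evidence counts back into (frequency, confidence)."""
    return (w_plus / w, w / (w + K))

def revision(t1, t2):
    """NAL revision: merge two truth-values for the same statement
    by pooling their evidence."""
    (p1, w1), (p2, w2) = to_evidence(t1), to_evidence(t2)
    return from_evidence(p1 + p2, w1 + w2)

def choice(t1, t2):
    """NAL choice rule: when the same conclusion is derived along two
    paths, keep the truth-value with the higher confidence."""
    return t1 if t1[1] >= t2[1] else t2

derived  = (1.0, 0.729)   # <a --> d> from the deduction chain
observed = (0.0, 0.9)     # new input: "<a --> d> is false"

# The revised frequency drops well below 0.5 and the confidence rises,
# since the new observation carries more evidence than the derivation.
revised = revision(derived, observed)
```

Distributing the blame back onto <a --> b>, <b --> c>, and <c --> d> then runs the inference rules backward (abduction/induction), which is the part worked out step by step in Pei's attachments and not reproduced here.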
