On Wed, May 16, 2018 at 6:32 PM, Ben Goertzel <[email protected]> wrote:

> Alexey, Nil, Zar, Linas, others...
>
>
> GENERAL BLATHER
>
> 2) A route with a large role for probabilistic-logic theorem-proving
> is one viable route
>

I'm starting to wonder if probabilistic logic, in this narrow sense, is
actually needed, or whether it's a distraction.  From elsewhere in this
email chain, we have the example "Look at that black crow!" The task here
is to determine whether there is something that can pass for "black crow" in
the visual scene; if so, go with that at 100% probability. We don't need a
fractional assignment.  Now, the thing being identified might not actually
be a black crow, in which case we say that the visual subsystem was
tricked by an optical illusion: it saw a crow where there wasn't one.  The
solution is not to have the visual subsystem report "I think it's a crow
with 95% confidence", but rather to assume perfect accuracy until the
conversation falls apart: e.g. "you're looking in the wrong place, look
here, not there", at which point a more sophisticated analysis is required,
with a percentage assignment, viz: "I'm not sure, but I think I see it now."

Similar remarks apply to what the word "Look" might mean in that sentence,
and to what one should do if that is the word that (you think) you heard.

This is a very Medieval, Scholastic conception of probability, one which
lies at the foundation of modern legal systems.  Trial courts don't assign
a number, say 69.73% probability, to the claim that the accused committed a
murder. Rather, very complex networks of inter-related claims, proofs, and
evidence are presented, and one examines the consistency of that network,
looking for logical flaws and self-contradictions, weeding those out as
needed.  At the end of the trial, there are two complex networks left: a
proof of innocence and a proof of guilt.  Ideally, one of those networks
has high self-consistency and consistency with the external world, and the
other has low.  The court of law proceeds not by computing logical
probabilities, but by analyzing complex networks.

There are "probabilities" in there: the accused "probably" had enough time
to drive from point A to point B, commit the crime, and return before
dinner.  So there are some "probabilities" and "likelihoods" in there, but
they cannot be propagated over terribly long deduction chains.  As you
know, confidence rapidly decays over long deduction chains when the
individual steps have mediocre probability.  But this is the stuff of
detective novels and crime stories: if the accused did this, and the
accused did that, then maybe, just maybe, it was possible ... this is where
the drama and suspense come from.
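To make the decay concrete, here is a toy calculation (hypothetical numbers of my own choosing, assuming the deduction steps are independent):

```python
# Hypothetical illustration of confidence decay along a deduction chain:
# if each step holds with probability p, a chain of n independent steps
# holds with probability p**n.
def chain_confidence(step_prob: float, n_steps: int) -> float:
    """Probability that an n-step chain of independent deductions holds."""
    return step_prob ** n_steps

# A mediocre per-step probability collapses quickly:
print(round(chain_confidence(0.9, 1), 3))   # 0.9
print(round(chain_confidence(0.9, 10), 3))  # 0.349
print(round(chain_confidence(0.7, 10), 3))  # 0.028
```

Ten steps at 90% each already leaves you with barely one chance in three, which is why long chains of mediocre inferences are the stuff of fiction rather than proof.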

What does this mean in practice, for AGI and software algorithms? It means
we should focus on networks and network analysis. We need to construct
networks from various bits of evidence, and then explicitly crawl over them
... sometimes assigning crisp but competing parallel-world truth
assignments, sometimes assigning confidence values (or mutual information,
or "surprisingness") to identify weak links and strong links.  Weak links
in the sense of "integrated information". Weak links in the sense of
identifying irrelevant information, irrelevant arguments, and irrelevant
deductions that can be pruned away.
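As a toy sketch of what "crawling over a network of evidence" could look like (the names and structure here are my own invention, not any existing OpenCog API): claims are nodes, certain pairs of claims are marked as mutually contradictory, and a set of claims held true together is flagged if it contains such a pair.

```python
# Toy evidence-network consistency check (hypothetical, illustrative only):
# claims are strings, and `contradicts` is a set of frozenset pairs marking
# claims that cannot both be true.
from itertools import combinations

def consistent(claims, contradicts):
    """True if no pair of simultaneously-held claims is contradictory."""
    return not any(frozenset(pair) in contradicts
                   for pair in combinations(claims, 2))

contradicts = {frozenset({"crow_is_black", "crow_is_white"})}
print(consistent({"saw_crow", "crow_is_black"}, contradicts))       # True
print(consistent({"crow_is_black", "crow_is_white"}, contradicts))  # False
```

A real system would of course weight the links and prune the weak ones, but even this crisp version shows the basic move: examine the network as a whole, rather than multiply probabilities down a chain.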


>
>
>
> First of all, Linas, if you haven’t seen it before you will enjoy the
> diagrams in
>
> http://www.scholarpedia.org/article/Connection_method



I have not seen that before, but yes, that is what I'm talking about.  It
is presented a bit awkwardly, and I can explain why: he's focusing on
implication P->Q as (not-P or Q), and P, Q are binary-valued T/F, so of
course his connectors and links are of the form (P, not-P).  Some of this
awkwardness can (maybe) be removed by switching to link-grammar-style
connectors, where we simply say that A+ connects to A-, without making
assumptions that A is a binary T/F value, or that A is a probability, or
something else.
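The connector idea can be sketched in a few lines (a minimal sketch, not the actual Link Grammar implementation, which also handles multi-connectors, subscripts, and cost):

```python
# Minimal sketch of link-grammar-style connector matching: an "A+"
# connector (pointing rightward) mates with an "A-" connector (pointing
# leftward) of the same label.  No assumption is made that the label "A"
# denotes a truth value or a probability; it is just a label.
def connects(left: str, right: str) -> bool:
    """True if the left word's '+' connector mates with the right word's '-'."""
    return (left.endswith("+") and right.endswith("-")
            and left[:-1] == right[:-1])

print(connects("A+", "A-"))  # True
print(connects("A+", "B-"))  # False  (labels differ)
print(connects("A-", "A+"))  # False  (directions are wrong way around)
```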



>
>  link parser it's
> way faster for long sentences to use SAT for parsing


FYI, with various tunings and fixes, we squeezed out a factor of 1.5x here
and 3x there, and today's LG parser is maybe an order of magnitude faster
than it used to be.  We've sponged that up by making the dictionary far
more complex.  SAT is faster, now, only for very long sentences.


>
> chaining



Yes. The Bibel/Kreitz connection method is a great example of "parsing"
instead of "chaining", and the suggestion/claim that this is faster and
easier than chaining.



>
>
> L1(w_1, w_3) & L5(w_3,w_7) & L9(w_1,w_7) ==> P4(w_1,w_7) & P8(w_7, w_3)
>  <p>
>
> where the Pi are logical relationships and <p> is a probability value.
>

The point of my Medieval/Scholastic probability thesis is that <p> does not
have to be terribly accurate; rather, it is more important to assign a
parse ranking, so that one can make judgments of the form:

(a)   L1(w_1, w_3) & L5(w_3,w_7) & L9(w_1,w_7) ==> P4(w_1,w_7) & P8(w_7,
w_3)

is more likely than

(b)   L1(w_1, w_3) & L5(w_3,w_7) & L9(w_1,w_7) ==> P2(w_3,w_5) & P6(w_1,
w_2)

One then works in two parallel universes: one universe (A) where (a) is
100% true, and a second universe (B) where (b) is 100% true (but is less
likely).  After extended network analysis on universe (A), we might
discover logical inconsistencies in universe (A), which forces us to
discard universe (A) and conclude that universe (B) (however unlikely) is
true.

All of this in the face of the fact that the initial parse might have
assigned <p>=95% to (a) and <p>=5% to (b).   The fact that (a) had a very
high probability simply does not matter if universe (A) is inconsistent,
flawed, self-contradictory.

As Sherlock Holmes put it (via Wikiquote): "How often have I said to you
that when you have eliminated the impossible, whatever remains, *however
improbable*, *must* be the *truth*?"
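The whole rank-then-eliminate procedure can be written down in a few lines (a toy sketch with hypothetical names; the consistency check, which would be the expensive network analysis, is stubbed out as a precomputed flag):

```python
# Toy "Holmes" selection: rank candidate interpretations by parse
# probability, but pick the most probable one that SURVIVES the
# consistency check.  The probabilities only break ties among survivors.
def best_interpretation(candidates):
    """candidates: list of (name, probability, is_consistent) tuples."""
    survivors = [c for c in candidates if c[2]]
    if not survivors:
        return None
    return max(survivors, key=lambda c: c[1])[0]

# (a) is far more probable, but its universe is inconsistent, so (b) wins:
print(best_interpretation([("a", 0.95, False), ("b", 0.05, True)]))  # b
```

Note that the numbers 0.95 and 0.05 never affect the outcome here; they would matter only if both universes survived the consistency check.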

Linas.
-- 
cassette tapes - analog TV - film cameras - you
