Hi Michele,

On Fri, Aug 27, 2021 at 4:53 PM Michele Thiella <[email protected]>
wrote:

>
> A minor performance note: using push/pop might be slightly faster, by
>>> avoiding a copy. (you can also manually push/pop with cog-new-atomspace,
>>> cog-set-atomspace! and stuff like that.)
>>>
>>
> I hadn't seen push and pop, so I was using (cog-new-atomspace). Actually,
> the algorithm is in Python, so I do AtomSpace() and, for the copy, I extract
> the atoms from one and re-insert them into the other (probably with decreased
> performance).
>

For your problem, the performance impact is minor. It only matters when you
have millions of Atoms.

The Python API should allow layering:

base_as = AtomSpace()
child_as = AtomSpace(base_as)

This creates child_as so that anything in base_as is visible in it, while all
adds/deletes/state changes are confined to the child, leaving the base
unchanged. I think there is a demo of this, somewhere.


>
>
>> This is a generic problem with backwards-chaining: the algorithms are
>> heavy, slow, and have combinatoric explosions.  This has been known since
>> the 1980's and has been the subject of extensive academic research, and, no
>> doubt, dozens of PhD theses.
>>
>
> I saw it in old tests. Anyway, my algorithm is more or less a guided
> forward chaining, but going up or down the tree often makes no difference.
>

Because forward-chaining has exactly the same problem with combinatoric
explosion!


>
>> This is why I keep yabbering about answer-set programming (ASP) and the
>> Univ. Potsdam ASP solver. Because ASP uses a SAT solver under the covers,
>> much or most or all of the combinatoric explosion can be avoided.  Or
>> rather, the SAT solvers prune the graph in such a way that the explosion is
>> avoided.
>>
>> Exactly how to make use of this whiz-bang technology in the AtomSpace
>> remains an open research question.
>>
>
> I've studied the theory of the SAT problem and some pseudocode, but I
> don't know much about ASP. When I have more time I will try to learn about
> it, because solving the combinatorial explosion seems like a nice
> achievement.
>

The only "advantage" that ASP offers on top of SAT is that it allows you to
use prolog syntax. That is, it is almost exactly the same as prolog, but
instead of chaining, it uses SAT "under the covers".

It's more powerful than traditional prolog alone, partly because it does
"cut" automatically (thanks to SAT), and performs an exhaustive search only
after pruning.  One can (easily) write general constraint-satisfaction
problems in ASP, which is hard/impossible in prolog.
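To make the contrast concrete, here is a toy sketch of the kind of
constraint-satisfaction problem that ASP states declaratively. The graph and
colors below are invented; a real ASP program (e.g. for clingo) would be a few
lines of constraints, with the grounder and SAT-based solver doing the search
that this brute-force loop spells out:

```python
from itertools import product

# Toy CSP: 2-color the path graph a-b-c-d so that no edge joins two
# vertices of the same color.  We enumerate the whole assignment space
# to show what an ASP solver must (and does) prune away.
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]
colors = [0, 1]

solutions = []
for assignment in product(colors, repeat=len(vertices)):
    coloring = dict(zip(vertices, assignment))
    if all(coloring[u] != coloring[v] for u, v in edges):
        solutions.append(coloring)

# Only the two alternating colorings survive all the constraints.
assert len(solutions) == 2
```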

The problem with URE/PLN is that it is "impossible" to prune, because any
probability that is greater than zero must be considered. Thus, the
combinatoric explosion is unavoidable.  However...

There is something called "Morse Theory", of which an important application
is "Floer Theory".

So: Morse theory, applied to probabilistic logic, would look like this:
pick a value p and consider all logical deductions for which the
probability of a clause is greater than p.  So basically, you do
probabilistic logic "just like always", but terminate inference whenever
the probability is less than p.  As you vary p from 0 to one, fewer and
fewer inferences can be made. Setting p=1 gives you just plain crisp-logic
inference (which will still have combinatoric explosions, but fewer than
the case of p=0.5, for example.)
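A minimal sketch of that cutoff idea, with an invented toy rule base (this is
not PLN, just product-of-probabilities forward chaining): a derived fact's
probability is the product along its derivation path, and any branch that falls
below the cutoff p is abandoned.

```python
# Toy weighted rules: (premise, conclusion, rule probability).
rules = [
    ("A", "B", 0.9),
    ("B", "C", 0.8),
    ("C", "D", 0.4),
    ("A", "E", 0.3),
]

def infer(p_cut, initial=None):
    """Forward-chain to a fixed point, pruning anything below p_cut."""
    derived = dict(initial or {"A": 1.0})
    changed = True
    while changed:
        changed = False
        for premise, conclusion, prob in rules:
            if premise not in derived:
                continue
            q = derived[premise] * prob
            if q >= p_cut and derived.get(conclusion, 0.0) < q:
                derived[conclusion] = q
                changed = True
    return derived

# Raising the cutoff prunes more and more of the search:
assert len(infer(0.0)) == 5   # A, B, C, D, E all reachable
assert len(infer(0.5)) == 3   # D (0.288) and E (0.3) are pruned
assert len(infer(0.9)) == 2   # only A and the 0.9-strong B survive
```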

As you vary p from 0.0 to 1.0, there will be special values where the
number of inferences changes. These are called "critical points".  These
critical points are what you use to define the "Morse homology".  If there
are two different inference paths to a given result, then you have a
1-simplex. If there are 3, then you have a 2-simplex, etc.  They are
"homotopic" if there is a way of "continuously" deforming one inference
path into another.  Continuity is with respect to the "Scott topology". The
Scott topology just defines collections of logical statements that can be
reached from one another by single, individual steps. The HoTT book
explains more.
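A throwaway numerical sketch of the step-function behaviour (a tiny invented
rule base again, nothing like real PLN): sweep p across a grid and record where
the count of derivable facts jumps. The jump locations land at the
derivation-path probabilities, i.e. the critical values.

```python
# Toy weighted rules: (premise, conclusion, rule probability).
rules = [("A", "B", 0.9), ("B", "C", 0.8), ("C", "D", 0.4), ("A", "E", 0.3)]

def n_facts(p_cut):
    """Count facts derivable by forward chaining with cutoff p_cut."""
    derived = {"A": 1.0}
    changed = True
    while changed:
        changed = False
        for premise, conclusion, prob in rules:
            q = derived.get(premise, 0.0) * prob
            if q >= p_cut and derived.get(conclusion, 0.0) < q:
                derived[conclusion] = q
                changed = True
    return len(derived)

# The count is a step function of p; collect the grid points where it drops.
counts = [(i / 100, n_facts(i / 100)) for i in range(101)]
critical = [p for (p, n), (_, m) in zip(counts, counts[1:]) if m != n]

# Four critical values, one per derivation-path probability
# (0.288, 0.3, 0.72 and 0.9), detected at the nearest grid cell.
assert len(critical) == 4
```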

The Floer homology is what you get, if you consider the manifold of all
possible inferences as a function of all possible probability assignments
to your inference rules.  This is effectively infinite-dimensional, but you
can still write down a local "Lagrangian". The goal is now to apply Morse
theory to this manifold.

After this point, my mind goes "bonk" and it is all very cloudy, but it
seems "obvious" that almost all inferences are independent of small changes
in the probabilities assigned to the rules. That is, inference is locally
continuous, at almost all assignments of probability to the rules. There
are only a small number of critical points, where the number of inferences
can change.

(Again --- to be clear -- by "inference" I really do mean using PLN, or any
other probabilistic system, even e.g. NARS or fuzzy logic, etc.)

To articulate the Morse-theory/Floer theory of probabilistic inference
would require a lot of very abstract thinking and paper-n-pencil work. It
is NOT coding. However, perhaps some nice theorems or some nice invariants
pop out, and those could be used to create high-speed practical
algorithms.  For example, mapping regions near critical points to a SAT
solver, or something like that.  This is all completely new, utterly
unexplored territory.

--linas

-- 
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
