Yes, I do vaguely see how the forward chainer could be used as you
describe for pattern mining and program evolution. In fact, if worth
keeping, I suppose the generative parts (currently written in C++) could
be exposed to Scheme, wrapped in rules, and the URE would merely be used
as the main control loop. Cool hybridizations + inference control could take
place.
I understand less well your use case of grammar parsing, especially the
forward chainer part. But my feeling is that you probably wouldn't need
to modify the BC; just supply the appropriate fitness function to guide
the back inference tree expansion to follow this word-by-word iterative
heuristic. Note that the BC already does backtracking. If some back
inference tree path is fruitless, it will "backtrack" and explore other
paths. There's a complexity penalty parameter over the inference tree
size to control how much backtracking (breadth-first search) may take
place. Wait, I think I understand the FC part! That's in fact to get an
evaluation of the fitness function. Hmm, maybe the FC part could be
wrapped in the fitness function. Anyway, without all the details in
mind, I may not be thinking correctly about it.
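To make the "complexity penalty as implicit backtracking" point concrete, here is a minimal Python sketch (names and data representation invented for the example; this is not the actual URE API) of fitness-guided tree expansion where larger trees are penalized, so fruitless deep branches lose out to shallower alternatives still in the queue:

```python
import heapq

def expand_best_first(root, expansions, fitness, complexity_penalty=0.1):
    """Best-first expansion of an inference tree.

    expansions(tree) yields child trees; fitness(tree) scores a tree.
    Larger trees are penalized by complexity_penalty * len(tree), so a
    deep but poorly scoring branch eventually loses out to a shallower
    alternative still waiting in the queue -- i.e. the search
    "backtracks" without any explicit backtracking machinery.
    """
    counter = 0                                   # tie-breaker for the heap
    frontier = [(-fitness(root), counter, root)]  # max-heap via negation
    while frontier:
        neg_score, _, tree = heapq.heappop(frontier)
        children = list(expansions(tree))
        if not children:                          # fully expanded candidate
            yield -neg_score, tree
            continue
        for child in children:
            counter += 1
            score = fitness(child) - complexity_penalty * len(child)
            heapq.heappush(frontier, (-score, counter, child))
```

With, say, trees represented as tuples and fitness = sum, the generator yields complete candidates best-first while the penalty keeps the exploration breadth-first-ish, which is the control knob described above.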
Nil
On 03/28/2017 05:58 PM, Ben Goertzel wrote:
2) Grammar parsing
Here the idea would be similar to the "Viterbi link parser" that Linas
got partway through writing a while back.
The parser would iterate through a sentence from beginning to end.
As it proceeds, it would maintain a tree of possible parses,
similar to a "backward inference tree". At each stage, it expands
the tree of possible parses using an "expansion rule". The
expansion rule takes the next word in the sentence, and finds its
disjuncts, and chooses the highest-weight way to match all its
disjuncts with those of words occurring elsewhere in the sentence.
Its output is either to add the links representing the match to the
parse tree it's expanding to create a new possible parse; or to fail.
If it fails, then backtracking happens, and some earlier word in the
sentence is subjected to the expansion rule again (but the expansion
rule is tasked with finding the highest-weight way to match the
disjuncts of the word being expanded that has not already been tried
and found to fail in the context of the rest of the parse tree
being expanded) ...
This would require a modified chainer, i.e. a "backward-forward
chainer" that does forward chaining in a way that enables explicit
backtracking when a dead-end is hit...
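The backtracking skeleton of this can be sketched without any chainer at all. A small Python illustration -- candidates, compatible, and the link representation are all made up for the example, not Link Grammar or URE APIs:

```python
def parse(words, candidates, compatible):
    """Word-by-word parse expansion with explicit backtracking.

    candidates(word) returns (weight, links) alternatives for a word;
    they are tried in descending-weight order.  compatible(chosen, links)
    checks the proposed links against the links committed so far.  On
    failure the recursion unwinds to an earlier word and tries its
    next-best untried alternative.
    """
    def extend(i, chosen):
        if i == len(words):
            return chosen                        # complete parse
        alts = sorted(candidates(words[i]), key=lambda c: c[0], reverse=True)
        for _, links in alts:
            if compatible(chosen, links):
                result = extend(i + 1, chosen + [links])
                if result is not None:
                    return result
        return None                              # dead end: backtrack
    return extend(0, [])
```

A "backward-forward chainer" would presumably replace the bare recursion here with rule application on the atomspace, but the control flow -- commit to the best untried match, unwind on dead ends -- is the same.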
...
There is a similarity between these two cases in that they're both to
do with using rules that expand structures...
obviously 1) is similar to
3) expanding the program tree in a MOSES deme using the URE ...
I.e. 3) is basically the same as 1) except that one has an arbitrary
fitness function in place of surprisingness. The MOSES program tree
node vocabulary in use in a given evolution round would be specified
by a "pattern template" similar to what one uses in the pattern miner
now to specify what kinds of patterns to search for...
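As a toy illustration of that point (names invented, not the real MOSES or pattern-miner interfaces): the same grow-and-rank loop serves both cases, with only the scoring function changing:

```python
import itertools

def grow_programs(vocabulary, fitness, max_depth=2):
    """Enumerate program trees over a fixed node vocabulary, rank by fitness.

    vocabulary plays the role of the "pattern template": it fixes which
    leaves and operators may appear.  Pass surprisingness as the fitness
    for pattern mining, or an arbitrary task fitness for a MOSES deme --
    the expansion machinery is identical either way.
    """
    def trees(depth):
        for leaf in vocabulary["leaves"]:
            yield leaf
        if depth > 0:
            for op, arity in vocabulary["ops"]:
                for args in itertools.product(trees(depth - 1), repeat=arity):
                    yield (op,) + args
    return sorted(trees(max_depth), key=fitness, reverse=True)
```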
...
Doing 1-3 would make things rather elegant (and combined with our
in-progress replacement of the NLP comprehension pipeline, would make
the whole of OpenCog finally somewhat pretty...)
-- Ben
--
You received this message because you are subscribed to the Google Groups
"opencog" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/opencog/c05e14f0-5199-3bbe-5b8e-9f1cac70f3de%40gmail.com.