Marco van de Voort wrote:
In our previous episode, Mark Morgan Lloyd said:
Is anybody interested in a PROLOG interpreter written in Turbo Pascal, plus a couple of typeset articles which outline how it works internally?

When I found it I wondered whether it could usefully be used to handle inference rules in a Delphi/FPC/Lazarus program, in the same way that MS use Prolog for some of their network configuration stuff. However, it turns out that it is coded explicitly for Turbo Pascal, with a garbage collector on top of the normal heap, which I think implies that porting it to FPC would need either a mark/release facility or multiple heaps which could be "thrown away" when no longer needed.

I don't understand why interested people couldn't implement mark/release for
the base TP-compatible level of FPC? What is so different between TP and
FPC there?

Of course it wouldn't scale, but TP didn't scale further than 16MB anyway
(64MB with 386 tricks).

Basically you only need your own heap manager that serves all allocations
in order from a 16MB block, plus Mark and Release procedures that call that
heap manager and are declared in some unit preloaded with -Fa?
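
Something along these lines would illustrate it; a minimal sketch only, with made-up unit and routine names, and without hooking itself in as the real heap manager (a drop-in version would presumably install itself via SetMemoryManager so that plain GetMem/New go through it). Mark/Release here work on an offset rather than TP's pointer parameter, but the idea is the same:

unit arenaheap;

{$mode objfpc}

interface

type
  TArenaMark = PtrUInt;                          { an offset into the block }

procedure ArenaInit(ASize: PtrUInt);             { reserve the block, e.g. 16MB }
function  ArenaGetMem(Size: PtrUInt): Pointer;   { bump-allocate in order }
function  ArenaMark: TArenaMark;                 { remember the current top }
procedure ArenaRelease(M: TArenaMark);           { discard everything above M }
procedure ArenaDone;

implementation

var
  Base: Pointer = nil;
  Capacity: PtrUInt = 0;
  Top: PtrUInt = 0;

procedure ArenaInit(ASize: PtrUInt);
begin
  Base := GetMem(ASize);      { one big block from the normal heap }
  Capacity := ASize;
  Top := 0;
end;

function ArenaGetMem(Size: PtrUInt): Pointer;
begin
  { keep everything pointer-aligned }
  Size := (Size + SizeOf(Pointer) - 1) and not PtrUInt(SizeOf(Pointer) - 1);
  if Top + Size > Capacity then
    Exit(nil);                { arena exhausted; no attempt to grow it here }
  Result := Pointer(PtrUInt(Base) + Top);
  Inc(Top, Size);
end;

function ArenaMark: TArenaMark;
begin
  Result := Top;              { Mark = note the high-water line }
end;

procedure ArenaRelease(M: TArenaMark);
begin
  Top := M;                   { Release = forget everything since the Mark }
end;

procedure ArenaDone;
begin
  FreeMem(Base);
  Base := nil;
end;

end.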

The main issue is that dynamic memory is used by the Prolog implementation in several different ways. The first is that as rules are entered they go into memory and usually stay there. The second is that as queries are evaluated a lot of temporary stuff (variable bindings etc.) is put into memory, all of which is thrown away on completion. The third is that during query evaluation rules could potentially be added, deleted or changed, and those changes shouldn't be thrown away.

I think that in practice this could be handled by normal OO techniques, since these days the amount of (virtual) memory available is significantly larger than typical embedded problems require. If somebody wanted to e.g. process the entire collection of Wikipedia titles and categories as a set of rules then they should probably be using specialist tools.
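
For what it's worth, the split described above maps fairly naturally onto owning containers: keep the rules in one list that lives as long as the rule database, and put everything created while evaluating a query into a second list that is freed wholesale afterwards. A rough sketch only; TRule and TBinding are just placeholders:

program ooalloc;

{$mode objfpc}

uses
  Contnrs;

type
  TRule    = class(TObject) end;   { persistent: rules that have been entered }
  TBinding = class(TObject) end;   { transient: per-query variable bindings }

var
  Rules, QueryScratch: TObjectList;
begin
  Rules := TObjectList.Create(True);          { owns and frees its contents }
  try
    Rules.Add(TRule.Create);                  { rules accumulate and stay }

    QueryScratch := TObjectList.Create(True);
    try
      QueryScratch.Add(TBinding.Create);      { temporary per-query objects }
      { ... evaluate the query here ... }
    finally
      QueryScratch.Free;   { all per-query allocations discarded in one go }
    end;
  finally
    Rules.Free;
  end;
end.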

I was reminded of this by the discussion elsewhere on refcounted objects, which would obviously be another way to do it. Timothy Budd's book on Leda gives an interesting example of using the unification operation which underpins Prolog to process graph structures.
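
In FPC terms the refcounting is already there if the term structures are expressed as COM-style interfaces: TInterfacedObject maintains the count and the object disappears when the last reference goes out of scope. A toy example, with ITerm and TAtom being invented names:

program refterm;

{$mode objfpc}

type
  ITerm = interface             { COM-style by default, so reference counted }
    function Name: string;
  end;

  TAtom = class(TInterfacedObject, ITerm)
  private
    FName: string;
  public
    constructor Create(const AName: string);
    function Name: string;
  end;

constructor TAtom.Create(const AName: string);
begin
  inherited Create;
  FName := AName;
end;

function TAtom.Name: string;
begin
  Result := FName;
end;

var
  T: ITerm;
begin
  T := TAtom.Create('socrates');   { reference count becomes 1 }
  WriteLn(T.Name);
  T := nil;                        { count reaches 0, the object is freed }
end.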

Prolog is an interpreter, and I assume its structures are movable.
Allocations in native languages usually aren't.

Up to a point. A typical Prolog implementation acts as an interpreter, since rules and queries are entered interactively; however, the underlying unification algorithm can equally well be embedded in compiled code. Budd's example (chap 3 of ref below, pp 46, 49, 51ff) creates a representation of a graph, calls a Prolog-style unification function to return the edge(s) meeting certain criteria, and then goes on to use this to find in- and outdegree for each vertex, find cycles and so on. http://web.engr.oregonstate.edu/~budd/Books/leda/info/
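
For anybody who hasn't met it, the unification step itself is small enough to sketch directly. The following bare-bones version (my own type and routine names, no occurs check, no trail for backtracking) binds free variables so that edge(a, Y) and edge(X, b) match with X = a and Y = b:

program unifydemo;

{$mode objfpc}

type
  TTermKind = (tkVar, tkAtom, tkComp);
  PTerm = ^TTerm;
  TTerm = record
    Kind: TTermKind;
    Name: string;              { variable/atom/functor name }
    Args: array of PTerm;      { arguments of a compound term }
    Bound: PTerm;              { what a variable is bound to, nil if free }
  end;

function MkVar(const N: string): PTerm;
begin
  New(Result); Result^.Kind := tkVar; Result^.Name := N;
  Result^.Args := nil; Result^.Bound := nil;
end;

function MkAtom(const N: string): PTerm;
begin
  New(Result); Result^.Kind := tkAtom; Result^.Name := N;
  Result^.Args := nil; Result^.Bound := nil;
end;

function MkComp(const N: string; const A: array of PTerm): PTerm;
var i: Integer;
begin
  New(Result); Result^.Kind := tkComp; Result^.Name := N;
  Result^.Bound := nil;
  SetLength(Result^.Args, Length(A));
  for i := 0 to High(A) do Result^.Args[i] := A[i];
end;

function Deref(T: PTerm): PTerm;
begin
  { follow variable bindings down to the real term }
  while (T^.Kind = tkVar) and (T^.Bound <> nil) do T := T^.Bound;
  Result := T;
end;

function Unify(A, B: PTerm): Boolean;
var i: Integer;
begin
  A := Deref(A); B := Deref(B);
  if A = B then Exit(True);
  if A^.Kind = tkVar then begin A^.Bound := B; Exit(True); end;
  if B^.Kind = tkVar then begin B^.Bound := A; Exit(True); end;
  if (A^.Kind <> B^.Kind) or (A^.Name <> B^.Name) or
     (Length(A^.Args) <> Length(B^.Args)) then Exit(False);
  for i := 0 to High(A^.Args) do
    if not Unify(A^.Args[i], B^.Args[i]) then Exit(False);
  Result := True;
end;

var
  X, Y: PTerm;
begin
  X := MkVar('X'); Y := MkVar('Y');
  if Unify(MkComp('edge', [MkAtom('a'), Y]),
           MkComp('edge', [X, MkAtom('b')])) then
    WriteLn('X = ', Deref(X)^.Name, ', Y = ', Deref(Y)^.Name)
  else
    WriteLn('no');
end.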

Now I'm not necessarily saying that any of this is blindingly efficient compared with a proper set of algorithms. But from the POV of an engineer who wasn't brought up on this stuff I think that being aware of unification as (something analogous to) a design pattern might be useful, and I'll probably have a bash at some of this myself if I recognise a potential use case. Until then, I've got an example implementation if anybody else is interested.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]
