Hi Henrik,

> So I've tried the remote pilog stuff out now (better late than never).

Good :)

> I've attached a simple project (needs to be extracted to
> /opt/picolisp/projects/).
> Very cool stuff, indeed.
> You start by running:
> /p projects/remote-test/remote.l -go 1
> /p projects/remote-test/remote.l -go 2
> /p projects/remote-test/main.l

Please let me first post two general notes:

1. The 'p' script was deprecated some time ago, and doesn't come with
   the PicoLisp releases any longer.

   The recommended startup command is 'pil', with an optional '+' at the
   end of the command line for debug mode. Then you don't need to load
   "dbg.l" and "lib/debug.l".

   "lib/misc.l" is automatically included, so that only "@lib/http.l"
   must be explicitly loaded in the first lines of "remote.l" and

2. Since earlier this year, it is necessary to call 'tell' after an
   explicit 'sync' (see "doc/refS.html#sync"). So line 18 in "remote.l"
   holds the expression

               (while (rd)
                  (out Sock
                     (pr (eval @)) ) )

> I have three questions:
> 1.) The result of insertTestData on line 50 in main.l is that only it
> manages to insert Jane and John in the remotes, it's as if they need
> to be wakened up by the first query that fails, any idea of why this
> happens and how to prevent it?

This is a synchronization problem. As I have pointed out before, the
parent process has the duty of coordinating the communication between
the child processes, and should do as little other work as possible.
Everything, from DB manipulations to GUI interfacing, should be done in
child processes.

For that reason, the 'go' function traditionally calls (rollback)
immediately before starting the main event (usually (wait)), so that
every spawned child process gets a virgin environment, without possibly
cached objects inherited from the parent.

However, in 'go' in "remote.l", you have

   (pool (pack *DbDir *IdxNum))
   (mapc show (collect 'uname '+User))
   (task (port (+ *IdxNum 4040))

The problematic line here is the 'collect'. It causes the parent to
pollute its cache with objects from the DB. In particular, this includes
the *DB root object. As a result, each child starts with this pre-cached
and partially filled root object.

I would write 'go' as

   (pool (pack *DbDir *IdxNum))
   (mapc show (collect 'uname '+User))
   (task (port (+ *IdxNum 4040))
      (let? Sock (accept @)
         (unless (fork)
            (in Sock
               (while (rd)
                  (out Sock
                     (pr (eval @)) ) ) )
            (bye) )
         (close Sock) ) )
   (rollback) )

i.e. move the (rollback) down, and take care that the parent doesn't do
anything other than call (wait) after that.

> 2.) This is OT but I noticed that +Key (which the reference says is
> unique) is not respected, I am able to insert several John for
> instance.

Right. '+Key' maintains a unique index. If you do two inserts into that
tree, with the same key but two different values, the second one will
simply overwrite the first.

Consistency of the objects referred to by that tree and their values is
handled at higher levels. For example, the GUI will check that, and
won't allow the user to give a value to an object which is already
indexed by some other object.
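
For example (just a sketch, assuming the 'uname' relation of '+User'
from your attached project, with "@lib/db.l" loaded), application code
might perform such a check itself before assigning a value:

   (let Name "John"
      (if (db 'uname '+User Name)         # Key already indexed?
         (prinl Name " is already taken by another object")
         (put!> Obj 'uname Name) ) )      # 'Obj' is some existing +User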

> Is (request) the only way to get around this or? This has
> probably been discussed on a number of earlier occasions but
> unfortunately my memory is weak.

Yes. 'request' and 'new' are different in this regard. 'request'
basically first searches for the key combination, and uses the existing
object if found, otherwise calls 'new'.

If you call 'new' two times with the same key value, you'll get two
objects and an inconsistent index. (dbCheck) will indicate an error in
that case.
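
The difference can be seen in a minimal sketch (assuming a fresh
database, and a '+Key' relation 'uname' similar to the one in your
project):

   (load "@lib/db.l")

   (class +User +Entity)
   (rel uname (+Key +String))          # Unique index on 'uname'

   (pool "test.db")

   # 'new!' always creates a fresh object. The second insert below
   # overwrites the index node, leaving the index inconsistent:
   (new! '(+User) 'uname "John")
   (new! '(+User) 'uname "John")

   # 'request!' searches the index first, and creates a new object
   # only if the key is not found:
   (request! '(+User) 'uname "Jane")   # Creates Jane
   (request! '(+User) 'uname "Jane")   # Returns the existing Jane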

> 3.) How much work would it be to implement put!>> and lose!>> that
> somehow infers where an object comes from in order to perform those
> operations? It seems to me there are two steps to solving that problem

Not sure about the concrete case. The interprocess communication
protocol must include additional information from the originating
machine (some key per machine) which can be stored along with the
object. I don't see that we need some special function like 'put!>>'
for that.
> that should not be impossible to overcome, first inferring which
> server the object belongs to with the help of the file offset, then
> somehow going back to the original, ie {Y} -> {X} and then executing
> the put or lose remotely with the help of {X}.

Yes, this sounds feasible to me, though the above idea of simply passing
information from the sender might be easier.

- Alex