> I just discovered picolisp and must say how impressed and excited I am
> about it;-)
Thanks, I'm happy to hear that :-)
> utilities in picolisp dealing with deployment and ssl, e.g. httpGate,
> replica, watchdog... Is there any description on how to use these
Unfortunately, not yet. However, this mailing list may be a good
place to discuss them.
'httpGate' is a kind of proxy that maps standard HTTP ports to the
ports of local application servers.
It is started on all our production servers (and also on my local PCs)
at boot time (rc script), usually twice:
/installdir/picolisp/bin/httpGate 80 8080
/installdir/picolisp/bin/httpGate 443 8080 /installdir/picolisp/pem
The first argument (80, 443) determines the port this instance of
'httpGate' should listen on.
The second argument (8080) is the default port where requests to 80 or
443 should be directed to.
If the third argument is non-empty, 'httpGate' runs in encrypted mode
(SSL), and the argument should be a valid PEM file.
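For local testing, such a PEM file can be produced with OpenSSL. This is just a sketch; the assumption (not stated above) is that certificate and private key are concatenated into the single file passed as the third argument:

```shell
# Generate a self-signed certificate and key (for local testing only;
# a production setup would use a properly signed certificate).
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
    -subj "/CN=localhost" \
    -keyout key.pem -out cert.pem

# Assumption: httpGate expects certificate and key together in one PEM file
cat cert.pem key.pem > pem
```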
'httpGate' then works as follows: Requests like "http://hostname/" (port
80) or "https://hostname/" (443) are forwarded to port 8080 on the local
machine. The picoLisp application server allocates a new unique port for
each session (child process). When 'httpGate' is not running, links are
created in the form "http://hostname:12345". When it is running,
however, they will be created as "http://hostname/12345". 'httpGate'
interprets requests for such links, extracts the port (12345), and again
connects locally to the corresponding child process.
Thus, 'httpGate' serves two purposes: One is to provide SSL
encryption, and the other is to channel all communication to the server
through only two ports (80 and 443).
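The link rewriting described above can be sketched in plain shell (an illustration only, not httpGate's actual code; the URL and session port are made up):

```shell
# A link as generated while httpGate is running (session port is hypothetical)
url='http://hostname/12345/!start'

# httpGate-style extraction: the first path segment is the session port
path="${url#*://*/}"   # strip scheme and host -> "12345/!start"
port="${path%%/*}"     # keep everything before the next "/" -> "12345"

# httpGate would now connect to this port on the local machine
echo "forwarding to local port $port"
```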
'ssl' is a bit special, and complicated to explain. Let me try ...
'ssl' can be used for transmission of various data in encrypted format.
One typical use is in combination with 'replica' to mirror a database at
runtime to another server. This can be achieved in the following way:
1. Open the database with 'pool', supplying a file name as the third
argument (asynchronous replication journal).
(pool "dbpath/" *Dbs "fifoFile")
This will cause all modifications to the database to be written to that
journal file.
2. Start 'ssl' on the application server
bin/ssl <remoteServer> 443 40001/@replica keyFile fifoFile blob/directory/ 20
and 'replica' on <remoteServer>
bin/replica 40001 keyFile "" db/ blob/ 2 4 6 .. &
The numbers 2 4 6 ... must be the database sizes (in '*Dbs') supplied to
'pool' on the application server.
Now 'ssl' will try every 20 seconds to connect to <remoteServer> on port
40001, authenticate with the contents of 'keyFile', and cause 'replica'
to run on the remote server. Whenever the journal is completely
transmitted, it is reset to a size of zero, and this process continues.
As a result, you end up with an identical copy of the database and blobs
on the remote machine.
'watchdog' is a little easier. It monitors the processes on an
application server, and sends emails if a process does not respond for
more than 5 minutes. If the process is not back again within one hour,
it is killed.
Before the application server, 'watchdog' must be started:
bin/watchdog localhost 25 [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL PROTECTED] >>log/watchdog 2>&1 &
while [ ! -p fifo/beat ]
do sleep 1
done
'localhost' and 25 are host and port of a mail server. The first
"[EMAIL PROTECTED]" is the FROM address for outgoing mails, and the
following "[EMAIL PROTECTED]" arguments are the recipients of the
alarm mails.
When started, 'watchdog' creates a fifo special file "fifo/beat".
Then the application server may be started. It must load the heartbeat
code to register with the watchdog process (via the fifo special file).
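The registration through the fifo is an ordinary named-pipe handshake. A minimal shell sketch, with 'cat' standing in for the watchdog reader and 'echo' for the application server (not the actual heartbeat protocol):

```shell
mkdir -p fifo
rm -f fifo/beat
mkfifo fifo/beat              # what 'watchdog' creates at startup

# Stand-in for the watchdog: read heartbeats from the fifo
cat fifo/beat > beats.log &

# Stand-in for the application server: write a heartbeat
echo "beat" > fifo/beat

wait                          # the reader exits when the writer closes the pipe
```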
To stop it all again, the following sequence is used:
# Kill applications
killall -w picolisp
# Stop watchdog
test "$(ps h -C watchdog)" && bin/picolisp -'out "fifo/beat" (pr)' -bye
# Kill replica pipe
test -e key/messe && killall ssl
Wow, this was a kind of crash course ;-)
> What are the rules/heuristics for splitting entities and relations
> into database files?
As you perhaps know, you can specify database files and sizes with the
'dbs' function, which results in the initialization of the global
variable '*Dbs'. You can see examples in "doc/family.l" and "app/er.l".
As a minimum, I would recommend separating objects and B-trees. Usually
I experiment with the data model at the beginning of a project, and look
at typical entity objects with 'size'. Then I specify a file for each
entity class, with a block size that fits those objects.
The B-tree block sizes seem to be not very critical. A block is filled
up dynamically anyway. It makes sense to put related indexes into the
same file, or use a separate file for each index if the database is
going to be big. I would put all small indexes together into a few
files, and put each big index into its own file.
At least for databases that are not too big, performance does not seem
to depend much on the block size. A value of 3 (block size 512 bytes) or
4 (1024) seems a good compromise. Only for very large databases (more
than 10 million objects) did I find that using very large blocks (6,
resulting in 4096 bytes) improves performance.
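The numbers above follow a power-of-two scheme: a size value of n corresponds to a block of 64 << n bytes, which matches the pairs mentioned (3 -> 512, 4 -> 1024, 6 -> 4096):

```shell
# Map 'dbs'/'pool' size values to the resulting block sizes in bytes
for n in 3 4 6; do
    echo "size $n -> $(( 64 << n )) byte blocks"
done
```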
So much for now.
This is just for a first overview. I hope it helps at least a little.
Software Lab. Alexander Burger
Bahnhofstr. 24a, D-86462 Langweid
[EMAIL PROTECTED], www.software-lab.de, +49 8230 5060