After some testing, I think the basic problem is that I'm trying to use a
higher number than the preceding dbs call has resulted in, thus creating a
"gap". It seems this is a no-no.

A small test:

(load "dbg.l")

(class +Server +Entity)
(rel ip        (+Key +String))

(class +Entry +Entity)
(rel tag   (+Ref +String))

(dbs
   (3 +Server)
   (3 (+Server ip)) )

(dbs+ 100
   (4 +Entry)
   (4 (+Entry tag)) )

(pool "/opt/picolisp/projects/test/db/" *Dbs)

(new! '(+Server) 'ip "an ip")
(new! '(+Entry) 'tag "a tag")

(mapc show (collect 'ip '+Server))
(mapc show (collect 'tag '+Entry))

Even with an empty/fresh db dir the above results in:
!? (pass new (or (meta "Typ" 'Dbf 1) 1) "Typ")
Bad DB file

It works fine if the 100 is changed to 3, though.
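For reference, the variant that works for me looks like this (assuming each
entry in the dbs call occupies one database file, so files 1 and 2 are taken
and 3 is the next free number):

(dbs
   (3 +Server)
   (3 (+Server ip)) )

# Continue directly at the next free file number (3), leaving no gap
(dbs+ 3
   (4 +Entry)
   (4 (+Entry tag)) )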

This is causing problems for me with the extended framework I'm working on:
the framework loads its own E/R structure with a dbs call. The idea is that
projects using the framework will first load it and its E/R, and then the
project code will add its own E/R through a dbs+ call.

The crux is that, since the framework will always be under development, it
might need more database numbers over time. I was hoping I could pick some
arbitrarily high number like 100 as a rule for projects to use in their
dbs+ calls, to ensure that there would never be a collision.
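A thought, purely as an untested sketch: instead of agreeing on a magic
number like 100, projects could derive the next free number from *Dbs
itself — assuming *Dbs holds one entry per existing database file, and that
dbs+ evaluates its first argument:

# Untested sketch: start the project's extension right after the
# framework's files, wherever those happen to end
(dbs+ (inc (length *Dbs))
   (4 +Entry)
   (4 (+Entry tag)) )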

Any suggestions on how this conundrum can be resolved?

On Sat, Sep 7, 2013 at 1:56 PM, Alexander Burger <> wrote:

> Hi Henrik,
>
> > Hi, I'm in a bind, I'm using dbs+ with say 7, suddenly I find that I need
> > to increase the number to something more to make more room for new stuff
> > in the "base" ER so to speak.
> >
> > Simply changing the number doesn't work, it gives me "Bad DB file" which
> > is quite understandable.
> >
> > How can I resolve this without losing any data?
>
> This is a tough one. A lot of database objects must be moved to
> different files.
>
> I would, in general, not recommend that, and rather try to extend the DB
> "horizontally", i.e. by putting new classes and indexes into existing
> 'dbs' entries (files).
>
> But if absolutely necessary, it can be done. I see two ways:
>
> 1. Completely rebuild the DB. For that, export all entities:
>
>    (load "@lib/too.l")  # for 'dump'
>
>    (out "myData.l"
>       (prinl "# " (stamp))
>       (prinl)
>       (prinl "# Roles")
>       (dump (db nm +Role @@))
>       (println '(commit))
>       (prinl)
>       (prinl "# User")
>       (dump (db nm +User @@))
>       (println '(commit))
>       (prinl)
>       (prinl "# SomeClass")
>       (dump (db key +Cls1 @@))
>       (println '(commit))
>       (prinl)
>       (prinl "# SomeOtherClass")
>       (dump (db key +Cls2 @@))
>       (println '(commit))
>       (prinl)
>       ... )
>
>    You must find proper Pilog expressions to select all entities. This
>    depends on the class(es) and the indexes involved. To put it simply,
>    you must find an index for each class that is complete, i.e. which
>    indexes all objects of that class.
>
>    Take care not to forget anything :)
>
>    The resulting file can be imported into the newly structured, but
>    still empty, DB with
>
>       : (load "myData.l")
>
>    This works rather well, but requires thorough testing and checking of
>    the new DB.
>
> 2. There is a possibility to re-organize an existing DB. This can be
>    done with 'dbfMigrate' in "@lib/too.l". At least theoretically ;-)
>
>    I haven't yet used it in that way. But I used it rather often to port
>    databases from pil32 to pil64 format, which involves similar
>    problems, and it always worked reliably.
>
>    So you might give it a try (after backing up your DB, of course). When
>    you changed '*Dbs' (i.e. the 'dbs' and 'dbs+' calls), you open the DB
>    as before
>
>       : (pool "db/xxx/" *Dbs)
>
>    and then cross your fingers and call
>
>       : (dbfMigrate "db/xxx/" *Dbs)
>
>    Also, you should perform the usual DB check
>
>       : (dbCheck)
>
>    I hope I didn't forget anything ;-)
>
> ♪♫ Alex
> --
