On Tue, May 17, 2005 at 03:57:20PM +0000, John Goerzen wrote:
> On 2005-05-16, Florian Weimer <[EMAIL PROTECTED]> wrote:
> > * John Goerzen:
> >> Yes, I can do that more regularly, but that doesn't actually reduce the
> >> size of the inventory file, does it?
> >
> > Ah, you must run "optimize", too:
> 
> I've tested that on some smaller repos, but this is taking a LONG time:
> 
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>   30817 jgoerzen  26   1  288m 253m  99m R 93.2 25.3  97:10.80 darcs
> 
> I tagged, then ran darcs optimize --checkpoint.  This is darcs 1.0.2.
> 
> Is this normal?
> 
> I left it running overnight on the server, and it finished at some
> point, but I didn't think to time it.  The above is on my workstation.

Optimize --checkpoint costs about the same as a darcs get -- actually a bit
more, since it also needs to write the checkpoint patch.  But for what
Florian suggested, you don't need to create a checkpoint at all.  Just
running optimize with no arguments will split the inventory at the latest
tag, so users with up-to-date repositories won't need to download the old
history.
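For concreteness, the sequence would look something like this (the
repository path and tag name below are made up; the commands are only
echoed so the sketch runs anywhere -- drop the echo to execute them for
real):

```shell
# Sketch of the tag-then-optimize workflow discussed above.
# Path and tag name are hypothetical examples.
cmds='cd ~/repos/myproject
darcs tag stable-2005-05-17
darcs optimize'
echo "$cmds"
```

The plain "darcs optimize" at the end splits the inventory at the tag just
recorded, which is the cheap operation; only add --checkpoint if you also
want partial gets to work.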

Actually, now that I think about it, optimize --checkpoint is *also* worse
than an "initial record", which is one of the operations darcs has had
trouble with -- though Ian has largely fixed that in darcs-unstable.  So
since you're running darcs 1.0.2, I'd guess you're hitting a problem that
has since been fixed.  Perhaps Ian can double-check that optimize
--checkpoint doesn't have any stupid-hanging-onto-memory issues, since this
is an important and necessary command for large repositories with lots of
read-only users (e.g. the Linux kernel).
-- 
David Roundy
http://www.darcs.net

_______________________________________________
darcs-users mailing list
[email protected]
http://www.abridgegame.org/mailman/listinfo/darcs-users
