[EMAIL PROTECTED] (2007-12-02 at 0902.40 -0000):
> > I also tried the original approach, it took ~120 secs (~30 without
> > setting limits, no swapping in any case) and did not crash due to
> > the forced memory limit (original JPEGs were 2560*1920 and ~3MBytes
> > each, work dir ended up being ~180MB with all the intermediate PNGs
> > and the two final versions):
> > 
> > convert [a-j].jpg -average direct.png
> Let's try again.
> free -m:
>              total       used       free     shared    buffers     cached
> Mem:           250        115        135          0          2         57
> -/+ buffers/cache:         55        195
> Swap:          729          0        728

So you have 135 MB free, that is good, and 195 MB if buffers and cache
are not counted as used.

> After a few seconds, the HD is swapping.
> After 10 minutes it is still swapping.
> free -m reports:
>              total       used       free     shared    buffers     cached
> Mem:           250        247          3          0          0         36
> -/+ buffers/cache:        210         40
> Swap:          729        377        352

377 MB in swap, not good now. :[

> Then, ctrl-c.

I think the issue you have can be solved by the same trick I used to
simulate small memory. Your system has free memory before starting
(and all swap free too), so ImageMagick requests memory and, as it
never gets a "no more memory" answer, it keeps on requesting until
swap usage is so big that the system is mostly thrashing the disk
instead of doing real work.

But if ImageMagick gets a "there is no more memory" answer (due to
ulimit or really hitting the hardware's maximum), it completes the
task with what it already got, even if a bit slower than it would be
on a computer with lots of RAM.

Remember, for applications "memory" is RAM+swap, but in practice, once
you have to use the "slow memory" (swap), it is often not worth it. So
try again running ulimit -S -d 131072 -m 131072 -v 131072 and then
convert [a-j].jpg -average direct.png. That should keep ImageMagick in
memory, or at least most of it, instead of forcing the system to use
over 300 MB of "really slow memory".
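
One detail: ulimit takes values in KB, so 131072 means 128 MB, and the
limits stick to the shell after the command finishes. A minimal sketch
of how I would run it, using a subshell so the limits only apply to
that one convert (the sizes are just an example):

  ( # subshell: the limits die with it
    ulimit -S -d 131072 -m 131072 -v 131072  # soft caps, in KB (128 MB)
    convert [a-j].jpg -average direct.png
  )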

Personally I prefer computers with a really small swap, except when it
is needed to cover tmpfs, swsusp or similar systems. If a process goes
mad, it dies soon, instead of making the computer barely usable for
minutes. In your case, maybe I would set a soft ulimit of 192 MB or so
for user accounts (that would leave 64 MB free for other processes
running at the same time), so no application can request more than
that. As it would be the soft kind, not hard, users can raise it if
they really, really need to (at their own risk).
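
If you want that applied automatically at every login, one way (a
sketch, assuming pam_limits is enabled; the group name is just an
example) is a couple of lines in /etc/security/limits.conf, where
values are again in KB (196608 KB = 192 MB):

  # soft limits only, so users can still raise them at their own risk
  @users  soft  data  196608
  @users  soft  as    196608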

> And I have always found that ImageMagick is extremely slow
> (converting images is for example much faster with the netpbm tools).

Yes, it is not exactly fast; it focuses more on features.
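
For plain format conversion the netpbm way is just a short pipeline,
e.g. (a sketch with made-up file names):

  jpegtopnm a.jpg | pnmtopng > a.png  # JPEG -> PNG without ImageMagick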
