Hello Sven!
>> Reducing the number of program invocations may speed up the whole
>> process if that system overhead is high or the program gets called
>> very often. On the other hand, the pipe solution is relatively
>> likely to produce a bigger peak in memory usage.
> Interesting point. Is this an issue that ARG_MAX could account
> for, or do you consider it a situational issue?
True. It heavily depends on ARG_MAX, since that caps the amount of
memory consumed for argument passing. It thereby also limits the number
of arguments passed to the invoked command, which in turn may influence
how much additional memory the command allocates.
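For reference, the limit can be queried on most systems (a small sketch; getconf is POSIX, but the actual value is system-dependent):

```shell
# Upper bound (in bytes) on the combined size of argv plus the
# environment that a single exec() call may pass to a program.
getconf ARG_MAX
```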
... beyond that you can call it situational, as program startup overhead
depends on several factors, and total memory consumption depends on the
system architecture, the length of the file names, and the command
executed. In my eyes there is no simple rule for which variant of
command invocation is the "better" solution.
It is a typical speed/size optimization question, and deciding which
one is "better" sometimes tends toward a philosophical or religious
question.
> xargs -n might be a way to work around such a problem, while
> find's "-exec +" doesn't provide such an option.
Since xargs -n limits the number of arguments per invocation, it can be
used to cap the memory peak. In situations where that peak could cause
extra page faults (or even memory exhaustion) it is a practical
solution ... and right, "-exec +" doesn't provide this ability (at
least not that I know of).
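To illustrate the batching behaviour of -n (a small sketch; echo merely stands in for a real command):

```shell
# Feed ten dummy arguments to xargs with -n 3: each invocation of
# echo receives at most three of them, so four invocations occur.
printf '%s\n' a b c d e f g h i j | xargs -n 3 echo
# prints:
#   a b c
#   d e f
#   g h i
#   j
```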
I normally use "-exec {} \;" as long as the number of program
invocations is small or speed isn't critical. This is the safe way
(least memory consumption). If find is going to process a lot of file
entries and/or execution gets too slow, I try to throw in xargs
(possibly with the -n or -s argument limits).
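The two styles side by side (a sketch; PATTERN and the '*.txt' glob are placeholders, and -print0/-0 are assumed to be available, as they are in GNU and busybox find/xargs):

```shell
# Safe variant: one grep process per file, minimal peak memory,
# but one fork/exec per file name found.
find . -name '*.txt' -exec grep -l PATTERN {} \;

# Faster variant: batch file names through xargs; -n 100 caps how
# many names each grep receives, bounding the memory peak.
# -print0/-0 keep file names containing spaces or newlines intact.
find . -name '*.txt' -print0 | xargs -0 -n 100 grep -l PATTERN
```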
--
Harald
_______________________________________________
busybox mailing list
[email protected]
http://lists.busybox.net/mailman/listinfo/busybox