On Fri, Mar 16, 2018 at 09:27:21AM -0400, Robert P. J. Day wrote:
>   course i taught recently had a section on "xargs", emphasizing its
> value(?) in being able to run a command in bite-size pieces but, these
> days, is that really that much of an issue?

It was an issue a year ago when I tried to glob some subset of a ~90K-file
directory, and it's still broken today on a similar simulated setup. Some
shell builtins don't care, which can be surprising (rm as a builtin in zsh),
but both bash and zsh error out before execve ever runs the command.
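If you want to reproduce it, something like this is a rough sketch (the
file count and name length are only illustrative; what actually matters is
getconf ARG_MAX plus the size of your environment):

    $ mkdir /tmp/argmax-test && cd /tmp/argmax-test
    $ for i in $(seq 1 90000); do : > "some_reasonably_long_file_name_$i"; done
    $ /bin/rm some_*        # external rm: "Argument list too long"
    $ echo some_* | wc -c   # builtin echo doesn't care; shows how big the list is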

>   IIRC (and i might not), the historical limiting factor for command
> line length was the limit of an internal buffer in the shell that was
> used to build the command to be run, and it used to be fairly small
> (5000 bytes?). these days, i'm fairly sure bash can handle far longer
> commands than that.

There's this in-depth answer on Super User, but you might want to verify it
yourself:

    https://superuser.com/questions/705928/compile-the-kernel-to-increase-the-command-line-max-length
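The short version, if you just want to check your own box (the second
command assumes GNU xargs):

    $ getconf ARG_MAX                  # limit on argv + environment, ~2 MiB on typical Linux
    $ xargs --show-limits < /dev/null  # GNU xargs prints the limits it will actually respect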

>   now i can see the obvious value of xargs in that it supports a ton
> of cool options like defining the delimiter to be used in parsing the
> input stream and so on, but WRT simply limiting the command line size,
> rather than something like this:
>
>   $ find . -type f -name core | xargs rm -f

That's not space-safe. You'd have to use the non-portable -d with $'\n'
(which still leaves newlines in file names as a problem), or just…
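For completeness, the usual fix, assuming GNU or BSD find and xargs, is the
-print0/-0 pair, which is safe for any byte a path can contain:

    $ find . -type f -name core -print0 | xargs -0 rm -f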

> i would simply assume i can generate a really long command and write:
>
>   $ rm -f $(find . -type f -name core)

Not space-safe either. What if the directory names contain anything in
$IFS?
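A quick way to see it break, using a made-up path with a space in it:

    $ mkdir -p './old logs' && : > './old logs/core'
    $ rm -f $(find . -type f -name core)
    # word splitting hands rm the two arguments "./old" and "logs/core";
    # neither exists, -f hides the errors, and the core file survives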

> and, yes, there's always "find .... -exec rm -f {} \;" and so on.

You might want to use '+' instead of ';' (it's been in POSIX since 2001,
though some very old finds lack it), which batches the arguments rather than
exec-ing rm once for every single file. There's also the non-portable
-delete, which may be faster in some obscure edge cases.
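Roughly, that's the difference between these two (the second needs GNU or
BSD find):

    $ find . -type f -name core -exec rm -f {} +   # batches paths, one rm per chunk
    $ find . -type f -name core -delete            # no rm processes at all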
