> Hi, 
> 
> I have a problem. I compiled my private shell in 64
> bit mode and I use libfind
> to make the find command a shell builtin.
> 
> Now find -exec cmd {} + needs to know ARG_MAX and
> calls sysconf(_SC_ARG_MAX).
> 
> If the shell is compiled in 32 bit mode, it will get
> 1048320
> If the shell is compiled in 64 bit mode, it will get
> 2096640
> 
> If the shell will call "cmd" and it is a 64 bit
> binary everything will be OK.
> If the shell will call "cmd" and it is a 32 bit
> binary we may see an
> "arg list too long" error.
> 
> My questions are:
> 
> -     How do I get the 32 bit ARG_MAX value from inside a
> 64 bit program?

Well, unless someone adds an _SC_ARG32_MAX and _SC_ARG64_MAX,
I suppose you need to popen() a helper program of the other bitness
that looks the value up for you.  You'd want a flag so you only do
that the first time you need to know, and cache the value for
subsequent use.

> -     What is the best way to detect whether "cmd" is a
> 32 or a 64 bit binary?

I'm not sure that's generally answerable.  If the executable is also readable,
you can do anything from running "file" on it to using the libelf functions (or
the even more generic and portable GNU bfd library, if available) to discover
its type, with a library call of course being much more efficient than calling
an external program.  But an executable doesn't have to be readable, in which
case I suppose you have to make the safer assumption, in this case that it's
a 32-bit process.
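For the readable case you don't even need full libelf; the class byte in the
ELF ident is enough.  A sketch (the function name is mine), returning -1 when
the file is unreadable or not ELF so the caller can fall back to the safer
32-bit assumption:

```c
#include <fcntl.h>
#include <unistd.h>
#include <elf.h>

/* Return 32 or 64 for an ELF executable, -1 if unreadable or not ELF. */
int
elf_bitness(const char *path)
{
	unsigned char ident[EI_NIDENT];
	int fd = open(path, O_RDONLY);
	int bits = -1;

	if (fd == -1)
		return (-1);		/* not readable: caller assumes 32-bit */

	if (read(fd, ident, EI_NIDENT) == EI_NIDENT &&
	    ident[EI_MAG0] == ELFMAG0 && ident[EI_MAG1] == ELFMAG1 &&
	    ident[EI_MAG2] == ELFMAG2 && ident[EI_MAG3] == ELFMAG3) {
		if (ident[EI_CLASS] == ELFCLASS32)
			bits = 32;
		else if (ident[EI_CLASS] == ELFCLASS64)
			bits = 64;
	}
	(void) close(fd);
	return (bits);
}
```

Reading sixteen bytes yourself is far cheaper than fork/exec'ing "file", and
unlike libelf it adds no library dependency.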

In general, I have a problem with people using insanely large argument lists;
a directory with hundreds of thousands of files probably won't perform well
anyway, and aside from wildcard expansions there's really no excuse _at all_
for it, IMO (and it's their responsibility to use wildcards only when their
expansions can safely be expected to be of reasonable size).  That's what
xargs, or a program reading its stdin rather than taking args, is for.  (And
I remember the really old days when the max arg list size was something tiny,
like 5120 or so.)  Doing that is almost exactly as bad as writing a program
that sucks its entire input into memory.  _Occasionally_ that's a good idea
(like for super-caching a database, or in similar cases where lots of
arbitrarily complex interrelationships among the data will be explored), but
mostly it's just inexcusably lazy, especially if it doesn't result in much of
a performance gain.
--
This message posted from opensolaris.org
_______________________________________________
opensolaris-code mailing list
opensolaris-code@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/opensolaris-code