On 25.10.2005., at 21:02, Alvaro Lopez Ortega wrote:

Marko Mikulicic wrote:

> in config.h I see
> /* Number of bits in a file offset, on hosts where this is settable. */
> /* #undef _FILE_OFFSET_BITS */
> does someone know autoconf/automake well?

  I think I do, but it doesn't make sense to me..

  configure.in uses AC_SYS_LARGEFILE, which is a macro that defines
  _FILE_OFFSET_BITS and adds a compilation flag in order to activate
  large file support.

  Could you please check what AC_SYS_LARGEFILE is detecting?

  I think the problem is in the configure stuff, the code is, AFAIK,
  alright.


I'm afraid there is a subtle misinterpretation of what AC_SYS_LARGEFILE actually means.

It generates the following code in configure:

----------------------------------------------
echo "$as_me:$LINENO: checking for _FILE_OFFSET_BITS value needed for large files" >&5
echo $ECHO_N "checking for _FILE_OFFSET_BITS value needed for large files... $ECHO_C" >&6
if test "${ac_cv_sys_file_offset_bits+set}" = set; then
  echo $ECHO_N "(cached) $ECHO_C" >&6
else
  while :; do
  ac_cv_sys_file_offset_bits=no
  cat >conftest.$ac_ext <<_ACEOF
/* confdefs.h.  */
_ACEOF
cat confdefs.h >>conftest.$ac_ext
cat >>conftest.$ac_ext <<_ACEOF
/* end confdefs.h.  */
#include <sys/types.h>
/* Check that off_t can represent 2**63 - 1 correctly.
    We can't simply define LARGE_OFF_T to be 9223372036854775807,
    since some C++ compilers masquerading as C compilers
    incorrectly reject 9223372036854775807.  */
#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
  int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
                       && LARGE_OFF_T % 2147483647 == 1)
                      ? 1 : -1];
int
main ()
{

  ;
  return 0;
}
_ACEOF
----------------------------

The phrase "checking for _FILE_OFFSET_BITS value needed for large files" means to me "check whether we have to define _FILE_OFFSET_BITS explicitly so that off_t becomes 64 bits wide". The proof is in the checking code itself. Here it is in isolation (main is empty):

---------------------------------------
#include <sys/types.h>
/* Check that off_t can represent 2**63 - 1 correctly.
    We can't simply define LARGE_OFF_T to be 9223372036854775807,
    since some C++ compilers masquerading as C compilers
    incorrectly reject 9223372036854775807.  */


#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))


  int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
                       && LARGE_OFF_T % 2147483647 == 1)
                      ? 1 : -1];
--------------------------------------

So, LARGE_OFF_T is an expression that evaluates to (off_t)9223372036854775807, which is (off_t)0x7fffffffffffffff: the 63 least significant bits set to one and the most significant bit set to zero.

The macro is a workaround for some compilers; we can rewrite the check like this:

int off_t_is_large[((off_t)0x7fffffffffffffff % 0x7fffffed == 0x2d1
                       &&  (off_t)0x7fffffffffffffff % 0x7fffffff == 1)
                      ? 1 : -1];

If off_t is a 32-bit type, the cast keeps only the least significant bits:

(gdb) print (unsigned long)0x7fffffffffffffff
$4 = 0xffffffff

so the remainders of the two expressions differ, as you can see in gdb:
(gdb) print (unsigned long long)0x7fffffffffffffff % 0x7fffffed
$6 = 0x2d1
(gdb) print (unsigned long)0x7fffffffffffffff % 0x7fffffed
$7 = 0x25

The rest of the trick is to declare an array with a negative dimension in order to trigger a compilation error
when the value is truncated by the cast, yielding:

large.c:9: error: size of array 'off_t_is_large' is negative

Since Darwin defaults to a 64-bit off_t, this check succeeds without the special macro.

(I don't know why they simply didn't use (off_t)0x7fffffffffffffff >> 62 == 1 ? 1 : -1 ... but probably there are some strange compilers out there.)

Historically, programs were built assuming a 32-bit off_t. To preserve backwards compatibility, one must explicitly define _FILE_OFFSET_BITS (or _LARGE_FILES on some platforms, like AIX) so that the system headers define off_t as 64 bits and, where needed, map the correct syscalls, etc. On Darwin there was no backwards-compatibility burden, so it is natively implemented as: ./sys/_types.h: typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */


So this is why I think the presence of _FILE_OFFSET_BITS was being used incorrectly, also because on AIX, for example, AC_SYS_LARGEFILE would define another macro and not _FILE_OFFSET_BITS.

Is there a list of platforms that Cherokee has been tested on?

-------------------------
Now we can talk about the solution. Does anyone know how to get autoconf to define a macro that is true even on platforms
where large file support is the default?
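
One possible approach (an untested sketch using only standard autoconf macros): combine AC_SYS_LARGEFILE with AC_CHECK_SIZEOF, which defines SIZEOF_OFF_T in config.h on every platform, so the code can test the actual width of off_t instead of the presence of _FILE_OFFSET_BITS:

----------------------------------------------
dnl Enable large file support where a define is needed ...
AC_SYS_LARGEFILE
dnl ... then record the resulting width of off_t.  This defines
dnl SIZEOF_OFF_T unconditionally, including on platforms (like
dnl Darwin) where a 64-bit off_t is the default.
AC_CHECK_SIZEOF([off_t])
----------------------------------------------

Code could then use `#if SIZEOF_OFF_T >= 8` regardless of how each platform spells its large-file switch. AC_CHECK_SIZEOF must come after AC_SYS_LARGEFILE so that any _FILE_OFFSET_BITS define is already in effect for the size test.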


-- marko



PS: wouldn't it be better to set Reply-To to the list instead of having to use "reply all" in the mail client?
_______________________________________________
Cherokee mailing list
[email protected]
http://www.alobbs.com/cgi-bin/mailman/listinfo/cherokee
