Davi Arnaut wrote:
> William A. Rowe, Jr. wrote:
>
>> But no, we should probably figure out how to report this case more
>> intelligently in testlfs so people don't panic.  LARGE_FILES, imho,
>> should not be set where special handling of the file offsets didn't
>> happen.
>
> I think that APR_HAS_LARGE_FILES should be defined whenever we have a
> 64-bit apr_off_t, because that's probably what the user wants to know
> -- if the platform supports large files, which is true for 64-bit
> systems.
What is large?  If, on a future platform, off_t (which -is- largely
portable; like ssize_t, it just came a little after size_t) is 128 bits
and size_t is 64 bits, that's going to require exception code again.

You are reading APR_HAS_LARGE_FILES as APR_OFF_T_IS_64BIT, but this
really is not true.  The LARGE_FILE flag to apr_file_open tells the
system that you want to open the classic apr_file_t (in apr 0.9) in a
mode where it will handle offsets wider than size_t.  LARGE_FILES tells
us that a generic int/void* union will not hold the offset into a file,
and this isn't true of most 64-bit builds.

> On 32 bits with LFS we should define an internal macro for special
> handling of the file offsets.

> For example, testlfs.c is perfectly fine and should run on 64-bit
> platforms.

This is true.

> Although not an urgent issue, we should clarify and document better
> the APR_HAS_LARGE_FILES meaning.

+1, after we figure out what that is :)