Roland Mainz writes:
> James Carlson wrote:
> > First of all, large file handling does not require compilation in
> > 64-bit mode.
>
> That's not always the case - for example:
> 1. You can't |mmap()| files larger than 4GB (or better: 2GB in most
> cases)

Not true.  You'll just need to window your access into the file, which
you may need to do anyway.  Existing programs (such as Vern Schryver's
DCC) deal with the large mmaped file problem that way.
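(For illustration only, here is a rough sketch of what I mean by
windowing; this is not DCC's actual code.  It assumes a 32-bit build
using the large-file compilation environment, i.e.
-D_FILE_OFFSET_BITS=64, so that off_t is 64 bits, and WINDOW and
process_file() are names I've made up.)

/*
 * Sketch only: process a file larger than 4GB from a 32-bit program
 * by mmap()ing a sliding window rather than the whole file.  Requires
 * the large-file environment (-D_FILE_OFFSET_BITS=64) so off_t is
 * 64 bits even in a 32-bit build.
 */
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define WINDOW  (16 * 1024 * 1024)      /* multiple of the page size */

int
process_file(const char *path)
{
        struct stat st;
        off_t off;
        int fd;

        if ((fd = open(path, O_RDONLY)) == -1) {
                perror(path);
                return (-1);
        }
        if (fstat(fd, &st) == -1) {
                perror(path);
                (void) close(fd);
                return (-1);
        }
        for (off = 0; off < st.st_size; off += WINDOW) {
                size_t len = (st.st_size - off > WINDOW) ?
                    WINDOW : (size_t)(st.st_size - off);
                void *win = mmap(NULL, len, PROT_READ, MAP_SHARED,
                    fd, off);

                if (win == MAP_FAILED) {
                        perror("mmap");
                        (void) close(fd);
                        return (-1);
                }
                /* ... examine the len bytes at 'win' here ... */
                (void) munmap(win, len);
        }
        (void) close(fd);
        return (0);
}

Since each offset is a multiple of the window size, the mmap() offsets
stay page-aligned, and the 32-bit address space only ever has to hold
one 16MB window at a time.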
> 2. Loading large files into memory for processing hits a similar
> "heap barrier"

Probably so, and yet that still doesn't invalidate what I've said.

> 3. Handling stuff like larger |ARG_MAX| doesn't work with 32bit
> applications

Again, for many applications, that's irrelevant.

Sure, it's possible to cobble together instances where an application
simply must be 64-bit, but that *ALSO* means that such an application
won't really work right when forced into 32-bit mode, as it will be
when run on 32-bit kernels.

Since we still support 32-bit kernels (on x86 only; not SPARC), this
means that applications that require 64-bit mode are in many ways
_deficient_.  They can't be used (or are severely limited) when run on
supported systems.  That's not a good thing.

> I agree in a uniform Solaris environment with UFS+ZFS only, but in a
> heterogeneous environment with different OSes and filesystems you may
> encounter such stuff on a regular basis (for example mounting
> non-Solaris filesystems (old Novell remote filesystem, SMBFS share
> hosted on FAT, some FAT versions, many of the FuseFS-based
> filesystems etc. etc.), copying data from tapes without timestamps
> etc.; all may end up with timestamps the current utilities can't
> handle).

Fixable.  See the "clamptime" and "noclamptime" options in
mount_pcfs(1M).

> > as we
> > haven't crossed the 2038 boundary yet,
>
> Which isn't far away anymore, and we're only a few years away until
> (government) customers are going to require some kind of Y2039
> compliance.

Few, but enough that it'll be my kids' problem.  ;-}

> > and thus we're effectively
> > talking about dealing with corrupted files on non-standard file
> > systems where the result is an unexpected EOVERFLOW.  Even there,
> > it's something that could be worked around without recompiling the
> > universe.
>
> I'm not talking about "recompiling the universe", I am talking about
> making the maximum use of a system's capabilities (which usually
> comes "for free" (except for cases like OS/Net where cleaning up the
> tree was skipped twice (e.g. during 64bit Solaris 7 development and
> again for the AMD64 port... and today this haunts platforms like
> Solaris/SystemZ so badly that an artificial+resource-eating 32bit
> emulation layer was required (on hardware which is 64bit/128bit by
> default).  The damage done here can't IMHO be greater.))).

It does *NOT* come "for free" in any sense.  You have to compile and
test both ways.  You have to deliver separate binaries.  You have to
deal with the (mostly nominal) overhead of checking which kernel
you're on.  You may have to deal with performance _regressions_ due to
the change.  Some of those costs will be high.

I agree that for SPARC, where the 32-bit kernel is dead and gone,
recompiling as 64-bit should be mostly safe, assuming you don't need
to deal with legacy libraries or plug-ins that are compiled as 32-bit.
At worst, it likely ought to result in just lower overall performance,
assuming that all of the applications are already LP64-clean and don't
try to store pointers in "int" variables.

But someone does need to check for LP64-cleanliness, and does need to
test the multiple binaries produced.
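(A throwaway illustration of the kind of thing that check has to
catch; this isn't taken from any real code.  Stashing a pointer in an
"int" truncates it to 32 bits in an LP64 build, while a pointer-sized
integer type round-trips it intact.)

/*
 * Illustration only: the classic ILP32 habit of stashing a pointer in
 * an "int".  In an LP64 build the pointer is 64 bits, the cast
 * silently throws away the upper 32, and the round trip can hand back
 * a different address.
 */
#include <stdio.h>
#include <inttypes.h>

static char *
round_trip_bad(char *p)
{
        int cookie = (int)(intptr_t)p;          /* truncates under LP64 */

        return ((char *)(intptr_t)cookie);
}

static char *
round_trip_good(char *p)
{
        uintptr_t cookie = (uintptr_t)p;        /* pointer-sized integer */

        return ((char *)cookie);
}

int
main(void)
{
        char buf[1];

        (void) printf("int cookie:       %p -> %p\n",
            (void *)buf, (void *)round_trip_bad(buf));
        (void) printf("uintptr_t cookie: %p -> %p\n",
            (void *)buf, (void *)round_trip_good(buf));
        return (0);
}

If I recall correctly, lint's -errchk=longptr64 option flags exactly
this kind of conversion, but compiling and testing both ways is still
the only real proof.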
-- 
James Carlson, Solaris Networking              <james.d.carlson at sun.com>
Sun Microsystems / 35 Network Drive        71.232W Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N Fax +1 781 442 1677