James Carlson wrote:
> Roland Mainz writes:
> > James Carlson wrote:
> > > Charles Wright writes:
> > > > I'm confused by what you mean when you say
> > > > "Most Solaris user space applications are not 64 bit clean"
> > >
> > > Many applications we ship are compiled in 32-bit mode because:
> > >
> > >   - They usually don't require any of the expanded address space or
> > >     special features of 64-bit-ness, so compiling specially provides
> > >     little or no benefit.
> >
> > Erm... I disagree in this case. AFAIK bugster has many bugs with issues
> > related to timestamps, large file handling, inability to handle datasets
> > >= 2GB, etc. etc.; making such applications 64-bit would fix this kind
> > of problem.
> 
> First of all, large file handling does not require compilation in
> 64-bit mode.

That's not always the case - for example:
1. You can't |mmap()| files larger than 4GB (or, more often, 2GB) in one
piece
2. Loading large files into memory for processing hits a similar "heap
barrier"
3. Handling stuff like a larger |ARG_MAX| doesn't work with 32-bit
applications

> See lf64(5) and lfcompile(5) for various ways to access
> large files in 32-bit applications.  If people are unable to use those
> interfaces properly, then those issues are just plain bugs, not
> excuses to recompile the world as 64-bit in order to "fix" it.

Agreed... but the issues above are real-world problems (remember the
discussion about getting a 64-bit version of ksh93 by default in
PSARC/2006/550 & co.? The same reasons apply here).

> Secondly, the timestamp issue is a bit of an oddity.  I agree that
> when the large file compilation environments were created, they
> probably should have addressed 64-bit timestamps as well.

Erm... AFAIK this was skipped intentionally (but I don't know the
reason(s) yet).

> That was
> arguably a mistake.  Fortunately, it's rarely a serious issue,

I agree in a uniform Solaris environment with UFS+ZFS only, but in a
heterogeneous environment with different OSes and filesystems you may
encounter such stuff on a regular basis - for example when mounting
non-Solaris filesystems (old Novell remote filesystems, SMBFS shares
hosted on FAT, some FAT versions, many of the FUSE-based filesystems,
etc. etc.), or when copying data from tapes without timestamps - all of
which may end up with timestamps the current utilities cannot handle.

> as we
> haven't crossed the 2038 boundary yet,

Which isn't far away anymore, and we're only a few years away from
(government) customers requiring some kind of Y2038 compliance.

> and thus we're effectively
> talking about dealing with corrupted files on non-standard file
> systems where the result is an unexpected EOVERFLOW.  Even there, it's
> something that could be worked around without recompiling the
> universe.

I'm not talking about "recompiling the universe"; I am talking about
making maximum use of a system's capabilities, which usually comes "for
free". The exception is cases like OS/Net, where cleaning up the tree
was skipped twice - first during 64-bit Solaris 7 development and again
for the AMD64 port. Today this haunts platforms like Solaris/SystemZ so
badly that an artificial, resource-eating 32-bit emulation layer was
required - on hardware which is 64-bit/128-bit by default. IMHO the
damage done here couldn't be greater.

----

Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.mainz at nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
