> > I can think of three main categories of capacity limits on 32-bit that
> > might be higher on 64-bit:
> > * file size max of 2GB
> > * register size and usage
> > * available address space, and therefore amount that can be mmap()'d at once
> > (Also, IIRC stdio is supposed to handle > 256 file pointers on 64-bit.)
> >
> > AFAIK, only where the address space is an issue is 64-bit all that
> > beneficial on SPARC.
> 
> Erm... that's not correct. The ideas were more like this:
> -- snip --
> - No problem with dates >= 2030 (64bit |time_t|)
> - High-resolution timestamps by default

Ok, those matter:
* by sometime decently before 2038
* for programs that deal with future dates (dealing with the end date
of a 30-year mortgage as a 32-bit time_t is obviously a problem _now_)
* maybe for programs that might deal with files created via CIFS clients,
which may have set timestamps using their FILETIME representation of time,
which covers a much larger range than 1901-2038.
* when else?

So the timestamps have to be fixed on everything, preferably within the
next 20 years (and definitely within 30 years); and in some cases, much sooner.
Are the programs in question good examples of something that needs to be
fixed sooner?

> - Stack would be non-executable by default
An advantage, certainly, although if someone is paranoid (and has no
self-modifying code that would choke on it), this can be done for 32-bit with
an /etc/system setting.
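For reference, the setting I mean is (if I recall the tunable names correctly) this pair of lines in /etc/system, followed by a reboot:

```
* Disallow execution on user stacks; log attempts
set noexec_user_stack = 1
set noexec_user_stack_log = 1
```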

> - ARG_MAX would be larger (twice the size), therefore allowing
> applications to deal with much more data (note this extra memory is
> mapped via |mmap()|-like calls and only reserves the size, it does not
> actually allocate it (reserved address space != real memory usage)).

According to http://www.in-ulm.de/~mascheck/various/argmax/
even the 32-bit value on Solaris is pretty good.  IMNSHO, anyone depending
on a humongous ARG_MAX isn't being sensible anyway, since there will always
be some situation that would push it over _any_ finite limit.


> - "bash" supports arrays and it would be nice that we
> don't limit the
> array content to 2GB

Ok, I can see that, although again, in most cases when someone is trying
to do e.g. a memory-resident DBMS as a shell script, there's something wrong
with that picture.

[...]
> Umpf... "GNU coreutils" and "bash" are only two packages which cannot be
> split into smaller pieces without pain. And "sparcv8plus" won't help a
> lot since it's still 32bit and has no access to a larger address space,
> 64bit timestamps or other items listed above.
> 
> > Where that would be sufficient to reach maximum capacity limits, I think
> > performance testing of large file+v8plus versus v9 (64-bit) should be
> > considered, to avoid regression.
> 
> See my other email about performance regressions.  Compared to the
> current 32bit versions we will likely improve the situation by fixing
> the build flags.

Insofar as that can be done without adverse effects, I suppose
most executables on SPARC could be 64-bit only.  They might also
use a little more memory, which might be an issue in some borderline
cases.  Don't go too crazy though, there's at least
one exception (but not among what you're talking about yet):
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/os/main.c#305

Public 32-bit shared objects and libraries would presumably still have to live
forever, since there would still be 3rd-party and user executables that
referenced them.

64-bit munmap() used to be painfully slow; hopefully that's long since been 
fixed...
 
 
This message posted from opensolaris.org
