On 10/28/2010 10:45 AM, John Plevyak wrote:


> This is based on experience with FreeBSD vs Linux, where FreeBSD took
> the route of upgrading all values to be 64-bit on 64-bit platforms,
> whereas Linux has taken the #3 route in places, which has led to lots
> of confusion and subtle problems.
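The subtle problems alluded to above typically show up as silent truncation at a 32/64-bit boundary. A minimal sketch of that failure mode (the function name is illustrative, not real ATS or kernel code):

```c
#include <stdint.h>

/* Route #3 in miniature: the internal byte count is 64-bit, but one
 * API path still declares it as a 32-bit int, silently truncating
 * large values at the boundary. */
static int32_t to_api_size(int64_t nbytes)
{
    return (int32_t)nbytes; /* a 3 GB count wraps to a negative number */
}
```

With a 3 GB object, `to_api_size(3LL << 30)` returns a negative value, which is exactly the class of bug that is hard to spot in review.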

I agree.


> One solution is to go with #2, but to use size_t or our own type which
> would scale from 32-bit to 64-bit. This assumes that there is interest
> in using ATS in embedded situations where either processor support or
> memory pressure warrants not using 64-bit values everywhere. On a
> practical note, we could make the API use the scaling type but not
> actually implement it through the entire system (which would be a good
> deal of work; instead we would just use 64 bits internally) so that we
> would not have to change the API again *if* we ever wanted to support
> embedded systems.
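In case it helps the discussion, here is a sketch of what such a scaling type could look like (`TSSize` and `vio_nbytes_get` are hypothetical names for illustration, not proposed API):

```c
#include <stdint.h>

/* Hypothetical sketch of option #2: one scaling size type in the
 * public API. It is 64-bit on 64-bit platforms; an embedded 32-bit
 * build could narrow it without changing any API signatures. */
#if UINTPTR_MAX > 0xffffffffUL
typedef int64_t TSSize;   /* 64-bit platforms */
#else
typedef int32_t TSSize;   /* hypothetical embedded 32-bit build */
#endif

/* The API traffics in TSSize; the internals can stay 64-bit either
 * way, matching the "64 bits internally" shortcut described above. */
static TSSize vio_nbytes_get(int64_t internal_nbytes)
{
    return (TSSize)internal_nbytes;
}
```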

Hmmm, so we'd only support this scaling in the APIs themselves? In the example of the VIO (which is where I've run into complete failure problems), if internally the VIO supports 64-bit values (nbytes etc.) but the API only exposes 32-bit, is that good enough? Does that limit the 32-bit API to transforming objects of at most 2GB? (I can't see that it does, but I'm not positive, since I'm not hugely familiar with the code.)

To get a better idea of what my problem is, I've attached a patch that addresses my immediate problems to TS-499 (https://issues.apache.org/jira/browse/TS-499). Basically, this patch bridges the notion of MAX_INT between the 64-bit internals and the 32-bit APIs (I'd imagine that in your size_t-based proposal it would have to do something similar?).
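The bridging in question amounts to translating the "unlimited" sentinel and clamping oversize counts at the boundary; a hedged sketch of that idea (function names are illustrative, not the actual TS-499 patch):

```c
#include <stdint.h>
#include <limits.h>

/* Sketch of bridging MAX_INT between 64-bit internals and a 32-bit
 * API: if the internal "no limit" sentinel is INT64_MAX, the 32-bit
 * side sees INT_MAX, and any other count too large to represent is
 * clamped as well. */
static int api_nbytes(int64_t internal)
{
    if (internal >= (int64_t)INT_MAX)
        return INT_MAX;            /* sentinel and oversize both clamp */
    return (int)internal;
}

static int64_t internal_nbytes(int api)
{
    if (api == INT_MAX)
        return INT64_MAX;          /* restore the 64-bit sentinel */
    return (int64_t)api;
}
```

The asymmetry is the catch: once a real count above 2GB has been clamped, the 32-bit side can no longer distinguish it from "unlimited".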

My gut feeling, though, is that if we're willing to break the API, we should just make these APIs 64-bit :). And even the size_t solution would break the ABI, i.e., plugins compiled against the old APIs will no longer work after an upgrade of TS (on a 64-bit system).

Cheers,

-- Leif
