It would be, except that we won't handle more than ONE size_t's worth
of data at a time within a single bucket.

The patch to normalize to one size_t's worth of data took a 71-line delta.
The patch to normalize to any off_t's worth of data was at 500 lines and
growing when I gave up.

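Just to make that concrete for callers, here's a rough, untested sketch of
what pushing an off_t-sized range through the existing API looks like
today: split it across several file buckets, each no larger than one
size_t.  The bucket/brigade calls are the real ones; the
insert_file_range() helper and the 1GB per-bucket cap are made up for
illustration.

#include "apr_buckets.h"

/* Illustrative per-bucket cap; the real ceiling is whatever apr_size_t
 * can hold on the platform (and, per the sendfile note below, perhaps
 * far less in practice). */
static const apr_off_t MAX_CHUNK = 1024 * 1024 * 1024;

static void insert_file_range(apr_bucket_brigade *bb, apr_file_t *fd,
                              apr_off_t offset, apr_off_t length,
                              apr_pool_t *p, apr_bucket_alloc_t *list)
{
    while (length > 0) {
        /* Clamp each bucket to a size_t-safe length. */
        apr_size_t chunk = (length > MAX_CHUNK) ? (apr_size_t)MAX_CHUNK
                                                : (apr_size_t)length;
        apr_bucket *b = apr_bucket_file_create(fd, offset, chunk, p, list);

        APR_BRIGADE_INSERT_TAIL(bb, b);
        offset += chunk;
        length -= chunk;
    }
}
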
I've said it a dozen times on the list: when someone writes a patch that
builds clean [and is correct, I suppose there is a distinction there :-]
then I'll entertain the change.  I'm not surprised there have been no
takers; I tried myself long before I gave up and worked in size_t's.

Note that sendfile would -never- succeed on more than a size_t's
worth ... for that matter, I believe the cap is 56MB, going by one of the
IBM coders' own experiments.

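So anyone who really needs to push a whole off_t through sendfile ends up
capping and looping at the call site anyway.  Another rough, untested
sketch (using the apr_socket_sendfile() spelling, and assuming a blocking
socket; the send_whole_file() name and the 32MB per-call cap are just
placeholders, not measured numbers):

#include "apr_file_io.h"
#include "apr_network_io.h"

/* Conservative per-call cap, well under the ~56MB observation above. */
static const apr_size_t SENDFILE_CAP = 32 * 1024 * 1024;

static apr_status_t send_whole_file(apr_socket_t *sock, apr_file_t *file,
                                    apr_off_t offset, apr_off_t remaining)
{
    while (remaining > 0) {
        apr_size_t want = (remaining > (apr_off_t)SENDFILE_CAP)
                              ? SENDFILE_CAP : (apr_size_t)remaining;
        apr_size_t sent = want;            /* in: to send, out: sent */
        apr_off_t off = offset;
        apr_status_t rv = apr_socket_sendfile(sock, file, NULL, &off,
                                              &sent, 0);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        offset += sent;
        remaining -= sent;
    }
    return APR_SUCCESS;
}
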
At 09:48 PM 4/1/2002, you wrote:
This looks broken to me (at least on Windows). apr_off_t is an int64 and apr_size_t is an
int.


APU_DECLARE(apr_bucket *) apr_bucket_file_create(apr_file_t *fd,
                                                 apr_off_t offset,
                                                 apr_size_t len,
                                                 apr_pool_t *p,
                                                 apr_bucket_alloc_t *list)


I think both the len and offset should be apr_off_t.

Bill

