Davi Arnaut wrote:

> Yes, first because size_t is 32 bits :). When you do a read like this
> on a file bucket, the whole bucket is not read into memory. The read
> function splits the bucket and changes the current bucket to refer to
> the data that was read.
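For concreteness, here is a minimal sketch of that morphing behaviour, assuming only the public buckets API (the function name and brigade setup are illustrative, not taken from the patch under discussion):

#include "apr_buckets.h"

/* Read the first chunk of a file bucket at the head of a brigade.
 * apr_bucket_read() does not pull the whole file into memory: it
 * morphs e in place so that e covers only the bytes just read, and
 * inserts a new file bucket for the remainder directly after it. */
static apr_status_t read_first_chunk(apr_bucket_brigade *bb)
{
    apr_bucket *e = APR_BRIGADE_FIRST(bb);
    const char *data;
    apr_size_t len;
    apr_status_t rv;

    /* Before: e is a single file bucket covering, say, 4.7GB. */
    rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* After: e refers only to the len bytes just read, and
     * APR_BUCKET_NEXT(e) is a new file bucket for the rest of the
     * file - these are the buckets that pile up if the brigade is
     * never drained. */
    return APR_SUCCESS;
}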

32 bits addresses 4GB. A large number of webservers don't have that much memory, hence the problem.
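A tiny, self-contained illustration of that limit (the file length here is made up, and the truncation only happens on a 32-bit build, where apr_size_t is 32 bits wide):

#include "apr.h"
#include <stdio.h>

int main(void)
{
    /* ~4.7GB: representable in apr_off_t (64-bit) but not in a
     * 32-bit apr_size_t. */
    apr_off_t file_len = (apr_off_t)4700 * 1024 * 1024;

    /* On a 32-bit build this silently truncates, which is why a
     * single whole-file read length is off the table before the
     * machine's RAM even enters the picture. */
    apr_size_t read_len = (apr_size_t)file_len;

    printf("file: %" APR_OFF_T_FMT " bytes, as size_t: %" APR_SIZE_T_FMT "\n",
           file_len, read_len);
    return 0;
}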

> The problem is that those new buckets keep accumulating in the
> brigade! See my patch again.

Where?

We start with one 4.7GB bucket in a brigade.

The bucket is split into 16MB plus the rest of the bucket, then the brigade is split across the split bucket. We now have two brigades: one containing a 16MB bucket, the other containing a 4.68GB bucket. The resulting left-hand brigade, consisting of one 16MB file bucket, is written to the cache, then written to the network, then deleted, so we're back to one brigade containing one 4.68GB bucket.

Rinse, repeat.
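A minimal sketch of that loop, using the real APR calls (apr_bucket_split, apr_brigade_split); consume() is a hypothetical stand-in for the cache write and network write passes, and none of this is the actual cache code:

#include "apr_buckets.h"

#define CHUNK_SIZE (16 * 1024 * 1024) /* the 16MB slice from the example */

/* Hypothetical stand-in for the cache and network write passes: read
 * (and discard) every bucket so the file data actually flows. */
static apr_status_t consume(apr_bucket_brigade *bb)
{
    apr_bucket *e;
    for (e = APR_BRIGADE_FIRST(bb);
         e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BUCKET_NEXT(e)) {
        const char *data;
        apr_size_t len;
        apr_status_t rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
        if (rv != APR_SUCCESS) {
            return rv;
        }
    }
    return APR_SUCCESS;
}

static apr_status_t stream_in_chunks(apr_bucket_brigade *bb)
{
    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *e = APR_BRIGADE_FIRST(bb);
        apr_bucket_brigade *rest;
        apr_status_t rv;

        /* Split the bucket: e becomes the first 16MB, and a new
         * bucket covering the rest of the file follows it. */
        if (e->length > CHUNK_SIZE) {
            rv = apr_bucket_split(e, CHUNK_SIZE);
            if (rv != APR_SUCCESS) {
                return rv;
            }
        }

        /* Split the brigade across the split bucket: bb keeps the
         * 16MB bucket, rest holds the remainder (the 4.68GB tail). */
        rest = apr_brigade_split(bb, APR_BUCKET_NEXT(e));

        /* Write to the cache, write to the network, delete: after
         * this, the 16MB chunk is gone and only the tail remains. */
        rv = consume(bb);
        apr_brigade_cleanup(bb);
        if (rv != APR_SUCCESS) {
            apr_brigade_destroy(rest);
            return rv;
        }

        /* Rinse, repeat with the remainder. */
        APR_BRIGADE_CONCAT(bb, rest);
        apr_brigade_destroy(rest);
    }
    return APR_SUCCESS;
}

At no point does more than one 16MB chunk sit in memory; the tail of the file stays a single file bucket until its turn comes.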

The only place buckets can accumulate is in ap_core_output_filter(), but that's fine: 293 buckets all pointing at the same file backend is manageable.

Regards,
Graham
