Hi Lukas,
On Tue, Jul 30, 2013 at 09:39:49AM +0200, Lukas Tribus wrote:
> Hi Willy,
>
>
> > This shortcoming was addressed in 1.5-dev with the attached patch.
>
> I understand this is addressed by increasing the limit from 256MB to 2GB.
Yes.
> However, I'm pretty certain that some users have files above 2GB (like big
> ISO files, for example).
Most servers and intermediaries do not support 2GB chunks (at least last
time I checked). Chunked encoding was made for content whose length is not
known before sending, which suggests the sender is filling a buffer and
flushing it. I was already surprised that some applications might want
to buffer up to 256MB before starting to send (especially HTML, which is
slow to produce); 2GB is even less likely.
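
To illustrate that point, here is a rough sketch of the sender side
(illustrative only, not haproxy code; it writes to stdout where a real
sender would write to a socket). Each chunk is the payload length in hex,
then CRLF, the payload, CRLF, and a zero-sized chunk ends the body, so the
sender can flush each buffer as soon as it is full without ever knowing the
total length:

  #include <stdio.h>
  #include <string.h>

  /* Illustrative only: emit one HTTP/1.1 chunk to stdout;
   * a real sender would write to a socket instead. */
  static void emit_chunk(const char *data, size_t len)
  {
      printf("%zx\r\n", len);           /* chunk-size line, in hexadecimal */
      fwrite(data, 1, len, stdout);     /* chunk payload */
      printf("\r\n");                   /* CRLF closing the chunk */
  }

  int main(void)
  {
      const char *part1 = "Hello, ";
      const char *part2 = "world!";

      emit_chunk(part1, strlen(part1)); /* flush as soon as data is ready */
      emit_chunk(part2, strlen(part2)); /* total length never computed */
      printf("0\r\n\r\n");              /* last-chunk marker ends the body */
      return 0;
  }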
> If the statement in the patch is still correct even for 1.5 ("increasing the
> limit past 2 GB causes trouble due to some 32-bit subtracts in various
> computations becoming negative (eg: buffer_max_len)"), perhaps we can document
> this somewhere? Seems frustrating to let the users discover this on their own.
Yes I agree with you.
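
To make the failure mode concrete, here is a rough sketch of what happens
when an announced chunk size above 2GB lands in a 32-bit signed variable
(hypothetical values; this is not the actual buffer_max_len code):

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  int main(void)
  {
      /* Hypothetical: a chunk-size line announcing slightly more than 2GB. */
      uint64_t parsed = 0x80000400ULL;           /* 2GB + 1KB */

      /* Stored in a 32-bit signed variable, the top bit makes it negative
       * (two's-complement wrap on all mainstream platforms). */
      int32_t chunk_size = (int32_t)parsed;

      printf("announced=%" PRIu64 " stored=%" PRId32 "\n", parsed, chunk_size);

      /* Any subtract or comparison on the remaining bytes now misbehaves: */
      if (chunk_size <= 0)
          printf("chunk looks empty/invalid, forwarding goes wrong\n");
      return 0;
  }

That negative value then propagates into the length computations the patch
message mentions.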
> I'm wondering where the right place would be to document this limitation
> (since chunked transfer-encoding has no config-keyword).
There is a reminder about the HTTP protocol in the config manual, which also
lists some of haproxy's limitations regarding the protocol. That is probably
where we should document this one.
Best regards,
Willy