Good Morning! Recently I got caught in a battle between our CDN provider and our application team over some HTTP 400s coming from somewhere. At first I never suspected haproxy to be at fault, due to the way I was grokking our logs. The end result is that I discovered haproxy doesn't log the GET request, but only logs a `BADREQ` with a termination state of `PR--`. Based on my reading of the documentation, haproxy isn't going to log a 414 in this case, but a 400 instead. I wonder if this is due to something being truncated, forcing haproxy to see a malformed request.
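To make sure I'm describing my theory clearly, here's a toy sketch (my own illustration, not haproxy's actual parsing code; the function name and constants are hypothetical) of why a URI that overflows a fixed buffer would surface as a generic bad request rather than a 414: the parser never finds the end of the request line inside the buffer, which looks the same as a malformed request.

```python
# Hedged illustration only -- NOT haproxy source. A parser with a fixed buffer
# that gives up when it cannot find the end of the request line within bufsize
# bytes cannot tell "URI too long" apart from "malformed request".
BUFSIZE = 8192  # the default buffer size I found in the docs


def classify(request_bytes: bytes, bufsize: int = BUFSIZE) -> int:
    window = request_bytes[:bufsize]
    if b"\r\n" not in window:
        # Request line truncated by the buffer: indistinguishable
        # from a malformed request, so a 400, not a 414.
        return 400
    return 200


ok = b"GET /short HTTP/1.1\r\nHost: example\r\n\r\n"
huge = b"GET /" + b"a" * 9000 + b" HTTP/1.1\r\n\r\n"
```

With `classify(ok)` the request line fits and parsing proceeds, while `classify(huge)` fails the same way regardless of whether the request was genuinely malformed or just too long.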
Digging into the documentation, I glossed over the fact that the default buffer size isn't 16k but a lower 8192. My fault for reading too quickly. Reading further, though, the following statement under the config item `tune.maxrewrite` is where I finally have a question: "...It is generally wise to set it to about 1024. It is automatically readjusted to half of bufsize if it is larger than that. This means you don't have to worry about it when changing bufsize." I do not see anything in the source code that actually supports that statement.

We plan on experimenting with this setting, starting at 1024. Perhaps the documentation is simply unclear to me, but if I need `tune.maxrewrite` to be larger, the documentation indicates it will be clamped back to half of `bufsize`, which is not where I want to be; it would force me to tune `bufsize` instead of `maxrewrite`.

Secondly, what would occur if we blow past the maxrewrite reserve? I did some quick and admittedly absurd testing, and I was not able to force haproxy to throw an HTTP 400; the request went to a backend server just fine. But I worry that the headers may be getting silently truncated.

Thank you much.

-- John
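For the sake of discussion, here is a small sketch of the adjustment as I read the documentation (my own code and function names, not haproxy's; I'm assuming the clamp is exactly "readjust to bufsize/2 when larger"). If this reading is right, it also shows why raising `maxrewrite` shrinks the space left for the incoming request:

```python
# Hedged sketch of the documented behavior -- NOT haproxy source.
# Assumption: if tune.maxrewrite exceeds half of tune.bufsize, it is
# clamped down to bufsize/2; otherwise it is used as configured.
DEFAULT_BUFSIZE = 8192  # the default I found in the docs


def effective_maxrewrite(maxrewrite: int, bufsize: int = DEFAULT_BUFSIZE) -> int:
    if maxrewrite > bufsize // 2:
        return bufsize // 2
    return maxrewrite


def request_headroom(maxrewrite: int, bufsize: int = DEFAULT_BUFSIZE) -> int:
    # Space left in the buffer for the incoming request once the
    # rewrite reserve is carved out.
    return bufsize - effective_maxrewrite(maxrewrite, bufsize)
```

Under this reading, setting `maxrewrite` to 1024 with the default `bufsize` leaves 7168 bytes for the request, and anything above 4096 gets pulled back to 4096, which is exactly the behavior I'd rather avoid.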

