Apologies for the necro-post, but I wanted to follow up on this for the record.

It turns out that the bottleneck was not ATS at all. By itself, ATS only
seems to have trouble once a URL runs past 65,536 characters. The issue I
was actually running into was another part of our stack throwing a fit.

Many thanks for your help. James, your suggestion to isolate this with a
script that exercises the bug was enormously helpful. Thanks for helping me
see the error in my assumptions.
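
For anyone who finds this thread later, here is a rough sketch of the kind
of test script that helped isolate the behavior -- the listener address,
port, and padded path below are placeholders rather than our real setup,
and the sizes are just the ones that seemed worth sweeping:

    # Send requests with increasingly long URLs through the proxy and
    # print the status code ATS returns for each size.
    import http.client

    PROXY_HOST = "127.0.0.1"  # placeholder: wherever the ATS listener runs
    PROXY_PORT = 8080         # placeholder port

    for size in (1024, 4096, 8192, 16384, 65536, 131072):
        # Pad the query string so the request line is roughly `size` bytes.
        path = "/echo?pad=" + "a" * size
        conn = http.client.HTTPConnection(PROXY_HOST, PROXY_PORT)
        conn.request("GET", path)
        resp = conn.getresponse()
        print("%7d bytes -> HTTP %d %s" % (size, resp.status, resp.reason))
        resp.read()
        conn.close()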

All best,


On Thu, Sep 22, 2016 at 1:31 PM, James Peach <jpe...@apache.org> wrote:

> > On Sep 15, 2016, at 5:57 PM, Adam McCullough <amccullo...@imvu.com>
> wrote:
> >
> > Hello!
> >
> > I'm running into a problem with ATS. I'm using it as a proxy to an
> internal endpoint, which regularly takes very long URLs -- 3KiB URLs are
> normal, and I've seen them as long as 11KiB.
> >
> > ATS doesn't seem to handle this terribly well. I've done some
> experimenting and found that ATS tends to respond with an HTTP 400 Invalid
> Client Request when given a request where the URL + Header/Cookie data goes
> above about 8 KiB.
> >
> > This does not overly surprise me, since a URL that's longer than 8K is,
> in most circumstances, absurd. However, it's a necessary feature for the
> app I'm tasked with deploying.
> >
> > So, I have two questions. 1 - is my suspicion correct that I'm
> overflowing some 8K-sized buffer somewhere, and 2 - if so, is there an easy
> way to grow this buffer to allow for very long requests?
> >
> > I've found both proxy.config.http.request_header_max_size and
> proxy.config.http.response_header_max_size and set them to 524288 (512 *
> 1024) in an attempt to troubleshoot, but this did not seem to help.
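
(For reference, those two settings live in records.config; the lines for
the 512 KiB experiment above would look roughly like the sketch below, with
a reload via traffic_ctl assumed afterwards -- exact syntax may vary by ATS
version.)

    CONFIG proxy.config.http.request_header_max_size INT 524288
    CONFIG proxy.config.http.response_header_max_size INT 524288
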
> Adam, can you please file a bug at https://issues.apache.org/jira/browse/TS
> and include a script (maybe something like the one Miles used) to
> demonstrate the problem you are seeing?
> thanks!
