To recreate it, I would use curl and set a huge cookie. I also came across some
other people who had similar issues. I didn't feel like recompiling Pound, so
what I did instead was terminate the SSL at the load balancer. That solved the
issue for me, but it may not be an option for everyone.
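For reference, here is a sketch of one way to trigger the error with curl, assuming Pound's default 4096-byte header buffer; the host name is a placeholder, not a real endpoint:

```shell
# Build a cookie value larger than Pound's default 4096-byte MAXBUF
big=$(head -c 8192 /dev/zero | tr '\0' 'a')
echo "cookie length: ${#big}"

# Sending it through a Pound listener should produce the
# "e500 can't read header" log line (your-pound-host is a placeholder):
# curl -v -H "Cookie: session=${big}" https://your-pound-host/
```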



DAN FINN

Linux System Administrator



Office: 801-746-7580 ext. 5381

Mobile: 801-609-4705

[email protected]



Backcountry.com<http://www.backcountry.com/>

Competitive Cyclist<http://www.competitivecyclist.com/>

RealCyclist.com<http://www.realcyclist.com/>

Dogfunk.com<http://www.dogfunk.com/>

SteepandCheap.com<http://www.steepandcheap.com/>

Chainlove.com<http://www.chainlove.com/>

WhiskeyMilitia.com<http://www.whiskeymilitia.com/>

From: Jeff Jordan <[email protected]>
Reply-To: [email protected]
Date: Thursday, November 14, 2013 at 4:22 PM
To: [email protected]
Subject: Re: [Pound Mailing List] Getting 500 errors with large headers

I am seeing this error a lot as well but have been having problems reproducing 
it.  Would you describe how you are able to reproduce the error?

From what I have read, increasing the MAXBUF during compile might be what you
are looking for.

--with-maxbuf=nnn       Value of the MAXBUF parameter (default: 4096)
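Roughly, a rebuild from source would look like the following; the tarball name and version here are illustrative, and 8192 is just one plausible value:

```shell
# Rebuild Pound with a larger header buffer (8 KB instead of the 4 KB default).
# Adjust the tarball name/version to whatever release you actually run.
tar xzf Pound-2.5.tgz
cd Pound-2.5
./configure --with-maxbuf=8192
make
sudo make install
```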

Thanks,
Jeff

On Sat, Nov 9, 2013 at 8:51 PM, Dan Finn <[email protected]> wrote:
It looks like we are having issues with Pound and headers of a certain size.
I'm seeing this error in the log:

Nov 10 04:21:40 ip-10-203-87-209 pound: (7fe015975700) e500 can't read header


I can duplicate the issue with curl, and if I remove a bunch of the cookie data
from the header then it goes away; unfortunately we need all of that data. Is
it possible to raise the request header size limit?

We are running version 2.5-1ubuntu1.

Thanks,
Dan
