Hi,

> There are some bugs with splice in 1.5-dev19... they have been fixed.
> 
> See this thread for the patches:
> http://comments.gmane.org/gmane.comp.web.haproxy/12774
> 
> (Or google for: "Oh and by the way, the bug was present since 1.5-dev12." )
> 
> 
Thank you for the hint. I will try this on staging.



> 
> On Mon, Dec 9, 2013 at 2:56 PM, Lukas Tribus <[email protected]> wrote:
> Hi Annika,
> 
> 
> > we have a few issues regarding load on our HAProxy 1.5-dev19 cluster.
> > We run constantly at a load of 12 - 15; most of it is system load.
> > [...]
> > On our old cluster I do not see any of the "Resource temporarily
> > unavailable" errors at splicing operations.
> 
> We can't tell if that kind of load is normal for your box; please
> don't make us guess from the context.
> 
> Also please tell:
> - hardware (cpu/ram/nic at least) on old/new cluster
> - software (kernel/OS) on old/new cluster
> - HAProxy configuration on old/new cluster
> - what is the actual number of concurrent sessions?
> 
> 
> 
> > Has something changed in kernel 3.11.5?
> 
> Compared to what kernel release?
> 
> 
> 
> > Are there any things which can be tried out at staging cluster
> > to break down this problem?
> 
> It seems you have a load problem; what happens if you disable
> splicing?
> 
> Are you using splice-auto or forcing splice by configuring
> splice-request / splice-response?
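(For anyone following along: a minimal sketch of how those directives look in a backend section. The section and option placement are illustrative, not taken from Annika's actual config.)

```
# Hypothetical backend; names are illustrative.
backend app_servers
    # Let HAProxy decide per connection whether splice() is worthwhile:
    option splice-auto

    # ...or force splicing explicitly in one or both directions:
    # option splice-request     # client-to-server data
    # option splice-response    # server-to-client data

    # To rule splicing out while testing, negate the options:
    # no option splice-auto
    # no option splice-request
    # no option splice-response
```

Toggling these on the staging cluster is a cheap way to see whether the load difference tracks the splicing path or lies elsewhere.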
> 
> 
> It could be a kernel thing, a NIC limitation, or simply higher load
> due to more concurrent connections ...
> 
> 
> 
> 
> Regards,
> 
> Lukas
> 
> 
> 
> -- 
> Mark Janssen  --  maniac(at)maniac.nl
> Unix / Linux Open-Source and Internet Consultant
> Maniac.nl Sig-IO.nl Vps.Stoned-IT.com
> 

