Hello, the usage is based on session rate (i.e. the percentages I listed are the approximate session rates per HAProxy process). The CPU% of the respective cores mirrors this as well (essentially nothing else runs on those cores).
I do realize now that your example differs from my configuration, however. I took it from examples on the web; is the following wrong? Do I explicitly need one bind configuration per process? I take your last comment about the binding and ports as a yes on that?

frontend frontend-HTTP
    bind X.X.X.X:80
    bind-process 1 2 3
    mode http
    option forwardfor
    option httpclose
    default_backend webfarm

Thank you.

-----Original Message-----
From: Pavlos Parissis [mailto:[email protected]]
Sent: Wednesday, December 2, 2015 10:58 AM
To: [email protected]
Subject: Re: Multiproc balance

On 30/11/2015 06:03 μμ, Stefan Johansson wrote:
> Hello,
>
> I've started to switch to a multiproc setup for a high-traffic site
> and I was pondering a potentially stupid question: what is actually
> balancing the balancers, so to speak? Is it Linux itself that balances
> the number of connections between the instances?
>
> I'm running in a vSphere/ESXi machine with 5 vCores, where I use core
> 0 for interrupts, 1-3 for HTTP and 4 for HTTPS.
>
> Since it's a VM, NIC queueing and IRQ coalescing seem to be out of
> the question, so I'm just leaving core 0 for interrupts and it
> seems to work fine. I just bind cores 1 through 4 to the haproxy
> processes and leave 0 out.
>
> However, the three HAProxy processes serving HTTP requests are
> taking 10%, 30% and 60% respectively of the load. It's always the
> same cores taking the same amount of load; it never changes. It's
> somehow "decided" that one process takes 10%, another 30% and the
> last 60%.

Are these numbers CPU user-level usage for the haproxy processes?

> What decides this "balancing" between the haproxy processes? Can it
> be the VM setup? I've never run a multiproc setup with HAProxy on a
> physical machine, so I don't have any reference for such a setup.
The kernel will balance traffic, assuming you have something like the following:

frontend foobar
    bind 10.1.1.5:80 process 1
    bind 10.1.1.5:80 process 2
    bind 10.1.1.5:80 process 3
    bind 10.1.1.5:80 process 4

This implies that there are multiple bindings, and the kernel load-balances traffic across them because HAProxy uses the SO_REUSEPORT socket option. The kernel hashes the TCP 4-tuple (source/destination IPs and ports) to distribute traffic over all the sockets.
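The SO_REUSEPORT mechanism described above can be seen outside HAProxy entirely. A minimal Python sketch (assuming a Linux kernel with SO_REUSEPORT support; the helper name is mine, not anything from HAProxy): two sockets bind the same address and port, and the kernel then spreads incoming connections across them by hashing the connection 4-tuple.

```python
import socket

def bind_reuseport(host, port):
    """Create a listening TCP socket with SO_REUSEPORT set before bind()."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind((host, port))
    s.listen()
    return s

# First socket: port 0 lets the kernel pick a free port.
s1 = bind_reuseport("127.0.0.1", 0)
port = s1.getsockname()[1]

# Second bind to the *same* port succeeds because both sockets set
# SO_REUSEPORT; without it, the second bind() raises "Address already in use".
s2 = bind_reuseport("127.0.0.1", port)
print(s2.getsockname()[1] == port)

s1.close()
s2.close()
```

Each HAProxy process doing its own bind on the same frontend port is the multi-socket equivalent of this, which is why no explicit "balancer of balancers" is needed.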

