On 02/12/2015 01:35 μμ, Stefan Johansson wrote:
> Hello,
>
> the usage is based on session rate (i.e. the percentages I listed are
> the approximate session rates per haproxy process). The CPU% of the
> respective core mirrors this as well (nothing else is running on those
> cores, basically).
>
You need to watch where this CPU time is spent. It is very common to see
haproxy use 1/3 of the CPU while the remaining 2/3 goes to system-level
work; that is why I asked about user-level CPU stats for the haproxy
processes. Furthermore, CPU cycles for software interrupts (the "si" field
in htop) can easily be spread unevenly across CPUs; you can see this
clearly with htop.

> I do realize now that your example is different from my configuration,
> however. I got this from examples on the web; is the following wrong?
> Do I explicitly need one frontend configuration per process? I take your
> last comment about the binding and ports as a yes on that.
>

No, it is not wrong, and it fundamentally does the same thing. But having
separate bindings (bind lines) per CPU adds some capacity, because the
backlog setting applies to each socket individually, so you can queue more
pending TCP connections. I haven't fully tested the difference, but Willy
has mentioned it here a few times, if I am not mistaken.

> frontend frontend-HTTP
>     bind X.X.X.X:80
>     bind-process 1 2 3
>     mode http
>     option forwardfor
>     option httpclose
>     default_backend webfarm
>
> Thank you.
>
> -----Original Message-----
> From: Pavlos Parissis [mailto:[email protected]]
> Sent: Wednesday, December 2, 2015 10:58 AM
> To: [email protected]
> Subject: Re: Multiproc balance
>
> On 30/11/2015 06:03 μμ, Stefan Johansson wrote:
>> Hello,
>>
>> I've started to switch to a multiproc setup for a high-traffic site,
>> and I was pondering a potentially stupid question: what is actually
>> balancing the balancers, so to speak? Is it Linux itself that balances
>> the number of connections between the instances?
>>
>> I'm running on a vSphere/ESXi machine with 5 vCores, where I use core
>> 0 for interrupts, 1-3 for HTTP and 4 for HTTPS.
>>
>> Since it's a VM, NIC queueing and IRQ coalescing seem to be out of
>> the question, so I'm just leaving core 0 for interrupts and it
>> seems to work fine.
>> I just bind cores 1 through 4 to the haproxy
>> processes and leave 0 out.
>>
>> However, the three haproxy processes serving HTTP requests are taking
>> 10%, 30% and 60% of the load respectively. It's always the same cores
>> taking the same amount of load; it never changes. It's somehow
>> "decided" that one process takes 10%, another 30% and the last 60%.
>>
>
> Are these numbers CPU user-level usage for the haproxy processes?
>
>> What decides this "balancing" between the haproxy processes? Can it be
>> the VM setup? I've never run a multiproc setup with haproxy on a
>> physical machine, so I don't have any reference for such a setup.
>>
>
> The kernel will balance traffic, assuming you have something like the
> following:
>
> frontend foobar
>     bind 10.1.1.5:80 process 1
>     bind 10.1.1.5:80 process 2
>     bind 10.1.1.5:80 process 3
>     bind 10.1.1.5:80 process 4
>
> This implies that there are multiple bindings, and the kernel
> load-balances traffic across them because HAProxy uses the SO_REUSEPORT
> socket option. The kernel hashes the TCP 4-tuple (source/destination IPs
> and ports) to divide traffic across all the sockets.
>
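The SO_REUSEPORT behaviour described above is easy to reproduce outside of
HAProxy. Here is a minimal Python sketch (not HAProxy's actual code; the
port number 18080 and the loopback address are arbitrary choices) showing
that, on Linux, two sockets can bind the same ip:port once the option is
set, each with its own listen backlog; the kernel then hashes incoming
connections' 4-tuples to spread them across the sockets:

```python
import socket

def make_listener(port: int) -> socket.socket:
    """One listening socket, as each haproxy bind line creates."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_REUSEPORT lets several sockets bind the same ip:port; the kernel
    # then hashes each connection's 4-tuple (ips + ports) to pick one.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(128)  # each socket keeps its own, independent backlog queue
    return s

# Two listeners on the same port, like two haproxy processes would hold.
# Without SO_REUSEPORT the second bind() would fail with EADDRINUSE.
a = make_listener(18080)
b = make_listener(18080)
print("both binds succeeded")
a.close()
b.close()
```

This is also why separate bind lines add capacity: each socket gets its
own backlog, so more pending connections can be queued in total.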