Re[2]: haproxy-1.8 in Fedora

2018-01-05 Thread Aleksandar Lazic

Hi.

-- Original Message --
From: "Ryan O'Hara" 
To: "Aleksandar Lazic" 
Cc: haproxy@formilux.org
Sent: 05.01.2018 23:35:10
Subject: Re: haproxy-1.8 in Fedora




On Fri, Jan 5, 2018 at 3:12 PM, Aleksandar Lazic  
wrote:

Hi Ryan.

-- Original Message --
From: "Ryan O'Hara" 
To: haproxy@formilux.org
Sent: 05.01.2018 17:19:15
Subject: haproxy-1.8 in Fedora

Just wanted to inform Fedora users that haproxy-1.8.3 is now in the
master branch and built for Rawhide. I will not be updating haproxy
to 1.8 in current stable releases of Fedora since I received some
complaints about doing major updates (e.g. 1.6 to 1.7) in previous
stable releases. That said, the source rpm will build on Fedora 27.
If there is enough interest, I can build haproxy-1.8 in copr and
provide a repository for current stable Fedora releases.

I don't know what 'copr' is, but how about adding haproxy 1.8 to
the software collections, similar to nginx 1.8 and Apache httpd 2.4?

The customer would then be able to use haproxy 1.8 with the software
collection subscription.


Which software collection are you referring to? Fedora? CentOS? RHEL? 
Either way, it is something that we have discussed and are planning to 
do for the next release of RHSCL, but we've not had any requests for 
other collections.

Uff, so many options 8-O

I only know and use the RHSCL on customer setups. That one is for RHEL
subscriptions, AFAIK.
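
For reference, an RHSCL collection is typically consumed with the scl tool,
roughly along these lines (the collection name here is purely hypothetical):

yum install rh-haproxy18
scl enable rh-haproxy18 'haproxy -v'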


What's the naming for the others?

You can learn more about copr here [1] and here [2]. Basically I can 
take my package and build for specific releases, create a repo for the 
built package(s), etc. Useful for builds that aren't included in a 
certain release.


Ryan

[1] https://copr.fedorainfracloud.org/
[2] https://developer.fedoraproject.org/deployment/copr/about.html

Best Regards
Aleks

Re: haproxy-1.8 in Fedora

2018-01-05 Thread Ryan O'Hara
On Fri, Jan 5, 2018 at 3:12 PM, Aleksandar Lazic  wrote:

> Hi Ryan.
>
> -- Original Message --
> From: "Ryan O'Hara" 
> To: haproxy@formilux.org
> Sent: 05.01.2018 17:19:15
> Subject: haproxy-1.8 in Fedora
>
Just wanted to inform Fedora users that haproxy-1.8.3 is now in the master
>> branch and built for Rawhide. I will not be updating haproxy to 1.8 in
>> current stable releases of Fedora since I received some complaints about
>> doing major updates (e.g. 1.6 to 1.7) in previous stable releases. That
>> said, the source rpm will build on Fedora 27. If there is enough interest,
>> I can build haproxy-1.8 in copr and provide a repository for current stable
>> Fedora releases.
>>
> I don't know what 'copr' is, but how about adding haproxy 1.8 to the
> software collections, similar to nginx 1.8 and Apache httpd 2.4?
>
> The customer would then be able to use haproxy 1.8 with the software
> collection subscription.


Which software collection are you referring to? Fedora? CentOS? RHEL?
Either way, it is something that we have discussed and are planning to do
for the next release of RHSCL, but we've not had any requests for other
collections.

You can learn more about copr here [1] and here [2]. Basically I can take
my package and build for specific releases, create a repo for the built
package(s), etc. Useful for builds that aren't included in a certain
release.

Ryan

[1] https://copr.fedorainfracloud.org/
[2] https://developer.fedoraproject.org/deployment/copr/about.html


>
>

>> Ryan
>>
> Best regards
> aleks
>
>


Re: haproxy-1.8 in Fedora

2018-01-05 Thread Andrew Smalley
Hi Ryan

Copr is an easy-to-use automatic build system providing a package
repository as its output.

Start with making your own repository in these three steps:

choose a system and architecture you want to build for
provide Copr with src.rpm packages available online
let Copr do all the work and wait for your new repo

NOTE: Copr is not yet officially supported by Fedora Infrastructure.

https://copr.fedorainfracloud.org/

It has useful user-contributed builds. I've found it useful in the past
for packages compiled with specific dependencies.
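
For what it's worth, consuming such a repository is usually a one-liner with
the copr dnf plugin (the repository name below is purely illustrative):

# requires dnf-plugins-core
dnf copr enable someuser/haproxy-1.8
dnf install haproxy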


Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog


On 5 January 2018 at 21:12, Aleksandar Lazic  wrote:
> Hi Ryan.
>
> -- Original Message --
> From: "Ryan O'Hara" 
> To: haproxy@formilux.org
> Sent: 05.01.2018 17:19:15
> Subject: haproxy-1.8 in Fedora
>
>> Just wanted to inform Fedora users that haproxy-1.8.3 is now in the master
>> branch and built for Rawhide. I will not be updating haproxy to 1.8 in
>> current stable releases of Fedora since I received some complaints about
>> doing major updates (e.g. 1.6 to 1.7) in previous stable releases. That
>> said, the source rpm will build on Fedora 27. If there is enough interest, I
>> can build haproxy-1.8 in copr and provide a repository for current stable
>> Fedora releases.
>
> I don't know what 'copr' is, but how about adding haproxy 1.8 to the
> software collections, similar to nginx 1.8 and Apache httpd 2.4?
>
> The customer would then be able to use haproxy 1.8 with the software
> collection subscription.
>
>>
>> Ryan
>
> Best regards
> aleks
>
>



Re: How can I use a proxy server for my backend servers.

2018-01-05 Thread Aleksandar Lazic

Hi.

-- Original Message --
From: "Kuldip Madnani" 
To: haproxy@formilux.org
Sent: 05.01.2018 17:32:25
Subject: How can I use a proxy server for my backend servers.


Hi,

I would like to use an HTTP proxy to access the backends that are
defined in my haproxy configuration. Is there a way to define
http_proxy in the HAProxy configuration?
Well, there was a similar question not long ago on this list and on
Stack Overflow.


I added the following answer on Stack Overflow.

https://stackoverflow.com/a/47759772/6778826

The drawback, from my point of view, is that DeleGate has not been
developed since ~2014.
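
There is no http_proxy directive as such, but one rough sketch of a
workaround (not from the official docs; the proxy address below is just a
placeholder) is to point a backend at the upstream proxy and rewrite
requests to absolute-URI form, which only works for plain HTTP:

backend app_via_upstream_proxy
    # rewrite to an absolute URI so the next hop (a regular HTTP proxy)
    # knows where to forward the request
    http-request set-uri http://%[req.hdr(host)]%[url]
    server upstream_proxy 192.0.2.10:3128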




Thanks,
Kuldip


Hth
Aleks




Re: haproxy-1.8 in Fedora

2018-01-05 Thread Aleksandar Lazic

Hi Ryan.

-- Original Message --
From: "Ryan O'Hara" 
To: haproxy@formilux.org
Sent: 05.01.2018 17:19:15
Subject: haproxy-1.8 in Fedora

Just wanted to inform Fedora users that haproxy-1.8.3 is now in the
master branch and built for Rawhide. I will not be updating haproxy to
1.8 in current stable releases of Fedora since I received some
complaints about doing major updates (e.g. 1.6 to 1.7) in previous
stable releases. That said, the source rpm will build on Fedora 27. If
there is enough interest, I can build haproxy-1.8 in copr and provide a
repository for current stable Fedora releases.
I don't know what 'copr' is, but how about adding haproxy 1.8 to
the software collections, similar to nginx 1.8 and Apache httpd 2.4?


The customer would then be able to use haproxy 1.8 with the software
collection subscription.




Ryan

Best regards
aleks




Re[2]: haproxy without balancing

2018-01-05 Thread Aleksandar Lazic

Hi Angelo.

-- Original Message --
From: "Angelo Hongens" 
To: haproxy@formilux.org
Sent: 05.01.2018 11:49:55
Subject: Re: haproxy without balancing


On 05-01-2018 11:28, Johan Hendriks wrote:

Secondly, we could use a single IP and use ACLs to route the traffic to
the right backend server.
The problem with the second option is that we have around 2000
different subdomains, and this number is still growing. So my haproxy
config would then consist of over 4000 lines of ACL rules, and I do not
know if haproxy can deal with that or if it will slow down requests too
much.

Maybe there are other options I did not think about?
For me the second config is the best option because of the single IP,
but I do not know if haproxy can handle 2000 ACL rules.


I would choose the second option. I don't think 2000 ACLs are a
problem; I've been running with more than that without any issues.


A single point of entry is easiest.

We run a lot of balancers with varnish+hitch+haproxy+corosync for
highly available load balancing. Perhaps high availability is not a
requirement, but it's also nice to be able to do maintenance during the
day and have your standby node take over.
Just out of curiosity: why hitch, and not haproxy alone, for SSL
termination?



--

Kind regards,
Angelo Höngens


Regards
Aleks




How can I use a proxy server for my backend servers.

2018-01-05 Thread Kuldip Madnani
Hi,

I would like to use an HTTP proxy to access the backends that are defined in
my haproxy configuration. Is there a way to define http_proxy in the
HAProxy configuration?

Thanks,
Kuldip


haproxy-1.8 in Fedora

2018-01-05 Thread Ryan O'Hara
Just wanted to inform Fedora users that haproxy-1.8.3 is now in the master
branch and built for Rawhide. I will not be updating haproxy to 1.8 in
current stable releases of Fedora since I received some complaints about
doing major updates (e.g. 1.6 to 1.7) in previous stable releases. That
said, the source rpm will build on Fedora 27. If there is enough interest,
I can build haproxy-1.8 in copr and provide a repository for current stable
Fedora releases.

Ryan


Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Pierre Cheynier
On 05/01/2018 16:44, William Lallemand wrote:
> I'm able to reproduce, looks like it happens with the nbthread parameter only,
Exactly, I observe the same.
At least I have a workaround for now to perform the upgrade.
> I'll try to find the problem in the code.
>
Thanks !

Pierre




Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread William Lallemand
On Fri, Jan 05, 2018 at 03:52:22PM +0100, Pierre Cheynier wrote:
> OK, so now that I've applied all of Lukas' recommendations (I kept the -x added):
>
> * I don't see any ALERT logs anymore, only the WARNINGs.
> 

I'm still seeing a few of them in journalctl. Maybe you don't see those emitted
by the workers; there is still room for improvement there. I'm taking notes.

> Jan 05 14:47:12 hostname systemd[1]: Reloaded HAProxy Load Balancer.
> Jan 05 14:47:12 hostname haproxy[59888]: [WARNING] 004/144712 (59888) :
> Former worker 61331 exited with code 0
> Jan 05 14:47:25 hostname haproxy[59888]: [WARNING] 004/144712 (59888) :
> Reexecuting Master process
> Jan 05 14:47:26 hostname systemd[1]: Reloaded HAProxy Load Balancer.
> Jan 05 14:47:26 hostname haproxy[59888]: [WARNING] 004/144726 (59888) :
> Former worker 61355 exited with code 0
> 
> * I still observe the same issue (here doing an ab during a
> rolling upgrade of my test app, which consequently triggers N reloads on
> HAProxy as the app instances are created/destroyed).
> 
> $ ab -n10  http://test-app.tld/
> (..)
> Benchmarking test-app.tld (be patient)
> apr_socket_recv: Connection reset by peer (104)
> Total of 3031 requests completed
> 

I'm able to reproduce it; it looks like it happens with the nbthread parameter only.
I'll try to find the problem in the code.

-- 
William Lallemand



Re: haproxy without balancing

2018-01-05 Thread Johan Hendriks


On 05/01/2018 at 11:46, Jonathan Matthews wrote:
> On 5 January 2018 at 10:28, Johan Hendriks  wrote:
>> BTW if this is the wrong list please excuse me.
> This looks to me like it might be the right list :-)
>
>> We have an application running over multiple servers which all have
>> their own subdomain; there are about 12 of them.
>> We can live without load balancing, so there is no failover; each server
>> serves a couple of subdomains.
> What protocols are these servers serving?
>
> - HTTP
> - HTTPS
>   - if HTTPS, do you control the TLS certificates and their private keys?
> - Something else?
>   - if something else, what?
>
All protocols are HTTP and HTTPS
>
>> At this moment every server has its own ip, and so every subdomain has a
>> different DNS entry. What we want is a single point of entry and use
>> haproxy to route traffic to the right backend server.
> Are the DNS entries for every subdomain under your control?
> How painful would it be to change one of them?
> How painful would it be to change all of them?
If we go for the single IP, then a simple wildcard DNS entry would suffice.
>
>> Replacing a server is not easy at the moment. We have a lot of history
>> to deal with. We are working on leaving that behind, but until then we
>> need a solution.
>>
>> I looked at this and I think I have two options.
>> Create for each server in the backend an IP on the haproxy machine and
>> connect a frontend for that IP to the desired backend server.
>> This way we still have multiple IP addresses, but they can stay the same
>> if servers come and go.
>>
>> Secondly, we could use a single IP and use ACLs to route the traffic to
>> the right backend server.
>> The problem with the second option is that we have around 2000 different
>> subdomains, and this number is still growing. So my haproxy config will
>> then consist of over 4000 lines of ACL rules,
>> and I do not know if haproxy can deal with that or if it will slow down
>> requests too much.
> Haproxy will happily cope with that number of ACLs, but at first
> glance I don't think you need to do it that way.
>
> Assuming you're using HTTP/S, you would probably be able to use a map,
> as described in this blog post:
> https://www.haproxy.com/blog/web-application-name-to-backend-mapping-in-haproxy/
That looks like a good option indeed.
>
> Also, assuming you're using HTTP/S, if you can relatively easily
> change DNS for all the subdomains to a single IP then I would
> *definitely* do that.
>
> If you're using HTTPS, then SNI client support
> (https://en.wikipedia.org/wiki/Server_Name_Indication#Support) would
> be something worth checking, but as a datapoint I've not bothered
> supporting non-SNI clients for several years now.
>
> All the best,
> J
Thank you, Jonathan Matthews and Angelo Hongens, for your prompt replies.
I now know that ACLs won't be an issue, and then there is mapping.

Time to start testing.
Thanks again.

Regards,
Johan






Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Pierre Cheynier

>> Hi,
>>
>>> Your systemd configuration is not uptodate.
>>>
>>> Please:
>>> - make sure haproxy is compiled with USE_SYSTEMD=1
>>> - update the unit file: start haproxy with -Ws instead of -W (ExecStart)
>>> - update the unit file: use Type=notify instead of Type=forking
>> In fact that should work with this configuration too.
> OK, I have to admit that we started experiments on 1.8-dev2, at that
> time I had to do that to make it work.
> And true, we build the RPM and so didn't notice there was some updates
> after the 1.8.0 release for the systemd unit file provided in contrib/.
> Currently recompiling, bumping the release on CI / dev environment etc...
>>  
>>> We always ship an uptodate unit file in
>>> contrib/systemd/haproxy.service.in (just make sure you maintain the
>>> $OPTIONS variable, otherwise you are missing the -x call for the
>>> seamless reload).
>> You don't need the -x with -W or -Ws, it's added automaticaly by the master
>> during a reload. 
> Interesting. Is this new ? Because I noticed it was not the case at some
> point.
>>> Run "systemctl daemon-reload" after updating the unit file and
>>> completely stop the old service (don't reload after updating the unit
>>> file), to make sure you have a "clean" situation.
>>>
>>> I don't see how this systemd thing would affect the actual seamless
>>> reload (systemd shouldn't be a requirement), but lets fix it
>>> nonetheless before continuing the troubleshooting. Maybe the
>>> regression only affects non-systemd mode.
>> Shouldn't be a problem, but it's better to use -Ws with systemd.
>>
>> During a reload, if the -x fail, you should have this kind of errors:
>>
>> [WARNING] 004/135908 (12013) : Failed to connect to the old process socket 
>> '/tmp/sock4'
>> [ALERT] 004/135908 (12013) : Failed to get the sockets from the old process!
>>
>> Are you seeing anything like this?
> Yes, in > 1.8.0. If I rollback to 1.8.0 it's fine on this aspect.
>
> I'll give updates after applying Lukas recommendations.
>
> Pierre
>
OK, so now that I've applied all of Lukas' recommendations (I kept the -x added):

* I don't see any ALERT logs anymore, only the WARNINGs.

Jan 05 14:47:12 hostname systemd[1]: Reloaded HAProxy Load Balancer.
Jan 05 14:47:12 hostname haproxy[59888]: [WARNING] 004/144712 (59888) :
Former worker 61331 exited with code 0
Jan 05 14:47:25 hostname haproxy[59888]: [WARNING] 004/144712 (59888) :
Reexecuting Master process
Jan 05 14:47:26 hostname systemd[1]: Reloaded HAProxy Load Balancer.
Jan 05 14:47:26 hostname haproxy[59888]: [WARNING] 004/144726 (59888) :
Former worker 61355 exited with code 0

* I still observe the same issue (here doing an ab during a
rolling upgrade of my test app, which consequently triggers N reloads on
HAProxy as the app instances are created/destroyed).

$ ab -n10  http://test-app.tld/
(..)
Benchmarking test-app.tld (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 3031 requests completed
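
A simple way to reproduce this kind of test is to trigger repeated reloads in
one terminal while ab runs in another; roughly (service name and URL as used
above, request count arbitrary):

while true; do systemctl reload haproxy; sleep 1; done
ab -n 100000 http://test-app.tld/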

Pierre






Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Pierre Cheynier

> Hi,
>
>>> $ cat /usr/lib/systemd/system/haproxy.service
>>> [Unit]
>>> Description=HAProxy Load Balancer
>>> After=syslog.target network.target
>>>
>>> [Service]
>>> EnvironmentFile=/etc/sysconfig/haproxy
>>> ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
>>> ExecStart=/usr/sbin/haproxy -W -f $CONFIG -p $PIDFILE $OPTIONS
>>> ExecReload=/usr/sbin/haproxy -f $CONFIG -c -q
>>> ExecReload=/bin/kill -USR2 $MAINPID
>>> Type=forking
>>> KillMode=mixed
>>> Restart=always
>> Your systemd configuration is not up to date.
>>
>> Please:
>> - make sure haproxy is compiled with USE_SYSTEMD=1
>> - update the unit file: start haproxy with -Ws instead of -W (ExecStart)
>> - update the unit file: use Type=notify instead of Type=forking
> In fact that should work with this configuration too.
OK, I have to admit that we started experiments on 1.8-dev2; at that
time I had to do that to make it work.
And true, we build the RPM ourselves and so didn't notice there were some
updates after the 1.8.0 release to the systemd unit file provided in contrib/.
Currently recompiling, bumping the release on CI / dev environments, etc...
>  
>> We always ship an up-to-date unit file in
>> contrib/systemd/haproxy.service.in (just make sure you maintain the
>> $OPTIONS variable, otherwise you are missing the -x call for the
>> seamless reload).
> You don't need the -x with -W or -Ws, it's added automatically by the master
> during a reload.
Interesting. Is this new? Because I noticed it was not the case at some
point.
>> Run "systemctl daemon-reload" after updating the unit file and
>> completely stop the old service (don't reload after updating the unit
>> file), to make sure you have a "clean" situation.
>>
>> I don't see how this systemd thing would affect the actual seamless
>> reload (systemd shouldn't be a requirement), but lets fix it
>> nonetheless before continuing the troubleshooting. Maybe the
>> regression only affects non-systemd mode.
> Shouldn't be a problem, but it's better to use -Ws with systemd.
>
> During a reload, if the -x fails, you should see this kind of error:
>
> [WARNING] 004/135908 (12013) : Failed to connect to the old process socket 
> '/tmp/sock4'
> [ALERT] 004/135908 (12013) : Failed to get the sockets from the old process!
>
> Are you seeing anything like this?
Yes, in > 1.8.0. If I roll back to 1.8.0, it's fine in this respect.

I'll give updates after applying Lukas recommendations.

Pierre






Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread William Lallemand
Hi,

> > $ cat /usr/lib/systemd/system/haproxy.service
> > [Unit]
> > Description=HAProxy Load Balancer
> > After=syslog.target network.target
> >
> > [Service]
> > EnvironmentFile=/etc/sysconfig/haproxy
> > ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
> > ExecStart=/usr/sbin/haproxy -W -f $CONFIG -p $PIDFILE $OPTIONS
> > ExecReload=/usr/sbin/haproxy -f $CONFIG -c -q
> > ExecReload=/bin/kill -USR2 $MAINPID
> > Type=forking
> > KillMode=mixed
> > Restart=always
> 
> Your systemd configuration is not up to date.
> 
> Please:
> - make sure haproxy is compiled with USE_SYSTEMD=1
> - update the unit file: start haproxy with -Ws instead of -W (ExecStart)
> - update the unit file: use Type=notify instead of Type=forking

In fact that should work with this configuration too.
 
> We always ship an up-to-date unit file in
> contrib/systemd/haproxy.service.in (just make sure you maintain the
> $OPTIONS variable, otherwise you are missing the -x call for the
> seamless reload).

You don't need the -x with -W or -Ws, it's added automatically by the master
during a reload.
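
For reference, outside of master-worker mode the -x option is typically passed
by hand on reload, roughly like this (paths as in the configuration quoted
earlier in this thread):

haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid \
    -x /var/lib/haproxy/stats -sf $(cat /run/haproxy.pid)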

> Run "systemctl daemon-reload" after updating the unit file and
> completely stop the old service (don't reload after updating the unit
> file), to make sure you have a "clean" situation.
> 
> I don't see how this systemd thing would affect the actual seamless
> reload (systemd shouldn't be a requirement), but let's fix it
> nonetheless before continuing the troubleshooting. Maybe the
> regression only affects non-systemd mode.

Shouldn't be a problem, but it's better to use -Ws with systemd.

During a reload, if the -x fails, you should see this kind of error:

[WARNING] 004/135908 (12013) : Failed to connect to the old process socket 
'/tmp/sock4'
[ALERT] 004/135908 (12013) : Failed to get the sockets from the old process!

Are you seeing anything like this?

-- 
William Lallemand



Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Lukas Tribus
Hello Pierre,


On Fri, Jan 5, 2018 at 11:48 AM, Pierre Cheynier  wrote:
> Hi list,
>
> We've recently tried to upgrade from 1.8.0 to 1.8.1, then 1.8.2, 1.8.3
> on a preprod environment and noticed that the reload is not so seamless
> since 1.8.1 (easily getting TCP RSTs while reloading).
>
> Having a short look on the haproxy-1.8 git remote on the changes
> affecting haproxy.c, c2b28144 can be eliminated, so 3 commits remain:
>
> * 3ce53f66 MINOR: threads: Fix pthread_setaffinity_np on FreeBSD.  (5
> weeks ago)
> * f926969a BUG/MINOR: mworker: detach from tty when in daemon mode  (5
> weeks ago)
> * 4e612023 BUG/MINOR: mworker: fix validity check for the pipe FDs  (5
> weeks ago)
>
> In case it matters: we use threads and did the usual worker setup (which
> again works very well in 1.8.0).

Ok, so the change in behavior is between 1.8.0 and 1.8.1.



> $ cat /usr/lib/systemd/system/haproxy.service
> [Unit]
> Description=HAProxy Load Balancer
> After=syslog.target network.target
>
> [Service]
> EnvironmentFile=/etc/sysconfig/haproxy
> ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
> ExecStart=/usr/sbin/haproxy -W -f $CONFIG -p $PIDFILE $OPTIONS
> ExecReload=/usr/sbin/haproxy -f $CONFIG -c -q
> ExecReload=/bin/kill -USR2 $MAINPID
> Type=forking
> KillMode=mixed
> Restart=always

Your systemd configuration is not up to date.

Please:
- make sure haproxy is compiled with USE_SYSTEMD=1
- update the unit file: start haproxy with -Ws instead of -W (ExecStart)
- update the unit file: use Type=notify instead of Type=forking

We always ship an up-to-date unit file in
contrib/systemd/haproxy.service.in (just make sure you maintain the
$OPTIONS variable, otherwise you are missing the -x call for the
seamless reload).
Run "systemctl daemon-reload" after updating the unit file and
completely stop the old service (don't reload after updating the unit
file), to make sure you have a "clean" situation.
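
Applied to the unit file quoted above, those changes would look roughly like
this (a sketch only; variables as in the original unit):

[Service]
EnvironmentFile=/etc/sysconfig/haproxy
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE $OPTIONS
ExecReload=/usr/sbin/haproxy -f $CONFIG -c -q
ExecReload=/bin/kill -USR2 $MAINPID
Type=notify
KillMode=mixed
Restart=always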

I don't see how this systemd thing would affect the actual seamless
reload (systemd shouldn't be a requirement), but let's fix it
nonetheless before continuing the troubleshooting. Maybe the
regression only affects non-systemd mode.



Regards,
Lukas



Re: haproxy without balancing

2018-01-05 Thread Angelo Hongens

On 05-01-2018 11:28, Johan Hendriks wrote:

Secondly, we could use a single IP and use ACLs to route the traffic to
the right backend server.
The problem with the second option is that we have around 2000 different
subdomains, and this number is still growing. So my haproxy config will
then consist of over 4000 lines of ACL rules,
and I do not know if haproxy can deal with that or if it will slow down
requests too much.

Maybe there are other options I did not think about?
For me the second config is the best option because of the single IP,
but I do not know if haproxy can handle 2000 ACL rules.


I would choose the second option. I don't think 2000 ACLs are a
problem; I've been running with more than that without any problems.


A single point of entry is easiest.

We run a lot of balancers with varnish+hitch+haproxy+corosync for
highly available load balancing. Perhaps high availability is not a
requirement, but it's also nice to be able to do maintenance during the
day and have your standby node take over.




--

Kind regards,

Angelo Höngens



mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Pierre Cheynier
Hi list,

We've recently tried to upgrade from 1.8.0 to 1.8.1, then 1.8.2, 1.8.3
on a preprod environment and noticed that the reload is not so seamless
since 1.8.1 (easily getting TCP RSTs while reloading).

Having a short look on the haproxy-1.8 git remote on the changes
affecting haproxy.c, c2b28144 can be eliminated, so 3 commits remain:

* 3ce53f66 MINOR: threads: Fix pthread_setaffinity_np on FreeBSD.  (5
weeks ago)
* f926969a BUG/MINOR: mworker: detach from tty when in daemon mode  (5
weeks ago)
* 4e612023 BUG/MINOR: mworker: fix validity check for the pipe FDs  (5
weeks ago)

In case it matters: we use threads and did the usual worker setup (which
again works very well in 1.8.0).
Here is a config extract:

$ cat /etc/haproxy/haproxy.cfg:
(...)
user haproxy
group haproxy
nbproc 1
daemon
stats socket /var/lib/haproxy/stats level admin mode 644 expose-fd listeners
stats timeout 2m
nbthread 11
(...)

$ cat /etc/sysconfig/haproxy
(...)
CONFIG="/etc/haproxy/haproxy.cfg"
PIDFILE="/run/haproxy.pid"
OPTIONS="-x /var/lib/haproxy/stats"
(...)

$ cat /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
EnvironmentFile=/etc/sysconfig/haproxy
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
ExecStart=/usr/sbin/haproxy -W -f $CONFIG -p $PIDFILE $OPTIONS
ExecReload=/usr/sbin/haproxy -f $CONFIG -c -q
ExecReload=/bin/kill -USR2 $MAINPID
Type=forking
KillMode=mixed
Restart=always

Does the observed behavior sound consistent with the changes that
occurred between 1.8.0 and 1.8.1? Before trying to bisect, compile,
test, etc., I'd like to get your feedback.

Thanks in advance,

Pierre






Re: HAProxy 1.8.3 SSL caching regression

2018-01-05 Thread Willy Tarreau
On Thu, Jan 04, 2018 at 02:14:41PM -0500, Jeffrey J. Persch wrote:
> Hi William,
> 
> Verified.
> 
> Thanks for the quick fix,

Great, patch now merged. Thanks!
Willy



Re: haproxy without balancing

2018-01-05 Thread Jonathan Matthews
On 5 January 2018 at 10:28, Johan Hendriks  wrote:
> BTW if this is the wrong list please excuse me.

This looks to me like it might be the right list :-)

> We have an application running over multiple servers which all have
> their own subdomain; there are about 12 of them.
> We can live without load balancing, so there is no failover; each server
> serves a couple of subdomains.

What protocols are these servers serving?

- HTTP
- HTTPS
  - if HTTPS, do you control the TLS certificates and their private keys?
- Something else?
  - if something else, what?

> At this moment every server has its own ip, and so every subdomain has a
> different DNS entry. What we want is a single point of entry and use
> haproxy to route traffic to the right backend server.

Are the DNS entries for every subdomain under your control?
How painful would it be to change one of them?
How painful would it be to change all of them?

> Replacing a server is not easy at the moment. We have a lot of history
> to deal with. We are working on leaving that behind, but until then we
> need a solution.
>
> I looked at this and I think I have two options.
> Create for each server in the backend an IP on the haproxy machine and
> connect a frontend for that IP to the desired backend server.
> This way we still have multiple IP addresses, but they can stay the same
> if servers come and go.
>
> Secondly, we could use a single IP and use ACLs to route the traffic to
> the right backend server.
> The problem with the second option is that we have around 2000 different
> subdomains, and this number is still growing. So my haproxy config will
> then consist of over 4000 lines of ACL rules,
> and I do not know if haproxy can deal with that or if it will slow down
> requests too much.

Haproxy will happily cope with that number of ACLs, but at first
glance I don't think you need to do it that way.

Assuming you're using HTTP/S, you would probably be able to use a map,
as described in this blog post:
https://www.haproxy.com/blog/web-application-name-to-backend-mapping-in-haproxy/
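
A minimal sketch of that approach (host names, backend names and the map file
path below are illustrative only):

frontend http-in
    bind :80
    use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/hosts.map,be_default)]

# /etc/haproxy/hosts.map: one "subdomain backend" pair per line
app1.example.com be_app1
app2.example.com be_app2

One map entry per subdomain keeps the configuration flat no matter how many
subdomains are added later.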

Also, assuming you're using HTTP/S, if you can relatively easily
change DNS for all the subdomains to a single IP then I would
*definitely* do that.

If you're using HTTPS, then SNI client support
(https://en.wikipedia.org/wiki/Server_Name_Indication#Support) would
be something worth checking, but as a datapoint I've not bothered
supporting non-SNI clients for several years now.
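
With SNI-capable clients, terminating TLS for many subdomains can be as simple
as pointing the bind line at a directory of PEM certificates; haproxy then
picks the certificate matching the SNI name (a sketch, path illustrative):

frontend https-in
    bind :443 ssl crt /etc/haproxy/certs/
    # then the same Host-based routing (e.g. the map above) as for plain HTTP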

All the best,
J
-- 
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html



haproxy without balancing

2018-01-05 Thread Johan Hendriks
Hello.
First of all, I wish everyone a really good 2018, and hopefully 2018
will bring a lot of good memories.

BTW if this is the wrong list please excuse me.

We have an application running over multiple servers which all have
their own subdomain; there are about 12 of them.
We can live without load balancing, so there is no failover; each server
serves a couple of subdomains.
At the moment every server has its own IP, and so every subdomain has a
different DNS entry. What we want is a single point of entry and to use
haproxy to route traffic to the right backend server.
Replacing a server is not easy at the moment. We have a lot of history
to deal with. We are working on leaving that behind, but until then we
need a solution.


I looked at this and I think I have two options.
Create for each server in the backend an IP on the haproxy machine and
connect a frontend for that IP to the desired backend server.
This way we still have multiple IP addresses, but they can stay the same
if servers come and go.

Secondly, we could use a single IP and use ACLs to route the traffic to
the right backend server.
The problem with the second option is that we have around 2000 different
subdomains, and this number is still growing. So my haproxy config will
then consist of over 4000 lines of ACL rules,
and I do not know if haproxy can deal with that or if it will slow down
requests too much.

Maybe there are other options I did not think about?
For me the second config is the best option because of the single IP,
but I do not know if haproxy can handle 2000 ACL rules.

Thank you for your time.

Regards
Johan




Re: [PATCH] Remove rbtree.[ch]

2018-01-05 Thread Willy Tarreau
On Thu, Jan 04, 2018 at 06:03:05PM +0100, Olivier Houchard wrote:
> The rbtree implementation as found in haproxy is currently unused, and has
> been for quite some time.
> I don't think we will need it again, so the attached patch just removes it.

I'm pretty sure we planned to remove it a very long time ago (in 1.3 or
so) and forgot ;-)

Now applied, thanks!
Willy



Re: Haproxy 1.8 version help

2018-01-05 Thread Willy Tarreau
Hi guys,

On Thu, Jan 04, 2018 at 11:20:32PM +0100, Lukas Tribus wrote:
> On Thu, Jan 4, 2018 at 11:11 PM, Angelo Hongens  wrote:
> > On 03-01-2018 17:39, Lukas Tribus wrote:
> >>
> >> To compile Haproxy 1.8 with threads, at least GCC 4.7 is needed.
>> CentOS 6 only ships GCC 4.4.7, therefore compilation fails.
> >> You can disable thread support, by adding USE_THREAD= to the make
> >> command (nothing comes after the equal sign):
> >
> >
> > I'm no packaging expert, but 1.8 seems to build fine on my CentOS6 build box
> > without any errors.
> >
> > I'm running gcc version 4.4.7 20120313 on CentOS 6.9.
> >
> > Here's my spec file for building RPM packages:
> >
> > https://github.com/AxisNL/haproxy-rpmbuild/blob/master/SPECS/haproxy-1.8.3.el6.spec
> >
> > Am I doing something strange?? :-)
> 
> You are using the older build TARGET=linux26, which is for kernels
> older than 2.6.28. It doesn't enable newer features, for example thread
> support (which would cause the build issue).
> For kernels >= 2.6.28 (which CentOS 6 has) we have the linux2628 build target.
> 
> Willy is working on thread support for the older kernel though, so the
> build issue will be fixed soon.
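
For context, the build target is chosen on the make command line; a typical
1.8 build with the newer target, and the thread-less variant mentioned above,
would look roughly like this (the feature flags are illustrative):

make TARGET=linux2628 USE_OPENSSL=1 USE_ZLIB=1 USE_PCRE=1
make TARGET=linux2628 USE_THREAD=    # disable threads, e.g. on gcc < 4.7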

And by the way, Angelo, if you want to test the patch on your CentOS, please
find it attached. I'm just thinking that after all I should probably merge
it and in the worst case we'll improve it later, given that it at least
works for me.

Cheers,
Willy

From c27bba26e5bcd772aa8b1c9a1319a2919df13f34 Mon Sep 17 00:00:00 2001
From: Willy Tarreau 
Date: Thu, 4 Jan 2018 18:49:31 +0100
Subject: MINOR: hathreads: add support for gcc < 4.7

Till now the use of __atomic_* gcc builtins required gcc >= 4.7. Since
some supported and quite common operating systems like CentOS 6 still
come with older versions (4.4) and the mapping to the older builtins
is reasonably simple, let's implement it.

This code is only used for gcc < 4.7. It has been quickly tested on a
machine using gcc 4.4.4 and provided expected results.

This patch should be backported to 1.8.
---
 include/common/hathreads.h | 54 ++
 1 file changed, 54 insertions(+)

diff --git a/include/common/hathreads.h b/include/common/hathreads.h
index 9782ca9..503abbe 100644
--- a/include/common/hathreads.h
+++ b/include/common/hathreads.h
@@ -99,6 +99,58 @@ extern THREAD_LOCAL unsigned long tid_bit; /* The bit corresponding to the threa
 
/* TODO: thread: For now, we rely on GCC builtins but it could be a good idea to
 * have a header file regrouping all functions dealing with threads. */
+
+#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7)
+/* gcc < 4.7 */
+
+#define HA_ATOMIC_ADD(val, i)        __sync_add_and_fetch(val, i)
+#define HA_ATOMIC_SUB(val, i)        __sync_sub_and_fetch(val, i)
+#define HA_ATOMIC_AND(val, flags)    __sync_and_and_fetch(val, flags)
+#define HA_ATOMIC_OR(val, flags)     __sync_or_and_fetch(val, flags)
+
+/* the CAS is a bit complicated. The older API doesn't support returning the
+ * value and the swap's result at the same time. So here we take what looks
+ * like the safest route, consisting in using the boolean version guaranteeing
+ * that the operation was performed or not, and we snoop a previous value. If
+ * the compare succeeds, we return. If it fails, we return the previous value,
+ * but only if it differs from the expected one. If it's the same it's a race
+ * thus we try again to avoid confusing a possibly sensitive caller.
+ */
+#define HA_ATOMIC_CAS(val, old, new)  \
+   ({ \
+   typeof((val)) __val = (val);   \
+   typeof((old)) __oldp = (old);  \
+   typeof(*(old)) __oldv; \
+   typeof((new)) __new = (new);   \
+   int __ret; \
+   do {   \
+   __oldv = *__val;   \
+   __ret = __sync_bool_compare_and_swap(__val, *__oldp, __new); \
+   } while (!__ret && *__oldp == __oldv); \
+   if (!__ret)\
+   *__oldp = __oldv;  \
+   __ret; \
+   })
+
+#define HA_ATOMIC_XCHG(val, new)  \
+   ({ \
+   typeof((val)) __val = (val);   \
+   typeof(*(val)) __old;