Re: NFS mounts freezing via Haproxy

2018-05-21 Thread TomK
The goal is an HA solution for NFS mounts.  I have 3 NFS hosts behind the 
Haproxy / Keepalived VIP, through which clients connect.


Anyway, literally a minute or two after I sent this, I found the issue. 
It turned out to be SELinux denying writes on the backend NFS server; the 
denials showed up in the auditd log.


Fix was to adjust the SELinux rules using audit2allow:

https://tinyurl.com/y8kzon6w
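
For reference, the usual audit2allow workflow on an EL7 box looks roughly like 
the sketch below; the module name "nfs_local" is only a placeholder, and the 
generated .te file should be reviewed before installing it:

  # build a local policy module from the logged AVC denials (module name is an example)
  grep denied /var/log/audit/audit.log | audit2allow -M nfs_local
  # install the generated policy package
  semodule -i nfs_local.pp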

Cheers,
TK

On 5/22/2018 1:14 AM, Rainer Duffner wrote:
>
>> On 22.05.2018 at 06:46, TomK wrote:
>>
>> Trying to mount an NFS share via a Haproxy / Keepalived configuration.
>> When I mount the NFS share directly from the host, bypassing Haproxy /
>> Keepalived, it works fine.  However, when I try via the Haproxy /
>> Keepalived combination, it freezes.
>
> Maybe I’m a little slow - but what exactly is this config trying to achieve?

--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.




Re: NFS mounts freezing via Haproxy

2018-05-21 Thread Rainer Duffner


> On 22.05.2018 at 06:46, TomK wrote:
> 
> Trying to mount an NFS share via a Haproxy / Keepalived configuration. When I 
> mount the NFS share directly from the host, bypassing Haproxy / Keepalived, 
> it works fine.  However, when I try via the Haproxy / Keepalived combination, 
> it freezes.



Maybe I’m a little slow - but what exactly is this config trying to achieve?






NFS mounts freezing via Haproxy

2018-05-21 Thread TomK

Hey All,

Trying to mount an NFS share via a Haproxy / Keepalived configuration. 
When I mount the NFS share directly from the host, bypassing Haproxy / 
Keepalived, it works fine.  However, when I try via the Haproxy / 
Keepalived combination, it freezes.


What's also interesting is that the first client to mount via the Haproxy / 
Keepalived configuration works fine.


It almost seems as if there is some sort of connection limit, where all 
clients except the first one freeze on their mount attempts via the 
Haproxy / Keepalived VIP.


I tested the individual services behind the configuration and narrowed it 
down to the last remaining element in the path: haproxy.


Wondering if anyone has seen this and could suggest what I could try next 
to nudge me forward a bit?


--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.


[ DETAILS ]

haproxy-1.5.18-7.el7.x86_64

[root@nfs03 audit]# cat /etc/haproxy/haproxy.cfg
global
    log     127.0.0.1 local2 debug
    stats   socket /var/run/haproxy.sock mode 0600 level admin
    # stats socket /var/lib/haproxy/stats
    maxconn 4000
    user    haproxy
    group   haproxy
    daemon
    debug

defaults
    mode    tcp
    log     global
    option  dontlognull
    option  redispatch
    retries 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn 3000

frontend nfs-in
    bind    nfs-c01:2049
    mode    tcp
    option  tcplog
    default_backend nfs-back


backend nfs-back
    mode    tcp
    balance source
    server  nfs01.nix.my.dom nfs01.nix.my.dom:2049 check
    server  nfs02.nix.my.dom nfs02.nix.my.dom:2049 check
    server  nfs03.nix.my.dom nfs03.nix.my.dom:2049 check

listen stats
    bind :9000
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /haproxy-stats
    stats auth admin:passw0rd
[root@nfs03 audit]#
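
Since that config already exposes an admin-level stats socket, one quick way to 
see whether the client mounts are reaching haproxy at all (and which backend 
server they are sent to) is to query the socket, for example:

  # per-frontend/backend/server counters (sessions, queue, errors)
  echo "show stat" | socat stdio /var/run/haproxy.sock
  # currently established sessions, with their source address and age
  echo "show sess" | socat stdio /var/run/haproxy.sock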


Ran keepalived in debug mode, but it showed no log entries during the 
connection attempt, so it doesn't look like the traffic was being passed:



[root@nfs03 haproxy]# cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
  script "killall -0 haproxy"   # check the haproxy process
  interval 2# every 2 seconds
  weight 2  # add 2 points if OK
}

vrrp_instance VI_1 {
  interface eth0              # interface to monitor
  state BACKUP                # MASTER on haproxy1, BACKUP on haproxy2, haproxy3 etc.

  virtual_router_id 51
  priority 103                # 101 on haproxy1, 102 on haproxy2, 103 on haproxy3 etc.

  virtual_ipaddress {
    192.168.0.80              # virtual ip address
  }
  track_script {
    chk_haproxy
  }
}
[root@nfs03 haproxy]#
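
Two quick checks that may help at this point, assuming the VIP is 192.168.0.80 
on eth0 as configured above: confirm which node actually holds the VIP, and 
watch whether the NFS traffic from the client reaches that node at all:

  # the VIP should appear on exactly one of the three nodes
  ip -4 addr show dev eth0
  # watch for the client's mount traffic arriving on the node holding the VIP
  tcpdump -ni eth0 port 2049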




[PATCH] lua & threads

2018-05-21 Thread Thierry Fournier
Hi,

You will find two patches attached.

 - The first fixes some Lua error messages.

 - The second fixes a build error. This second one should be reviewed because I’m not
   so proud of the solution :-) Note that this build error happens when compiling
   without threads on macosx.

BR,
Thierry



0001-MINOR-lua-Improve-error-message.patch
Description: Binary data


0002-BUG-MINOR-thread-build-The-variable-all_threads_mask.patch
Description: Binary data


TCP on maxconn kick old client

2018-05-21 Thread Tiago Coutinho
Hi,

I have a TCP network instrument which only allows a limited number of
connections (3).

I would like to use HAProxy to limit the maximum number of clients, but I
would like the semantics to be: *close the oldest client if maxconn is
reached*. Is there a way to configure HAProxy to handle connections like
this?

My clients already handle reconnection automatically, so if maxconn + 1
clients are active and HAProxy closes the oldest client, I could
connect as many clients as I want transparently.

I know that having a lot of active clients could lead to flooding the
network with reconnection requests, but I am prepared to handle that risk
since my use case is very isolated.

Here is my current HAProxy config:

defaults
mode tcp
timeout connect 5000ms
timeout client 5ms
timeout server 5ms

listen wago1
bind :15001
server wago1 wago1:502 maxconn 3
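
As far as I know, maxconn on a server only queues or rejects new connections; it 
does not evict an established one, so there is no built-in "kick the oldest 
client" behaviour. One workaround sketch, assuming an admin-level stats socket 
is also enabled (e.g. "stats socket /var/run/haproxy.sock level admin", which is 
not in the config above), is to script against the runtime API: list the 
sessions and shut down the oldest one by its identifier.

  # list current sessions; each line shows a session pointer, src= and age=
  echo "show sess" | socat stdio /var/run/haproxy.sock
  # example only: kill the session whose pointer was shown by "show sess"
  echo "shutdown session 0x55d2a8c1e010" | socat stdio /var/run/haproxy.sock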


Thanks in advance

Kind regards,
Tiago