Re: Redirect to HTTPS

2018-10-02 Thread Lukas Tribus
On Tue, 2 Oct 2018 at 20:34, Dustin Schuemann  wrote:
>
> I would like to redirect everything from HTTP to HTTPS except a specific URL.

You mean the Host header? Because that's what you configured.


> redirect scheme https if !{ ssl_fc } OR !{ hdr(Host) -m -I www.blah.com }

The logic is flawed. If you don't want to redirect when the host is
www.blah.com, then you need to AND the two conditions, not OR them
(conditions listed on the same "if" line are ANDed implicitly). Also,
the ACL expression is wrong: the case-insensitive string match flag is
"-i", not "-m -I".

This would be it:
redirect scheme https if !{ ssl_fc } !{ hdr(host) -i www.blah.com }
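For context, a minimal sketch of how this rule could sit in a frontend;
the bind lines, certificate path and backend name below are placeholders,
not taken from the original mail:

frontend fe_main
    bind :80
    bind :443 ssl crt /etc/haproxy/site.pem
    # redirect plain HTTP to HTTPS, unless the Host is www.blah.com
    redirect scheme https if !{ ssl_fc } !{ hdr(host) -i www.blah.com }
    default_backend be_app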



Lukas



Redirect to HTTPS

2018-10-02 Thread Dustin Schuemann
I would like to redirect everything from HTTP to HTTPS except a specific URL. 
Here is what I have but it doesn’t seem to be working. 

redirect scheme https if !{ ssl_fc } OR !{ hdr(Host) -m -I www.blah.com }

Thanks, 


Re: High CPU Usage followed by segfault error

2018-10-02 Thread Olivier Houchard
Hi,

On Tue, Oct 02, 2018 at 08:26:12PM +0530, Soji Antony wrote:
> Hello,
> 
> We are currently using haproxy 1.8.3 with a single-process, multithreaded
> configuration.
> We have 1 process and 10 threads, each mapped to a separate core [0-9]. We
> are running our haproxy instances on a c4.4xlarge AWS EC2 instance. The
> only other CPU-intensive process running on this server is a log shipper,
> which is explicitly mapped to CPU cores 13-16 using the taskset command.
> We have also given 'SCHED_RR' priority 99 to the haproxy processes.
> 
> OS: Ubuntu 14
> Kernel: 4.4.0-134-generic
> 
> The issue we are seeing with HAProxy is that CPU usage suddenly spikes to
> 100% on the cores haproxy is using, causing latency spikes and high load
> on the server. We see the following error messages in the system / kernel
> logs when this issue happens.
> 
> haproxy[92558]: segfault at 8 ip 55f04b1f5da2 sp 7ffdab2bdd40 error
> 6 in haproxy[55f04b101000+17]
> 
> Sep 29 12:21:02 marathonlb-int21 kernel: [2223350.996059] sched: RT
> throttling activated
> 
> We are using marathonlb for auto discovery, and reloads are quite frequent
> on this server. The last time this issue happened, we saw haproxy using
> 750% CPU and going into D state. The old process was also consuming CPU.
> 
> hard-stop-after was not set in our haproxy configuration, and we were
> seeing multiple old PIDs running on the server. After the last CPU outage
> we set 'hard-stop-after' to 10s, and we no longer see multiple haproxy
> instances running after a reload. I would really appreciate it if someone
> could explain why the CPU usage spikes with the above segfault error, and
> what this error actually means.
> 
> FYI: there was no traffic spike on this haproxy instance when the issue
> happened. We have even seen the same issue on a non-prod instance that
> was receiving no traffic; the system went down due to CPU usage, and we
> found the same segfault error in the logs.
> 

A good first step would probably be to upgrade to the latest version if possible.
1.8.3 is quite old, and a bunch of bugs have been fixed since then,
especially when using multithreading.

Regards,

Olivier



Few problems seen in haproxy? (threads, connections).

2018-10-02 Thread Krishna Kumar (Engineering)
Hi Willy, and community developers,

I am not sure if I am doing something wrong, but I wanted to report
some issues that I am seeing. Please let me know if this is a problem.

1. HAProxy system:
Kernel: 4.17.13
CPU: 48-core E5-2670 v3
Memory: 128 GB
NIC: Mellanox 40G with IRQ pinning

2. Client: 48 cores, similar to the server. Test command line:
wrk -c 4800 -t 48 -d 30s http:///128

3. HAProxy version: I am testing both 1.8.14 and 1.9-dev3 (git checkout
as of Oct 2nd).
# haproxy-git -vv
HA-Proxy version 1.9-dev3 2018/09/29
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -fno-strict-overflow -Wno-unused-label -Wno-sign-compare
-Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers
-Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_ZLIB=yes USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.38 2015-11-23
Running on PCRE version : 8.38 2015-11-23
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE
   <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace

4. HAProxy results by number of processes / threads (requests per second):

Count   Threads-RPS   Procs-RPS
  1         20903        19280
  2         46400        51045
  4         96587       142801
  8        172224       254720
 12        210451       437488
 16        173034       437375
 24         79069       519367
 32         55607       586367
 48         31739       596148

5. Lock stats for 1.9-dev3: Some write locks on average took a lot more time
   to acquire, e.g. "POOL" and "TASK_WQ". For 48 threads, I get:
Stats about Lock FD:
# write lock  : 143933900
# write unlock: 143933895 (-5)
# wait time for write : 11370.245 msec
# wait time for write/lock: 78.996 nsec
# read lock   : 0
# read unlock : 0 (0)
# wait time for read  : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock TASK_RQ:
# write lock  : 2062874
# write unlock: 2062875 (1)
# wait time for write : 7820.234 msec
# wait time for write/lock: 3790.941 nsec
# read lock   : 0
# read unlock : 0 (0)
# wait time for read  : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock TASK_WQ:
# write lock  : 2601227
# write unlock: 2601227 (0)
# wait time for write : 5019.811 msec
# wait time for write/lock: 1929.786 nsec
# read lock   : 0
# read unlock : 0 (0)
# wait time for read  : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock POOL:
# write lock  : 2823393
# write unlock: 2823393 (0)
# wait time for write : 11984.706 msec
# wait time for write/lock: 4244.788 nsec
# read lock   : 0
# read unlock : 0 (0)
# wait time for read  : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock LISTENER:
# write lock  : 184
# write unlock: 184 (0)
# wait time for write : 0.011 msec
# wait time for write/lock: 60.554 nsec
# read lock   : 0
# read unlock : 0 (0)
# wait time for read  : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock PROXY:
# write lock  : 291557
# write unlock: 291557 (0)
# wait time for write : 109.694 msec
# wait time for write/lock: 376.235 nsec
# read lock   : 0
# read unlock : 0 (0)
# wait time for read  : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock SERVER:
# write lock  : 1188511
# write unlock: 1188511 (0)
# wait time for write : 854.171 msec
# wait time for write/lock: 718.690 nsec
# read lock   : 0
# read unlock : 0 (0)
# wait time for read  : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock LBPRM:
# write lock  : 1184709
# write unlock: 1184709 (0)
# wait time for write : 778.947 msec
# wait time for write/lock: 657.501 nsec
# read lock   : 0
# read unlock : 0 (0)
# wait time for read  : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock BUF_WQ:
# write lock  : 669247
# write unlock: 669247 (0)
# wait time for write : 252.265 msec
# wait time for write/lock: 376.939 nsec
# read lock   
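For reference, these "Stats about Lock" counters come from haproxy's lock
profiling, which (if I am not mistaken) is only compiled in when the
DEBUG_THREAD build-time option is set; a sketch of such a build, reusing
the options from the -vv output above:

make TARGET=linux2628 USE_OPENSSL=1 USE_PCRE=1 USE_ZLIB=yes DEBUG=-DDEBUG_THREAD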

High CPU Usage followed by segfault error

2018-10-02 Thread Soji Antony
Hello,

We are currently using haproxy 1.8.3 with a single-process, multithreaded
configuration.
We have 1 process and 10 threads, each mapped to a separate core [0-9]. We
are running our haproxy instances on a c4.4xlarge AWS EC2 instance. The
only other CPU-intensive process running on this server is a log shipper,
which is explicitly mapped to CPU cores 13-16 using the taskset command.
We have also given 'SCHED_RR' priority 99 to the haproxy processes.

OS: Ubuntu 14
Kernel: 4.4.0-134-generic
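For reference, a setup like the one described above would typically be
expressed along these lines in a haproxy 1.8 global section; the values
are inferred from the description, not taken from the actual configuration:

global
    nbproc 1
    nbthread 10
    # pin threads 1-10 of process 1 to CPU cores 0-9
    cpu-map auto:1/1-10 0-9
    # kill lingering old processes 10s after a reload
    hard-stop-after 10s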

The issue we are seeing with HAProxy is that CPU usage suddenly spikes to
100% on the cores haproxy is using, causing latency spikes and high load
on the server. We see the following error messages in the system / kernel
logs when this issue happens.

haproxy[92558]: segfault at 8 ip 55f04b1f5da2 sp 7ffdab2bdd40 error
6 in haproxy[55f04b101000+17]

Sep 29 12:21:02 marathonlb-int21 kernel: [2223350.996059] sched: RT
throttling activated

We are using marathonlb for auto discovery, and reloads are quite frequent
on this server. The last time this issue happened, we saw haproxy using
750% CPU and going into D state. The old process was also consuming CPU.

hard-stop-after was not set in our haproxy configuration, and we were
seeing multiple old PIDs running on the server. After the last CPU outage
we set 'hard-stop-after' to 10s, and we no longer see multiple haproxy
instances running after a reload. I would really appreciate it if someone
could explain why the CPU usage spikes with the above segfault error, and
what this error actually means.

FYI: there was no traffic spike on this haproxy instance when the issue
happened. We have even seen the same issue on a non-prod instance that
was receiving no traffic; the system went down due to CPU usage, and we
found the same segfault error in the logs.

Thanks



Re: PATCH : mux_h2 h2c pointer deref

2018-10-02 Thread Mildis
OK.
Here you are.


0001-BUG-MINOR-checks-queues-null-deref.patch
Description: Binary data


0001-BUG-MINOR-h2-null-deref.patch
Description: Binary data

Mildis

> On 2 Oct 2018, at 04:23, Willy Tarreau wrote:
> 
> Hi,
> 
> On Sun, Sep 23, 2018 at 06:18:37PM +0200, Mildis wrote:
>> Hi,
>> 
>> Here is a patch for a null-deref.
>> It checks if h2c exists before working on it.
>> 
> 
> For these two patches, I'd prefer to have multiple exit labels rather
> than adding some "if"s in the error exit path. It's important that
> the error path is clear, linear and without any ambiguity. For
> example, just add labels like "fail_no_h2c" and "fail_no_queue"
> pointing to the return. That easily allows adding intermediate
> branches later if some other entries are allocated.
> 
> Willy
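To illustrate the error-path style described above, here is a minimal C
sketch; the types, fields and function name are made up for illustration
and are not the actual mux_h2 code:

#include <stdlib.h>

/* Made-up types standing in for the real h2c/h2s structures. */
struct h2c { int state; };
struct h2s { struct h2c *h2c; char *rxbuf; };

/* One exit label per failure point: each label frees exactly what was
 * allocated before the failure, keeping the error path linear. */
static struct h2s *h2s_create(struct h2c *h2c)
{
	struct h2s *h2s;

	if (!h2c)
		goto fail_no_h2c;	/* nothing allocated yet */

	h2s = malloc(sizeof(*h2s));
	if (!h2s)
		goto fail_no_h2s;

	h2s->rxbuf = malloc(16384);
	if (!h2s->rxbuf)
		goto fail_no_rxbuf;

	h2s->h2c = h2c;
	return h2s;

 fail_no_rxbuf:
	free(h2s);
 fail_no_h2s:
 fail_no_h2c:
	return NULL;
}

This way, a new allocation added between the existing ones only needs a
new label, without touching the other branches.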



HAProxy is not supporting MySQL-8.0 default user authentication plugin (caching_sha2_password)

2018-10-02 Thread Ramesh Sivaraman
Hi Team,

HAProxy does not support the MySQL 8.0 default user authentication plugin
(caching_sha2_password).

HAProxy version info:

$ haproxy -vv
HA-Proxy version 1.5.18 2016/05/10
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

$

Error info:

Sep 27 04:22:55 localhost haproxy[29022]: Server mysql-cluster/node1 is
DOWN, reason: Layer7 wrong status, code: 0, info: "Client does not support
authentication protocol requested by server; consider upgrading MySQL
client", check duration: 0ms. 1 active and 0 backup servers left. 0
sessions active, 0 requeued, 0 remaining in queue.
Sep 27 04:22:56 localhost haproxy[29023]: Server mysql-cluster/node2 is
DOWN, reason: Layer7 wrong status, code: 0, info: "Client does not support
authentication protocol requested by server; consider upgrading MySQL
client", check duration: 0ms. 0 active and 0 backup servers left. 0
sessions active, 0 requeued, 0 remaining in queue.
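Haproxy's mysql-check does not implement the caching_sha2_password
exchange, which is why the server answers "Client does not support
authentication protocol requested by server". A common workaround is to
create the user the health check logs in with using the legacy plugin;
the user name and host below are placeholders and should match the
backend's 'option mysql-check user ...' line:

CREATE USER 'haproxy_check'@'10.0.0.%' IDENTIFIED WITH mysql_native_password;

Alternatively, the server-wide default can be switched back in my.cnf:

[mysqld]
default_authentication_plugin = mysql_native_password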



-- 
Best Regards,

*Ramesh Sivaraman*
*Senior QA Engineer, Percona*
http://www.percona.com/ 
Phone : +91 8606432991
Skype : rameshvs02