
Re: Problem: Connect() failed for backend: no free ports.

2017-11-06 Thread Lukas Tribus
Hello Michael,



2017-11-06 22:47 GMT+01:00 Michael Schwartzkopff :
> Am 06.11.2017 um 22:39 schrieb Baptiste:
>> On Mon, Nov 6, 2017 at 10:14 PM, Michael Schwartzkopff  wrote:
>>
>>> Hi,
>>>
>>> I have a problem setting up a haproxy 1.6.13 that starts several
>>> processes. In the config I have nbproc 3. In the logs I find lots of
>>> entries like:
>>>
>>> haproxy[: Connect() failed for backend XXX: no free ports
> global
>   maxconn 200
>   nbproc 3
>   cpu-map 1 0
>   cpu-map 2 1
>   cpu-map 3 2
> [...]
> backend IMAP-be
>   option tcp-check
>   tcp-check connect port 143
>   tcp-check expect string * OK
>   default-server on-marked-down shutdown-sessions
>   fullconn 40
>   server proxy01 192.168.0.101 source 192.168.0.201:1-6 check
>   server proxy02 192.168.0.102 source 192.168.0.202:1-6 check
>   server proxy03 192.168.0.103 source 192.168.0.203:1-6 check
>   server proxy04 192.168.0.104 source 192.168.0.204:1-6 check


You are using multiprocess mode together with static source port
ranges. That's a bad idea, because the processes will compete for the
exact same source ports, and the connect() calls will keep failing
whenever different processes try to use the same port at the same time.

There are a few possibilities here, but we will have to know:

- why are you using different source IP's for each backend server?
- why are you using static port ranges?

What I would suggest is to let the kernel do the source port
selection; the kernel then needs to be able to use the full 5-tuple,
otherwise I imagine you'd run into source port exhaustion soon.

If you don't require specific source IPs per server, then just remove
the "source ip:port-range" keyword altogether and the kernel will take
care of everything. Just make sure that your sysctls permit a similar
source port range.

If you need specific source IPs (for reasons unrelated to source port
exhaustion), then drop the port range and specify only the IP. However,
for the kernel to be able to use the full 5-tuple, you will need
IP_BIND_ADDRESS_NO_PORT [1], which requires haproxy 1.7, linux 4.2 and
libc 2.23.



> And yes, sysctl is adjusted. Interestingly enough, the errors above only
> appear in my test when I start haproxy. After some flapping haproxy does
> not emit any further log entries.

Still, this is a recipe for disaster: haproxy's processes are fighting
each other behind the kernel's back. I'd advise against using this
configuration in production.


cheers,
lukas


[1] https://github.com/torvalds/linux/commit/90c337da
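
For illustration only (an editorial sketch, not part of the original exchange), the suggested change could look like this, reusing the backend addresses from Michael's configuration: keep the per-server source IP if it is needed, drop the static port range, and let the kernel pick the ports:

  backend IMAP-be
    option tcp-check
    tcp-check connect port 143
    tcp-check expect string * OK
    # source IP kept, static port range removed; the kernel needs
    # IP_BIND_ADDRESS_NO_PORT (haproxy 1.7+, Linux 4.2+, libc 2.23+)
    # to select ports over the full 5-tuple
    server proxy01 192.168.0.101 source 192.168.0.201 check
    server proxy02 192.168.0.102 source 192.168.0.202 check
    # or, if per-server source IPs are not required at all:
    # server proxy01 192.168.0.101 check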



Re: Problem: Connect() failed for backend: no free ports.

2017-11-06 Thread Michael Schwartzkopff
Am 06.11.2017 um 22:39 schrieb Baptiste:
> On Mon, Nov 6, 2017 at 10:14 PM, Michael Schwartzkopff  wrote:
>
>> Hi,
>>
>> I have a problem setting up a haproxy 1.6.13 that starts several
>> processes. In the config I have nbproc 3. In the logs I find lots of
>> entries like:
>>
>>
>> haproxy[: Connect() failed for backend XXX: no free ports
>>
>>
>> Searching the mailing list, this seems to be a known problem when the
>> kernel still thinks some ports are open but haproxy wants to reuse them. I
>> already set "option nolinger" but the error messages remain, especially
>> when I start haproxy.
>>
>>
>> Any other solution?
>>
>>
> Hi Michael,
>
> Maybe you could tell us more about your workload and share with us your
> configuration.
> This will help with the diagnosis.
> Also, can you confirm you tuned some sysctls? (I mainly think about the
> port range one)
>
> Baptiste
>
global
  maxconn 200
  nbproc 3
  cpu-map 1 0
  cpu-map 2 1
  cpu-map 3 2

defaults
  mode  tcp
  option    tcplog
  option    dontlognull
  option    dontlog-normal
  option    redispatch
  option    nolinger
  balance   leastconn
  retries   5

frontend IMAP-fe
  bind :143 name IMAP tcp-ut 30s
  default_backend IMAP-be
  maxconn 40

backend IMAP-be
  option tcp-check
  tcp-check connect port 143
  tcp-check expect string * OK
  default-server on-marked-down shutdown-sessions
  fullconn 40
  server proxy01 192.168.0.101 source 192.168.0.201:1-6 check
  server proxy02 192.168.0.102 source 192.168.0.202:1-6 check
  server proxy03 192.168.0.103 source 192.168.0.203:1-6 check
  server proxy04 192.168.0.104 source 192.168.0.204:1-6 check
  (...)


And yes, sysctl is adjusted. Interestingly enough, the errors above only
appear in my test when I start haproxy. After some flapping haproxy does
not emit any further log entries.


Kind regards,

-- 

[*] sys4 AG
 
https://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG,80333 München
 
Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer, Wolfgang Stief
Aufsichtsratsvorsitzender: Florian Kirstein






Re: Problem: Connect() failed for backend: no free ports.

2017-11-06 Thread Baptiste
On Mon, Nov 6, 2017 at 10:14 PM, Michael Schwartzkopff  wrote:

> Hi,
>
> I have a problem setting up a haproxy 1.6.13 that starts several
> processes. In the config I have nbproc 3. In the logs I find lots of
> entries like:
>
>
> haproxy[: Connect() failed for backend XXX: no free ports
>
>
> Searching the mailing list, this seems to be a known problem when the
> kernel still thinks some ports are open but haproxy wants to reuse them. I
> already set "option nolinger" but the error messages remain, especially
> when I start haproxy.
>
>
> Any other solution?
>
>
Hi Michael,

Maybe you could tell us more about your workload and share with us your
configuration.
This will help with the diagnosis.
Also, can you confirm you tuned some sysctls? (I mainly think about the
port range one)

Baptiste
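
For reference (an editorial sketch, not from the thread), the sysctl Baptiste is most likely referring to is the local port range; on a typical Linux box it can be checked and widened like this:

  # show the range of ephemeral ports the kernel may hand out
  sysctl net.ipv4.ip_local_port_range

  # widen it if it is too narrow for the expected number of outgoing connections
  sysctl -w net.ipv4.ip_local_port_range="1024 65023"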


Problem: Connect() failed for backend: no free ports.

2017-11-06 Thread Michael Schwartzkopff
Hi,

I have a problem setting up a haproxy 1.6.13 that starts several
processes. In the config I have nbproc 3. In the logs I find lots of
entries like:


haproxy[: Connect() failed for backend XXX: no free ports


Searching the mailing list, this seems to be a known problem when the
kernel still thinks some ports are open but haproxy wants to reuse them. I
already set "option nolinger" but the error messages remain, especially
when I start haproxy.


Any other solution?


Kind regards,

-- 

[*] sys4 AG
 
https://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG,80333 München
 
Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer, Wolfgang Stief
Aufsichtsratsvorsitzender: Florian Kirstein






Re: HAProxy as a frontend for Docker Swarm deployment

2017-11-06 Thread Baptiste
On Mon, Nov 6, 2017 at 7:59 PM, Norman Branitsky <
norman.branit...@micropact.com> wrote:

> I just found out another group has started deploying microservice based
> apps on Docker Swarm
>
> using Traefik  saying:
>
>
> “we are starting to deploy applications designed as microservices , but
> have chosen Traefik for our ability
> to dynamically add sites based on Docker Service label.”
>
>
>
> Having read the docs, it appears to be reasonable for internal exposure
> only.
> With respect to dynamic configuration based on Docker Service label,
>
> how does this compare vis a vis HAProxy?
>
>
>
>
>
Hi,

As mentioned by Soluti, you need a "controller" whose main tasks will be:
- watch the orchestrator (or registry)
- generate the configuration according to changes in the orchestrator
(or registry)
- apply the configuration changes (some of them can be applied at run
time, some of them will require a reload)

You can have a look at this project too:
https://github.com/jcmoraisjr/haproxy-ingress
It applies to Kubernetes and embeds some logic to apply changes at runtime
whenever possible.


Now, about Swarm itself, it's a bit messy... First, I'll speak about Swarm
mode only (because the other one seems deprecated).
You need to monitor events for "service" changes. From there, you need to
get the "task list" for each service, which will give you the list of IPs
where the application is available.
All your application servers must run in an overlay network where
HAProxy has a "leg" too (otherwise HAProxy won't be able to reach the IPs
above). I'll let you imagine the nightmare from a design point of view...
Lastly, you have to do the HTTP routing. Nothing is provided by Swarm mode
yet for this purpose. So you must use labels, as Traefik has designed it.

Baptiste
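
As an editorial sketch only (the thread itself contains no code for this), such a controller could be as crude as a shell loop reacting to Swarm service events; the generate-haproxy-cfg templating step below is hypothetical and stands in for whatever turns service/task data into a configuration:

  # hypothetical sketch: react to Swarm service events, regenerate and reload
  docker events --filter type=service --format '{{.Action}} {{.Actor.Attributes.name}}' |
  while read -r action name; do
      generate-haproxy-cfg > /etc/haproxy/haproxy.cfg.new     # your own templating step
      mv /etc/haproxy/haproxy.cfg.new /etc/haproxy/haproxy.cfg
      haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /run/haproxy.pid)   # graceful reload
  done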


Re: HAProxy as a frontend for Docker Swarm deployment

2017-11-06 Thread Soluti Quintiliano
Sure. It still needs some work but you can grasp the idea here:

https://github.com/QuintilianoB/kubernetes_haproxy

Hope it helps.

PS: I forgot about the translation. I'll do it later and update it.

Att.



2017-11-06 17:11 GMT-02:00 Norman Branitsky 
:

> I believe Docker Swarm has a similar API.
> Is the code for your listener public?
>
>
>
> *From:* Soluti Quintiliano [mailto:quintili...@soluti.com.br]
> *Sent:* November-06-17 2:08 PM
> *To:* Norman Branitsky 
> *Cc:* haproxy@formilux.org
> *Subject:* Re: HAProxy as a frontend for Docker Swarm deployment
>
>
>
> I don't know how this would be done for Docker Swarm, but we are using
> HAProxy in front of a Kubernetes cluster, with automatic HAProxy configuration
> for each new service that needs external access.
>
> We just wrote a listener on the Kubernetes API which updates HAProxy as
> needed. We chose to update the HAProxy configuration and reload it when changes occur, but
> you can also write directly to its socket to include/exclude backends.
>
> Att.
>
>
>
> Quintiliano.
>
>
>
>
>
> 2017-11-06 16:59 GMT-02:00 Norman Branitsky  com>:
>
> I just found out another group has started deploying microservice based
> apps on Docker Swarm
>
> using Traefik  saying:
>
>
> “we are starting to deploy applications designed as microservices , but
> have chosen Traefik for our ability
> to dynamically add sites based on Docker Service label.”
>
>
>
> Having read the docs, it appears to be reasonable for internal exposure
> only.
> With respect to dynamic configuration based on Docker Service label,
>
> how does this compare vis a vis HAProxy?
>
>
>
> Norman
>
>
>
>
> *Norman Branitsky *Cloud Architect
>
> MicroPact
>
> (o) 416.916.1752
>
> (c) 416.843.0670
>
> (t) 1-888-232-0224 x61752
>
> www.micropact.com
>
> Think it > Track it > Done
>
>
>


RE: HAProxy as a frontend for Docker Swarm deployment

2017-11-06 Thread Norman Branitsky
I believe Docker Swarm has a similar API.
Is the code for your listener public?

From: Soluti Quintiliano [mailto:quintili...@soluti.com.br]
Sent: November-06-17 2:08 PM
To: Norman Branitsky 
Cc: haproxy@formilux.org
Subject: Re: HAProxy as a frontend for Docker Swarm deployment

I don't know how this would be done for Docker Swarm, but we are using HAProxy
in front of a Kubernetes cluster, with automatic HAProxy configuration for each
new service that needs external access.

We just wrote a listener on the Kubernetes API which updates HAProxy as needed.
We chose to update the HAProxy configuration and reload it when changes occur, but
you can also write directly to its socket to include/exclude backends.
Att.

Quintiliano.


2017-11-06 16:59 GMT-02:00 Norman Branitsky 
>:
I just found out another group has started deploying microservice based apps on 
Docker Swarm
using Traefik saying:

“we are starting to deploy applications designed as microservices , but have 
chosen Traefik for our ability
to dynamically add sites based on Docker Service label.”

Having read the docs, it appears to be reasonable for internal exposure only.
With respect to dynamic configuration based on Docker Service label,
how does this compare vis a vis HAProxy?

Norman

Norman Branitsky
Cloud Architect
MicroPact
(o) 416.916.1752
(c) 416.843.0670
(t) 1-888-232-0224 x61752
www.micropact.com
Think it > Track it > Done



Re: HAProxy as a frontend for Docker Swarm deployment

2017-11-06 Thread Soluti Quintiliano
I don't know how this would be done for Docker Swarm, but we are using
HAProxy in front of a Kubernetes cluster, with automatic HAProxy configuration
for each new service that needs external access.

We just wrote a listener on the Kubernetes API which updates HAProxy as
needed. We chose to update the HAProxy configuration and reload it when changes
occur, but you can also write directly to its socket to include/exclude backends.

Att.

Quintiliano.
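
An editorial sketch of the general idea (the build-haproxy-backends helper below is hypothetical; the linked repository contains the actual implementation): watch the Kubernetes API for endpoint changes and regenerate the HAProxy backends from the result:

  # hypothetical sketch: regenerate backends whenever service endpoints change
  kubectl get endpoints --all-namespaces --watch-only -o name |
  while read -r changed; do
      build-haproxy-backends > /etc/haproxy/haproxy.cfg.new   # your own templating step
      mv /etc/haproxy/haproxy.cfg.new /etc/haproxy/haproxy.cfg
      systemctl reload haproxy
  done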


2017-11-06 16:59 GMT-02:00 Norman Branitsky 
:

> I just found out another group has started deploying microservice based
> apps on Docker Swarm
>
> using Traefik  saying:
>
>
> “we are starting to deploy applications designed as microservices , but
> have chosen Traefik for our ability
> to dynamically add sites based on Docker Service label.”
>
>
>
> Having read the docs, it appears to be reasonable for internal exposure
> only.
> With respect to dynamic configuration based on Docker Service label,
>
> how does this compare vis a vis HAProxy?
>
>
>
> Norman
>
>
>
>
> *Norman Branitsky *Cloud Architect
>
> MicroPact
>
> (o) 416.916.1752
>
> (c) 416.843.0670
>
> (t) 1-888-232-0224 x61752
>
> www.micropact.com
>
> Think it > Track it > Done
>


HAProxy as a frontend for Docker Swarm deployment

2017-11-06 Thread Norman Branitsky
I just found out another group has started deploying microservice based apps on 
Docker Swarm
using Traefik saying:

"we are starting to deploy applications designed as microservices , but have 
chosen Traefik for our ability
to dynamically add sites based on Docker Service label."

Having read the docs, it appears to be reasonable for internal exposure only.
With respect to dynamic configuration based on Docker Service label,
how does this compare vis a vis HAProxy?

Norman

Norman Branitsky
Cloud Architect
MicroPact
(o) 416.916.1752
(c) 416.843.0670
(t) 1-888-232-0224 x61752
www.micropact.com
Think it > Track it > Done


Re: Encrypted Passwords Documentation Patch

2017-11-06 Thread Willy Tarreau
On Mon, Nov 06, 2017 at 05:06:37PM +0100, Daniel Schneller wrote:
> It adds a warning about the potentially significant CPU cost the modern
> algorithms with their thousands of hashing rounds can incur. In our case it
> made the difference between haproxy's CPU usage being hardly noticeable at all
> to it almost eating a full core, even for a not very busy site.

Good point! Now applied, thanks Daniel.

Willy



Re: [PATCH] Fix SRV records again

2017-11-06 Thread Willy Tarreau
On Mon, Nov 06, 2017 at 05:40:04PM +0100, Olivier Houchard wrote:
> > The attached patch fixes a locking issue that prevented SRV records from
> > working.
> > And another one, that fixes a deadlock that occurs when checks trigger DNS
> > resolution.

Both patches merged, thanks Olivier.

Willy



Re: HAProxy doesn't support SSLv2 - confirmation

2017-11-06 Thread Lukas Tribus
Hello Jean,


2017-11-06 12:25 GMT+01:00 Jean Martinelli :
> Hello
>
> HAProxy does not support enabling SSLv2 natively. Could you confirm? Is there a
> documentation link for reference?

It's documented under the no-sslv3 parameter [1]:

> Note that SSLv2 is disabled in the code and cannot be enabled using any
> configuration option.


As Andrew mentioned, SSLv2 is completely outdated and insecure.
Current releases of OpenSSL are even built without support for SSLv3 and
SSLv2 by default.


Regards,
Lukas

[1] http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.2-no-sslv3
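
For illustration (an editorial sketch, not from the thread): SSLv2 needs no configuration at all since it is disabled in the code, while SSLv3 can be refused explicitly on a bind line; the certificate path and backend name below are hypothetical:

  frontend https-in
    # SSLv2 is compiled out and cannot be enabled; no-sslv3 additionally refuses SSLv3
    bind :443 ssl crt /etc/haproxy/site.pem no-sslv3
    default_backend app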



Re: [PATCH] Fix SRV records again

2017-11-06 Thread Olivier Houchard
On Mon, Nov 06, 2017 at 03:19:25PM +0100, Olivier Houchard wrote:
> Hi,
> 
> The attached patch fixes a locking issue that prevented SRV records from
> working.
> 
> Regards,
> 
> Olivier
> 


And another one, that fixes a deadlock that occurs when checks trigger DNS
resolution.

Regards,

Olivier
From 3cedd71b5338f8689004837cdcaa0ae42e48e39c Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Mon, 6 Nov 2017 17:30:28 +0100
Subject: [PATCH] BUG/MINOR: dns: Don't lock the server lock in
 snr_check_ip_callback().

snr_check_ip_callback() may be called with the server lock, so don't attempt
to lock it again, instead, make sure the callers always have the lock before
calling it.
---
 src/dns.c    |  6 ++++++
 src/server.c | 10 ++++++----
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index 1d12c8421..0f93f3ce5 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1614,7 +1614,13 @@ static void dns_resolve_recv(struct dgram_conn *dgram)
 * from the cache */
tmpns = ns;
list_for_each_entry(req, >requesters, list) {
+   struct server *s = objt_server(req->owner);
+
+   if (s)
+   SPIN_LOCK(SERVER_LOCK, &s->lock);
req->requester_cb(req, tmpns);
+   if (s)
+   SPIN_UNLOCK(SERVER_LOCK, &s->lock);
tmpns = NULL;
}
 
diff --git a/src/server.c b/src/server.c
index adc9fd40c..1a78fb334 100644
--- a/src/server.c
+++ b/src/server.c
@@ -3589,6 +3589,8 @@ int snr_update_srv_status(struct server *s, int has_no_ip)
  * returns:
  *  0 on error
  *  1 when no error or safe ignore
+ *
+ * Must be called with server lock held
  */
 int snr_resolution_cb(struct dns_requester *requester, struct dns_nameserver *nameserver)
 {
@@ -3694,7 +3696,9 @@ int snr_resolution_error_cb(struct dns_requester *requester, int error_code)
s = objt_server(requester->owner);
if (!s)
return 1;
+   SPIN_LOCK(SERVER_LOCK, &s->lock);
snr_update_srv_status(s, 0);
+   SPIN_UNLOCK(SERVER_LOCK, &s->lock);
return 1;
 }
 
@@ -3703,6 +3707,8 @@ int snr_resolution_error_cb(struct dns_requester *requester, int error_code)
  * which owns  and is up.
  * It returns a pointer to the first server found or NULL if  is not yet
  * assigned.
+ *
+ * Must be called with server lock held
  */
 struct server *snr_check_ip_callback(struct server *srv, void *ip, unsigned char *ip_family)
 {
@@ -3712,8 +3718,6 @@ struct server *snr_check_ip_callback(struct server *srv, void *ip, unsigned char
if (!srv)
return NULL;
 
-   SPIN_LOCK(SERVER_LOCK, &srv->lock);
-
be = srv->proxy;
for (tmpsrv = be->srv; tmpsrv; tmpsrv = tmpsrv->next) {
/* we found the current server is the same, ignore it */
@@ -3751,13 +3755,11 @@ struct server *snr_check_ip_callback(struct server *srv, void *ip, unsigned char
 (tmpsrv->addr.ss_family == AF_INET6 &&
  memcmp(ip, &((struct sockaddr_in6 *)&tmpsrv->addr)->sin6_addr, 16) == 0))) {
SPIN_UNLOCK(SERVER_LOCK, &tmpsrv->lock);
-   SPIN_UNLOCK(SERVER_LOCK, &srv->lock);
return tmpsrv;
}
SPIN_UNLOCK(SERVER_LOCK, &tmpsrv->lock);
}
 
-   SPIN_UNLOCK(SERVER_LOCK, &srv->lock);
 
return NULL;
 }
-- 
2.13.5



Encrypted Passwords Documentation Patch

2017-11-06 Thread Daniel Schneller
Hi!

Attached is a documentation patch for encrypted passwords in userlists.

It adds a warning about the potentially significant CPU cost the modern
algorithms with their thousands of hashing rounds can incur. In our case it
made the difference between haproxy’s CPU usage being hardly noticeable at all
to it almost eating a full core, even for a not very busy site.

Tested with 1.6, but this applies to all versions, if I am not mistaken.

Cheers,
Daniel
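
For illustration (an editorial sketch, not part of the patch): this is the kind of userlist entry where the cost shows up, since every authenticated request runs through all of the hash's rounds. The user name and the (truncated) hash below are purely illustrative; such a hash can be generated with e.g. mkpasswd -m sha-512:

  userlist admins
    # SHA-512 crypt hashes run thousands of rounds per check; under load this
    # can consume a noticeable amount of CPU in the haproxy process
    user daniel password $6$Fv3p9Xn2$Wq...truncated-for-illustration...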


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431



0001-DOC-Add-note-about-encrypted-password-CPU-usage.patch
Description: Binary data


Re: Diagnose a PD-- status

2017-11-06 Thread Mildis
Hi Christopher,

Configuration is attached.
The domain2backend map sends data mostly to bck-traefik.

$ haproxy -vv
HA-Proxy version 1.7.91~bpo7+1 2017/08/24
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat 
-Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 
USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1t  3 May 2016 (VERSIONS DIFFER!)
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
Running on PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[COMP] compression
[TRACE] trace
[SPOE] spoe




haproxy.cfg
Description: Binary data


Mildis


> On 6 Nov 2017 at 10:10, Christopher Faulet wrote:
> 
> Hi,
> 
> 
> On 02/11/2017 at 17:16, Mildis wrote:
>> [WARNING] 305/144718 (21260) : HTTP compression failed: unexpected behavior 
>> of previous filters
> 
> This warning is very suspicious. It should not happen. Could you share your 
> configuration and "haproxy -vv" output please ?
> 
> 
> -- 
> Christopher Faulet



Re: [PATCH] MINOR: mworker: do not store child pid anymore in the pidfile

2017-11-06 Thread Pavlos Parissis
On 06/11/2017 03:19 μμ, Willy Tarreau wrote:
> Hi Pavlos,
> 
> On Mon, Nov 06, 2017 at 03:09:10PM +0100, Pavlos Parissis wrote:
>> That will be very much appreciated as it will allow us to have a smooth
>> migration to the new master process model.
> 
> In fact the current behaviour is to continue to dump the pids if you're
> not working in master-worker. If you need to perform some adaptations
> for master-worker, I think it would be easier to do them only once to
> support the single pid instead of having a temporary period where you
> have to deal with N+1 pids and be careful not to touch the workers by
> accident.
> 

Valid point. So, no objections from me about this commit :-)

Cheers,
Pavlos





Re: [PATCH] MINOR: mworker: do not store child pid anymore in the pidfile

2017-11-06 Thread Pavlos Parissis
On 06/11/2017 01:35 μμ, William Lallemand wrote:
> On Mon, Nov 06, 2017 at 12:11:13PM +0100, Pavlos Parissis wrote:
>> On 06/11/2017 11:16 πμ, William Lallemand wrote:
>>> The parent process supervises the children itself, so we don't need to
>>> store the children's pids in the pidfile anymore in master-worker mode.
>>
>> I have a small objection against this. Having PIDs in a file allows external
>> tools to monitor the
>> workers. If those PIDs aren't written to the pidfile then those tools have
>> to use pgrep to find
>> them, unless the master process can provide them in some way (stats socket?).
>>
>> My 2cents,
>> Pavlos
>>
> 
> Hi Pavlos,
> 
> This patch was made to prevent scripts from sending signals to the children
> directly
> instead of sending them to the master, which will forward them. That could end
> up reporting a wrong exit code in the master, for example.
>
> One of the problems with the pidfile is that only the latest children were
> written, so you wouldn't have the remaining PIDs in this list after a reload.
> With pgrep -P you will have the full list of processes attached to the master.
> 
> Unfortunately there is no stats socket on the master (yet?), so it's not
> possible to do what you suggest.
> 
> However, we can maybe add an option to write all new PIDs in the pidfile if
> it's easier for supervision.
> 

That will be very much appreciated as it will allow us to have a smooth 
migration to the new master
process model.

Cheers,
Pavlos
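
For reference (an editorial sketch, not from the thread), supervision based on the master pid rather than on the worker pids could look like this, assuming the pidfile location used in patch 3/3:

  # the first (and, with these patches, only) pid in the file is the master's
  master=$(head -1 /tmp/haproxy.pid)

  pgrep -P "$master"        # list the worker processes attached to the master
  kill -USR2 "$master"      # reload: always signal the master, never the workers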






Re: Bad request error due to a disallowed character in the request

2017-11-06 Thread Lukas Tribus
Ciao Nicolo,


2017-11-06 14:19 GMT+01:00 Nicolo Ballestriero :
> Good morning,
>
> My HAProxy drops HTTP connections made from a Hikvision device because I
> have a disallowed character in the request. I tried to set the HAProxy option according to
> this:

This is really in the request line: it's a space between HTTP/1.1 and
\r\n. I don't see how haproxy would ever support such broken requests
at all.

The option to accept invalid HTTP requests relaxes some checks in
haproxy; however, it does not mean that haproxy can parse every invalid
request that is out there.


If your backend server is able to handle the requests, you may switch
to tcp mode, so that haproxy does not look into the HTTP request at
all. But other than that, I don't have any suggestions.


Best regards,
Lukas
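
For illustration (an editorial sketch with hypothetical names and ports, not from the thread), the TCP-mode pass-through Lukas describes could look like this; haproxy then forwards the bytes without ever parsing the request line:

  listen onvif-passthrough
    mode tcp
    bind :8080
    # in tcp mode the broken HTTP request line is never inspected by haproxy
    server camera 192.168.5.239:80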



Bad request error due to a disallowed character in the request

2017-11-06 Thread Nicolo Ballestriero

Good morning,

My HAProxy drops HTTP connections made from a Hikvision device because
I have a disallowed character in the request. I tried to set the HAProxy option according
to this:


http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4-option%20accept-invalid-http-request

The result didn't change; can you please give me a hint? I can't change
the device behavior, but I need the request to be accepted.


Thanks

Nicolo'

echo "show errors" | socat unix-connect:/var/run/haproxy.stat stdio
Total events captured on [06/Nov/2017:14:16:05.751] : 12
 
[06/Nov/2017:14:16:05.342] frontend http (#4): invalid request
  backend  (#-1), server  (#-1), event #11
  src 192.168.5.70:33373, session #18, session flags 0x0088
  HTTP msg state 26, msg flags 0x, tx flags 0x
  HTTP chunk len 0 bytes, HTTP body len 0 bytes
  buffer flags 0x00908002, out 0 bytes, total 1278 bytes
  pending 1278 bytes, wrapping at 16392, error at position 35:
 
  0  POST /onvifevents/test.cgi HTTP/1.1 \r\n
  00038  Accept-Encoding: gzip,deflate \r\n
  00070  Content-Type: application/soap+xml; charset=utf-8\r\n
  00121  Host: 192.168.5.239 \r\n
  00143  Content-Length: 1092\r\n
  00165  Connection: close\r\n
  00184  \r\n
  00186  (... the remainder of the captured buffer is the ONVIF SOAP request body; its XML markup was stripped by the list archive and is omitted here ...)

Re: HAProxy doesn't support SSLv2 - confirmation

2017-11-06 Thread Andrew Smalley
Hello Jean

From what I read, SSLv2 is unused and SSLv3 can be enabled, with a warning, as
shown below:

force-sslv3 :

Enforces the use of SSL protocol version SSLv3.

Note

Not recommended on Internet because of the poodle vulnerability:
https://poodle.io/


SSLv2 has not been used on the internet in quite a while now and, as per
the warning, SSLv3 is not enabled by default but can be turned on.


https://www.haproxy.com/documentation/aloha/7-0/haproxy/tls/


Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog


On 6 November 2017 at 11:25, Jean Martinelli 
wrote:
>
> Hello
>
> HAProxy does not support enabling SSLv2 natively. Could you confirm? Is
there a documentation link for reference?
>
>
>
> Att
>
> Jean Martinelli
> Consultoria
>
>
>
> +55 (47) 99948-6156 | +55 (47) 3035-3777
> jean.martine...@teiko.com.br
>
> http://www.teiko.com.br/


HAProxy doesn't support SSLv2 - confirmation

2017-11-06 Thread Jean Martinelli
Hello
HAProxy does not support enabling SSLv2 natively. Could you confirm? Is there a
documentation link for reference?

Att
Jean Martinelli
Consultoria

+55 (47) 99948-6156 | +55 (47) 3035-3777
jean.martine...@teiko.com.br
http://www.teiko.com.br/


Re: [PATCH] MINOR: mworker: do not store child pid anymore in the pidfile

2017-11-06 Thread Pavlos Parissis
On 06/11/2017 11:16 πμ, William Lallemand wrote:
> The parent process supervises the children itself, so we don't need to
> store the children's pids in the pidfile anymore in master-worker mode.

I have a small objection against this. Having PIDs in a file allows external
tools to monitor the
workers. If those PIDs aren't written to the pidfile then those tools have to
use pgrep to find
them, unless the master process can provide them in some way (stats socket?).

My 2cents,
Pavlos





[PATCH] MINOR: mworker: do not store child pid anymore in the pidfile

2017-11-06 Thread William Lallemand
The parent process supervises the children itself, so we don't need to
store the children's pids in the pidfile anymore in master-worker mode.
---
 src/haproxy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index bcbbad4a1..4d4bd3b26 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2649,7 +2649,7 @@ int main(int argc, char **argv)
else if (ret == 0) /* child breaks here */
break;
children[proc] = ret;
-   if (pidfd >= 0) {
+   if (pidfd >= 0 && !(global.mode & MODE_MWORKER)) {
char pidstr[100];
snprintf(pidstr, sizeof(pidstr), "%d\n", ret);
shut_your_big_mouth_gcc(write(pidfd, pidstr, strlen(pidstr)));
-- 
2.13.6




[PATCH 2/3] MINOR: mworker: allow pidfile in mworker + foreground

2017-11-06 Thread William Lallemand
This patch allows the use of the pidfile in master-worker mode without
using the background option.
---
 src/haproxy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index bbd26b82d..f12e903b2 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2499,7 +2499,7 @@ int main(int argc, char **argv)
}
 
/* open log & pid files before the chroot */
-   if (global.mode & MODE_DAEMON && global.pidfile != NULL) {
+   if ((global.mode & MODE_DAEMON || global.mode & MODE_MWORKER) && global.pidfile != NULL) {
unlink(global.pidfile);
pidfd = open(global.pidfile, O_CREAT | O_WRONLY | O_TRUNC, 0644);
if (pidfd < 0) {
-- 
2.13.6




some mworker + pidfile patches

2017-11-06 Thread William Lallemand
A few patches which help using the pidfile in master-worker mode.




[PATCH 3/3] MINOR: mworker: write parent pid in the pidfile

2017-11-06 Thread William Lallemand
The first pid in the pidfile is now the parent's; that's more convenient for
supervising the process.

You can now reload haproxy in master-worker mode with convenient command
like: kill -USR2 $(head -1 /tmp/haproxy.pid)
---
 src/haproxy.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/src/haproxy.c b/src/haproxy.c
index f12e903b2..bcbbad4a1 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2631,6 +2631,13 @@ int main(int argc, char **argv)
}
}
 
+   /* if in master-worker mode, write the PID of the father */
+   if (global.mode & MODE_MWORKER) {
+   char pidstr[100];
+   snprintf(pidstr, sizeof(pidstr), "%d\n", getpid());
+   shut_your_big_mouth_gcc(write(pidfd, pidstr, strlen(pidstr)));
+   }
+
/* the father launches the required number of processes */
for (proc = 0; proc < global.nbproc; proc++) {
ret = fork();
-- 
2.13.6




Re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-06 Thread Emmanuel Hocdet

Hi Robert,

> On 4 Nov 2017 at 14:33, Robert Newson wrote:
> 
> It’s only 1.0.1 that’s affected, so I’m inferring that predates support for 
> serving multiple certificate types; it’s not an haproxy regression. 
> 

yes, multiple certificate bundles only work with openssl >= 1.0.2

> I’ve failed in all attempts to try tls 1.3 though. Haproxy segfaults very 
> soon after startup. I tried several OpenSSL versions.
> 
> Sent from my iPhone

++
Manu




Re: Diagnose a PD-- status

2017-11-06 Thread Christopher Faulet

Hi,


On 02/11/2017 at 17:16, Mildis wrote:

[WARNING] 305/144718 (21260) : HTTP compression failed: unexpected behavior of 
previous filters


This warning is very suspicious. It should not happen. Could you share 
your configuration and "haproxy -vv" output please ?



--
Christopher Faulet