RE: no free ports && tcp_timestamps

2015-10-22 Thread Lukas Tribus
> Hi Baptiste,
>
> I'll try your suggestion, but I'd like to understand why, if I enable
> tcp_timestamps, I have no problems, and if I disable it, after a few
> minutes on the live system I get the problem.

Clearly this is a kernel issue. Check your kernel logs/dmesg.


Lukas

  


RE: Upgrade from 1.4 -> 1.6, any gotchas?

2015-10-21 Thread Lukas Tribus
> On Wed, Oct 21, 2015 at 7:14 PM, SL  wrote:
>> I'll be doing an upgrade from 1.4 to 1.6 tomorrow. Just wondering if there
>> are any changed defaults, breaking changes, anything like that? Or should
>> my config work as before?
>
> Haproxy 1.5 changed the default connection mode if you use http. Up to
> version 1.5-dev21 "option tunnel" was the default, now it's "option
> http-keep-alive".
> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4
>
> This bit me a bit when I upgraded from 1.4 to 1.5, as the tunnel mode
> works a bit differently. But with "option http-keep-alive" and
> "option prefer-last-server" specified, everything was fine.

Yeah, I think that's about the most important change, even between 1.4
and 1.6.
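For reference, pinning the connection mode explicitly avoids surprises after the upgrade; a minimal sketch (section contents are illustrative, not a complete configuration):

```
defaults
    mode http
    option http-keep-alive      # the default since 1.5-dev21; stated explicitly
    option prefer-last-server   # try to reuse the last server for a connection
    # option http-tunnel        # only if you really need the old 1.4-style behavior
```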

Definitely use latest 1.6.1 if you want to go with 1.6 already.

I don't expect the new release to silently break old configurations,
but you may want to do some testing before going into production.


Everything else depends on what you are trying to do exactly and how
critical your production traffic is.


This may be obvious, but since it has been done before: try not to upgrade
to a new major release and enable new features at the same time. Just
upgrade the code and run it in production for a while. Then go and test
new features. If something breaks, you can roll back the last change. If you
do everything at the same time, you will have no idea what's causing the
problem.


Regards,

Lukas

  


RE: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-21 Thread Lukas Tribus
Hi Robin,


> Hey guys,
>
> Actually when you get an NXDOMAIN reply you can just stop resolving that
> domain. Basically there are 2 types of "negative" replies in DNS:
>
> NODATA: basically this is when you don't get an error (NOERROR in dig),
> but not the actual data you are looking for. You might have gotten some
> CNAME data but no A or AAAA record (depending on what you wanted
> obviously). This means that the actual domain name does exist, but
> doesn't have data of the type you requested. The term NODATA is used in
> DNS RFC's but it doesn't actually have its own error code.
>
> NXDOMAIN: This is denoted by the NXDOMAIN error code. It means that
> either the domain you requested itself or the last target domain from a
> CNAME does not exist at all (IE no data whatsoever) and there also isn't
> a wildcard available that matches it. So if you asked for an A record,
> getting an NXDOMAIN means there also won't be an AAAA record.
>
> The above explanation is a bit of an over simplification cause there are
> also things like empty non-terminals which also don't have any data, but
> instead of an NXDOMAIN actually return a NODATA (in most cases, there
> are some authoritative servers that don't do it properly). But the end
> result is that you can pretty much say that when you get NXDOMAIN, there
> really is nothing there for you so you can just stop looking (at least
> at the current server).

Thanks for clarifying, I didn't know about this. Good thing we didn't
implement anything yet.

Baptiste, what's the current behavior when an empty response with
NOERROR is received?
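Robin's rules can be sketched as a small decision function (my own naming, purely illustrative; this is not HAProxy code):

```python
# DNS rcodes (RFC 1035): NOERROR = 0, NXDOMAIN = 3
NOERROR, NXDOMAIN = 0, 3

def next_action(rcode, answers):
    """Decide what a resolver-driven address update should do next."""
    if rcode == NXDOMAIN:
        # The name has no data of ANY type, so querying the other family is pointless.
        return "stop"
    if rcode == NOERROR and not answers:
        # NODATA: the name exists but not with this record type; try the other family.
        return "try-other-qtype"
    return "use-answers"

print(next_action(NXDOMAIN, []))             # stop
print(next_action(NOERROR, []))              # try-other-qtype
print(next_action(NOERROR, ["192.0.2.1"]))   # use-answers
```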



Regards,

Lukas

  


RE: 1.6.0 Error: Cannot Create Listening Socket for Frontend and Stats,Proxies

2015-10-20 Thread Lukas Tribus
> Dear Willy,
>
> Thank you for your insights. As you advised, below is the output of
> haproxy -f …cfg -db -V.

Can you run this through strace (strace haproxy -f …cfg -db -V) and
provide the output.

Also, if you have the strace output of a successful startup of 1.5.14 for
comparison, that would be very helpful as well.


Regards,

Lukas

  


RE: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-20 Thread Lukas Tribus
> Hi Andrew,
>
> On Mon, Oct 19, 2015 at 05:39:58PM -0500, Andrew Hayworth wrote:
>> The ANY query type is weird, and some resolvers don't 'do the legwork'
>> of resolving useful things like CNAMEs. Given that upstream resolver
>> behavior is not always under the control of the HAProxy administrator,
> we should not use the ANY query type. Rather, we should use A or AAAA
> according to either the explicit preferences of the operator, or the
> implicit default (AAAA/IPv6).
>
> But how does that fix the problem for you ? In your example below,
> the server clearly doesn't provide any A nor AAAA in the response
> so asking it for A or AAAA should not work either if it doesn't
> recurse, am I wrong ?

I don't think this is CNAME specific. ANY will just return what the
recursor has in the cache. If it isn't in the cache, ANY won't make
the recursor ask upstream DNS servers; only A and AAAA (or MX or
any other real qtype) will.

Just switching away from ANY is not enough, we still need to fall back
from AAAA to A and vice versa on NX responses for single homed
nodes.



Lukas

  


RE: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-20 Thread Lukas Tribus
> I don't know. I'm always only focused on the combination of user-visible
> changes and risks of bugs (which are user-visible changes btw). So if we
> can do it without breaking too much code, then it can be backported. What
> we have now is something which is apparently insufficient to some users
> so we can improve the situation. I wouldn't want to remove prefer-* or
> change the options behavior or whatever for example.

OK, if we don't remove the existing prefer-* keywords, a 1.6 backport sounds
possible without user-visible breakage, great.


lukas

  


RE: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-20 Thread Lukas Tribus
Hi,


>> A simple option in the resolvers section to instruct HAProxy to not
>> give up on NX and fail over to the next family:
>> option on-nx-try-next-family
>
> I personally find this confusing from the user's point of view.

Agreed, we should have good and safe defaults, and address corner
cases with additional options, not the other way around.



> When you know that you can only use IPv4 to join the next server, I
> think this :
>
> server remote1 remote1.mydomain check v4only
>
> is more obvious than this :
>
> option on-nx-try-next-family
> server remote1 remote1.mydomain check prefer-ipv4

Actually I think "v4only" would be "prefer-ipv4" without
on-nx-try-next-family, right? Anyway, I agree.

Without automatic AF fallback and without ANY queries, the
"prefer" keyword is actually restricting, not preferring.


> Also, it covers the case where some servers are known to support both
> protocols while others are limited. This allows for example to join
> the same remote server over two possible families behind a DSL line
> which uses a random IP address after each reconnection :
>
> server home-v4 home-v4.mydomain check v4only
> server home-v6 home-v6.mydomain check v6only
>
> And since we already have v4only/v6only on bind lines, the analogy
> seems easy to remember.

The behavior with v4only or v6only is quite obvious, we just query that
particular address family, but let me clarify: you are implying that
without v4only/v6only keyword, we query one address family and then
fallback to the other address family in case we get a NX response, right?

I think that's a good solution.
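Put together, Willy's proposal would read roughly like this in a configuration (hypothetical syntax that was only under discussion at the time; check the released documentation before relying on it):

```
resolvers mydns
    nameserver ns1 192.0.2.53:53

backend remote
    server remote1 remote1.mydomain check resolvers mydns          # query one family, fall back on NX
    server remote2 remote2.mydomain check resolvers mydns v4only   # only ever query A
    server remote3 remote3.mydomain check resolvers mydns v6only   # only ever query AAAA
```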


Question: are we still talking about 1.6 here? It seems we have to
make some intrusive changes that may break configurations (but they
seem mandatory to get consistent and predictable behavior).

Given the number of people that have already hit the ANY issue (3 or more?),
I would say we are better off breaking a small number of configurations
between 1.6 and 1.6.1 than having to deal with the fallout of the ANY issue
(because the ANY removal changes resolve-prefer behavior as well)
for the entire time that 1.6 is supported.



Regards,

Lukas

  


RE: haproxy + ipsec -> general socket error

2015-10-16 Thread Lukas Tribus
> when using ipsec on the backend side, this error pops up in the haproxy 
> log from time to time: 
> 
> Layer4 connection problem, info: "General socket error (No buffer space 
> available) 
> 
> 
> we have tried both strongswan and libreswan, error is still the same. 
> there is nothing strange in the ipsec logs, connection seems stable. 
> but as soon as we start generating some light traffic, haproxy loses 
> connectivity with the backend nodes. 
> we are running centos 7, standard repositories. 
> 
> any ideas what could be wrong? 

The error comes from the kernel, you will have to troubleshoot on
there (both strongswan and libreswan probably use the kernel's
ipsec stack, so that's why the behavior is the same).

- make sure you use the latest centos 7 kernel.
- try increasing /proc/sys/net/ipv4/xfrm4_gc_thresh
- report the issue (to CentOS/Red Hat)


There is nothing that can be done in userspace/haproxy (except maybe
lowering the load by using keep-alive and connection pooling).
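The xfrm garbage-collection threshold from the second bullet can be raised persistently via sysctl; a sketch (the file path and value are illustrative, size the value to your concurrent flow count):

```
# /etc/sysctl.d/99-xfrm.conf  (apply with: sysctl --system)
net.ipv4.xfrm4_gc_thresh = 32768
```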


Regards,

Lukas

  


RE: 1.6 segfaults

2015-10-15 Thread Lukas Tribus
> So you may be right on the two certs on the same line bug. Just removed 
> one of the certs and so far, so good. Can you verify? 

Are both or one of them (first or second one) wildcard certificates?



Thanks,

Lukas


  


RE: [call to comment] HAProxy's DNS resolution default query type

2015-10-15 Thread Lukas Tribus
Hi folks,


> Hey guys,
>
> by default, HAProxy tries to resolve server IPs using an ANY query
> type, then fails over to resolve-prefer type, then to "remaining"
> type.
> So ANY -> A -> AAAA or ANY -> AAAA -> A.

We can't really rely on ANY queries, no. Also see [1], [2].



> Today, Øyvind reported that weave DNS server actually answers with an
> NX response, preventing HAProxy from failing over to the next query type
> (this is by design).
>
> Jan, a fellow HAProxy user, already reported to me that ANY query types
> are less and less in fashion (for many reasons I'm not going to develop
> here).
>
> Among the many ways to fix this issue, the one below has my preference:
> a new resolvers section directive (a flag in that case) which prevents
> HAProxy from sending an ANY query type for the nameservers in this
> section, i.e. "option dont-send-any-qtype".
>
> Another option would be to make HAProxy fail over to the next query type
> in case of an NX response.

In my opinion we need both, because when we no longer use ANY, but
AAAA with A fallback (or vice versa), NX is actually an expected
and valid answer that is SUPPOSED to make us retry the next
qtype; otherwise we have the same exact problem as we are
having with ANY in the first place (as we can't and won't require that
all our backends are dual-stacked).

In many environments the administrator of the haproxy box is not
administering the backend servers as well, therefore we cannot tell users
"use resolve-prefer to set the address family correctly".

The reality is that we don't know whether the backend server is
ipv4-only, dual-stacked or ipv6-only, and if we stop
querying after an address-family-specific NX response, we
basically introduce a new problem.



Regards,

Lukas


[1] https://blog.cloudflare.com/deprecating-dns-any-meta-query-type/
[2] https://lists.dns-oarc.net/pipermail/dns-operations/2013-January/009501.html

  


RE: [call to comment] HAProxy's DNS resolution default query type

2015-10-15 Thread Lukas Tribus
> Jan, a fellow HAProxy user, already reported to me that ANY query types
> are less and less in fashion (for many reasons I'm not going to develop
> here).
>
> Among the many ways to fix this issue, the one below has my preference:
> a new resolvers section directive (a flag in that case) which prevents
> HAProxy from sending an ANY query type for the nameservers in this
> section, i.e. "option dont-send-any-qtype".

Actually, I would remove ANY altogether:

ANY will provide wrong results on RFC-compliant recursive resolvers,
more often than not.

For example, if an A record is in the cache, but an AAAA is
not cached, ANY will only return A, even if we "resolve-prefer"
ipv6.

It makes no sense to keep it, especially if it remains default.
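A toy model of why ANY misbehaves on an RFC-compliant recursive cache (illustrative only, not real DNS code): ANY answers from whatever happens to be cached and does not recurse upstream for missing types.

```python
# Cache keyed by (name, qtype); only an A record is cached, no AAAA.
cache = {("example.com", "A"): ["192.0.2.1"]}

def answer_any(name):
    # ANY returns every cached rrset for the name -- nothing more.
    return sorted(rr for (n, _t), rrs in cache.items() if n == name for rr in rrs)

def answer(name, qtype):
    # A real qtype misses the cache and triggers upstream recursion (simulated here).
    return cache.get((name, qtype), ["2001:db8::1"] if qtype == "AAAA" else [])

print(answer_any("example.com"))      # ['192.0.2.1'] -- no AAAA even if we prefer IPv6
print(answer("example.com", "AAAA"))  # ['2001:db8::1'] -- an explicit query does the legwork
```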



Regards,

Lukas




  


RE: Segfault bug in 1.6.0 release (SNI related maybe)

2015-10-15 Thread Lukas Tribus
Hi Øyvind,

> Hi,
>
> When testing the 1.6.0 release we encountered a segfault bug on the
> server when trying to run the https://www.ssllabs.com/ssltest/ test on
> our two sites running with two different SSL certs. The test runs fine
> when its run against one of the sites / certificates, but when run
> against the second site / cert the server segfaults.


It's the same issue as in "haproxy 1.6.0 crashes" and
"1.6 segfaults".

A fix is available here [1] and it's currently pending review.


Regards,

Lukas


[1] http://marc.info/?l=haproxy&m=144491072111043&w=2
  

RE: SIGUSR1 soft stop does not send "Connection: close"

2015-10-15 Thread Lukas Tribus
Hi,

>> If the session is transferring HTTP body between client and backend server, 
>> we
>> can't insert HTTP headers either. If you are waiting for the next request
>> in that particular session, why wouldn't we just close it after the HTTP body
>> has been transfered?
>
> That would be fine, does that work at present, if the connection
> is persistent?

First of all, SIGUSR1 is not supposed to kill any ongoing transfers.
So in the worst case, you would end up with the process not exiting
for some time.


But yes, since 1.4 at least, haproxy will *disable* keep-alive [1],
once SIGUSR1 is received.


Regards,

Lukas

[1] 
http://www.haproxy.org/git?p=haproxy-1.5.git;a=commit;h=c3e8b25c795461331b142bf0af82e21d7771f68a
  


RE: responses from disabled servers

2015-10-15 Thread Lukas Tribus
Hi David,


> I just want to say first of all that haproxy is incredibly useful and
> I've enjoyed working with it tremendously. Thank you!
>
> My question is if a server is disabled because of a failed http health
> check and there are requests in flight, will the requests from the
> disabled app be returned to the client?

Yes, the response will be returned. We don't kill in-flight connections
because of a failed health check.



> We are artificially marking servers as down in the event that the
> server is going into maintenance mode and are trying to avoid
> losing any requests.

Note that there is a proper way to do it, check out:
set server <backend>/<server> state [ ready | drain | maint ]

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-set%20server
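A typical maintenance cycle with the runtime API might look like this (a sketch: the socket path and the backend/server name "web/web01" are assumptions, adjust them to your setup):

```shell
# Generate the runtime commands for a drain -> maint -> ready cycle.
SOCK=/var/run/haproxy/admin.sock
for st in drain maint ready; do
  echo "set server web/web01 state $st"   # pipe each line into: socat stdio "$SOCK"
done
```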



Regards,

Lukas

  


RE: SIGUSR1 soft stop does not send "Connection: close"

2015-10-15 Thread Lukas Tribus
> From my reading of the code SIGUSR1 does not send a "Connection: close" to the
> client or server. This means it is not possible to safely close a keep-alive
> session, before terminating HAProxy.
>
> Would there be interest in a patch to send "Connection: close" on both the
> request and the response, once a SIGUSR1 is received?

What request/response, aren't we talking about an idle session here?

Initiating a close on the transport layer (when unencrypted) or the session
layer (when TLS encrypted) is perfectly fine [1]; it's also what both browsers
and webservers do when they time out an idle session.


Regards,

Lukas


[1] https://tools.ietf.org/html/rfc2616#section-8.1.4   
  


RE: [call to comment] HAProxy's DNS resolution default query type

2015-10-15 Thread Lukas Tribus
> I second this opinion. Removing ANY altogether would be the best case.
>
> In reality, I think it should use the OS's resolver libraries which
> in turn will honor whatever the admin has configured for preference
> order at the base OS level.
>
>
> As a sysadmin, one should reasonably expect that tweaking the
> preference knob at the OS level should affect most (and ideally, all)
> applications they are running rather than having to manually fiddle
> knobs at the OS and various application levels.
> If there is some discussion and *good* reasons to ignore the OS
> defaults, I feel this should likely be an *optional* config option
> in haproxy.cfg ie "use OS resolver, unless specifically told not to
> for $reason)

It's exactly like you are saying.

I don't think there is any doubt that HAProxy will bypass OS-level
resolvers, since you are statically configuring DNS server IPs in the
haproxy configuration file.

When you don't configure any resolvers, HAProxy does use libc's
gethostbyname() or getaddrinfo(), but both are fundamentally broken.

That's why some applications have to implement their own resolvers
(including nginx).

First of all the OS resolver doesn't provide the TTL value. So you would
have to guess or use fixed TTL values. Second, both calls are blocking,
which is a big no-go for any event-loop based application (for this
reason, it can only be queried at startup, not while the application
is running).
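The missing-TTL problem is visible in the libc API shape itself; the 5-tuples returned by getaddrinfo() simply have no field for it (Python shown for brevity, the underlying C call has the same shape):

```python
import socket

# Resolve "localhost" -- this blocks the calling thread until it completes.
results = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)

# Each entry is (family, type, proto, canonname, sockaddr): no TTL anywhere,
# so a long-running proxy cannot know when to re-resolve.
family, socktype, proto, canonname, sockaddr = results[0]
print(len(results[0]))  # 5
```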

Just configure a hostname without resolver parameters, and haproxy
will resolve your hostnames at startup via the OS (and then maintain those
IPs).


Applications either have to implement a resolver on their own (haproxy,
nginx), or use yet another external library, like getdnsapi [1].


The point is: there is a reason for this implementation, and you can
fall back to OS resolvers without any problems (just with their drawbacks).




Regards,

Lukas


[1] https://getdnsapi.net/
  


RE: SIGUSR1 soft stop does not send "Connection: close"

2015-10-15 Thread Lukas Tribus
> On Thu, Oct 15, 2015 at 12:26 PM, Lukas Tribus <luky...@hotmail.com> wrote:
>> What request/response, aren't we talking about an idle session here?
>
> No, I am concerned with a non idle persistent session.

When specifically would you intervene? Could you elaborate what you
have in mind?

If the session is transferring HTTP body between client and backend server, we
can't insert HTTP headers either. If you are waiting for the next request
on that particular session, why wouldn't we just close it after the HTTP body
has been transferred?


Lukas

  


RE: req_ssl_ver ACL not working

2015-10-14 Thread Lukas Tribus
Hi Julien,


> Still, I would like to take a look at the patch and get it fixed properly.

Your patch works for me if I only apply the one-line change at
"version = (data[9] << 16) + data[10];"

Can you confirm that this works for you as well and resubmit it
for inclusion?



Thanks,

Lukas

  


RE: req_ssl_ver ACL not working

2015-10-10 Thread Lukas Tribus
>> jve.linuxwall.info as SNI value? I suggest to remove the
>> SNI if statement while testing the TLS ACL.
>
> Argh... I can't count the number of times forgetting -servername in
> openssl s_client got me looking for a bug. This one included.
>
> "acl tls12 req.payload(9,2) -m bin 0303" works as expected. My patch
> still doesn't, but at least I have an environment that makes sense :)

Ok, great.
Still, I would like to take a look at the patch and get it fixed properly.

I will try to take a look at it next week.


Thanks,

Lukas

  


RE: HA-Proxy IP ranges for acl

2015-10-09 Thread Lukas Tribus
> acl allowed_clients hdr_sub(X-Real-IP) 10.10.200.0/24 [...]

This is a *string* comparison. You will have to use "req.hdr_ip" [1]:

acl allowed_clients req.hdr_ip(X-Real-IP,-1) 10.10.200.0/24 [...]



Regards,

Lukas


[1] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.6-req.hdr_ip

  


RE: req_ssl_ver ACL not working

2015-10-08 Thread Lukas Tribus
> Attached is a patch that should work but doesn't. (bear with me, I'm in
> unknown codebase territory here).
>
> I also tried to match directly using req.payload, and I can't get the
> ACL to match:
> acl tls12 req.payload(9,2) -m bin 0303

"req.payload(9,2) -m bin 0303" is imho correct, this should work.
You did configure inspect-delay [1], right? Something like:
tcp-request inspect-delay 2s


Regards,

Lukas

[1] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-tcp-request%20inspect-delay

  


RE: req_ssl_ver ACL not working

2015-10-08 Thread Lukas Tribus
> frontend https-in
> bind 0.0.0.0:443
> mode tcp
> tcp-request inspect-delay 5s
> tcp-request content accept if { req_ssl_hello_type 1 }
>
> acl sni_jve req.ssl_sni -i jve.linuxwall.info
> acl tls12 req.payload(9,2) -m bin 0303
> acl sslv3 req_ssl_ver 3.0
>
> use_backend jve_https if sni_jve tls12
> use_backend jve_https_sha1_ssl3 if sslv3
> # fallback to backward compatible sha1
> default_backend jve_https_sha1

Are you sure your TLSv1.2 client is actually sending
jve.linuxwall.info as SNI value? I suggest to remove the
SNI if statement while testing the TLS ACL.

The ACL works fine for me:

frontend https-in
 bind 10.0.0.55:443
 mode tcp
 tcp-request inspect-delay 5s
 tcp-request content accept if { req_ssl_hello_type 1 }
 
 acl tls12 req.payload(9,2) -m bin 0303
 use_backend google if tls12
 
 default_backend microsoft

backend google
 server google google.com:443

backend microsoft
 server hotmail microsoft.com:443


"curl -k -v https://10.0.0.55 --tlsv1.2" --> connects to Google
"curl -k -v https://10.0.0.55 --tlsv1.1" --> connects to MS



  


RE: HA-Proxy IP ranges for acl

2015-10-08 Thread Lukas Tribus
> Hi!
>
> I'd like to report a bug I do experience,
> maybe I'm not the first one to report it:
> it's about IP network ranges and acl in haproxy (1.5.8).
> It's working… sometimes.
> I have no issue with ranges like /24 (like 10.10.200.0/24)
> But it is not working with a range like /22 ; /28 ; /27 or /25.
>
> For example without any ACL,
> this IP will reach backend : 213.254.248.97
>
> But with the range 213.254.248.96/27 with acl, it is rejected (#403).
> At this time acl are working fine with single IPs.
> And this IP adress (213.254.248.97) *should be* authorized by
> the 213.254.248.96/27 range, right?

You really need to post the actual configuration, because we don't
have any idea what you are trying to do and how you configured it.

But yes, 213.254.248.96/27 covers 32 IPs, from 213.254.248.96
through 213.254.248.127.
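The arithmetic is easy to double-check with a few lines (illustrative):

```python
import ipaddress

net = ipaddress.ip_network("213.254.248.96/27")
print(net.num_addresses)                               # 32
print(ipaddress.ip_address("213.254.248.97") in net)   # True
print(ipaddress.ip_address("213.254.248.128") in net)  # False: first address outside the /27
```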


Lukas

  


RE: redirect prefix in v1.5.14 and v1.4.22

2015-10-08 Thread Lukas Tribus
Hi Diana,


> Hello, 
>  
> I have two hosts, one has haproxy 1.4.22 installed and the other has  
> haproxy 1.5.14 installed. 
> The following rewrite config works as expected in 1.5.14, but not in v1.4.22:

You probably want to check whether both the 1.4.22 and 1.5.14 executables
have been built with PCRE.

Compare the "haproxy -vv" output for this.

I suspect the 1.4.22 binary has been compiled without PCRE. What happens
then is that haproxy falls back to the libc's regex engine, which may
not support the syntax you are using.


Regards,

Lukas

  


RE: OPTIM : IPv6 literal address parsing

2015-10-06 Thread Lukas Tribus
Hi Mildis,


>> And regarding "2001:db8::1234", you can't forbid it simply because you
>> don't know if 1234 is a port or not in this context, as you have
>> reported.
>
> Sure. In this very specific case 1234 can’t be a port as 2001:db8:: is
> then a subnet.

For the record: you can't know that unless you know the subnet mask.

I can assign 2001:db8::/128 to a loopback and bind a service to it,
I can bind 2001:db8::/127 to one box and connect it to a box with
2001:db8::1/127 on the other side.

I can also configure 2001:db8::/16 on a box on my private network,
where 2001:: is the subnet IP, not 2001:db8::.

A lot of valid configurations out there, you can't assume that all
configurations are simple and straightforward unicast LAN networks.

It is then the kernel's job to reject binds to addresses it considers
invalid (for example due to subnetting), but the application does not
(and imho *must not*) have to be subnet-aware.
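For instance, the /127 point-to-point case mentioned above is perfectly valid addressing; a quick check with Python's ipaddress module (purely illustrative):

```python
import ipaddress

# Two ends of a point-to-point link numbered out of 2001:db8::/127:
a = ipaddress.ip_interface("2001:db8::/127")   # "2001:db8::" used as a host address
b = ipaddress.ip_interface("2001:db8::1/127")

print(a.network == b.network)  # True: same /127, so "2001:db8::" is a bindable address here
```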



Regarding the patch: I think this is very useful and I like the square
brackets very much. I'm always scratching my head when I see an IPv6 bind
configuration in haproxy, and the square brackets fix this interpretation
problem once and for all (for users that want to use them; others just
keep using the current notation).


Thanks for this!



Regards,

Lukas

  


RE: req_ssl_ver ACL not working

2015-10-05 Thread Lukas Tribus
Hi Julien,



>> Maybe you can also try with "curl --tlsv1.2" which should use a 3.3
>> version.
>
> That's a very interesting detail. Indeed curl sets the HELLO version to
> 0x0303
> whereas OpenSSL uses 0x0301. Interestingly, both Firefox and Chrome also
> use 0x0301
> in the version of the record layer. In all cases though, the version of
> the handshake
> message is correctly set to 0x0303, as you would expect for TLS1.2.
>
> $ openssl s_client -connect jve.linuxwall.info:443 -tls1_2 -servername
> jve.linuxwall.info -debug|head
> CONNECTED(0003)
> write to 0x97a510 [0x984043] (342 bytes => 342 (0x156))
>  - 16 03 01 01 51 01 00 01-4d 03 03 22 95 43 27 f9   ....Q...M..".C'.
>       ^^ ^^                  ^^ ^^
>       record layer version   handshake version
>
> I would argue that HAProxy is doing the wrong thing here: the
> req_ssl_ver variable
> should return the handshake version, not the record layer version.

Agreed.


We really should ignore the record layer and use the client hello
version instead (smp_fetch_ssl_hello_sni() has code checking for both
if anyone has time to come up with a patch for req_ssl_ver).

We had similar bugs in the past in those code paths (parsing SSL manually,
see below).

For this use case specifically most SSL libraries and browsers try to be
most compatible, and since the record layer version doesn't impact the
handshake in any way (other than some hanging SSL servers if the record
layer is set to TLSv1.2), it is most often set to something low like
SSLv3 (GnuTLS) or TLSv1.0 (OpenSSL), because the record layer simply
doesn't matter.
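The two version fields sit at fixed offsets in the ClientHello record, so the distinction is easy to show on the exact bytes Julien captured above (a sketch, not HAProxy's parser):

```python
def hello_versions(rec: bytes):
    # TLS record header: type(1) version(2) length(2); then handshake header:
    # msg_type(1) length(3) client_version(2).
    record_ver = (rec[1] << 8) | rec[2]   # record layer version
    hello_ver = (rec[9] << 8) | rec[10]   # ClientHello (handshake) version
    return record_ver, hello_ver

# First bytes of the OpenSSL s_client capture quoted above:
sample = bytes.fromhex("16030101510100014d0303")
rv, hv = hello_versions(sample)
print(hex(rv), hex(hv))  # 0x301 0x303 -- TLSv1.0 record layer, TLSv1.2 handshake
```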


Also see:
- rfc5246#appendix-E
- http://marc.info/?l=haproxy&m=139710628814932&w=2
- commit 57d229747 ("BUG/MINOR: acl: req_ssl_sni fails with SSLv3 record 
version")




Regards,

Lukas

  


RE: TCP_NODELAY in tcp mode

2015-08-28 Thread Lukas Tribus
> Hello,
>
> The flag TCP_NODELAY is unconditionally set on each TCP (ipv4/ipv6)
> connection between haproxy and the server, and between the client and
> haproxy.

That may be true, however HAProxy uses MSG_MORE to disable and
enable Nagle based on the individual situation.

Use option http-no-delay [1] to disable Nagle unconditionally.
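At the socket level this boils down to the TCP_NODELAY option, which haproxy sets on every connection; a minimal sketch (Python for brevity -- haproxy itself is C and additionally toggles MSG_MORE per send() on Linux):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle outright on this socket, as haproxy does on its connections.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
s.close()
```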



Regards,

Lukas


[1] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20http-no-delay
 


RE: TCP_NODELAY in tcp mode

2015-08-28 Thread Lukas Tribus
>> Use option http-no-delay [1] to disable Nagle unconditionally.
>
> This option requires HTTP mode, but I must use TCP mode because our
> protocol is not HTTP (some custom protocol over TCP).

Ok, you may be hitting a bug. Can you provide haproxy -vv output?


Thanks,

Lukas

  


RE: TCP_NODELAY in tcp mode

2015-08-28 Thread Lukas Tribus
>> Ok, you may be hitting a bug. Can you provide haproxy -vv output?
>
> What do you mean? I get the following warning when trying to use this
> option in tcp backend/frontend:

Yes I know (I didn't realize you are using tcp mode). I don't mean the
warning is the bug, I mean the tcp mode is supposed to not cause any
delays by default, if I'm not mistaken.

You are running freebsd, so splicing (Linux) can't be an issue either.
Is strace available on your OS (afaik 64bit freebsd doesn't have strace)?

Can you try disabling kqueue [1], to see if the behavior changes? If
not, try disabling poll as well [2]. That way haproxy falls back to
select().

Having all syscalls (strace) and tcpdumps from the front and backend
traffic would be helpful. Especially interesting would be if haproxy sets
TCP_NODELAY and MSG_MORE. It should set the former, but not the
latter.



Regards,

Lukas





[1] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-nokqueue
[2] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#nopoll


  


RE: Reg: Invalid response received on specific page

2015-08-19 Thread Lukas Tribus
> ilan@ilan-laptop$ echo "show errors" | sudo socat /run/haproxy/admin.sock stdio
>
> Total events captured on [19/Aug/2015:15:36:43.378] : 3
>
> [19/Aug/2015:15:36:18.452] backend nodes (#4): invalid response
> frontend localnodes (#2), server web01 (#1), event #2
> src 127.0.0.1:40332, session #119, session flags 0x00ce
> HTTP msg state 26, msg flags 0x, tx flags 0x2800
> HTTP chunk len 0 bytes, HTTP body len 0 bytes
> buffer flags 0x8002, out 0 bytes, total 1024 bytes
> pending 1024 bytes, wrapping at 16384, error at position 0:

We need the complete output here, as it will show the error.

It appears haproxy doesn't like what your backend responds with.
Therefore we need to understand what that response looks like.

If you could share a tcpdump capture (-s 0) of the backend
traffic, that could be useful as well.


Regards,

Lukas 


RE: HTTPS to HTTP reverse proxy

2015-08-12 Thread Lukas Tribus
> yes. Sorry about that. I was changing my configuration and forgot to
> roll back some of the changes. But even after removing "ssl verify
> none", the problem is still there.

You will have to look at those specific requests that don't work (like
a CSS file), try what happens when you request them with curl
(curl -v https://), check haproxy and webserver logs.

There is no way to tell the reason for this behavior with the
information we have.



Regards,

Lukas

  


RE: HTTPS to HTTP reverse proxy

2015-08-11 Thread Lukas Tribus
Hi Roman,


> I am publishing the horde webmail application. The horde itself is served
> internally via the http protocol on apache.

I suspect the error is that you are enabling SSL on the backend servers
towards port 80? Remove "ssl verify none" from the backend
server configurations.



Lukas

  


RE: REg: Connection field in HTTP header is set to close while sending to backend server

2015-08-07 Thread Lukas Tribus
> Hi Baptiste,
>
> Thank you very much for the response. That was quick.
>
> I tried enabling but got the following error,

Looks like you're on haproxy 1.4. In your current configuration you are
now using tunnel-mode.

If this is a new deployment, I would recommend upgrading to haproxy
1.5.


Regards,

Lukas

  


RE: Cipher strings when cert has empty CN

2015-07-28 Thread Lukas Tribus
Hi,


> I spent more time debugging the problem.
> Here's the source snippet from the 1.5.2 version of haproxy
> (I believe the latest 1.5.14 has the same issue).

This is fixed by commit 8068b03467 (BUG/MINOR: ssl: correctly
initialize ssl ctx for invalid certificates) [1], which is in
Haproxy 1.5.7 and later.


Regards,

Lukas


[1] http://www.haproxy.org/git?p=haproxy-1.5.git;a=commit;h=8068b03467  
  


RE: ocsp

2015-07-20 Thread Lukas Tribus
> Hi Lukas,
>
> I made a mistake in my previous email: it works locally AND remotely!

What fixed the problem? This may be useful for others as well.


Lukas

  


RE: ocsp

2015-07-20 Thread Lukas Tribus
> Hi Lukas,
>
> frontend cluster:443
> bind 1.2.3.4:443 ssl strict-sni crt /home/provisionning/0.pem crt
> /home/provisionning/cluster.d
> default_backend cluster
> capture request header Host len 255

Can you confirm there is no SSL-intercepting device in front of the
webserver, like hardware firewalls/UTMs and whatnot?

Could you try with just a single certificate (a single crt config pointing
to a single certificate file, not a directory)?

Can you run the openssl tests from the server itself, connecting locally
without any intermediate devices?



Thanks,

Lukas

  


RE: ocsp

2015-07-20 Thread Lukas Tribus
Hi Marc,


> Hi Lukas,
>
> great intuition :)
>
> ---
>
> CONNECTED(0003)
> TLS server extension server name (id=0), len=0
> TLS server extension renegotiation info (id=65281), len=1
> 0001 - SPACES/NULS
> TLS server extension EC point formats (id=11), len=4
>  - 03 00 01 02
> TLS server extension session ticket (id=35), len=0
> TLS server extension status request (id=5), len=0
> TLS server extension heartbeat (id=15), len=1
>  - 01 .
> depth=2 C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
> verify return:1
> depth=1 C = BE, O = GlobalSign nv-sa, CN = AlphaSSL CA - SHA256 - G2
> verify return:1
> depth=0 OU = Domain Control Validated, CN = *.makeprestashop.com
> verify return:1
> OCSP response:
> ==
> OCSP Response Data:
> OCSP Response Status: successful (0x0)
> Response Type: Basic OCSP Response
> Version: 1 (0x0)
> Responder Id: 9F10D9EDA5260B71A677124526751E17DC85A62F
> Produced At: Jul 20 16:42:53 2015 GMT
> Responses:
> Certificate ID:
> Hash Algorithm: sha1
> Issuer Name Hash: 84D56BF8098BD307B766D8E1EBAD6596AA6B6761
> Issuer Key Hash: F5CDD53C0850F96A4F3AB797DA5683E669D268F7
> Serial Number: 11210839AC1CC2D1DC09BA07A33700E3E681
> Cert Status: good
> This Update: Jul 20 16:42:53 2015 GMT
> Next Update: Jul 21 04:42:53 2015 GMT
>
> [...]
>
> ---
>
> It works locally or remotely !

Not sure I understand. Does that mean it works locally, but not remotely?



Regards,

Lukas


  


RE: ocsp

2015-07-17 Thread Lukas Tribus
Hi Marc,



 Hi all,

 I have some problem making ocsp stapling working. here is what i did :

 I have 8150.pem with chain, cert and key in it.

 I have 8150.pem.ocsp that seems ok :

 # openssl ocsp -respin 8150.pem.ocsp -text -CAfile alphassl256.chain
 OCSP Response Data:
 OCSP Response Status: successful (0x0)
 Response Type: Basic OCSP Response
 Version: 1 (0x0)
 Responder Id: 9F10D9EDA5260B71A677124526751E17DC85A62F
 Produced At: Jul 9 09:47:04 2015 GMT
 Responses:
 Certificate ID:
 Hash Algorithm: sha1
 Issuer Name Hash: 84D56BF8098BD307B766D8E1EBAD6596AA6B6761
 Issuer Key Hash: F5CDD53C0850F96A4F3AB797DA5683E669D268F7
 Serial Number: 11216784E7CA1813F3AD922B60EAF6428EE0
 Cert Status: good
 This Update: Jul 9 09:47:04 2015 GMT
 Next Update: Jul 9 21:47:04 2015 GMT

 No error/warn at haproxy launching but not sure haproxy is loading .ocsp file 
 because no notice in log.

 But nothing in tlsextdebug :

 echo Q | openssl s_client -connect www.beluc.fr:443 -servername www.beluc.fr 
 -tlsextdebug -status -CApath /etc/ssl/certs
 [...]
 OCSP response: no response sent
 [...]

 Do you see smth wrong ? What can i do in order to debug?

Can you provide the output of haproxy -vv please and a
config snippet (the frontend ssl configuration)?

Do you see a warning if 8150.pem.ocsp contains garbage when you restart
haproxy?



Regards,

Lukas


  


RE: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-07-14 Thread Lukas Tribus
 Hey guys,

 I haven’t gotten any feedback for this feature. Unless there’s severe
 objections, I’ll go ahead and push this to up to master.

Emeric responded here:
http://marc.info/?l=haproxy&m=143643724320705&w=2

Not sure what you mean by pushing this to master...?



Lukas

  


RE: Test HAProxy configuration file

2015-07-13 Thread Lukas Tribus
 Hi Lukas, 
 
 the output of haproxy -c is not helpful. 
 Configuration file is valid“ 

I thought that's what you wanted.


 I need a more verbose output with a complete overview of the configuration. 
 I want to check if options configured in the default or global sections 
 works for all the backends for example. 

There is no such thing. Refer to the documentation to understand how
individual options propagate.


Lukas

  

RE: Test HAProxy configuration file

2015-07-13 Thread Lukas Tribus
Hi Erik,


 Hi, 
 
 is it possible to show and test the configuration of haproxy 
 like apache2ctl -S? 
 I want to check with which configuration options haproxy starts. 
 
 Thanks for help. 

Yes, see haproxy -h (haproxy -c).


Lukas

  

RE: Segfault when parsing a configuration file

2015-07-11 Thread Lukas Tribus
Hi Tomas,


 Hello,

 we have a server with some config running an old version (1.4.25-1) of
 haproxy under Debian wheezy. The reason we've not updated it is that any
 new versions we had access to would crash.

 Today I was able to pinpoint where the problem lies:

Thanks for the detailed repro. This bug is fixed in release 1.5.10 by commit
ed061c0590 (BUG/MEDIUM: config: do not propagate processes between stopped
processes) [1].

Quoting from the commit:
Immo Goltz reported a case of segfault while parsing the config where
we try to propagate processes across stopped frontends (those with a
disabled statement). The fix is trivial. The workaround consists in
commenting out these frontends, although not always easy.


You can get latest haproxy build for debian here [2].


Maybe Vincent could queue this fix for a debian backport?



Regards,

Lukas


[1] 
http://git.haproxy.org/?p=haproxy-1.5.git;a=commit;h=ed061c0590109dde6cd77cd963bebc46ba0cd0cc
[2] http://haproxy.debian.net/

  


RE: [PATCH] MINOR: Add sample fetch to detect Supported Elliptic Curves Extension

2015-07-09 Thread Lukas Tribus
   The deprecated req_ssl_* keywords were for compatibility with historic 
 versions
 and should not be introduced right now, so I'd rather not add it now to 
 remove
 it in next version. If you're OK with me removing it by hand I can fix it
 myself, but if you prefer to resubmit that's fine as well. Just let me know!


 Sure, you can remove it by hand, no problems there.

 Perfect, patch merged then!

I like this; I'm glad we have this possibility now. It isn't, however, an
alternative to Dave Zhu's work; it's rather an additional possibility.

We still ought to work with Dave to get his proposals merged, imho.



Thanks!

Lukas

  


RE: [SPAM] HAProxy soft server turnoff issues

2015-07-09 Thread Lukas Tribus
Hi Alexander,


 Hello! 
 
 My name is Alexander and I am writing on behalf of OWOX company, that 
 supports the most visited Ecommerce website in Ukraine 
 (rozetka.com.ua). 
 
 We are using haproxy as a well-performance server to balance load 
 between our database servers. We are using several DB-servers, and 
 sometimes we need to softly turn off one of them for maintenance. In 
 case, when technical problems occur (like extreme CPU usage or 
 something) while high load hour, we need to prevent application errors 
 and turn off our server from HAProxy softly. It means, we want to 
 complete previously sent requests over haproxy to this server and get 
 response from it, but we don't want to send new requests. 
 
 I could not find this case in documentation you provide, and did not 
 find a way to do that through the configuration. 

You can set the server mode to DRAIN from the admin socket, that
should achieve exactly what you want:

set server backend/server state [ ready | drain | maint ]
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-set%20server
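
A minimal sketch of how this looks in practice — the socket path and the
backend/server names below are assumptions, not from your setup:

```
# global section: expose the admin socket
global
    stats socket /var/run/haproxy.sock level admin

# Then, from a shell, drain the server (in-flight requests complete,
# no new connections are dispatched to it):
#
#   echo "set server backend_http/server3 state drain" | socat stdio /var/run/haproxy.sock
#
# and bring it back afterwards:
#
#   echo "set server backend_http/server3 state ready" | socat stdio /var/run/haproxy.sock
```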



Regards,

Lukas

  


RE: Issues with force-sslv3

2015-07-03 Thread Lukas Tribus
Hi,


 Hi there, 
 
 I'm running haproxy 1.5.12 and I have set 'ssl-default-bind-options 
 no-sslv3 no-tlsv10' (without the quotes of course) under the global 
 section as I want all my front-ends not to support SSLv3 or TLS1.0. 
 
 However I do have a client that still requires SSLv3 support (for their 
 own reasons). I have tried using force-sslv3 on the server line in the 
 backend that matches their site, however this does not seem to be 
 working as all.

I don't think this is a supported configuration. Afaik force-sslv3 doesn't
invert a previous no-sslv3 setting, and that is indeed the behavior you
are seeing, so I would say this is expected.

force-sslv3 sets SSLv3_method, no-sslv3 sets SSL_OP_NO_SSLv3 [1].
Setting both together doesn't make any sense. That's how the
OpenSSL API works.



Regards,

Lukas

 
[1] https://www.openssl.org/docs/ssl/SSL_CTX_new.html   
  


RE: [ANNOUNCE] haproxy-1.5.14

2015-07-03 Thread Lukas Tribus
 Hi, just to let you know changelog is missing 1.5.14 infos ;)

It's there; it's probably just cached in your browser (try Ctrl+Shift+R).

Lukas

  


RE: Issues with force-sslv3

2015-07-03 Thread Lukas Tribus
 Thanks Lukas,

 So its either SSLv3 is enable for all, or its disable for all?

No — you can disable it per bind line, but you need to do it
the other way around: specify no-sslv3 on all the other
bind lines (not in the defaults), and leave it off the one
where you need SSLv3.
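
A sketch of that layout, assuming two frontends (names, addresses and
certificate paths are examples, and ssl-default-bind-options in the global
section must no longer carry no-sslv3):

```
frontend ft_legacy          # the one client that still needs SSLv3
    bind 192.0.2.1:443 ssl crt /etc/ssl/legacy.pem

frontend ft_modern          # everything else, hardened per bind line
    bind 192.0.2.2:443 ssl crt /etc/ssl/site.pem no-sslv3 no-tlsv10
```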


Lukas

  


RE: Now follows SNI rules, except from curl on OSX

2015-07-03 Thread Lukas Tribus
That should have read:

 The capture shows that there is *no* SNI emitted by the client. I think your
 node.js SNI tests was bogus, and that curl doesn't properly support SNI
 *if* the crypto library is SecureTransport instead of openssl, gnutls or
 cyassl.

  


RE: Now follows SNI rules, except from curl on OSX

2015-07-03 Thread Lukas Tribus
 Yep, it's OS X curl. 
 
 curl 7.37.1 (x86_64-apple-darwin14.0) libcurl/7.37.1 
 SecureTransport zlib/1.2.5 
 Protocols: dict file ftp ftps gopher http https imap imaps ldap 
 ldaps pop3 pop3s rtsp smtp smtps telnet tftp 
 Features: AsynchDNS GSS-Negotiate IPv6 Largefile NTLM NTLM_WB SSL libz 
 
 I decided to take a dump: 
 
 sudo tcpdump -ps0 -i eth0 -w coolaj86.com.eth0.443.cap tcp port 443 
 
 https://dropsha.re/files/afraid-wolverine-84/coolaj86.com.eth0.443.cap

In this case, your client DOES send SNI, so that's why it works.

The big question is: why does curl on Mac sometimes send the SNI value
and sometimes not?

Maybe it has something to do with the -k/--insecure argument?

Which certificate are you getting if you do:
curl --insecure https://coolaj86.com



Regards,

Lukas
  


RE: Now follows SNI rules, except from curl on OSX

2015-07-03 Thread Lukas Tribus
  sudo tcpdump -ps0 -i eth0 -w eth0.64443.cap tcp port 64443 
  
 And then this on my Yosemite Mac 
  
  curl  
 --insecure https://baz.example.com:64443/ 
  
 And here's the result

The capture shows that there is now SNI emitted by the client. I think your
node.js SNI tests was bogus, and that curl doesn't properly support SNI
with the crypto library is SecureTransport instead of openssl, gnutls or
cyassl.

Try: curl https://sni.velox.ch/ -k

You will see that SNI doesn't work with this client.

Also see:
https://mumble.org.uk/blog/2014/03/12/gpg-and-openssl-and-curl-and-osx/



Lukas

  


RE: very simple SNI rules are only sometimes followed

2015-07-02 Thread Lukas Tribus
 oops, I still had the link to the pastebinit, which doesn't work on  
 binary files. 
  
 https://dropsha.re/files/orange-hound-85/64443-traffic.default.cap 
 https://dropsha.re/files/angry-dragon-19/64443-traffic.baz.cap 

Looks alright. Can you configure logging and check the result:

global
 log <syslog destination ip> local0
frontend foo_ft_https
 log global
backend foo_bk_default
 log global
backend foo_bk_bar
 log global
backend foo_bk_baz
 log global



Thanks,

Lukas

  


RE: Now follows SNI rules, except from curl on OSX

2015-07-02 Thread Lukas Tribus

 But when I use curl bundled with Yosemite (or from Brew) on my macbook, 
 it's not switching. 
 
 curl --insecure https://bar.example.com:64443 
 Default on 1443 
 
 These are the versions I'm testing with: 
 
 curl --version 
 curl 7.37.1 (x86_64-apple-darwin14.0) libcurl/7.37.1 
 SecureTransport zlib/1.2.5 
 
 /usr/local/opt/curl/bin/curl --version 
 curl 7.42.1 (x86_64-apple-darwin14.3.0) libcurl/7.42.1 
 SecureTransport zlib/1.2.5 
 
 Yet I have a node.js (io.js v2.3.1) service that switches based on SNI 
 which is working just fine with curl. 

Sounds like the client hello from curl on Mac looks different
than we expect, therefore SNI parsing fails. Can you provide
the same tcpdump captures again, this time from the Mac
curl client that fails?


Regards,
Lukas 


RE: very simple SNI rules are only sometimes followed

2015-07-02 Thread Lukas Tribus
 To limit verbosity I just captured one full request where it succeeded  
 and then another when it didn't 
  
  # this is the one that worked as expected 
  pastebinit dump.1.tls.bin 
  http://paste.ubuntu.com/11811750/ 
  
  # this is the one that went to default anyway 
  pastebinit dump.2.tls.bin 
  http://paste.ubuntu.com/11811751/ 
  
 Both were produced by curl --insecure https://baz.example.com:64443 
  
 I was expecting that the -k option would require just my server's key  
 and that it would be able to decrypt data to plaintext, however, I see  
 that it didn't decrypt, so perhaps I need to convert the keyfile to  
 another format or bundle the certificate with the keys?

The handshake negotiated an ECDHE cipher suite, so it's not possible to
decrypt it with just the private key.

No need to decrypt though; I just wanted to see the actual SNI value of the
client hello on the wire (or loopback, in this case). But it looks like
ssldump doesn't show the SNI value at all, so this doesn't help.

Can you provide a tcpdump capture of the frontend traffic
(tcpdump -ps0 -i lo -w 64443-traffic.cap tcp port 64443)?


Also, did you fix the backend IPs in the configuration? Although the particular
scenario is supposed to work (because the frontend destination IP is actually 
127.0.0.1),
I would rather not leave that variable in place while troubleshooting this.


Regards,

Lukas

  


RE: very simple SNI rules are only sometimes followed

2015-07-02 Thread Lukas Tribus

  sudo haproxy -db -f /etc/haproxy/haproxy.cfg 

Backend IPs are 0.0.0.0. That's probably not what you want.
Should be 127.0.0.1 if I understand correctly.



 I've edited /etc/hosts so that baz.example.com  
 points to 127.0.0.1 
  
 I've created a few bogus servers 
  
  npm install -g serve-https 
  serve-https -p 1443 -c 'Default on 1443'  
  serve-https -p 2443 -c 'bar on 2443'  
  serve-https -p 3443 -c 'baz on 3443'  
  
 And then I test, but I get random results. It only follows the SNI  
 rules sometimes 
  
  curl --insecure https://baz.example.com:64443 
  baz 
  
  curl --insecure https://baz.example.com:64443 
  Default on 1443

Can you post ssldumpcaptures of this traffic (working and
non working)?


Regards,
Lukas

  


RE: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-06-25 Thread Lukas Tribus
 Thank you for pointing this out, I missed it in my brief look of the code.
 To me, this is reason enough to move to 1.0.2 (in addition to all the
 other reasons given by you and Nenad).

 I'll start prototyping the code using 1.0.2.

Agreed.

What I would also urge is not to use any openssl internals at all. We
already have a few forward-compatibility issues with openssl (haproxy
linked with -DOPENSSL_NO_SSL_INTERN against current stable openssl,
or linking against the openssl 1.1.0 branch).

Openssl 1.1.0 is expected to be released by the end of 2015; we should
try hard not to introduce new compatibility issues - which mostly come
from accessing openssl internals. Of course we can't predict API breakage,
but we do already know that direct access to internal APIs will no longer
be possible.


Thanks for this work, Dave, it's much appreciated!


Regards,
Lukas

  


RE: Segfault with a badly configured password

2015-06-25 Thread Lukas Tribus

 This line in the userlist will cause the segfault when you try to view  
 stats as the user test: 
  
 user test password =testing 
  
  
  
 The segfault error from messages is: 
  
 Jun 25 21:33:41 dev-tsl-haproxy-001 kernel: [ 4147.107578]  
 haproxy[6780]: segfault at 0 ip 7f6ae5fcfef6 sp 7ffc0e6a04d8  
 error 4 in libc-2.17.so[7f6ae5e9e000+1b6000] 
  
  
  
 I’m running HAProxy version 1.5.2 in the AWS OpsWorks HAProxy layer on  
 Amazon Linux 2015.03 

This is fixed in 1.5.4.



Regards,

Lukas

  


RE: Need your help on HAProxy Load balancing algorithms

2015-06-24 Thread Lukas Tribus

 Hi Vinod,

 First, good luck in your PhD.
 For load-balancing algorithm, you want to read this part of the doc:
 http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#balance

 about the source code, it's available here:
 http://git.haproxy.org/?p=haproxy.git

Also check out doc/architecture.txt: although it's not up to date, it still
provides important information regarding the architecture.


Regards,

Lukas

  


RE: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-06-24 Thread Lukas Tribus
 Currently, I've coded it so that this only happens when the client does not
 specify an SNI, but I'm looking for guidance on what you would consider to be
 the best solution. This approach can certainly be taken to be compatible with
 SNI.

 Is this something that you would be interested in folding into the codebase?

 Well, you explained what it does but not the purpose. In what does this
 constitute an improvement, for what use case ? Does it fix a connection
 trouble for some clients, or does it improve security and/or performance ?

 I must say I don't really understand the purpose. Maybe you and/or Olivier
 who would like this as well and/or anyone else could put some insights here ?

Currently we mostly use RSA certificates. ECC (ECDSA) certificates are
different, and until RSA certificates are fully removed from the industry,
we will have to support both.

The change, if I understand correctly, allows serving the ECC/ECDSA certificate
when the client supports it (via ciphers list), and RSA otherwise.

Do we need this? Absolutely, yes. But we will have to verify exactly what the
best way to do this is, and how openssl can help. I believe openssl 1.0.2
introduces a new API which makes things simpler.

Apache 2.4 can already do this, nginx not yet.


Some discussions and further informations:

https://github.com/igrigorik/istlsfastyet.com/issues/38
http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004376.html
https://blog.cloudflare.com/ecdsa-the-digital-signature-algorithm-of-a-better-internet/
https://blog.joelj.org/2015/06/19/dual-rsaecdsa-certificates-in-apache-2-4/
https://securitypitfalls.wordpress.com/2014/10/06/rsa-and-ecdsa-performance/



Regards,

Lukas

  


RE: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-06-24 Thread Lukas Tribus
 Hey Willy,

 Lukas explained it pretty well, but I can expound on it some more.

 You can imagine a situation where HAProxy has 2 certificates of different
 key types; one ECDSA and one RSA. In the current codebase, if no SNI is
 used, the certificate that is used will be whichever certificate is the
 default (i.e. the one that is first specified in the config). So we would
 have 2 possible paths:

 1. ECDSA was specified first. This has the effect of only supporting
 cipher suites that has ECDSA. If the client does not support ECDSA, then
 it would mean that the connection will fail, even though the server has an
 RSA certificate.
 2. RSA was specified first. This means that ECDSA cipher suites would
 never be used, which can decrease performance for initial handshakes as
 well as have a negative security impact.

 What I propose would address both of these issues. If the client prefers
 RSA or only supports RSA, then the RSA certificate is presented. However,
 if the client supports ECDSA, then we would use the ECDSA certificate.

 Lukas,
 I believe the reason that apache calls out 1.0.2 is this line in the
 OpenSSL (1.0.1l to 1.0.2a) changelog:

 * Add support for certificate stores in CERT structure. This makes it
 possible to have different stores per SSL structure or one store in the
 parent SSL_CTX. Include distint stores for certificate chain verification
 and chain building. New ctrl SSL_CTRL_BUILD_CERT_CHAIN to build and store
 a certificate chain in CERT structure: returing(sic) an error if the chain
 cannot be built: this will allow applications to test if a chain is
 correctly configured.

 In openssl <= 1.0.1, we can load multiple certs/keys into a single
 SSL_CTX. However, they must all have the same cert-chain. I believe that
 this 1.0.2 feature addresses this issue via certificate stores, and so can
 then use the existing s3server cipher suite selection code to select the
 correct certificate/key inside the library. This would alleviate the need
 to hook into a callback, which is what I'm doing here.

Does your code correctly handle ECC vs RSA intermediate certificates
in all cases?


Lukas

  


RE: Odd SSL performance

2015-06-18 Thread Lukas Tribus
Hi Phil,


 Hello all:

 we are rolling out a new system and are testing the SSL performance with
 some strange results. This is all being performed on a cloud hypervisor
 instance with the following:

You are saying nginx listens on 443 (SSL) and 80, and you connect to those
ports directly from ab. Where in that picture is haproxy?



 Have tried adding the option prefer-last-server but that did not make a
 great deal of difference. Any thoughts please as to what could be wrong ?

Without keepalive it won't make any difference. Enable keepalive with ab (-k).



Lukas

  


RE: http-server-close when a request timeouts after a success HAProxy does not send 504

2015-06-17 Thread Lukas Tribus
Hi Brendan,


 Hi I am having an issue with HAProxy in http-server-close mode, when more
 then one request is sent in a stream and one timeouts after a success it
 re-sends that request. On the second request HAProxy send the 504 and the
 request is not resent again.

I'm sorry, I don't really get what you are saying. Are you firing 2 simultaneous
requests in a pipelined HTTP session? Or are those 2 consecutive requests in
a keep-alive session?



 Here is the wireshark logs detailing the events I am seeing.

Yeah, we'd need the full pcap to take a look at the content. Also, please
provide front- and backend traffic and the output from the haproxy (http) log.


Regards,

Lukas

  


RE: HAProxy Redirect Domain While Retaining Original Domain Name In URL

2015-06-16 Thread Lukas Tribus
Hi Brian,


 Thanks for your suggestion. Unfortunately it's not clear how I would
 use this command and format it when doing a redirect. I read the
 HAProxy 1.5 documentation, but it wasn't detailed enough.

You don't want a redirect, you explicitly asked for a rewrite:
retaining original domain name in URL.

This is a rewrite, not a redirect.

With my suggestion you are replacing the Host header.

Is that not what you want? What do you want?



Please Reply to All, including the mailing list; thank you.


Lukas

  


RE: Does haproxy use lt or et mode of epoll ?

2015-06-15 Thread Lukas Tribus
 Subject: Does haproxy use lt or et mode of epoll ? 
 
 thanks 

Level-triggered, if I understand the following commit correctly:

http://www.haproxy.org/git?p=haproxy.git;a=commit;h=6c11bd2f89eb043fd493d77b784198e90e0a01b2
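
As a quick illustration of what level-triggered means (using Python's
select.epoll purely for brevity — this is not haproxy's actual code): a
readable fd keeps being reported on every poll until its data is consumed,
whereas edge-triggered (EPOLLET) would report it only once per state change.

```python
import os
import select

# Level-triggered epoll (the default): a pending byte on the pipe's
# read end is reported on *every* poll until it is actually read.
r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)

os.write(w, b"x")            # make the read end readable
first = ep.poll(timeout=0)   # readable -> reported
second = ep.poll(timeout=0)  # LT: still readable -> reported again

os.read(r, 1)                # consume the pending byte
third = ep.poll(timeout=0)   # nothing pending -> empty list
print(first, second, third)
```

With EPOLLET in the register() call, `second` would come back empty, which
is exactly the behavioral difference the commit above is about.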


Lukas

  

RE: haproxy stats page returns 503 error

2015-06-15 Thread Lukas Tribus
Hi Atul,


 Hi, 
 
 
 
 using a browser to query the stats from haproxy, I'm facing a non 
 consistent behavior where about One time every 2 attempts I get a 503 
 error. 
 
 
 
 Can you please let me know how to correct this.

Can you provide configuration and logs of the failed request?



Lukas

  

RE: HAProxy Redirect Domain While Retaining Original Domain Name In URL

2015-06-12 Thread Lukas Tribus
Hi!


 Hello, 
 
 
 
 I’m trying to determine how to redirect from an incoming domain 
 (alias.com) to another domain (domain.com), yet retain the original 
 incoming domain (alias.com) in the user’s browser URL address bar. I 
 believe I need to use “http-request replace-header”, but not sure how 
 to format the whole command, especially with regex settings. Has 
 anyone done this before and how would I format the command? 

I would guess:
http-request replace-header Host alias\.com domain.com



Lukas

  


RE: Need Your Suggestion to upgrade HAProxy Version

2015-06-12 Thread Lukas Tribus
Hi,


 Hi Lukas

 We are getting following warning while running haproxy after migration
 from

 [WARNING] 163/012854 (8453) : Setting tune.ssl.default-dh-param to 1024
 by default, if your workload permits it you should set it to at least
 2048. Please set a value >= 1024 to make this warning disappear.

Please keep posting to the mailing list, so others can respond too.


To silence this warning, follow the suggestion in the warning and set the
parameter to 1024.


Read more about this here:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#tune.ssl.default-dh-param


My suggestion would be to generate the dh param yourself and add it to
the certificates, especially if you have to use 1024 bit dh groups.
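
Something like this (the file name is a placeholder; generate 2048-bit
parameters instead if all your clients support them):

```shell
# Generate custom 1024-bit DH parameters (this can take a few seconds)
openssl dhparam -out dhparam.pem 1024

# Append them to the PEM bundle haproxy loads; when DH parameters are
# present in the certificate file, haproxy uses them instead of defaults
cat dhparam.pem >> mysite.pem
```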



Lukas

  


RE: The cause for 504's

2015-06-11 Thread Lukas Tribus
Hi Jeff,


 504's are killing us and we have no clue why we get them 
 
 Here's a sample log entry: 
 
 Jun 10 17:27:33 localhost haproxy[23508]: 10.126.160.11:37139 
 [10/Jun/2015:17:26:03.027] http-in resub-bb-default/njorch0pe16 
 30935/0/1/-1/90937 504 194 - - sH-- 16/14/0/0/0 0/0 
 {569760396|297|RESUB|EMAIL|0|9001|0|0|1.0|NJ|60} POST /somepath 
 HTTP/1.1 

Read:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.5


This is a timeout on the server side. Increase timeout server.
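
The sH termination flags in your log line say exactly this: the server-side
timeout expired (s) while haproxy was waiting for the response headers (H).
Raising it is a one-liner (the value below is only an example — size it to
your slowest legitimate backend response):

```
defaults
    timeout server 120s
```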



Lukas

  


RE: Haproxy 1.6 segfault on FreeBSD

2015-06-11 Thread Lukas Tribus
Hi!


 Hi everyone,
 It seems that since some times haproxy 1.6 segfault on freebsd

 Eg: at commit 80b59eb0d20245b4040f8ee0baae0d36b6c446b5

I can't find that commit? Where are you pulling/cloning from?


Lukas

  


RE: Need Your Suggestion to upgrade HAProxy Version

2015-06-11 Thread Lukas Tribus
Hi Devendra,



 Hi Lukas, 
 
 Thanks For your valuable reply. 
 
 Please find attach config file of current HAProxy server. 
 Please let me know , whether i should upgrade my server with latest 
 stable version of HA Proxy. 
 
 Need your suggestion. 
 
 
 
 
 --
  
 HA Proxy Config 
 --
  
 
 global 
 daemon 
 maxconn 2 
 
 defaults 
 mode http 
 timeout connect 15000ms 
 timeout client 5ms 
 timeout server 5ms 
 timeout queue 60s 
 stats enable 
 stats auth Admin:MyPassword 
 stats refresh 5s 
 
 backend backend_http 
 mode http 
 cookie JSESSIONID prefix 
 balance leastconn 
 option forceclose 
 option persist 
 option redispatch 
 option forwardfor 
 server server3 192.168.1.21:80 cookie server3_cokkie maxconn 
 1024 check 
 server server4 192.168.1.22:80 cookie server4_cookie maxconn 
 1024 check 
 acl force_sticky_server3 hdr_sub(server3_cookie) TEST=true 
 force-persist if force_sticky_server3 
 acl force_sticky_server4 hdr_sub(server4_cookie) TEST=true 
 force-persist if force_sticky_server4 
 #Remove Some Reponse header for security 
 rspidel ^Server:.* 
 rspidel ^X-Powered-By:.* 
 rspidel ^AMF-Ver:.* 
 
 listen frontend_http *:80 
 mode http 
 maxconn 2 
 default_backend backend_http 
 
 listen frontend_https 
 mode http 
 maxconn 2 
 bind *:443 ssl crt /opt/haproxy-ssl/conf/ssl/naaptol.pem 
 #Adding Request Header to identify https request at Backend 
 reqadd X-Forwarded-Proto:\ https 
 reqadd X-Forwarded-Protocol:\ https 
 reqadd X-Forwarded-Port:\ 443 
 reqadd X-Forwarded-SSL:\ on 
 acl valid_domains hdr_end(host) -i gateway.naaptol.com 
 www.naaptol.com m.naaptol.com 
 redirect scheme http if !valid_domains 
 default_backend backend_http if valid_domains 
 
 --
  
 
 
 Awaiting for your valuable reply.

Config should work fine without any changes if
you update to the latest stable release.


If, after the upgrade, you want further optimization, I suggest replacing
option forceclose 

with
option http-keep-alive
option prefer-last-server


Also read:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-keep-alive


Regards,

Lukas

  


RE: Haproxy 1.6 segfault on FreeBSD

2015-06-11 Thread Lukas Tribus
 Hi Lukas,
 This is the last commit available on github for haproxy/haproxy
 https://github.com/haproxy/haproxy/commit/80b59eb0d20245b4040f8ee0baae0d36b6c446b5

That is an unofficial mirror, updated manually and often
outdated (like right now).

Please clone from the official mirror at:
http://git.haproxy.org/git/haproxy.git/


This will probably not help with the issue you are facing, but
at least we have the same commit hash.




Thanks,

Lukas

  


RE: Configuration help with SPDY Virtual Hosts

2015-06-10 Thread Lukas Tribus
 Bump. Turns out a bunch of scripts/programs hit my sites that don't do
 SNI. Any ideas?

Virtual HTTPS hosting needs SNI. If your clients/scripts don't support SNI,
you cannot host more than one certificate on one IP.

This doesn't have anything to do with SPDY or haproxy; it's just how things are.

If you don't have different websites (domains), use a default backend.




Lukas 


RE: Need Your Suggestion to upgrade HAProxy Version

2015-06-08 Thread Lukas Tribus
 HI team, 
 
 I need your help in upgrading my HA-Proxy version from 
 haproxy-1.5-dev21 to latest Stable version. 
 Can i upgrade directly or i have to change any settings.. 


That depends on your configuration and on what you
expect from HAProxy.

Without any further information I would say NO. dev22
changed the default setting from tunnel to keep-alive mode;
this may or may not have an impact on your configuration.


Provide and explain your configuration and services, then
we may be able to answer your question.


Lukas

  


RE: Configuration help with SPDY Virtual Hosts

2015-06-08 Thread Lukas Tribus
 A stupid question:

 Does SPDY require to use SNI on client side?

SPDY requires NPN or ALPN. I'm not sure if the SPDY specification
insists on SNI, but basically all SPDY clients also support SNI.

This is imo a non-problem.



 If not, what does it happen if the client doesn't send any SNI field?

It doesn't work.



Lukas

  


RE: Configuration help with SPDY Virtual Hosts

2015-06-08 Thread Lukas Tribus
 Also more importantly, can I use proxy protocol with TCP backends? I
 need TCP backends to support SPDY.

Yes, that's exactly the point of the proxy protocol.
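
A sketch of the two halves — the backend/server names, port and the receiving
software are assumptions, not from your config:

```
# haproxy side: prepend client address info via the PROXY protocol
backend site1_spdy
    mode tcp
    server spdy1 127.0.0.1:8443 send-proxy

# the receiving side must be configured to expect it, e.g. another haproxy:
#   bind 127.0.0.1:8443 accept-proxy
```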


Lukas

  


RE: Configuration help with SPDY Virtual Hosts

2015-06-05 Thread Lukas Tribus
Hi Viranch,


 tcp-request inspect-delay 5s
 tcp-request content accept if HTTP

What's that configuration supposed to do? It doesn't
make any sense.



 acl spdy ssl_fc_npn -i spdy/3.1
 acl site1 req.hdr(Host) -i site1.foo.com
 acl site2 req.hdr(Host) -i site2.foo.com

 use_backend site1_spdy if spdy site1

You can't match a Host header if the protocol is not HTTP
(but SPDY).


Try using SNI instead, and distinguish plaintext and SNI
ACLs:


acl spdy ssl_fc_npn -i spdy/3.1

acl site1_sni ssl_fc_sni -i site1.foo.com
acl site2_sni ssl_fc_sni -i site2.foo.com

acl site1_plaintext req.hdr(Host) -i site1.foo.com
acl site2_plaintext req.hdr(Host) -i site2.foo.com

use_backend site1_spdy if spdy site1_sni
use_backend site1_http if site1_plaintext

use_backend site2_spdy if spdy site2_sni
use_backend site2_http if site2_plaintext




Regards,

Lukas

  


RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-04 Thread Lukas Tribus
 Hi Lukas, thank you for the time !

 I compiled haproxy with DEFINE=-DREQURI_LEN=8192 and everything
 seems to be fine now, recompiling is not a problem.
 Tomorrow I'll deploy the changes from staging to production and let
 you know, we have around 1200 queries per second and the process takes
 only 80 megabytes, so we can take the risk :)

I would monitor memory usage anyway ... just in case.



Regards,

Lukas

  


RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-04 Thread Lukas Tribus
Hi Damiano,


 Dear all, an update: logging using sockets doesn't change anything.
 After some grepping the code and tinkering I found that changing REQURI_LEN
 in include/common/defaults.h does the job

Thanks for your analysis.



 the strange thing is that there's also #define MAX_SYSLOG_LEN 1024 in the
 same file but it doesn't modify logging behaviour.

That's because it's just a default, overwritten by your len configuration.
Syslog length is not the problem; URI length is.



 I don't know the side effect of this: maybe increased memory usage for each
 request ? Do I have to file a bug ?

Yes, it will definitely increase memory usage.

Reading the following thread, I think this is expected behavior:
http://thread.gmane.org/gmane.comp.web.haproxy/3679/focus=3689


The workaround is to compile with DEFINE=-DREQURI_LEN=2048 (supported since
1.5-dev19) - at least you avoid source code patches; however, you still
have to recompile.
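
The build invocation would look something like this (the TARGET and USE_*
flags here are examples — keep whatever options your current build uses):

```
make clean
make TARGET=linux2628 USE_OPENSSL=1 DEFINE=-DREQURI_LEN=2048
```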

I guess a runtime configuration parameter would be nice.



Regards,

Lukas

  


RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-03 Thread Lukas Tribus
 Hi Lukas, my mtu is set to 1500 and the message looks truncated.
 I am able to ping the server using that mtu

 root@lbha01:~# ping -s 1500 syslog

-s 1472 -M do is what you would use for this test. Instead, you are sending
ICMP requests at 1528 bytes (above the MTU) without the DF bit, so they will
get fragmented. Anyway, it's unlikely that this is the problem.



 this is my dump (tcpdump -X) (the message is truncated and I don't
 see other packets flowing).

Ok, can you confirm that haproxy has been reloaded/restarted after
adding the len keyword to your logging configuration?



 With the logger utility this line gets split into multiple packets

I'm not familiar with this utility. Can you elaborate whether it SENDS packets
to your syslog-ng or whether it receives logs from haproxy?

Iirc, a syslog message must fit into a single packet.
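To illustrate the point (a toy sketch, not haproxy's actual code): a UDP syslog sender builds one RFC 3164-style frame per message and caps it at the configured length, so anything beyond the limit is cut off rather than continued in a second packet. The function name and the 1024-byte default are illustrative only:

```python
def build_syslog_frame(pri, tag, msg, max_len=1024):
    """Build a single RFC 3164-style syslog frame, truncated to max_len.

    One frame == one UDP datagram; a message longer than max_len is
    cut off, never split across packets.
    """
    frame = "<%d>%s: %s" % (pri, tag, msg)
    return frame[:max_len]

# A 2000-byte URI does not fit into the default 1024-byte frame:
frame = build_syslog_frame(134, "haproxy", "GET /" + "x" * 2000)
print(len(frame))  # 1024: the tail of the URI is silently dropped
```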




Regards,

Lukas

  


RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-03 Thread Lukas Tribus
Hi Damiano,


 Even if I set 8192 as length the message gets truncated after 1024 
 chars, we use syslog-ng and configured it to accept a ridiculously huge 
 length (log_msg_size(262144) defined in /etc/syslog-ng.conf), I also 
 tried using the logger utility to check if the message gets delivered 
 correctly and it does. 

How does the syslog packet look on the wire (tcpdump/wireshark)?
What is your MTU and can you successfully ping the syslog server with
that MTU?



Regards,

Lukas

  


RE: OCSP stapling troubleshooting

2015-06-02 Thread Lukas Tribus
Hi Shawn,


 I've done a Qualys Labs SSL test against my setup fronted with haproxy,
 using this URL:

 https://www.ssllabs.com/ssltest/index.html

 I thought I had OCSP stapling correctly configured, but Qualys says it's
 not there. I have a cronjob that uses openssl to retrieve the .ocsp file
 for each certificate:

 -rw--- 1 root root 6151 May 31 14:47 wildcard.stg.REDACTED.com.pem
 -rw-r--r-- 1 root root 1609 Jun 2 10:17 wildcard.stg.rEDACTED.com.pem.ocsp

 As far as I knew, there was nothing special required in the haproxy
 config. How can I troubleshoot this, and is there something I've done
 wrong?

Share your cronjob script, your configuration, and the SSLtest output at least
(you basically didn't share any OCSP-related information).

Try to work through this post if you can't post the URL of the site:
https://raymii.org/s/articles/OpenSSL_Manually_Verify_a_certificate_against_an_OCSP.html
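If you can't post the URL, you can also check the stapled response locally with something along these lines (the hostname is a placeholder, and the exact output wording varies by openssl version):

```
# does the server staple an OCSP response during the TLS handshake?
openssl s_client -connect www.example.com:443 -servername www.example.com \
        -status < /dev/null 2>/dev/null | grep -A 3 "OCSP response"
# a working setup reports a successful OCSP Response Status;
# "no response sent" means stapling is not active
```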


You probably don't want to share the openssl outputs, so you will have
to read and understand them yourself.


Lukas

  


RE: A few thoughts on Haproxy and weakdh/logjam

2015-06-01 Thread Lukas Tribus
Hi Willy,



 Thank you, that was pretty clear and easy. I checked that I was running
 with about 2 kb of entropy before the tests and that I was alone on the
 machine, so I'm confident that what I did wasn't skewed.

 I pushed this into 1.6. I'd rather issue -dev2 with it, wait a little bit
 then backport it into 1.5 if we don't get any negative feedback. We might
 have to help distro maintainers prepare some arguments to backport this.

For the record, I checked out current nginx [1] and apache [2] sources and
they don't seem to care about this at all. Nginx has a static 1024-bit group
in the source (nothing else) and Apache gets 2048-bit+ groups from openssl
(as we did previously).

I still think that our approach is suboptimal, mainly because I would
rather not get involved (by introducing a static key) in such advanced
crypto stuff.

A proper solution or proposal should imho come from openssl. They can't
possibly expect application developers to take care of such low-level crypto
things. At least a recommendation would be nice (get_rfc2409_prime_1024
is unsafe, don't use it? get_rfc2409_prime_2048 can be considered safe?).


Anyway, it doesn't look like there is a simple answer to the question
about what's the right thing to do ...



Regards,

Lukas


[1] 
http://hg.nginx.org/nginx/file/e034af368274/src/event/ngx_event_openssl.c#l905
[2] https://github.com/apache/httpd/blob/trunk/modules/ssl/ssl_engine_init.c#L70

  


RE: A few thoughts on Haproxy and weakdh/logjam

2015-05-28 Thread Lukas Tribus
 On Tuesday, May 26, 2015 5:12 PM Remi Gacogne wrote:

 On 05/23/2015 08:47 AM, Willy Tarreau wrote:
 Do you have any idea about the ratio of clients (on the net) which don't
 support ECDHE right now but support DHE ?

 Basically, by totally removing DHE, we would be losing forward secrecy for:
 - Java <= 6 ;
 - OpenSSL <= 1.0.0 ;
 - Android <= 3.

 Note that moving to a DH size of 2048-bit is an issue if you have Java 6
 clients anyway (Java 7 does not support DHE > 1024-bit either, but does
 support ECDHE).

 What about other clients (ie. browsers running on different OS combinations) 
 - especially legacy systems?

If you are referring to long-EOL'ed systems, then they probably don't support
DHE at all.



 Will IE7 on Windows XP have problems if I change to a 2048 or even a 4096 DH 
 group?

Scannel on Windows XP doesn't support DHE with RSA, therefore IE6/7/8 will
connect just fine (without FS).



 If changing to a higher DH group breaks connectivity with even just a few
 ordinary browser/OS combinations, I am afraid that I have no choice but to
 stick with the current vulnerable group...

DHE is not necessary to connect. Your legacy clients will just negotiate a 
non-FS ciphersuite.



Lukas 


RE: A few thoughts on Haproxy and weakdh/logjam

2015-05-28 Thread Lukas Tribus
 If your refer to long EOL'ed system, then they probably don't support DHE at 
 all.

 Alas EOL'ed systems doesn't hinder its use - even if it unwise..

That's not what I'm saying. What I'm saying is that since they are so old they
don't even support DHE, therefore the DH group doesn't matter.



 Scannel on Windows XP doesn't support DHE with RSA, therefor IE6/7/8 will 
 connect just
 fine (without FS).

 I assume you mean Schannel, and yes - I just did a small test on a public
 low-volume site using a VM-based IE7 and SSLLabs SSLTest[1], and can see
 that both IE7 and IE8 on Windows XP use the cipher
 TLS_RSA_WITH_3DES_EDE_CBC_SHA (the OpenSSL name is DES-CBC3-SHA) when
 connecting.

 As far as I can see, the only client that cannot connect in that test is a
 Java 1.6 based one - all others are fine (just as you said).

Ok, thanks for confirming.



 A follow up question:

 How much does the size of my chosen DH group affect clients and the server
 when negotiating the connection?

*Very* much on the server side. It will kill your CPU.



 The SSLLabs test did not take any longer using a 4096 bit DH group instead of 
 a 2048bit one.

Because you have 1 server dedicated to 1 client. Also SSLLabs is not exactly a 
performance test.



 Could I (at least in theory) make an 8192-bit DH group, and not expect any
 performance problems?

Absolutely not, no, not even in theory. Don't do this. HAProxy users have had
severe performance regressions because of this.


Lukas

  


RE: Listening only server within backend

2015-05-26 Thread Lukas Tribus
 Hi the list 
 
 In my backend I've many servers, and I'd like to add some that receive 
 a copy of all the requests arriving to the backend. Of course haproxy 
 won't reply to them after sending the request. 
 I don't find any option for 'server' in section 5 of the docs, that 
 will allow me to define such 'spy' servers. 
 Is that possible ? 

No, you can only send a request once, to a single server.



Lukas

  


RE: SSL custom dhparam problem

2015-05-24 Thread Lukas Tribus
 Honestly, I'm opting for removing the DH fallback in haproxy altogether and
 simple always warn when the certificate (or a dedicated DH file parameter 
 like
 nginx does, which was requested earlier this week and makes sense) does not
 have the DH parameters.

 I'm having a mixed opinion here. We've seen a number of times that many
 users don't understand the principle of concatenating dhparams to their
 certs, especially those who migrate from other servers or those who don't
 know openssl at all. When users have to copy/paste from random blogs some
 commands they don't understand, it can result in real security issues,
 because they will do whatever they can to shut an error.

Which is why I would opt for simple and well-documented behavior in
haproxy.

Currently it's not particularly straightforward:
http://marc.info/?l=haproxy&m=143228478812983&w=2


When we have a dedicated option to point to a dhparam file, like nginx does,
it's simple and easy to document. No need for the user to google around.



 However I would find it useful to let the admin provide a file for dhparam
 (or files, one per size if that makes sense). That would maintain the ease
 of porting certificates without having to modify them.

Yes, that is definitely something we need to do, either way.



 For 1024, what we could do :

 - in 1.6 : we wouldn't provide one anymore, which means that users could
 only load it from a file they would generate if they need one ;

You are implying that we will provide 2048 bit dhparams, correct?



 - in 1.5 : we'd regenerate a new one which differs from the fragile one,
 just to ensure that haproxy is not targetted at the same time as other
 servers. The rationale behind this is that if a nation has the computing
 power and storage to precompute all possible values, they'll rather do
 it for the group everyone uses than the group only haproxy 1.5 users
 use.

Sure, we can't break 1.5 setups now.



 For 1.6, an alternative could be that we re-compute some groups and put
 them into files, or provide the standard ones in files. But that's causing
 more moving parts to be deployed and maintained and it could result in the
 opposite effect of the desired one : users would store these files in the
 config directory, and if one group becomes weak, we couldn't replace it
 by delivering a code update, and most users who are not aware of these
 issues would never replace their files.

 And the problem is already present, because users had to either force 1024
 in their config or put a 1024 dhparam into their cert files. All those who
 opted for the second option will keep it as-is without ever fixing it, while
 the ones who relied on 1024 being forced in the config will automatically
 get a different param when they update.

 And I'd rather not start to blacklist known fragile dhparams and end up
 with a blacklist in the code like openssh does for weak keys...

It's unlikely that we will know when 2048-bit dhparams are broken, therefore
the best long-term solution imho would be to not include any pre-computed
groups at all.

Also, I'm not sure if a code upgrade to deal with a compromised dhparam group
is an efficient way to push a new group out there. We would have to see this
as a security issue and assign CVE candidates to make package maintainers even
consider a backport.


As with many of the SSL/TLS problems lately, the admin needs to understand
and configure the server according to best practices. I think we should push
in that direction, even if usability suffers a tiny little bit.

For the record, I don't think:
openssl dhparam 2048 > dhparamfile
#grep dh-param-file /etc/haproxy.cfg
tune.ssl.dh-param-file dhparamfile

is hard to document, understand and configure at all. It's an additional
task the admin needs to do, that's correct. But in the long run it's the
better thing to do.


Btw SSLLabs already provides a test to check for common DH prime
numbers.
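In concrete terms, the admin-side work amounts to the following (the file name is an example, and the dh-param file directive above was only a proposal at this point, so check the documentation of your actual release for the final option name):

```shell
# generate a fresh, non-shared 2048-bit DH group (can take a while)
openssl dhparam -out dhparams.pem 2048 2>/dev/null

# sanity-check the generated parameters before deploying them
openssl dhparam -in dhparams.pem -check -noout
```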



Just my two cents.


Regards,

Lukas

  


RE: SSL custom dhparam problem

2015-05-23 Thread Lukas Tribus
 OK so now we need to find what to do in the end. From what I understood,
 just removing the lines was a test and is not viable because we'll always
 emit the warning, right ?

Honestly, I'm opting for removing the DH fallback in haproxy altogether and
simply always warning when the certificate (or a dedicated DH file parameter like
nginx does, which was requested earlier this week and makes sense) does not
have the DH parameters.

The fallback we currently have is not very portable (towards openssl forks
I mean - this could be ignored), it has rather fragile logic (like trying to
understand if DHE ciphers are enabled) and with logjam it is now a security
problem.

HAProxy should not try (too hard) to fix the security for the user, it should
point to possible issues via warnings, so that the user can understand and fix
them.

Using user-generated dhparams is best practice for TLS setups; unless someone
deliberately disables DHE ciphers, I don't see who would not want to do it.

Let's steer our users towards a best-practices setup instead of code fallbacks.


This proposal could probably be done only in 1.6.


What do you think?


Regards,
Lukas

  


RE: broken packets with usesrc clientip

2015-05-20 Thread Lukas Tribus
 Hi,

 my current traffic flow with source 0.0.0.0 usesrc clientip and with
 source publichaproxyip usesrc clientip:

 haproxy receives a SYN from the client and does a normal tcp handshake
 which works fine. Additionally haproxy forwards the SYN to the backend
 with the client ip as source ip, backend sends SYN/ACK back to haproxy,
 haproxy sends this to the client. The client is confused because it sent one
 SYN but receives two SYN/ACKs.

 it would be perfect if haproxy would establish a connection with the
 node and a second with the backend, any ideas on how to tell haproxy to
 not forward packets from the backend but to answer them by himself?

You need to make a decision:

do you want a local source IP (your test)
OR
a remote source IP (which is what you seem to want in the end)

You cannot have both.


If the former is the case, then disable ip_forwarding in your kernel.
For the latter, you won't have the problem you just mentioned.


Lukas

  


RE: 1.4 - 1.5 migration resulted in degraded performance

2015-05-20 Thread Lukas Tribus
 So think that somehow, 1.5 was creating or keeping a lot more open 
 connections at a time, and depriving the kernel, or its own limits of 
 available connections?

Not necessarily the kernel itself. It could be a stateful inspection firewall
between the proxy and the backend; this includes conntrack on those boxes
themselves, but it can also be a third-party firewall.




 I guess what I should do - is try 1.5 during quiet time, and compare 
 the environment (open fds, etc) with 1.4, and see what is different... 

Yes. And we would also need to know about the load when the performance
was bad. Like high CPU? In userspace or system or maybe interrupts?




Regards,

Lukas

  


RE: SSL handshake failure when setting up no-tlsv10

2015-05-20 Thread Lukas Tribus
 yes i figured since it is a ubuntu 10.04 machine it has old version of  
 openssl 
  
 so i looked around for upgrading the openssl and found this link  
 https://sandilands.info/sgordon/upgrade-latest-version-openssl-on-ubuntu 
  
 so can i just upgrade to openssl 1.0.1 and add it to the correct path  
 and just restart the haproxy service? 

Please don't.

As long as you don't *exactly* know what you are doing, ONLY use your
OS internal packaging system and don't follow tips you find on google.
This particular blog post for example makes you install an ancient version
of openssl (just look at the date of the post), with numerous issues and
bugs. You would also very likely mess up your whole system.

Ubuntu 10.04 is EOL; you don't use an EOL'ed OS in production, period.

Upgrade to the next Ubuntu LTS edition by following the howto of your
OS vendor:
https://help.ubuntu.com/community/PreciseUpgrades


Lukas

  


RE: broken packets with usesrc clientip

2015-05-20 Thread Lukas Tribus

 so it is not possible to let haproxy answer backend packets to client ips?

I don't know what this question is supposed to mean, I don't get it.


You can use the source ip of your syslog clients to connect to your backend
by using plain old tproxy, this has been done for years and works fine.

I don't see why your setup would be any different from a normal tproxy setup.



 tproxy support isn't built into my kernel so I will probably choose
 option 2.

What's your OS/kernel? Unless your kernel is ancient, you probably do have
tproxy support.


Lukas

  


RE: socket bind error

2015-05-20 Thread Lukas Tribus
 hi all, 
  
 I'm working on standing up a new haproxy instance to manage redis  
 directly on our redis hosts since our main load-balancer does periodic  
 reloads and restarts for things like OCSP stapling that good ol'  
 amnesiac HTTP handles just fine, but longer-lived TCP connections like  
 our redis clients don't care too much for. 
  
 I managed to put together a configuration that works fine in local  
 testing (vagrant configured by test-kitchen), but for some reason when  
 I try to push this to staging, haproxy is refusing to start,  
 complaining that it can't bind to the keepalived-managed VIP. For the  
 life of me I can't figure out what the problem is, but hopefully  
 someone here will be able to give me some pointers?

Not sure, can you run haproxy directly (without systemd) through strace,
to see what exactly the kernel returns?

What's the kernel release, anyway?

What happens if you add the transparent keyword on the bind
configuration line (so that the sysctl setting is not needed)?



Regards,

Lukas

  


RE: broken packets with usesrc clientip

2015-05-19 Thread Lukas Tribus
 listen logstash01
 bind 10.111.2.249:514 ssl ca-file /etc/haproxy/ca.pem crt
 /etc/haproxy/logstash.pem verify required crl-file /etc/haproxy/crl.pem
 ciphers
 EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!DSS:!RC4:!SEED:!ECDSA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
 mode tcp
 balance roundrobin
 option tcplog
 server clusternode1 192.168.1.11:514 check
 server clusternode2 192.168.1.8:514 check
 source 0.0.0.0 usesrc clientip


 logstash needs the client ip as a source, so I'm trying to use source
 0.0.0.0 usesrc clientip. Do I need any additional iptables magic on the
 haproxy server to make this work?

Yes, see [1] and [2], and you also need:
- to be in the forwarding path of your backend
- enable ip_forwarding
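For reference, the usual tproxy plumbing from the references below looks roughly like this (the mark value, table number and sysctl are the conventional examples; adapt them to your environment):

```
# intercept traffic for sockets haproxy owns and route it locally
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# the box must also forward the backend's return traffic
sysctl -w net.ipv4.ip_forward=1
```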


[1] https://www.kernel.org/doc/Documentation/networking/tproxy.txt
[2] http://wiki.squid-cache.org/Features/Tproxy4

  


RE: broken packets with usesrc clientip

2015-05-19 Thread Lukas Tribus

 in my opinion I do not need a transparent proxy. my rsyslog nodes
 directly connect to an ip address which is configured on the haproxy
 server. So I don't need non_local_bind and no tproxy?

Mmh, I'm not sure. Try:
source  usesrc clientip

Where  is the real IP from HAproxy. That way tproxy4 is not
used, but the client still connects from the clientip.


You will have to play around with those things a bit, especially your
case is not exactly common. Check tcpdumps and strace haproxy with those
configurations.


I still don't get what you are doing: TLS encrypted logs come from
localhost basically, and you are sending them unencrypted to your
remote backend? Why not just send unencrypted logs directly to your
backend?




Lukas

  

RE: broken packets with usesrc clientip

2015-05-19 Thread Lukas Tribus

 in my opinion I do not need a transparent proxy. my rsyslog nodes
 directly connect to an ip address which is configured on the haproxy
 server. So I don't need non_local_bind and no tproxy?

(previous mail got messed up, sorry about that)

Mmh, I'm not sure. Try:
source usesrc clientip Where is the real IP from HAproxy.

That way tproxy4 is not used (but tproxy?), but the client still connects
from the  clientip. You will have to play around with those things a bit,
especially your case is not exactly common. Check tcpdumps and strace
haproxy with those configurations.

I still don't get what you are doing though: TLS encrypted logs come from
localhost basically, and you are sending them unencrypted to your remote
backend? Why not just send unencrypted logs directly to your backend?


Lukas

  


RE: broken packets with usesrc clientip

2015-05-19 Thread Lukas Tribus
 Mmh, I'm not sure. Try:
 source usesrc clientip Where is the real IP from HAproxy.

Just realized that the config is still messed up.
This should have been:

source haproxyip usesrc clientip

where haproxyip is the real IP from HAproxy.

  


RE: HAProxy segfault

2015-05-15 Thread Lukas Tribus
Hi David,


 Hi! 
 
 HAProxy 1.6-dev1, CentOS6 
 
 Getting a segfault when trying connect to port 3389. 
 
 segfault at 0 ip (null) sp 7fff18a41268 error 14 in haproxy[40+a4000] 

You are using the development tree, please upgrade to
latest git first; there are 233 commits since 1.6-dev1.


Lukas

  


RE: HAProxy segfault

2015-05-15 Thread Lukas Tribus
 Thanks. tried this version, it works fine.. 

Ok, thanks for confirming.

  


RE: HTTP 408/409 server too busy

2015-04-20 Thread Lukas Tribus
 Please help us. This is impacting our production

Please refrain from pushing your threads like this (after only 1 hour),
and CC'ing unrelated people only because they helped you in the past.

This doesn't get you the answers you are looking for any faster, quite
the opposite in fact.

If you need SLA-covered response times, I'm sure you can work out
something with the guys at haproxy.com, but this list is not that.


As for your problem, I suggest you provide logs and wireshark traces.


Lukas

  


RE: Backend connection resets in http-heep-alive mode

2015-04-16 Thread Lukas Tribus
 Hi,

 I'm experiencing a problem with backend TCP connections being reset
 [RST+ACK] by HAProxy after serving one HTTP request and receiving the
 [ACK] for the HTTP response. Delay between backend's [ACK] and
 haproxy's [RST+ACK] seems random, ranging from single seconds to
 several minutes.

 What I wanted to achieve is: for each backend server keep a limited
 pool of backend TCP connections open, and when a HTTP request comes in
 through the frontend, reuse one of the existing connections (in HTTP
 keep-alive mode), creating one if necessary.

What you are describing is connection pooling/multiplexing, but that's not
supported (yet).



 It was my understanding that 'timeout server 5m' should keep backend
 connections opened for 5 minutes before closing them. Was I mistaken?

Yes, this is a timeout for the case when the server is supposed to send
something, but doesn't [1], *not for a keep-alive use-case*.



 Would timeout tunnel allow me to specify such timeout, despite
 HAProxy working in http-keep-alive mode?

No.


Here is what you need to know (valid for the 1.5 stable releases):

- the frontend and backend connections are 1:1, meaning one frontend
  connection always has one backend connection. If either of those two
  connections is closed, the other side needs to close as well. You
  cannot reuse a backend connection for a request coming from a
  different/new frontend connection. You cannot have a backend connection
  pool and reuse connections for frontend requests. It's possible that
  this will come in 1.6.

- with option http-tunnel [2], which was the default in the 1.4 release,
  the HTTP connection is transformed into a TCP tunnel, so after the
  first request, HAProxy just forwards TCP between the client and the
  server. This brings some problems with it. For example, ACLs, content
  switching and all HTTP-based features cannot work for subsequent
  HTTP requests in a TCP session. Keep-alive does work if server and
  client support it. The keep-alive timeout is specified by timeout tunnel.
  
- with option http-keep-alive [3], which is the new 1.5 default, HAProxy
  understands the keep-alive part, and looks at and understands every request.
  The 1:1 mapping is still valid and you still can't do connection pooling.
  The keep-alive timeout is specified by timeout http-keep-alive. option
  prefer-last-server [4] is recommended if you don't have any other client
  stickiness configuration.
  
- option http-server-close [5] does keep-alive on the client side only.
  timeout http-keep-alive is used here as well.

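Putting the keep-alive related pieces together, a minimal 1.5 sketch (the timeout values are examples only, tune them to your traffic):

```
defaults
    mode http
    option http-keep-alive
    option prefer-last-server
    timeout http-keep-alive 10s
    timeout connect 5s
    timeout client 30s
    timeout server 30s
```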

Hope this helps,

Lukas

  
[1] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-timeout%20server
[2] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-tunnel
[3] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-keep-alive
[4] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20prefer-last-server
[5] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-server-close



  


RE: HAProxy 1.4.18 performance issue

2015-04-13 Thread Lukas Tribus
 On Mon, Apr 13, 2015 at 4:58 PM, Lukas Tribus
 luky...@hotmail.com wrote:
 Hi, 
 
 I'm experiencing latency problems while running HAProxy 1.4.18. 
 
 Our backend servers reply to HAProxy almost instantly (~4ms), but some 
 of those replies are sent to the clients more than 100ms later. 
 
 We have approx. 50k sessions opened at any time, with a HTTP request 
 coming in approximately every minute over each session. 
 
 I suggest you try option http-no-delay but really try to understand 
 the implications: 
 http://cbonte.github.io/haproxy-dconv/configuration-1.4.html#option%20http-no-delay
  
 
 
 Thanks Lucas, option http-no-delay seems to have solved the problem.

Good, but this will just hide the real problem and may cause others (as per
the documentation). Both 1.4.23 and 1.4.20 fix latency-related
(MSG_MORE/DONTWAIT) problems. Also, if you always expect zero latency from
the proxy, then you are misusing HTTP.

I strongly suggest you consider upgrading to the latest stable release
(either 1.4 or, better yet, 1.5) and retrying without this option.

You didn't provide your configuration, so it's not possible to tell if you
are running into those already-fixed bugs, or if you simply need zero latency
in all cases by application design.



Lukas

  

