FYI: OpenSSL's CVE-2014-0160

2014-04-08 Thread Lukas Tribus
Hi list,

anyone running openssl 1.0.1 is likely affected by the quite serious TLS
heartbeat read overrun bug (CVE-2014-0160) in OpenSSL:

https://www.openssl.org/news/secadv_20140407.txt
http://heartbleed.com/


Upgrading to 1.0.1g fixes this issue; 0.9.8 and 1.0.0 are unaffected.




Regards,

Lukas 


Re: FYI: OpenSSL's CVE-2014-0160

2014-04-08 Thread Baptiste
Hi Lukas,

Thanks for sharing :)

Baptiste

On Tue, Apr 8, 2014 at 9:41 AM, Lukas Tribus luky...@hotmail.com wrote:
 Hi list,

 anyone running openssl 1.0.1 is likely affected by the quite serious TLS
 heartbeat read overrun bug (CVE-2014-0160) in OpenSSL:

 https://www.openssl.org/news/secadv_20140407.txt
 http://heartbleed.com/


 Upgrading to 1.0.1g fixes this issue, 0.9.8 and 1.0.0 are unaffected.




 Regards,

 Lukas



Re: FYI: OpenSSL's CVE-2014-0160

2014-04-08 Thread duncan hall

You can test if you are vulnerable here: http://filippo.io/Heartbleed/

On 04/08/2014 05:57 PM, Baptiste wrote:

Hi Lukas,

Thanks for sharing :)

Baptiste

On Tue, Apr 8, 2014 at 9:41 AM, Lukas Tribus luky...@hotmail.com wrote:

Hi list,

anyone running openssl 1.0.1 is likely affected by the quite serious TLS
heartbeat read overrun bug (CVE-2014-0160) in OpenSSL:

https://www.openssl.org/news/secadv_20140407.txt
http://heartbleed.com/


Upgrading to 1.0.1g fixes this issue, 0.9.8 and 1.0.0 are unaffected.




Regards,

Lukas





Re: FYI: OpenSSL's CVE-2014-0160

2014-04-08 Thread Philipp

On 08.04.2014 10:31, duncan hall wrote:

You can test if you are vulnerable here: http://filippo.io/Heartbleed/


Or test yourself (without leaking information to some website):
http://s3.jspenguin.org/ssltest.py

RHEL/CentOS has an update (a cherry-picked fix) in 1.0.1e-16.el6_5.7
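As a first, purely local sanity check, the affected range can also be classified from the version string alone. Note this is only indicative: as the RHEL/CentOS update shows, distributions may cherry-pick the fix while keeping the 1.0.1e version string. A sketch in Python:

```python
import re

def heartbleed_affected(version):
    """Return True if an OpenSSL version string falls in the affected
    range (1.0.1 through 1.0.1f). 1.0.1g carries the fix; the 0.9.8
    and 1.0.0 branches never had the heartbeat bug."""
    m = re.match(r"1\.0\.1([a-z]?)$", version)
    if not m:
        return False
    letter = m.group(1)
    # plain "1.0.1" (no letter) and letters a..f are affected
    return letter == "" or letter <= "f"

print(heartbleed_affected("1.0.1e"))  # True
print(heartbleed_affected("1.0.1g"))  # False
print(heartbleed_affected("0.9.8y"))  # False
```

For an actual test of a running server, the linked ssltest.py or the Heartbleed web checker above exercise the heartbeat itself, which is more reliable than version strings.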



Re: FYI: OpenSSL's CVE-2014-0160

2014-04-08 Thread Martijn Otto
Please also note that although upgrading (and reloading haproxy) will
stop any new keys from being leaked, this bug has existed for two
years, so it is possible your key was already leaked before.

The best course of action is to revoke the current keys and reissue.

On Tue, 2014-04-08 at 09:41 +0200, Lukas Tribus wrote:
 Hi list,
 
 anyone running openssl 1.0.1 is likely affected by the quite serious TLS
 heartbeat read overrun bug (CVE-2014-0160) in OpenSSL:
 
 https://www.openssl.org/news/secadv_20140407.txt
 http://heartbleed.com/
 
 
 Upgrading to 1.0.1g fixes this issue, 0.9.8 and 1.0.0 are unaffected.
 
 
 
 
 Regards,
 
 Lukas   





Weird timing values in http log

2014-04-08 Thread Andreas Mock
Hi all,

I'm using haproxy snapshot 20140401 at the moment.
I'm pretty sure I have a straightforward configuration:
8<
defaults
    balance roundrobin          # round robin method
    log global
    mode http                   # HTTP mode (layer 7)
    option httplog              # HTTP logging - all log format options
    option dontlognull          # Enable or disable logging of null connections
    retries 3                   # Set the number of retries to perform on a server after a connection failure
    option redispatch           # Enable or disable session redistribution in case of connection failure
    #no option httpclose

    option http-server-close    # Enable or disable HTTP connection closing on the server side
    timeout http-request 5s     # Set the maximum allowed time to wait for a complete HTTP request
    timeout connect 5s          # Set the maximum time to wait for a connection attempt to a server to succeed
    timeout server 10s          # Set the maximum inactivity time on the server side
    timeout client 60s          # Set the maximum inactivity time on the client side

frontend fe_something
    bind ip-address:443 some cipher related entries
    bind ip-address:80
    default_backend be_something

backend be_something
    option httpchk GET /cluster-test.html
    http-check expect string okay
    option forwardfor
    acl ssl ssl_fc
    reqidel ^X-Forwarded-Proto:.*
    reqadd X-Forwarded-Proto:\ https if ssl
    reqadd X-Forwarded-Proto:\ http unless ssl
    server server01 172.30.1.120:80 check maxconn 15 weight 100
    server server02 172.30.2.120:80 check maxconn 15 weight 100
8<

Now, when the first request comes in, I see the following timing
values in the log (Tq '/' Tw '/' Tc '/' Tr '/' Tt),
which are OK:

7/0/0/255/263

As soon as I request the second page while keep alive is still alive,
I get the following values:

2548/0/0/668/3217

So it seems that the measurements refer to the initial session startup
time. Is this true? Is this the intended behaviour?
I'm asking because the documentation says [...] Large
times here generally indicate network trouble between the client and
haproxy. [...], which is not the case in my scenario.

Thank you in advance.


Best regards
Andreas Mock




Re: Weird timing values in http log

2014-04-08 Thread Cyril Bonté

Hi Andreas,

On 08/04/2014 12:22, Andreas Mock wrote:

Hi all,

I'm using haproxy snapshot 20140401 at the moment.
I'm pretty sure to have a straight forward configuration:
8
(...)
Now, when the first request comes in I see in the log the
following timing values (Tq '/' Tw '/' Tc '/' Tr '/' Tt*)
which are o.k.

7/0/0/255/263

As soon as I request the second page while keep alive is still alive,
I get the following values:

2548/0/0/668/3217

So, it seems that the measurements refer to the initial session startup
time. Is this true?


Not exactly, it refers to the end of the previous request.


Is this the intended behaviour?
I'm asking because the documentation says [...] Large
times here generally indicate network trouble between the client and
haproxy. [...], which is not the case in my scenario.


Yes, this behaviour is documented elsewhere, for configurations using 
option http-server-close or option http-keep-alive.


http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-server-close

(...)At the moment, logs will not indicate whether requests came from 
the same session or not. The accept date reported in the logs 
corresponds to the end of the previous request, and the request time 
corresponds to the time spent waiting for a new request. (...)
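In other words, for the two log samples above, the second request's Tq (2548 ms) is mostly the client's keep-alive idle time, not network trouble. A small sketch of splitting the timing field (field names as in the 1.5 logging documentation):

```python
def parse_timers(field):
    """Split an haproxy httplog timing field 'Tq/Tw/Tc/Tr/Tt' into a dict."""
    names = ("Tq", "Tw", "Tc", "Tr", "Tt")
    return dict(zip(names, (int(v) for v in field.split("/"))))

first = parse_timers("7/0/0/255/263")       # first request on the connection
second = parse_timers("2548/0/0/668/3217")  # second request, after idle time

# With http-server-close/http-keep-alive, Tq of the second request is
# measured from the end of the first request, so it includes the time
# the client spent idle on the kept-alive connection.
print(second["Tq"])  # 2548
```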




--
Cyril Bonté



AW: Weird timing values in http log

2014-04-08 Thread Andreas Mock
Hi Cyril,

thank you very much for the fast answer
and the pointer into the documentation.

Now I have to think about whether I'm happy
with it...  ;-)

In this case I suggest adding a cross-reference
to the paragraph I cited, to make clear that there are
configuration circumstances where a high request time
does NOT mean a network problem.


Best regards
Andreas Mock



-Original Message-
From: Cyril Bonté [mailto:cyril.bo...@free.fr] 
Sent: Tuesday, 8 April 2014 12:41
To: Andreas Mock
Cc: Haproxy
Subject: Re: Weird timing values in http log

Hi Andreas,

On 08/04/2014 12:22, Andreas Mock wrote:
 Hi all,

 I'm using haproxy snapshot 20140401 at the moment.
 I'm pretty sure to have a straight forward configuration:
 8
 (...)
 Now, when the first request comes in I see in the log the
 following timing values (Tq '/' Tw '/' Tc '/' Tr '/' Tt*)
 which are o.k.

 7/0/0/255/263

 As soon as I request the second page while keep alive is still alive,
 I get the following values:

 2548/0/0/668/3217

 So, it seems that the measurements refer to the initial session startup
 time. Is this true?

Not exactly, it refers to the end of the previous request.

 Is this the intended behaviour?
 I'm asking because the documentation says [...] Large
 times here generally indicate network trouble between the client and
 haproxy. [...], which is not the case in my scenario.

Yes, this behaviour is documented somewhere else when using option 
http-server-close or option http-keep-alive.

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-server-close

(...)At the moment, logs will not indicate whether requests came from 
the same session or not. The accept date reported in the logs 
corresponds to the end of the previous request, and the request time 
corresponds to the time spent waiting for a new request. (...)



-- 
Cyril Bonté



unsubscribe

2014-04-08 Thread Martin Karbon





RE: AW: Weird timing values in http log

2014-04-08 Thread Lukas Tribus
 Hi Cyril,

 thank you very much for the fast answer
 and the pointer into the documentation.

 Now I have to think about whether I'm happy
 with it... ;-)

 In this case I suggest adding a cross-reference
 to the paragraph I cited, to make clear that there are
 configuration circumstances where a high request time
 does NOT mean a network problem.

Could you send a doc patch for that?


Regards,

Lukas

  


redirecting based on Accept-Language

2014-04-08 Thread Marc Fournier

as per the subject, has anyone done something like this?

we’re setting up two backend pools, one geared to RTL languages, one to LTR … 
I’d like to set it up so that it’s transparent to the end user: if they 
come in requesting, for instance, Arabic, they get directed to the RTL pool, 
and if they come in requesting English, they go to the LTR pool …

I’ve searched for examples of doing this (I figure I can use req.hdr() for 
this), but have drawn a blank … 

Before I try to do it from scratch, I just figured I’d check that I’m not 
reinventing the wheel …

Thanks ...
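For what it's worth, a minimal sketch of one way to do this (the frontend/backend names and the RTL language list are assumptions, and Accept-Language quality values and fallback lists are ignored here — only the first language tag is matched):

```haproxy
frontend fe_www
    bind :80
    # Match the start of Accept-Language against common RTL languages
    acl lang_rtl req.hdr(Accept-Language) -i -m beg ar he fa ur
    use_backend be_rtl if lang_rtl
    default_backend be_ltr
```

A real deployment would likely need to parse q-values, since clients often send lists like "en;q=0.5,ar;q=0.9".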


RE: [PATCH] Fetching TLS Unique ID

2014-04-08 Thread Lukas Tribus
Hi Dave,


 Hello
 The TLS unique id, or unique channel binding, is a byte string that can be
 pulled from a TLS connection and it is unique to that connection. It is
 defined in RFC 5929 section 3.  The value is used by various upper layer
 protocols as part of an extra layer of security.  For example XMPP
 (RFC 6120) and EST (RFC 7030).
 
 I created this patch on top of dev22 to extract this value so it can be
 passed from the front end to the back end when TLS is terminated at the
 front end.
 Here is an example configuration using it:
 
 server backend 127.0.0.1:80
  http-request set-header X-TLS-UNIQUE-ID %{+Q}[ssl_fc_unique_id]
 
 
 If you accept this patch, I'd also be happy to update configuration.txt.
 
 This is my first contribution, so please let me know the correct
 procedure if I've missed something.

I gave it a try and it works as expected. I don't have the knowledge to
actually review the code, but my impression of the patch is positive, I
like it.


Patch applies fine to dev22, but it doesn't apply to current git/master.

My suggestion would be that you rebase this so that it applies cleanly
to the current tree (preferably with git, otherwise you can also just
get the latest snapshot [1]) and include the doc update in the patch
(small note in section 7.3.3 should be enough).

Furthermore please include a short description of what the patch does
(2 - 3 sentences) for the commit message.



Regards,

Lukas



[1] http://haproxy.1wt.eu/download/1.5/src/snapshot/

  


ha pool haproxy

2014-04-08 Thread Rafaela
hi ,

How can I have high availability and load balancing for HAProxy itself? Using
keepalived only guarantees me one machine online; it does not load balance
between the HAProxy nodes. HAProxy runs on Linux (Debian) and my backend is
W2k8R2 (IIS 7.5).


RE: ha pool haproxy

2014-04-08 Thread Lukas Tribus
Hi,


 How can I have high availability and load balancing in my HA PROXY?
 Using keepalived only guarantees me an online machine and is not load
 balancing between nodes HAproxy.

Haproxy load balances traffic and guarantees high availability for your
backends. Haproxy cannot load balance its own incoming traffic, if that's
what you are referring to.

Is your question how to balance load on two haproxy instances?

Take a look at this thread:
http://thread.gmane.org/gmane.comp.web.haproxy/14320



Regards,

Lukas
  


suppress reqrep / use_backend warning

2014-04-08 Thread Patrick Hemmer
Would it be possible to get an option to suppress the warning when a
reqrep rule is placed after a use_backend rule?
[WARNING] 097/205824 (4777) : parsing
[/var/run/hapi/haproxy/haproxy.cfg:1443] : a 'reqrep' rule placed after
a 'use_backend' rule will still be processed before.

I prefer keeping my related rules grouped together, and so this message
pops up every time haproxy is (re)started. Currently it logs out 264
lines each start (I have a lot of rules), and is thus fairly annoying. I
am well aware of what the message means and my configuration is not
affected by it.

-Patrick
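For readers hitting the same warning: it only points out that content rules such as reqrep are always evaluated before backend switching, regardless of file order. A sketch of the two layouts (the acl name, host names and rule are made up):

```haproxy
# Grouped by feature - behaves identically, but warns at startup:
#     use_backend be_api if is_api
#     reqrep ^Host:\ old.example.com Host:\ new.example.com

# Warning-free ordering - all reqrep rules before use_backend:
acl is_api path_beg /api
reqrep ^Host:\ old.example.com Host:\ new.example.com
use_backend be_api if is_api
```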


Re: ha pool haproxy

2014-04-08 Thread Rafaela
Thanks Lukas!

From the threads on the list, it seems it is not possible to scale HAProxy
horizontally; the way to maintain availability is to use VRRP (master and
slave) or DNS round robin (even though you lose part of the traffic if there
is no health check). Correct?
My traffic is high and I'm working with virtual machines, so a single
haproxy instance working in isolation will not support all the traffic. Any
other suggestions?
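One pattern discussed in the referenced thread is active/active VRRP: two keepalived nodes, each MASTER for one VIP and BACKUP for the other, with DNS round robin across both VIPs so that both haproxy instances carry traffic and either can take over both VIPs. A sketch (interface names, router IDs and addresses are assumptions; mirror the priorities on the second node):

```
# /etc/keepalived/keepalived.conf on node A
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150              # node B uses 100 here
    virtual_ipaddress {
        192.0.2.10
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100              # node B uses 150 here
    virtual_ipaddress {
        192.0.2.11
    }
}
```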


2014-04-08 17:33 GMT-03:00 Lukas Tribus luky...@hotmail.com:

 Hi,


  How can I have high availability and load balancing in my HA PROXY?
  Using keepalived only guarantees me an online machine and is not load
  balancing between nodes HAproxy.

 Haproxy load balances traffic and guarantees high availability for your
 backends. Haproxy cannot load balance its own incoming traffic, if that's
 what you are referring to.

 Is your question how to balance load on two haproxy instances?

 Take a look at this thread:
 http://thread.gmane.org/gmane.comp.web.haproxy/14320



 Regards,

 Lukas



Re: [PATCH] Fetching TLS Unique ID

2014-04-08 Thread David S
Thank you Lukas.
Here is the rebased patch.
I also made one correction: I had added ssl_fc_unique_id as an ACL keyword,
but that does not make sense, so I removed that line from my patch.
Answering a question I received offline:
base64 is the common way to encode this value.  SCRAM (RFC 5802), EST (RFC
7030), and XMPP (RFC 3920) all consume this value in this format.

For a commit description:

Add the ssl_fc_unique_id keyword and corresponding sample fetch method.
The value is retrieved from OpenSSL and base64 encoded as described in RFC
5929 section 3.

Thanks,
--Dave


On Tue, Apr 8, 2014 at 4:18 PM, Lukas Tribus luky...@hotmail.com wrote:

 Hi Dave,


  Hello
  The TLS unique id, or unique channel binding, is a byte string that can
 be
  pulled from a TLS connection and it is unique to that connection. It is
  defined in RFC 5929 section 3.  The value is used by various upper layer
  protocols as part of an extra layer of security.  For example XMPP
  (RFC 6120) and EST (RFC 7030).
 
  I created this patch on top of dev22 to extract this value so it can be
  passed from the front end to the back end when TLS is terminated at the
  front end.
  Here is an example configuration using it:
 
  server backend 127.0.0.1:80
   http-request set-header X-TLS-UNIQUE-ID %{+Q}[ssl_fc_unique_id]
 
 
  If you accept this patch, I'd also be happy to update configuration.txt.
 
  This is my first contribution, so please let me know the correct
  procedure if I've missed something.

 I gave it a try and it works as expected. I don't have the knowledge to
 actually review the code, but my impression of the patch is positive, I
 like it.


 Patch applies fine to dev22, but it doesn't apply to current git/master.

 My suggestion would be that you rebase this so that it applies cleanly
 to the current tree (preferably with git, otherwise you can also just
 get the latest snapshot [1]) and include the doc update in the patch
 (small note in section 7.3.3 should be enough).

 Furthermore please include a short description of what the patch does
 (2 - 3 sentences) for the commit message.



 Regards,

 Lukas



 [1] http://haproxy.1wt.eu/download/1.5/src/snapshot/




tlsunique.patch
Description: Binary data