Re: Consistent hashing based on cookie - across multiple HAProxy boxes

2013-02-07 Thread Baptiste
ahah, you can call me Baptiste :)

You're missing a "stick on cookie(PHPSESSID)".
Also consider using the same expire delay you have on your application server.

And last but not least, add a "peers" section (and a peer directive on
the stick-table definition) where you provide all your HAProxy server
IPs in order to get the table of each HAProxy synchronized.

Then you're done.
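Putting those three points together with the snippet discussed further down the thread, the backend might look roughly like this (a sketch only: the peers section name and peer addresses are placeholders, and the exact keyword syntax should be checked against the 1.5-dev documentation):

```
# sketch -- peer names and IPs are examples, not from the thread
peers lb_peers
    peer lb1 192.168.0.1:1024
    peer lb2 192.168.0.2:1024

backend x
    balance roundrobin
    # "peers lb_peers" keeps the table synchronized across both HAProxy boxes;
    # match the expire delay to the application's session timeout
    stick-table type string size 32k expire 24h peers lb_peers
    # learn the cookie from the server's response...
    stick store-response set-cookie(PHPSESSID)
    # ...and the missing piece: match it on subsequent client requests
    stick on cookie(PHPSESSID)
```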

Baptiste

On 2/8/13, Alex Davies  wrote:
> Hi Willy,
>
> Thanks for your suggestion. I'm guessing you mean something like this
> backend:
>
> backend x
> balance roundrobin
> stick-table type string size 32k peers other_haproxy_server expire 24h
> stick store-response set-cookie(PHPSESSID)
>
> If I understand you correctly, you are saying that this will only mean that
> sessions become "persistent" once PHPSESSID is set. So, to translate into
> practicality, as long as the login page creates the relevant cookie (and it
> does not subsequently change once logged in), this should work nicely.
>
> Thanks,
>
> -Alex
>
>
>
> On Sun, Feb 3, 2013 at 7:59 AM, Baptiste  wrote:
>
>> Hi,
>>
>> the only way you could do what you want for now is using stick tables
>> (and haproxy 1.5-dev17).
>> You can learn the Set-Cookie from the server and match the Cookie in
>> the table from the client.
>> That way, all the requests from a user will be sent to the same server,
>> from the first to the last one.
>>
>> Today, haproxy is able to hash an HTTP header for load-balancing, so a
>> configuration like:
>>  balance hdr(Cookie)
>> could do the trick, but it means that ALL clients' cookies are used to
>> load-balance. And worse, since there is no phpsessionid cookie on the
>> first request, chances are that the first and the second
>> requests won't be routed to the same server.
>>
>> I guess it would be possible soon to have a:
>>  balance cook(PHPSessionID)
>> but it won't fix the sticking issue between first and second request
>> since the cookie is not present in the first request.
>>
>> So if you really want to use the hashing algorithm method, you must be
>> able to share cookies between your backend servers; otherwise only lucky
>> people will be able to get authenticated.
>> Well, maybe there are some dirty tricks like managing a farm for
>> cookie-less clients and configuring PHP to learn an unknown session on
>> the fly.
>>
>> Baptiste
>>
>>
>> On Sun, Feb 3, 2013 at 2:03 AM, Alex Davies  wrote:
>> > Hi All,
>> >
>> > What is the best way to configure haproxy to hash based on an
>> > application
>> > cookie (such as PHPSESSID), in a way that is consistent (meaning
>> > multiple
>> > haproxy servers will route to the same backend), ideally including the
>> > ability to weight backends (the configuration would clearly have to be
>> > the same on these different boxes)?
>> >
>> > appsession obviously allows this for a single HAProxy server, but it
>> > seems from the documentation that it picks a server based on the hash
>> > at the start of each session, so if the same session hit a different
>> > but identically configured haproxy server it would end up with a
>> > different backend server.
>> >
>> > Thanks,
>> >
>> > -Alex
>>
>



Re: Consistent hashing based on cookie - across multiple HAProxy boxes

2013-02-07 Thread Alex Davies
Hi Willy,

Thanks for your suggestion. I'm guessing you mean something like this
backend:

backend x
balance roundrobin
stick-table type string size 32k peers other_haproxy_server expire 24h
stick store-response set-cookie(PHPSESSID)

If I understand you correctly, you are saying that this will only mean that
sessions become "persistent" once PHPSESSID is set. So, to translate into
practicality, as long as the login page creates the relevant cookie (and it
does not subsequently change once logged in), this should work nicely.

Thanks,

-Alex



On Sun, Feb 3, 2013 at 7:59 AM, Baptiste  wrote:

> Hi,
>
> the only way you could do what you want for now is using stick tables
> (and haproxy 1.5-dev17).
> You can learn the Set-Cookie from the server and match the Cookie in
> the table from the client.
> That way, all the requests from a user will be sent to the same server,
> from the first to the last one.
>
> Today, haproxy is able to hash an HTTP header for load-balancing, so a
> configuration like:
>  balance hdr(Cookie)
> could do the trick, but it means that ALL clients' cookies are used to
> load-balance. And worse, since there is no phpsessionid cookie on the
> first request, chances are that the first and the second
> requests won't be routed to the same server.
>
> I guess it would be possible soon to have a:
>  balance cook(PHPSessionID)
> but it won't fix the sticking issue between first and second request
> since the cookie is not present in the first request.
>
> So if you really want to use the hashing algorithm method, you must be
> able to share cookies between your backend servers; otherwise only lucky
> people will be able to get authenticated.
> Well, maybe there are some dirty tricks like managing a farm for
> cookie-less clients and configuring PHP to learn an unknown session on
> the fly.
>
> Baptiste
>
>
> On Sun, Feb 3, 2013 at 2:03 AM, Alex Davies  wrote:
> > Hi All,
> >
> > What is the best way to configure haproxy to hash based on an application
> > cookie (such as PHPSESSID), in a way that is consistent (meaning multiple
> > haproxy servers will route to the same backend), ideally including the
> > ability to weight backends (the configuration would clearly have to be
> > the same on these different boxes)?
> >
> > appsession obviously allows this for a single HAProxy server, but it
> > seems from the documentation that it picks a server based on the hash
> > at the start of each session, so if the same session hit a different
> > but identically configured haproxy server it would end up with a
> > different backend server.
> >
> > Thanks,
> >
> > -Alex
>


Re: installing SSL, and backend communication is non-ssl

2013-02-07 Thread Robin Lee Powell
On Thu, Feb 07, 2013 at 11:54:56AM -0500, S Ahmed wrote:
> Is it hard to install SSL with haproxy?
> 
> I want all incoming connections to use SSL, but when haproxy
> communicates with the backends I don't want them to be ssl based.
> 
> Any tutorials on setting this up?

With 1.5-dev17 (or whatever's the latest) that's fairly easy.
Here's a config snippet.  The ":::443" thing is to make it bind to
ipv6, which *:443 doesn't.  The cert file has to be all the relevant
certs concatenated; see the docs for more info.

# Listen for ssl requests on 443, but get passed on to
# http-based ports for Apache
listen https
balance roundrobin
mode http
option http-server-close
option forwardfor
option httpchk HEAD /cytobank/images/logo_bigger.gif
bind :::443 ssl crt /opt/haproxy/etc/wildcard.cert
default_backend https_apache_localhost

backend https_apache_localhost
server server_0 localhost:83 check inter 3000 rise 1 fall 1 error-limit 1 on-error mark-down


-Robin



Re: could a single ha proxy server sustain 1500 requests per second

2013-02-07 Thread Willy Tarreau
On Thu, Feb 07, 2013 at 11:34:43AM -0500, S Ahmed wrote:
> Thanks Willy.
> 
> On the same note you said not to run anything on the same machine, to lower
> costs I want to run other things on the haproxy front-end load balancer.
> 
> What are the critical things to watch for on the server so I can be
> notified at what point having 2 things on the server is becoming a problem?

First, you need to ensure that the machine never ever swaps. This is absolutely
critical. The second important point to consider is that you don't want another
process to use a lot of CPU on the same machine, or you want to dedicate some
CPUs to other processes. And last point is that you don't want other processes
to harm the network stack (eg: eat all the source ports by doing nasty things
such as connecting and closing as a client, rendering the source port unusable
for doing real work for 2 minutes).

There are people who live very well with their LB doing several things, but
the ones who do it without taking much care can really regret it. After all,
the LB is the point where *all* your traffic passes, you certainly don't want
it to slow down because a stupid process was started on it by accident. And
some web sites can lose so much per minute of failure that they don't want to
risk mixing workloads to save a $500 machine !

Regards,
Willy




Re: SSL handshake failure

2013-02-07 Thread Willy Tarreau
On Thu, Feb 07, 2013 at 09:22:37PM +0400, Samat Galimov wrote:
> Funny, with patch applied it establishes first connection after start
> normally.
> Then old thing continues.

I'm unsure what you mean, do you mean the patch has slightly improved the
situation but not completely ?

Willy




Re: SSL handshake failure

2013-02-07 Thread Samat Galimov
Funny, with patch applied it establishes first connection after start
normally.
Then old thing continues.


On Thu, Feb 7, 2013 at 6:58 PM, Willy Tarreau  wrote:

> On Thu, Feb 07, 2013 at 06:49:14PM +0400, Samat Galimov wrote:
> > Thank you very much, overlooked your email due to filters, sorry for delay.
> > I am very happy to help, sure I would accept a patch.
> > Server is available from outside world but is not heavily used; we don't
> > point load to it because of these SSL errors.
> >
> > By the way, I am using the default haproxy-devel port in the FreeBSD tree,
> > so the source at
> > http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev17.tar.gz
> > is being used.
>
> OK, please find it attached. You should apply it on top of your current
> source tree and rebuild.
>
> Thanks,
> Willy
>
>


Re: could a single ha proxy server sustain 1500 requests per second

2013-02-07 Thread S Ahmed
Thanks Willy.

On the same note you said not to run anything on the same machine, to lower
costs I want to run other things on the haproxy front-end load balancer.

What are the critical things to watch for on the server so I can be
notified at what point having 2 things on the server is becoming a problem?


On Wed, Dec 5, 2012 at 2:00 AM, Willy Tarreau  wrote:

> On Tue, Dec 04, 2012 at 02:19:30PM -0500, S Ahmed wrote:
> > Hi,
> >
> > So 500 Mbits is 1/2 usage of a 1 Gbps port (haproxy and the back-end
> > servers will have 1 Gbps connections).
>
> No, the traffic goes in opposite directions and the link is full duplex,
> so you can effectively have 1 Gbps in and 1 Gbps out at the same time.
>
> > How does latency change things? e.g. what if it takes 90% of clients 1
> > second to send the 20K file, while some may take 1-3 seconds.
>
> it's easy, you said you were counting on 1500 req/s :
>
>- 90% of 1500 req/s = 1350 req/s
>- 10% of 1500 req/s =  150 req/s
>
> 1350 req/s are present for one second => 1350 concurrent requests.
> 150 req/s are present for 3 seconds => 450 concurrent requests.
> => you have a total of 1800 concurrent requests (with one connection
> each, it's 1800 concurrent connections).
>
> What we can say with such numbers :
>   - 1500 connections/s is light, even if conntrack is loaded and correctly
> tuned, you won't notice (we're doing twice this on a 500 Mbps Geode
> running on 1 watt).
>
>   - 1800 concurrent connections is light too, multiply that by 16 kB, it's
> 30MB of RAM for the kernel-side sockets, and twice that at most for
> haproxy, so less than 100 MB of RAM.
>
>   - 250 Mbps in both directions should not be an issue either, even my
> pc-card realtek NIC does it on my 8-years old pentium-M.
>
> At only 1800 concurrent connections, the latency will probably be mostly
> related to the NIC's interrupt rate. But we're speaking about hundreds of
> microseconds here.
>
> If you're concerned about latency, use a correct NIC, don't run any other
> software on the machine, and obviously don't run this in a VM !
>
> Hoping this helps,
> Willy
>
>
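Willy's arithmetic above is essentially Little's law (concurrency = arrival rate x duration). A quick sanity check of the numbers from the quoted mail; the script is illustrative and not part of the thread:

```python
# Little's law: concurrent connections = arrival rate (req/s) * duration (s).
# All numbers are the ones used in Willy's example above.
total_rate = 1500                     # expected req/s

fast_rate = total_rate * 90 // 100    # 90% of clients finish in 1 second
slow_rate = total_rate * 10 // 100    # 10% take up to 3 seconds

concurrent = fast_rate * 1 + slow_rate * 3
print(fast_rate, slow_rate, concurrent)   # 1350 150 1800

# ~16 kB of kernel socket memory per connection, as in the mail
mem_kb = concurrent * 16
print(mem_kb // 1024, "MB")               # 28 MB (Willy rounds up to 30)
```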


Re: -sf/-st not working

2013-02-07 Thread Marc-Antoine Perennou
It is totally normal that systemd kills the new process: the main process
(the first one) has exited, and this is the expected behaviour.

I'm currently patching haproxy to fully support systemd; I'll probably
submit my patches by tomorrow (it's fully functional here, it only needs a
little cleaning).


On 7 February 2013 16:31, Eugene Istomin  wrote:

> I think the main problem is in systemd:
>
> - from the command line, -sf works as expected
> - from sysvinit, -sf works as expected
> - from systemd, -sf only stops the process.
>
> I tried both init.d & systemd scripts on a systemd-based Linux - all
> results are the same:
>
> Loaded: loaded (/lib/systemd/system/haproxy.service; disabled)
> Active: failed (Result: signal) since Thu, 07 Feb 2013 17:18:43 +0200; 12s ago
> Process: 28125 ExecReload=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $MAINPID (code=exited, status=0/SUCCESS)
> Process: 28118 ExecStart=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid (code=exited, status=0/SUCCESS)
> Process: 28115 ExecStartPre=/usr/sbin/haproxy -c -q -f /etc/haproxy/haproxy.cfg (code=exited, status=0/SUCCESS)
> Main PID: 28126 (code=killed, signal=KILL)
> CGroup: name=systemd:/system/haproxy.service
>
> systemd script:
>
> [Unit]
> Description=HAProxy For TCP And HTTP Based Applications
> After=network.target
>
> [Service]
> Type=forking
> PIDFile=/var/run/haproxy.pid
> ExecStartPre=/usr/sbin/haproxy -c -q -f /etc/haproxy/haproxy.cfg
> ExecStart=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
> ExecReload=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $MAINPID
>
> [Install]
> WantedBy=multi-user.target
>
> --
> Best regards,
> Eugene Istomin
>
> On Thursday 07 February 2013 14:07:44 Baptiste wrote:
> > You should have a new HAProxy process started using the new
> > configuration and binding the ports...
> >
> > cheers
> >
> > On 2/7/13, Eugene Istomin  wrote:
> > > Thanks for the answer,
> > >
> > > as written in
> > > http://www.mgoff.in/2010/04/18/haproxy-reloading-your-config-with-minimal-service-impact/
> > > "The end-result is a reload of the configuration file which is not
> > > visible by the customer"
> > >
> > > But in our case it leads to unbinding from all ports and finishing
> > > haproxy process.
> > > Can this issue be related to RPM build options? RPM build log is
> > > https://build.opensuse.org/package/rawlog?arch=x86_64&package=haproxy-1.5&project=server%3Ahttp&repository=openSUSE_12.2
> > >
> > > --
> > > Best regards,
> > > Eugene Istomin
> > >
> > > On Thursday 07 February 2013 07:28:17 Willy Tarreau wrote:
> > >> Hello Eugene,
> > >>
> > >> On Wed, Feb 06, 2013 at 08:29:33PM +0200, Eugene Istomin wrote:
> > >> > Hello,
> > >> >
> > >> > We have problem with reload/HUP:
> > >> > if i run #/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p
> > >> > /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) - haproxy
> > >> > process is shutting down and exit
> > >>
> > >> This is the intended behaviour, it unbinds from its ports so that the
> > >> new process can bind, then waits for all existing connections to
> > >> terminate and leaves. Isn't it what you're observing ? What would you
> > >> have expected instead ?
> > >>
> > >> Willy


Re: -sf/-st not working

2013-02-07 Thread Eugene Istomin
I think the main problem is in systemd: 

- from the command line, -sf works as expected
- from sysvinit, -sf works as expected
- from systemd, -sf only stops the process.

I tried both init.d & systemd scripts on a systemd-based Linux - all results
are the same:

  Loaded: loaded (/lib/systemd/system/haproxy.service; disabled)
  Active: failed (Result: signal) since Thu, 07 Feb 2013 17:18:43 +0200; 12s ago
  Process: 28125 ExecReload=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $MAINPID (code=exited, status=0/SUCCESS)
  Process: 28118 ExecStart=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid (code=exited, status=0/SUCCESS)
  Process: 28115 ExecStartPre=/usr/sbin/haproxy -c -q -f /etc/haproxy/haproxy.cfg (code=exited, status=0/SUCCESS)
Main PID: 28126 (code=killed, signal=KILL)
  CGroup: name=systemd:/system/haproxy.service


systemd script:
[Unit]
Description=HAProxy For TCP And HTTP Based Applications
After=network.target

[Service]
Type=forking
PIDFile=/var/run/haproxy.pid
ExecStartPre=/usr/sbin/haproxy -c -q -f /etc/haproxy/haproxy.cfg
ExecStart=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
ExecReload=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $MAINPID

[Install]
WantedBy=multi-user.target

-- 
Best regards,
Eugene Istomin


On Thursday 07 February 2013 14:07:44 Baptiste wrote:
> You should have a new HAProxy process started using the new
> configuration and binding the ports...
> 
> cheers
> 
> On 2/7/13, Eugene Istomin  wrote:
> > Thanks for the answer,
> > 
> > as written in
> > http://www.mgoff.in/2010/04/18/haproxy-reloading-your-config-with-minimal-service-impact/
> > "The end-result is a reload of the configuration file which is not visible
> > by
> > the customer"
> > 
> > But in our case it leads to unbinding from all ports and finishing haproxy
> > process.
> > Can this issue be related to RPM build options? RPM build log is
> > https://build.opensuse.org/package/rawlog?arch=x86_64&package=haproxy-1.5&project=server%3Ahttp&repository=openSUSE_12.2
> > 
> > 
> > --
> > Best regards,
> > Eugene Istomin
> > 
> > On Thursday 07 February 2013 07:28:17 Willy Tarreau wrote:
> >> Hello Eugene,
> >> 
> >> On Wed, Feb 06, 2013 at 08:29:33PM +0200, Eugene Istomin wrote:
> >> > Hello,
> >> > 
> >> > We have problem with reload/HUP:
> >> > if i run #/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p
> >> > /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)  - haproxy process
> >> > is
> >> > shutting down and exit
> >> 
> >> This is the intended behaviour, it unbinds from its ports so that the new
> >> process can bind, then waits for all existing connections to terminate
> >> and leaves. Isn't it what you're observing ? What would you have expected
> >> instead ?
> >> 
> >> Willy

Re: SSL handshake failure

2013-02-07 Thread Willy Tarreau
On Thu, Feb 07, 2013 at 06:49:14PM +0400, Samat Galimov wrote:
> Thank you very much, overlooked your email due to filters, sorry for delay.
> I am very happy to help, sure I would accept a patch.
> Server is available from outside world but is not heavily used; we don't
> point load to it because of these SSL errors.
> 
> By the way, I am using the default haproxy-devel port in the FreeBSD tree,
> so the source at
> http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev17.tar.gz
> is being used.

OK, please find it attached. You should apply it on top of your current
source tree and rebuild.

Thanks,
Willy

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 87eff2b..07b1ca7 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -128,7 +128,8 @@ int ssl_sock_verifycbk(int ok, X509_STORE_CTX *x_store)
 	}
 
 	if (objt_listener(conn->target)->bind_conf->ca_ignerr & (1ULL << err)) {
-		ERR_clear_error();
+		if (ERR_peek_error())
+			ERR_clear_error();
 		return 1;
 	}
 
@@ -141,7 +142,8 @@ int ssl_sock_verifycbk(int ok, X509_STORE_CTX *x_store)
 
 	/* check if certificate error needs to be ignored */
 	if (objt_listener(conn->target)->bind_conf->crt_ignerr & (1ULL << err)) {
-		ERR_clear_error();
+		if (ERR_peek_error())
+			ERR_clear_error();
 		return 1;
 	}
 
@@ -885,6 +887,9 @@ int ssl_sock_handshake(struct connection *conn, unsigned int flag)
 	if ((conn->flags & CO_FL_CONNECTED) && SSL_renegotiate_pending(conn->xprt_ctx)) {
 		char c;
 
+		if (unlikely(ERR_peek_error()))
+			ERR_clear_error();
+
 		ret = SSL_peek(conn->xprt_ctx, &c, 1);
 		if (ret <= 0) {
 			/* handshake may have not been completed, let's find why */
@@ -942,6 +947,9 @@ int ssl_sock_handshake(struct connection *conn, unsigned int flag)
 		goto reneg_ok;
 	}
 
+	if (unlikely(ERR_peek_error()))
+		ERR_clear_error();
+
 	ret = SSL_do_handshake(conn->xprt_ctx);
 	if (ret != 1) {
 		/* handshake did not complete, let's find why */
@@ -1008,7 +1016,8 @@ reneg_ok:
 
  out_error:
 	/* Clear openssl global errors stack */
-	ERR_clear_error();
+	if (ERR_peek_error())
+		ERR_clear_error();
 
 	/* free resumed session if exists */
 	if (objt_server(conn->target) && objt_server(conn->target)->ssl_ctx.reused_sess) {
@@ -1062,6 +1071,9 @@ static int ssl_sock_to_buf(struct connection *conn, struct buffer *buf, int coun
 	 * EINTR too.
 	 */
 	while (try) {
+		if (unlikely(ERR_peek_error()))
+			ERR_clear_error();
+
 		ret = SSL_read(conn->xprt_ctx, bi_end(buf), try);
 		if (conn->flags & CO_FL_ERROR) {
 			/* CO_FL_ERROR may be set by ssl_sock_infocbk */
@@ -1084,7 +1096,8 @@ static int ssl_sock_to_buf(struct connection *conn, struct buffer *buf, int coun
 				conn->flags |= CO_FL_ERROR;
 
 				/* Clear openssl global errors stack */
-				ERR_clear_error();
+				if (ERR_peek_error())
+					ERR_clear_error();
 			}
 			goto read0;
 		}
@@ -1118,7 +1131,8 @@ static int ssl_sock_to_buf(struct connection *conn, struct buffer *buf, int coun
 	return done;
  out_error:
 	/* Clear openssl global errors stack */
-	ERR_clear_error();
+	if (ERR_peek_error())
+		ERR_clear_error();
 
 	conn->flags |= CO_FL_ERROR;
 	return done;
@@ -1158,6 +1172,9 @@ static int ssl_sock_from_buf(struct connection *conn, struct buffer *buf, int fl
 		if (buf->data + try > buf->p)
 			try = buf->data + try - buf->p;
 
+		if (unlikely(ERR_peek_error()))
+			ERR_clear_error();
+
 		ret = SSL_write(conn->xprt_ctx, bo_ptr(buf), try);
 		if (conn->flags & CO_FL_ERROR) {
 			/* CO_FL_ERROR may be set by ssl_sock_infocbk */
@@ -1201,7 +1218,8 @@ static int ssl_sock_from_buf(struct connection *conn, struct buffer *buf, int fl
 
  out_error:
 	/* Clear openssl global errors stack */
-	ERR_clear_error();
+	if (ERR_peek_error())
+		ERR_clear_error();
 
 	conn->flags |= CO_FL_ERROR;
 	return done;
@@ -1226,7 +1244,8 @@ static void ssl_sock_shutw(struct connection *conn, int clean)
 	/* no handshake was in progress, try a clean ssl shutdown */
 	if (clean && (SSL_shutdown(conn->xprt_ctx) <= 0)) {
 		/* Clear openssl global errors stack */
-		ERR_clear_error()

Re: SSL handshake failure

2013-02-07 Thread Samat Galimov
Thank you very much, overlooked your email due to filters, sorry for delay.
I am very happy to help, sure I would accept a patch.
Server is available from outside world but is not heavily used; we don't
point load to it because of these SSL errors.

By the way, I am using the default haproxy-devel port in the FreeBSD tree,
so the source at
http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev17.tar.gz
is being used.


On Wed, Feb 6, 2013 at 10:56 AM, Willy Tarreau  wrote:

> Hello Samat,
>
> On Tue, Feb 05, 2013 at 12:39:20PM +0400, Samat Galimov wrote:
> > Hello,
> >
> > I have very strange behaviour of HA-Proxy version 1.5-dev17 2012/12/28 on
> > FreeBSD 9.0-Stable
> >
> > % openssl s_client -debug -servername dharma.zvq.me -connect
> > dharma.zvq.me:443 /usr/local/etc
> > CONNECTED(0003)
> > write to 0x801407160 [0x801525000] (128 bytes => 128 (0x80))
> >  - 16 03 01 00 7b 01 00 00-77 03 01 51 10 6a 26 66 {...w..Q.j&f
> > 0010 - e8 2b 77 63 f9 ea 25 e8-b7 cb 51 84 0a d7 0d 7c .+wc..%...Q???.|
> > 0020 - 58 2c 32 6f 0f 54 94 c6-29 57 c4 00 00 34 00 39 X,2o.T..)W???4.9
> > 0030 - 00 38 00 35 00 88 00 87-00 84 00 16 00 13 00 0a .8.5..??
> > 0040 - 00 33 00 32 00 2f 00 45-00 44 00 41 00 05 00 04 .3.2./.E.D.A???.
> > 0050 - 00 15 00 12 00 09 00 14-00 11 00 08 00 06 00 03 .??.
> > 0060 - 00 ff 01 00 00 1a 00 00-00 12 00 10 00 00 0d 64 .??d
> > 0070 - 68 61 72 6d 61 2e 7a 76-71 2e 6d 65 00 23 harma.zvq.me.#
> > 0080 - 
> > read from 0x801407160 [0x801577000] (7 bytes => 0 (0x0))
> > 42642:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/mnt/jq032hgn/usr/src/secure/lib/libssl/../../../crypto/openssl/ssl/s23_lib.c:182:
> > OpenSSL is 0.9.8q 2 Dec 2010
> > OpenSSL is 0.9.8q 2 Dec 2010
> >
> > It randomly gives such a weird error, 50% chance, as I see.
>
> Are you the only one to access this service or is it in production and
> used by other people ? I'm asking because we had a similar report a few
> weeks ago of 0.9.8 on solaris experiencing random errors, and we suspected
> that the error queue was probably sometimes filled by some SSL calls
> without returning an error, and thus was not flushed.
>
> Would you accept to try a patch ? We have one to change the behaviour
> that we have still not merged due to the lack of testers experiencing
> the issue !
>
> > On server side (i run haproxy with -d) i get:
> > 000c:https.accept(0005)=0007 from [5.9.11.40:43423]
> > 000c:https.clicls[0007:0008]
> > 000c:https.closed[0007:0008]
> >
> > Here is my config:
> (...)
>
> I see nothing wrong in your configuration, and a config should not cause
> a random behaviour anyway. Also you're not in a chroot so it cannot be
> caused by a lack of entropy caused by the inability to access /dev/urandom.
>
> Willy
>
>


Re: -sf/-st not working

2013-02-07 Thread Baptiste
You should have a new HAProxy process started using the new
configuration and binding the ports...

cheers

On 2/7/13, Eugene Istomin  wrote:
> Thanks for the answer,
>
> as written in
> http://www.mgoff.in/2010/04/18/haproxy-reloading-your-config-with-minimal-service-impact/
> "The end-result is a reload of the configuration file which is not visible
> by
> the customer"
>
> But in our case it leads to unbinding from all ports and finishing haproxy
> process.
> Can this issue be related to RPM build options? RPM build log is
> https://build.opensuse.org/package/rawlog?arch=x86_64&package=haproxy-1.5&project=server%3Ahttp&repository=openSUSE_12.2
>
>
> --
> Best regards,
> Eugene Istomin
>
>
> On Thursday 07 February 2013 07:28:17 Willy Tarreau wrote:
>> Hello Eugene,
>>
>> On Wed, Feb 06, 2013 at 08:29:33PM +0200, Eugene Istomin wrote:
>> > Hello,
>> >
>> > We have problem with reload/HUP:
>> > if i run #/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p
>> > /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)  - haproxy process
>> > is
>> > shutting down and exit
>>
>> This is the intended behaviour, it unbinds from its ports so that the new
>> process can bind, then waits for all existing connections to terminate
>> and leaves. Isn't it what you're observing ? What would you have expected
>> instead ?
>>
>> Willy



compress only if response size is big enough

2013-02-07 Thread Dmitry Sivachenko
Hello!

It would be nice to add a minimum-size parameter, so that haproxy
compresses an HTTP response only if the response size is bigger than that
value.

Compressing small data can lead to a size increase and is useless.

Thanks.
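Dmitry's point that compressing small payloads can actually grow them is easy to demonstrate with a standalone snippet (plain Python gzip, not haproxy's compression engine):

```python
import gzip

small = b"OK"                    # a tiny response body
large = b"hello world " * 1000   # ~12 kB of repetitive text

# gzip adds a fixed header/trailer, so a tiny body gets bigger
print(len(small), len(gzip.compress(small)))   # compressed form is larger
print(len(large), len(gzip.compress(large)))   # compressed form is much smaller
```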


Re: failing to redirect http to https using HAProxy 1.5dev15

2013-02-07 Thread Guillaume Castagnino
Hi,

You should consider using the brand new redirect rule, meant just for that:

redirect scheme https code 301 if ! secure
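Applied to the frontend from the mail quoted below, that rule might slot in like this (a sketch; the bind lines and the `secure` ACL are taken from Robbert's configuration, untested):

```
frontend fe_default
  bind :443 ssl crt /opt/haproxy/ppc.pem crt /opt/haproxy/keystore/
  bind :80
  acl secure dst_port 443
  # replaces the reqirep + "redirect prefix /" pair: send a 301 to the
  # same host and path with the https scheme for any non-TLS request
  redirect scheme https code 301 if ! secure
  default_backend be_default
```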



Regards


On Thursday 07 February 2013 11:38:34, Robbert van Waveren wrote:
> Hi,
> 
> I'm trying out HAProxy and would like to use as our general purpose
> proxy/loadbalancer.
> Currently I've all requirements tackled except forcing the use of
> https (by means of a redirection).
> We're planning to host many different subdomains, so I really need the
> redirect to be as generic as possible.
> 
> On the web I found a solution proposal using "reqirep" to rewrite the
> host header to https and then a generic redirect.
> However I can't get it to work as the redirect seems to fail to add
> the protocol+host part in the redirect url.
> (Leading to a redirect loop)
> 
> Below is a simplified configuration that I'm currently trying to get
> working.
> Note that I'm using HAProxy itself to deal with SSL termination.
> 
> global
> maxconn 4096
> daemon
> nbproc  2
> defaults
> clitimeout  6
> srvtimeout  3
> contimeout  4000
> modehttp
> 
> frontend fe_default
>   bind :443 ssl crt /opt/haproxy/ppc.pem crt /opt/haproxy/keystore/
>   bind :80
>   acl secure dst_port 443
>   reqirep ^Host:[\ ]*\(.*\)  Host:\ https://\1 if ! secure
>   redirect prefix / if ! secure
>   default_backend be_default
> 
> backend be_default
>   balance roundrobin
>   option httpchk
>   cookie srv insert postonly indirect
>   server civ1 10.2.32.175:443 weight 1 maxconn 512 check cookie one
>   server civ2 10.2.32.176:443 weight 1 maxconn 512 check cookie two
> 
> 
> Any help is much appreciated.
> 
> Regards,
> 
> Robbert
-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: -sf/-st not working

2013-02-07 Thread Eugene Istomin
Thanks for the answer,

as written in
http://www.mgoff.in/2010/04/18/haproxy-reloading-your-config-with-minimal-service-impact/
"The end-result is a reload of the configuration file which is not visible by 
the customer"

But in our case it leads to unbinding from all ports and finishing haproxy 
process.
Can this issue be related to RPM build options? RPM build log is
https://build.opensuse.org/package/rawlog?arch=x86_64&package=haproxy-1.5&project=server%3Ahttp&repository=openSUSE_12.2

-- 
Best regards,
Eugene Istomin


On Thursday 07 February 2013 07:28:17 Willy Tarreau wrote:
> Hello Eugene,
> 
> On Wed, Feb 06, 2013 at 08:29:33PM +0200, Eugene Istomin wrote:
> > Hello,
> > 
> > We have problem with reload/HUP:
> > if i run #/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p
> > /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)  - haproxy process is
> > shutting down and exit
> 
> This is the intended behaviour, it unbinds from its ports so that the new
> process can bind, then waits for all existing connections to terminate
> and leaves. Isn't it what you're observing ? What would you have expected
> instead ?
> 
> Willy