Re: haproxy: *** glibc detected *** /usr/sbin/haproxy: double free or corruption (out): 0x0000000001ef41a0 ***

2012-05-22 Thread Sander Klein

Hmmm, I thought I typed more text...

On 22.05.2012 11:06, Sander Klein wrote:

Hi,

When I reload haproxy I get this message:

May 22 11:02:45 lb01-a haproxy: *** glibc detected *** /usr/sbin/haproxy:
double free or corruption (out): 0x0000000001ef41a0 ***

I'm running haproxy 1.5-dev10 2012/05/13

If any more info is needed please let me know.


I was wondering whether this message indicates a real problem or is just a bug.

Regards,

Sander



SSL farm

2012-05-22 Thread Allan Wind
I read through the last 6 months of the archive and the usual answer 
for SSL support is to put nginx/stunnel/stud in front.  This, as far 
as I can tell, means a single server handling SSL, which is what 
http://haproxy.1wt.eu/#desi suggests is a non-scalable solution.

You can obviously configure haproxy to route ssl connections to a 
farm via tcp mode, but you then lose the client IP.  The 
transparent keyword is promising but apparently requires the haproxy 
box to be the gateway.  Not sure that is possible in our cloud
environment.  
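
For reference, this is roughly the tcp-mode setup I mean (a minimal sketch
with made-up names and addresses, not our actual configuration):

    frontend ssl_in
        bind :443
        mode tcp
        option tcplog
        default_backend ssl_farm

    backend ssl_farm
        mode tcp
        balance source          # keep a given client on the same SSL server
        # The "transparent" variant would instead use:
        #   source 0.0.0.0 usesrc clientip
        # but that needs TPROXY support and the haproxy box as the gateway.
        server ssl1 10.0.0.11:443 check
        server ssl2 10.0.0.12:443 check

With this, the SSL servers only ever see haproxy's address as the source,
which is the client IP problem described above.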

I understand from:
http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html#setting-a-session-cache-with-apache-nginx
that session reuse (with mod_gnutls in our case) would need to be
configured on the backend to permit ssl session resumption.

But how do you go about distributing traffic to an ssl farm 
without losing the client IP?


/Allan
-- 
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com



Value Based Fee Arrangement – Chief Counsel’s Approach to Structuring Relationships

2012-05-22 Thread Tracy Richard
Dear Reader,

Invitation to attend - web event led by Deepak Malhotra, Chief Legal 
Administrative Officer and Global Counsel, Fusion Universal, UK

Topic: Value Based Fee Arrangement – Chief Counsel’s Approach to
Structuring Relationships

Venue: At your desk: On your Laptop/PC or Phone
Date: 23 May 2012
Time: 9:00 am PDT/11:00 am CDT/12:00 pm EDT
Duration: 60 minutes including Q & A
Live Participation Fee: US$ 49 (Free for Gold Members and General
Counsel)
Access the Recorded Version: US$ 49

The purchase price of this event will also give you FREE access to one
more LPO/IP Offshoring Podcast of your choice (of the same or lower price!).

Please indicate if you or your staff member are interested in
registering.

Should you participate in this web event, you will be awarded the
Silver Membership at GOAL, absolutely free of cost.


Best regards,
Tracy Richard
Executive, Global Outsourcing Association of Lawyers (GOAL)

PS: To access the library of Podcasts of our previous Legal/IP
Offshoring webinars, please indicate. We can send you the requisite
information.

If you don’t want to receive emails, please reply to this email with
the subject line ‘unsubscribe’.



Re: SSL farm

2012-05-22 Thread Vincent Bernat
OoO During the early evening of Tuesday, 22 May 2012, at around 17:52, Bar
Ziony bar...@gmail.com said:

 You need to place a packet load balancer such as LVS in front of
 haproxy, which directs SSL traffic to an SSL farm (which preserves the
 client IP), and regular HTTP access to haproxy.

 That's how I understand it at least.

Yes. And solve the session problem by using some kind of persistence, for
example a source-hashing load balancing algorithm.
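
For the tcp-mode approach discussed earlier in the thread, the haproxy side
of such persistence could look roughly like this (an illustrative sketch
only, with hypothetical server names and addresses):

    backend ssl_farm
        mode tcp
        balance source           # hash the source address: one client, one SSL server
        hash-type consistent     # if a server dies, only its clients are remapped
        server ssl1 10.0.0.11:443 check
        server ssl2 10.0.0.12:443 check
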
-- 
Vincent Bernat ☯ http://vincent.bernat.im

panic("No CPUs found.  System halted.\n");
2.4.3 linux/arch/parisc/kernel/setup.c



Re: SSL farm

2012-05-22 Thread Allan Wind
On 2012-05-22 19:46:45, Vincent Bernat wrote:
 Yes. And solve the session problem by using some kind of persistence, for
 example a source-hashing load balancing algorithm.

Persistence here meaning that ssl packets for a given session go to 
the same ssl server?  If so, what happens if that ssl server dies?


/Allan
-- 
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com



Re: [ANNOUNCE] haproxy 1.4.21

2012-05-22 Thread Vivek Malik
A recommended upgrade for all production users. While we are not
(generally) affected by the bugs fixed in this haproxy stable release, I
recommend updating haproxy.

I can update the haproxy binary in puppet and check it in (we distribute
the haproxy binary via puppetmaster).

Aiman,

Please update the puppetmaster when you see fit and, in general, please
ensure that the puppet client is running on all machines.

Thanks,
Vivek

On Mon, May 21, 2012 at 1:43 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi all,

 a number of old bugs were reported recently. Some of them are quite
 problematic because they can lead to crashes while parsing configuration
 or when starting up, which is even worse considering that startup scripts
 will generally not notice it.

 Among the bugs fixed in 1.4.21, we can enumerate :
  - risk of crash if using reqrep/rsprep and having tune.bufsize manually
configured larger than what was compiled in. The cause is that the trash
buffer used for the replacement was still static, and I believed this was
fixed months ago but only my mailbox had the fix! Thanks to Dmitry
Sivachenko for reporting this bug. (A hypothetical configuration sketch
illustrating this combination follows the list below.)

  - risk of crash when using header captures on a TCP frontend. This is a
configuration issue, and this situation is now correctly detected and
reported. Thanks to Olufemi Omojola for reporting this bug.

  - risk of crash when some servers are declared with checks in a farm which
does not use an LB algorithm (eg: option transparent or dispatch).
This happens when a server state is updated and reported to the
non-existent LB algorithm. Fortunately, this happens at start-up when
reporting the servers either up or down, but it is still after the fork
and too late to be easily recovered from by scripts. Thanks to David
Touzeau for reporting this bug.

  - balance source did not correctly hash IPv6 addresses, so IPv4
connections to IPv6 listeners would always get the same result. Thanks
to Alex Markham for reporting this bug.

  - the connect timeout was not properly reset upon connection establishment,
resulting in a retry if the timeout struck at exactly the same millisecond
the connect succeeded. The effect is that if a request was sent as part of
the connect handshake, it is not available for resend during the retry and
a response timeout is reported for the server. Note that in practice, this
only happens with erroneous configurations. Thanks to Yehuda Sadeh for
reporting this bug.

  - the error captures were wrong if the buffer wrapped, which happens when
capturing incorrectly encoded chunked responses.
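
 As an illustration of the first item above, the affected combination looks
 roughly like this (a minimal, hypothetical sketch, not taken from any real
 setup): tune.bufsize raised above the compiled-in value, combined with a
 reqrep rule.

     global
         tune.bufsize 32768        # larger than the size compiled in

     defaults
         mode http
         timeout connect 5s
         timeout client  30s
         timeout server  30s

     listen www
         bind :80
         reqrep ^Host:\ www.example.com   Host:\ app.example.com
         server app1 10.0.0.21:80 check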

 I also backported Cyril's work on the stats page to allow POST params to be
 posted in any order, because I know there are people who script actions on
 this page.

 This release also includes doc cleanups from Cyril, Dmitry Sivachenko and
 Adrian Bridgett.

 Distro packagers will be happy to know that I added explicit checks to shut
 gcc warnings about unchecked write() return value in the debug code.

 While it's very likely that almost nobody is affected by the bugs above,
 troubleshooting them is annoying enough to justify an upgrade.

 Sources, Linux/x86 and Solaris/sparc binaries are at the usual location :

site index : http://haproxy.1wt.eu/
sources: http://haproxy.1wt.eu/download/1.4/src/
changelog  : http://haproxy.1wt.eu/download/1.4/src/CHANGELOG
binaries   : http://haproxy.1wt.eu/download/1.4/bin/

 Willy





Re: SSL farm

2012-05-22 Thread Bar Ziony
If an SSL server dies, LVS can direct the traffic to another server.
Alternatively, you can store SSL sessions in memcached, for example, to share
them between the SSL servers in the farm. I once stumbled upon a patch for
nginx that can do that.

Regards,
Bar.


On Tue, May 22, 2012 at 9:16 PM, Allan Wind allan_w...@lifeintegrity.com wrote:

 On 2012-05-22 19:46:45, Vincent Bernat wrote:
  Yes. And solve the session problem by using some kind of persistence, for
  example a source-hashing load balancing algorithm.

 Persistence here meaning that ssl packets for a given session go to
 the same ssl server?  If so, what happens if that ssl server dies?


 /Allan
 --
 Allan Wind
 Life Integrity, LLC
 http://lifeintegrity.com