RE: how to sync HaProxy config with ZooKeeper

2014-07-10 Thread Зайцев Сергей Александрович
Hi, thanks!

Looking forward to taking a look at it and trying it out.

Best regards,

Зайцев Сергей Александрович,
Integration Solutions Implementation Department
 
Т.   +7 (495) 514 1410 (доб. 4236)
M. +7 (926) 143-4858 

s.zayt...@r-style.com
www.r-style.com

 


-Original Message-
From: Steven Le Roux [mailto:ste...@le-roux.info] 
Sent: Thursday, July 10, 2014 1:21 AM
To: Зайцев Сергей Александрович
Cc: Vincent Bernat; haproxy@formilux.org
Subject: Re: how to sync HaProxy config with ZooKeeper

Hi,

We have a component that does exactly this.

I will see if we can open-source it.

It's written in Go using gozk.

I was also planning to take a look at etcd for the same purpose.


On Wed, Jul 9, 2014 at 4:38 PM, Зайцев Сергей Александрович 
s.zayt...@r-style.com wrote:
 Thanks a lot!

 Seems to be exactly what I was looking for )

 Gonna check it out

 Best regards,

 Зайцев Сергей Александрович,
 Integration Solutions Implementation Department

 Т.   +7 (495) 514 1410 (доб. 4236)
 M. +7 (926) 143-4858

 s.zayt...@r-style.com
 www.r-style.com




 -Original Message-
 From: Vincent Bernat [mailto:ber...@luffy.cx]
 Sent: Wednesday, July 09, 2014 6:36 PM
 To: Зайцев Сергей Александрович
 Cc: haproxy@formilux.org
 Subject: Re: how to sync HaProxy config with ZooKeeper

 ❦  9 July 2014 14:28 GMT, Зайцев Сергей Александрович 
 s.zayt...@r-style.com :

 I want to automatically update HAProxy's configuration depending on 
 my app's state. I mean that when I have a number of components 
 running, I update my ZooKeeper configuration as soon as a new node 
 joins the cluster (and when one leaves it, too). But what I also need is a 
 way for ZooKeeper's watcher to be able to update HAProxy's configuration 
 in order to provide accurate information to the load balancer and avoid 
 balancing to absent cluster nodes.

 So the question is: is there a way to synchronize HAProxy's 
 configuration with ZooKeeper (somehow)?

 It is unlikely that someone has integrated Zookeeper into HAProxy but you can 
 rely on an external process to watch zookeeper, update HAProxy state and 
 reload.

  https://github.com/rs/zkfarmer
  https://github.com/twitter/twitcher

 The first one needs some additional glue to watch file changes and rebuild 
 HAProxy configuration.
 --
 printk("Entering UltraSMPenguin Mode...\n");
 2.2.16 /usr/src/linux/arch/sparc64/kernel/smp.c



--
Steven Le Roux
Jabber-ID : ste...@jabber.fr
0x39494CCB ste...@le-roux.info
2FF7 226B 552E 4709 03F0  6281 72D7 A010 3949 4CCB
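
To make Vincent's external-process approach concrete, here is a minimal Python sketch of the glue an outside watcher would need (znode path, file locations, and backend name are illustrative assumptions, not from the thread; the watcher portion assumes the kazoo client library and a running ZooKeeper, so it is shown as comments):

```python
import subprocess

def render_backend(servers):
    """Render a haproxy backend section from a list of "host:port" znode names."""
    lines = ["backend app", "    balance roundrobin"]
    for i, addr in enumerate(sorted(servers)):
        lines.append("    server node%d %s check" % (i, addr))
    return "\n".join(lines) + "\n"

def reload_haproxy(cfg="/etc/haproxy/haproxy.cfg",
                   pidfile="/var/run/haproxy.pid"):
    """Graceful reload: -sf starts a new process and tells the old one
    (by pid) to finish its existing connections, then exit."""
    with open(pidfile) as f:
        old_pid = f.read().strip()
    subprocess.check_call(["haproxy", "-f", cfg, "-sf", old_pid])

# Watcher part (requires a running ZooKeeper and `pip install kazoo`):
#
#   from kazoo.client import KazooClient
#   zk = KazooClient(hosts="127.0.0.1:2181")
#   zk.start()
#
#   @zk.ChildrenWatch("/services/app")   # fires on every membership change
#   def on_change(children):
#       with open("/etc/haproxy/haproxy.cfg", "w") as f:
#           f.write(render_backend(children))
#       reload_haproxy()
```

This is only the shape of the solution; zkfarmer and twitcher linked above are the maintained implementations.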


Re: how to sync HaProxy config with ZooKeeper

2014-07-10 Thread David Birdsong
It's not zookeeper-backed, but I'm curious whether anybody's using
https://github.com/kelseyhightower/confd to rewrite the haproxy config and
reload it.
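
For anyone who wants to experiment, a hedged sketch of what a confd template resource for this might look like (all file names, key paths, and commands are illustrative, not from the thread):

```toml
# /etc/confd/conf.d/haproxy.toml -- confd re-renders the template whenever
# the watched keys change, validates the result, then reloads haproxy.
[template]
src        = "haproxy.cfg.tmpl"
dest       = "/etc/haproxy/haproxy.cfg"
keys       = ["/services/app"]
check_cmd  = "haproxy -c -f {{.src}}"
reload_cmd = "haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)"
```

The `check_cmd` gate is the attraction here: a template rendering that fails `haproxy -c` never replaces the live config.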




Re: how to sync HaProxy config with ZooKeeper

2014-07-10 Thread Holger Just
Hi,

Зайцев Сергей Александрович wrote:
 So the question is: is there a way to synchronize HAProxy's
 configuration with ZooKeeper (somehow)?

Airbnb uses a tool called Synapse [1] as part of their Smartstack
platform [2]. It integrates HAProxy and zookeeper to provide high
availability by using node-local loadbalancers that get reconfigured on
the fly according to data in zookeeper.

Synapse provides an external watcher program which reconfigures HAProxy
using both the unix socket (where possible) as well as by generating
updated config files. To try it out, you could use the provided
smartstack Chef cookbook [3].

Patrick Viet (formerly of Airbnb, now at GetYourGuide) recently talked
about Smartstack (and the adaptations they have done at GetYourGuide,
i.e. changing from Zookeeper to Serf) at the Berlin Devops Meetup. You
can find the video on Youtube [4].

Maybe, you could gather some ideas and implementation details from their
solution.

Regards,
Holger

[1] https://github.com/airbnb/synapse
[2] http://nerds.airbnb.com/smartstack-service-discovery-cloud/
[3] https://github.com/airbnb/smartstack-cookbook
[4] https://www.youtube.com/watch?v=y739V9MMoLE
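
The unix-socket path Holger mentions can also be scripted directly; a minimal sketch (socket path, backend, and server names are illustrative, and runtime `disable server`/`enable server` commands require the stats socket to be bound at admin level):

```python
import socket

def haproxy_cmd(cmd, sock_path="/var/run/haproxy.sock"):
    """Send one command to HAProxy's stats socket and return the reply.
    HAProxy answers the single newline-terminated command, then closes."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(cmd.encode("ascii") + b"\n")
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:          # peer closed: reply is complete
            break
        chunks.append(data)
    s.close()
    return b"".join(chunks).decode("ascii")

# e.g.  haproxy_cmd("disable server app/node3")
#       haproxy_cmd("show stat")
```

Socket-based changes are fast but not persistent across reloads, which is why Synapse also regenerates the config file.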



Re: POST with x-www-form-urlencoded Content-Type

2014-07-10 Thread Willy Tarreau
Hi Dan,

On Wed, Jul 09, 2014 at 07:13:33PM +, Daniel Dubovik wrote:
 Hello all,
 
 I am attempting to balance traffic to a number of backend instances.  I am
 balancing based off the Host header, and for the most part everything is
 working.  When testing a bit more today, I came across some weird behavior,
 and am hoping someone can help out.  When POSTing to a site, if it is done
 using the Content-Type application/x-www-form-urlencoded, and has actual
 data, HAProxy falls back to a roundrobin balancing scheme.  POSTing using a
 Content-Type of multipart/form-data, however, works just fine.  Oddly,
 application/x-www-form-urlencoded with no actual data also works as
 expected.

(...)

There's indeed a bug: the amount of data forwarded is not correctly deducted
when rewinding the buffer. I'm even wondering whether it's expected that we
let these requests pass at this point. I'm investigating; thanks for your report!

Willy
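
For reference, the Host-header balancing Dan describes is configured along these lines (a minimal sketch; backend and server names are illustrative):

```
backend per_host
    balance hdr(Host)        # hash the Host header to pick a server
    hash-type consistent     # minimize reshuffling when servers come and go
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check
```

With hash-based algorithms like this, HAProxy falls back to round-robin whenever the hash key cannot be found, which is exactly the symptom reported above.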




Re: Strange crash of HAProxy 1.5.1

2014-07-10 Thread Merton Lister
Hi everyone,

Thanks a lot for the suggestions and discussion. Below are some more info
as requested. Hopefully they can lead to a definitive diagnosis of the
problem.

It looks like the CPU=native parameter above results in some machine
 code generated during compilation which cannot be executed on your
 actual CPU. Can you please show
   (a) one command line how gcc is invoked (i.e. which flags are passed
   for compilation)
   (b) cat /proc/cpuinfo


gcc -Iinclude -Iebtree -Wall  -O2 -march=native -g -fno-strict-aliasing
  -DCONFIG_HAP_LINUX_SPLICE -DTPROXY -DCONFIG_HAP_LINUX_TPROXY
-DCONFIG_HAP_CRYPT -DENABLE_POLL -DENABLE_EPOLL -DUSE_CPU_AFFINITY
-DASSUME_SPLICE_WORKS -DUSE_ACCEPT4 -DNETFILTER -DUSE_GETSOCKNAME
-DUSE_OPENSSL  -DUSE_SYSCALL_FUTEX -DUSE_PCRE -I/usr/include
 -DCONFIG_HAPROXY_VERSION=\"1.5.1\" -DCONFIG_HAPROXY_DATE=\"2014/06/24\" \
  -DBUILD_TARGET='linux2628' \
  -DBUILD_ARCH='' \
  -DBUILD_CPU='native' \
  -DBUILD_CC='gcc' \
  -DBUILD_CFLAGS='-O2 -march=native -g -fno-strict-aliasing' \
  -DBUILD_OPTIONS='USE_OPENSSL=1 USE_STATIC_PCRE=1' \
   -c -o

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2650L 0 @ 1.80GHz
stepping : 7
microcode : 0x70a
cpu MHz : 1800.077
cache size : 20480 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fdiv_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu de tsc msr pae cx8 apic cmov pat clflush mmx fxsr sse sse2 ss
ht nx constant_tsc nonstop_tsc pni pclmulqdq ssse3 sse4_1 sse4_2 popcnt
tsc_deadline_timer aes hypervisor ida arat epb pln pts dtherm
bogomips : 3601.16
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

There are (virtual) 8 cores, so the rest are pretty much the same.

I've wanted to add my 2¢. Recently, I've compiled some software (not
 HAProxy however), using CPU=native as a make option, in a virtual machine
 on a Citrix XenServer 6.2. Although the software was able to compile
 successfully, it would fail on execution with a SIGILL.
 Recompiling the same piece of software with CPU=core2 (or CPU=generic)
 yielded a fully functional binary, so it could be possible that the CPU
 instruction set is not properly detected by the compiler (especially in
 virtualized environments).


I'm in a similar situation. The server is a Linode VPS, and I compiled and
ran the binary on the same server. However, when I used CPU=native for all
the previous 1.5-dev versions on the same server, the resulting binaries
were all fine. The problem only appeared after the upgrade to 1.5.1.

Is there anything changed between dev26 and 1.5.1 that causes this issue in
VMs? Shall we assume that CPU=native is no longer recommended for VMs?
Instead, we should use CPU=generic going forward?

Mmmh, ok. Could you give a last try with:
 make clean && \
 make CFLAGS="-O0 -march=native -g -fno-strict-aliasing" CPU=native \
  TARGET=linux2628 USE_OPENSSL=1 USE_STATIC_PCRE=1


The compiled binary runs fine.

The request that is crashing haproxy, is it HTTPS on the frontend or
 plain and simple HTTP?


HTTPS on the frontend.

Best regards,

Merton


On Thu, Jul 10, 2014 at 2:51 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi Edwin,

 On Wed, Jul 09, 2014 at 02:42:43PM -0400, Edwin wrote:
  I've wanted to add my 2¢. Recently, I've compiled some software (not
  HAProxy however), using CPU=native as a make option, in a virtual
  machine on a Citrix XenServer 6.2. Although the software was able to
  compile successfully, it would fail on execution with a SIGILL.
  Recompiling the same piece of software with CPU=core2 (or CPU=generic)
  yielded a fully functional binary, so it could be possible that the CPU
  instruction set is not properly detected by the compiler (especially in
  virtualized environments).

 That's very interesting indeed. I remember reading a bug report somewhere
 about some CPU capabilities flags not always being correctly reported in
 certain VMs. If SSE falls in that category and is not present nor emulated,
 that could cause exactly such things.

 I suspect we'll have to remove CPU=native from the README in order to save
 people from using it in such bogus environments if it causes issues.

 Thanks,
 Willy




Re: POST with x-www-form-urlencoded Content-Type

2014-07-10 Thread Willy Tarreau
Hi Dan,

On Thu, Jul 10, 2014 at 05:20:18PM +0200, Willy Tarreau wrote:
 Hi Dan,
 
 On Wed, Jul 09, 2014 at 07:13:33PM +, Daniel Dubovik wrote:
  Hello all,
  
  I am attempting to balance traffic to a number of backend instances.  I am
  balancing based off the Host header, and for the most part everything is
  working.  When testing a bit more today, I came across some weird behavior,
  and am hoping someone can help out.  When POSTing to a site, if it is done
  using the Content-Type application/x-www-form-urlencoded, and has actual
  data, HAProxy falls back to a roundrobin balancing scheme.  POSTing using a
  Content-Type of multipart/form-data, however, works just fine.  Oddly,
  application/x-www-form-urlencoded with no actual data, also works as
  expected.
 
 (...)
 
 There's indeed a bug, the amount of data forwarded is not deduced correctly
 to rewind the buffer, I'm even wondering if it's expected that we let them
 pass at this point. I'm investigating, thanks for your report!

OK, I could fix it. The patch is very small, but it required some extra care
because that's a sensitive area that I already fixed in dev23, though not enough.
Other balancing algorithms are affected and, worse, http-send-name-header
was bogus as well in this case.

I've applied the fix, I'm attaching it here, it applies both to 1.5 and to
1.6.

Thanks for your report, that was a nasty one and I'm glad we got rid of it
early!

Willy

From bb2e669f9e73531ac9cc9277b40066b701eec918 Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Thu, 10 Jul 2014 19:06:10 +0200
Subject: BUG/MAJOR: http: correctly rewind the request body after start of
 forwarding

Daniel Dubovik reported an interesting bug showing that the request body
processing was still not 100% fixed. If a POST request contained short
enough data to be forwarded at once before trying to establish the
connection to the server, we had no way to correctly rewind the body.

The first visible case is that balancing on a header does not always work
on such POST requests since the header cannot be found. But there are even
nastier implications which are that http-send-name-header would apply to
the wrong location and possibly even affect part of the request's body
due to an incorrect rewinding.

There are two options to fix the problem :
  - first one is to force the HTTP_MSG_F_WAIT_CONN flag on all hash-based
balancing algorithms and http-send-name-header, but there's always a
risk that any new algorithm forgets to set it ;

  - the second option is to account for the amount of skipped data before
the connection establishes so that we always know the position of the
request's body relative to the buffer's origin.

The second option is much more reliable and fits very well in the spirit
of the past changes to fix forwarding. Indeed, at the moment we have
msg->sov which points to the start of the body before headers are forwarded
and which equals zero afterwards (so it still points to the start of the
body before forwarding data). A minor change consists in always making it
point to the start of the body even after data have been forwarded. It means
that it can get a negative value (so we need to change its type to signed).

In order to avoid wrapping, we only do this as long as the other side of
the buffer is not connected yet.

Doing this definitely fixes the issues above for the requests. Since the
response cannot be rewound we don't need to perform any change there.

This bug was introduced/remained unfixed in 1.5-dev23 so the fix must be
backported to 1.5.
---
 doc/internals/body-parsing.txt | 20 +---
 include/types/proto_http.h | 11 ++-
 src/proto_http.c   |  9 +++--
 3 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/doc/internals/body-parsing.txt b/doc/internals/body-parsing.txt
index e9c8b4b..5baa549 100644
--- a/doc/internals/body-parsing.txt
+++ b/doc/internals/body-parsing.txt
@@ -67,12 +67,17 @@ msg.next : points to the next byte to inspect. This offset 
is automatically
automatically adjusted to the number of bytes already inspected.
 
 msg.sov  : start of value. First character of the header's value in the header
-   states, start of the body in the data states until headers are
-   forwarded. This offset is automatically adjusted when inserting or
-   removing some headers. In data states, it always contains the size
-   of the whole HTTP headers (including the trailing CRLF) that needs
-   to be forwarded before the first byte of body. Once the headers are
-   forwarded, this value drops to zero.
+   states, start of the body in the data states. Strictly positive
+   values indicate that headers were not forwarded yet (buf.p is
+   before the start of the body), and null or negative values are seen
+   after headers are forwarded (buf.p is at or past the start of the
+   body). 

Re: Strange crash of HAProxy 1.5.1

2014-07-10 Thread Cyril Bonté

Hi all,

Le 10/07/2014 18:26, Merton Lister a écrit :

Hi everyone,

Thanks a lot for the suggestions and discussion. Below are some more
info as requested. Hopefully they can lead to a definitive diagnosis of
the problem.

It looks like the CPU=native parameter above results in some machine
code generated during compilation which cannot be executed on your
actual CPU. Can you please show
   (a) one command line how gcc is invoked (i.e. which flags are passed
   for compilation)
   (b) cat /proc/cpuinfo


gcc -Iinclude -Iebtree -Wall  -O2 -march=native -g -fno-strict-aliasing
   -DCONFIG_HAP_LINUX_SPLICE -DTPROXY -DCONFIG_HAP_LINUX_TPROXY
-DCONFIG_HAP_CRYPT -DENABLE_POLL -DENABLE_EPOLL -DUSE_CPU_AFFINITY
-DASSUME_SPLICE_WORKS -DUSE_ACCEPT4 -DNETFILTER -DUSE_GETSOCKNAME
-DUSE_OPENSSL  -DUSE_SYSCALL_FUTEX -DUSE_PCRE -I/usr/include
  -DCONFIG_HAPROXY_VERSION=\"1.5.1\" -DCONFIG_HAPROXY_DATE=\"2014/06/24\" \
  -DBUILD_TARGET='linux2628' \
  -DBUILD_ARCH='' \
  -DBUILD_CPU='native' \
  -DBUILD_CC='gcc' \
  -DBUILD_CFLAGS='-O2 -march=native -g -fno-strict-aliasing' \
  -DBUILD_OPTIONS='USE_OPENSSL=1 USE_STATIC_PCRE=1' \
   -c -o

processor: 0
vendor_id: GenuineIntel
cpu family: 6
model: 45
model name: Intel(R) Xeon(R) CPU E5-2650L 0 @ 1.80GHz
stepping: 7
microcode: 0x70a
cpu MHz: 1800.077
cache size: 20480 KB
physical id: 0
siblings: 8
core id: 0
cpu cores: 1
apicid: 1
initial apicid: 1
fdiv_bug: no
f00f_bug: no
coma_bug: no
fpu: yes
fpu_exception: yes
cpuid level: 13
wp: yes
flags: fpu de tsc msr pae cx8 apic cmov pat clflush mmx fxsr sse sse2 ss
ht nx constant_tsc nonstop_tsc pni pclmulqdq ssse3 sse4_1 sse4_2 popcnt
tsc_deadline_timer aes hypervisor ida arat epb pln pts dtherm
bogomips: 3601.16
clflush size: 64
cache_alignment: 64
address sizes: 46 bits physical, 48 bits virtual
power management:

There are (virtual) 8 cores, so the rest are pretty much the same.

I've wanted to add my 2¢. Recently, I've compiled some software (not
HAProxy however), using CPU=native as a make option, in a virtual
machine on a Citrix XenServer 6.2. Although the software was able to
compile successfully, it would fail on execution with a SIGILL.
Recompiling the same piece of software with CPU=core2 (or
CPU=generic) yielded a fully functional binary, so it could be
possible that the CPU instruction set is not properly detected by
the compiler (especially in virtualized environments).


I'm in a similar situation. The server is a Linode VPS, and I compiled
and run the binary on the same server. However, when I used CPU=native
for all the previous 1.5-dev versions on the same server, the resulted
binaries were all fine. The problem only appeared after the upgrade to
1.5.1.


Then it's probably due to the AVX instructions being disabled ("avx" 
doesn't appear in your cpu flags).


Those links may be interesting :
http://lists.xen.org/archives/html/xen-announce/2013-09/msg5.html
https://forum.linode.com/viewtopic.php?p=60746

--
Cyril Bonté



Re: Strange crash of HAProxy 1.5.1

2014-07-10 Thread Manfred Hollstein
Hi Merton,

thanks for providing the requested information. I've inserted some
comments below.

On Thu, 10 Jul 2014, 18:26:19 +0200, Merton Lister wrote:
 Hi everyone,
 
 Thanks a lot for the suggestions and discussion. Below are some more info as
 requested. Hopefully they can lead to a definitive diagnosis of the problem.
 
 
 It looks like the CPU=native parameter above results in some machine
 code generated during compilation which cannot be executed on your
 actual CPU. Can you please show
   (a) one command line how gcc is invoked (i.e. which flags are passed
       for compilation)
   (b) cat /proc/cpuinfo
 
 
 gcc -Iinclude -Iebtree -Wall  -O2 -march=native -g -fno-strict-aliasing      
 -DCONFIG_HAP_LINUX_SPLICE -DTPROXY -DCONFIG_HAP_LINUX_TPROXY 
 -DCONFIG_HAP_CRYPT
 -DENABLE_POLL -DENABLE_EPOLL -DUSE_CPU_AFFINITY -DASSUME_SPLICE_WORKS
 -DUSE_ACCEPT4 -DNETFILTER -DUSE_GETSOCKNAME -DUSE_OPENSSL  -DUSE_SYSCALL_FUTEX
 -DUSE_PCRE -I/usr/include  -DCONFIG_HAPROXY_VERSION=\"1.5.1\"
 -DCONFIG_HAPROXY_DATE=\"2014/06/24\" \
      -DBUILD_TARGET='linux2628' \
      -DBUILD_ARCH='' \
      -DBUILD_CPU='native' \
      -DBUILD_CC='gcc' \
      -DBUILD_CFLAGS='-O2 -march=native -g -fno-strict-aliasing' \
      -DBUILD_OPTIONS='USE_OPENSSL=1 USE_STATIC_PCRE=1' \
       -c -o

This command compiles the sources using some auto-detected features of
the CPU the compile job runs on (which is your physical CPU afaiu). Here
is a quote from gcc's info page:

  `-march=native' causes the compiler to auto-detect the architecture
  of the build computer.  At present, this feature is only supported
  on Linux, and not all architectures are recognized.  If the
  auto-detect is unsuccessful the option has no effect.

To be honest, I'd rather not use this option unless I know that the
compiled executable will *only* run on the same CPU type used during
compilation.

 processor : 0
 vendor_id : GenuineIntel
 cpu family : 6
 model : 45
 model name : Intel(R) Xeon(R) CPU E5-2650L 0 @ 1.80GHz
 stepping : 7
 microcode : 0x70a
 cpu MHz : 1800.077
 cache size : 20480 KB
 physical id : 0
 siblings : 8
 core id : 0
 cpu cores : 1
 apicid : 1
 initial apicid : 1
 fdiv_bug : no
 f00f_bug : no
 coma_bug : no
 fpu : yes
 fpu_exception : yes
 cpuid level : 13
 wp : yes
 flags : fpu de tsc msr pae cx8 apic cmov pat clflush mmx fxsr sse sse2 ss ht 
 nx
 constant_tsc nonstop_tsc pni pclmulqdq ssse3 sse4_1 sse4_2 popcnt
 tsc_deadline_timer aes hypervisor ida arat epb pln pts dtherm

There are definitely some flags in here which will look different from
those inside your VM. I'd suggest comparing this output with the data
from inside the VM.

 [...] instruction set is not properly detected by the compiler (especially in
 virtualized environments).
 
 I'm in a similar situation. The server is a Linode VPS, and I compiled and run
 the binary on the same server. However, when I used CPU=native for all the
 previous 1.5-dev versions on the same server, the resulted binaries were all
 fine.

Again, pure luck ;)

   The problem only appeared after the upgrade to 1.5.1.
 
 Is there anything changed between dev26 and 1.5.1 that causes this issue in
 VMs? Shall we assume that CPU=native is no longer recommended for VMs? 
 Instead,
 we should use CPU=generic going forward?

I'd recommend so, yes. -march=native doesn't really make any sense
unless you control the complete build and runtime environment.

HTH, cheers.

l8er
manfred



Re: Strange crash of HAProxy 1.5.1

2014-07-10 Thread Willy Tarreau
Hi Merton,

On Fri, Jul 11, 2014 at 12:26:19AM +0800, Merton Lister wrote:
 flags : fpu de tsc msr pae cx8 apic cmov pat clflush mmx fxsr sse sse2 ss
 ht nx constant_tsc nonstop_tsc pni pclmulqdq ssse3 sse4_1 sse4_2 popcnt
 tsc_deadline_timer aes hypervisor ida arat epb pln pts dtherm
 ^^
 There are (virtual) 8 cores, so the rest are pretty much the same.
 ^^^

I think that's the key. The usual rule applies :

 virtual machine = virtual performance + real trouble.

 I've wanted to add my 2¢. Recently, I've compiled some software (not
  HAProxy however), using CPU=native as a make option, in a virtual machine
  on a Citrix XenServer 6.2. Although the software was able to compile
  successfully, it would fail on execution with a SIGILL.
  Recompiling the same piece of software with CPU=core2 (or CPU=generic)
  yielded a fully functional binary, so it could be possible that the CPU
  instruction set is not properly detected by the compiler (especially in
  virtualized environments).
 
 
 I'm in a similar situation. The server is a Linode VPS, and I compiled and
 run the binary on the same server. However, when I used CPU=native for all
 the previous 1.5-dev versions on the same server, the resulted binaries
 were all fine. The problem only appeared after the upgrade to 1.5.1.

The problem with invalid instructions is that it can vary according to many
things. Sometimes the compiler will find that a given instruction flow would
be better served using a different instruction that it very rarely uses. If
you remember the i386/i486 era, it was very common to see programs occasionally
crash because they wanted to use cmpxchg, xadd or bswap that were 486-only, and
that would sometimes happen very rarely (eg: for cmpxchg only during a locking
conflict between threads). Also it's possible that you have upgraded your
compiler since last builds and that new instructions are emitted by default
for this CPU.

 Is there anything changed between dev26 and 1.5.1 that causes this issue in
 VMs?

Not that I'm aware, we're in the random code generation here.

 Shall we assume that CPU=native is no longer recommended for VMs?
 Instead, we should use CPU=generic going forward?

I'll update the doc to mention exactly your case about CPU=native.

The hypervisor and/or compiler are bogus, because the code built by the
compiler for what is supposed to be the local machine should work on this
machine. So either the hypervisor lies about what the machine supports,
or the compiler forgets to test for the instructions before using them. I'd
still vote for the hypervisor doing the wrong thing; that's statistically
much more common :-)

Best regards,
Willy




Re: compression offload doesn't offload?

2014-07-10 Thread Willy Tarreau
Hi Cyril!

On Wed, Jul 09, 2014 at 11:04:57PM +0200, Cyril Bonté wrote:
 The offload value is not copied from the defaults, this is a one line 
 patch. If you want, you can test the one attached.
 
 Talking about it, I'm not sure it's a good idea to allow compression 
 offload in the defaults section, as there is nothing to disable it in 
 frontend/backend sections.
 Willy, any advice ?

I think that we could allow it if we add a "no compression offload" statement.

For example, something like this could do the trick. I don't find it perfect,
but later we could improve support for the "no" keyword by requiring that
consumers delete it, and checking at the end that it was properly deleted.
That would provide a more accurate check of whether it's supported or not.

What do you think ?

Willy

-
diff --git a/src/cfgparse.c b/src/cfgparse.c
index 16cf717..1bc24e9 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2123,6 +2123,7 @@ int cfg_parse_listen(const char *file, int linenum, char 
**args, int kwm)
curproxy->comp = calloc(1, sizeof(struct comp));
curproxy->comp->algos = defproxy.comp->algos;
curproxy->comp->types = defproxy.comp->types;
+   curproxy->comp->offload = defproxy.comp->offload;
}
 
curproxy-grace  = defproxy.grace;
@@ -5537,7 +5538,7 @@ stats_error_parsing:
}
}
else if (!strcmp(args[1], "offload")) {
-   comp->offload = 1;
+   comp->offload = kwm != KWM_NO;
}
else if (!strcmp(args[1], type)) {
int cur_arg;
@@ -5936,9 +5937,9 @@ int readcfgfile(const char *file)
args[arg] = args[arg+1];   // shift args after inversion
}
 
-   if (kwm != KWM_STD && strcmp(args[0], "option") != 0 &&
-       strcmp(args[0], "log") != 0) {
-       Alert("parsing [%s:%d]: negation/default currently supported only for options and log.\n", file, linenum);
+   if (kwm != KWM_STD && strcmp(args[0], "option") != 0 &&
+       strcmp(args[0], "log") != 0 && strcmp(args[0], "compression") != 0) {
+       Alert("parsing [%s:%d]: negation/default currently supported only for options, log and compression.\n", file, linenum);
err_code |= ERR_ALERT | ERR_FATAL;
}
--- 





Adding Serial Number to POST Requests

2014-07-10 Thread Zuoning Yin
Hi All,
 We recently used haproxy as the load balancer in our system and
it really worked great. However, we still need one extra feature.
 For every POST request, we want to be able to append an id (or
serial number) to it. Essentially, we are trying to serialize the POST
requests.
 For example, for the following POST requests:

 a.1) curl -X POST -H 'Content-Type: application/json' -d '{"key1":"value1"}' http://localhost:9000/update
 a.2) curl -X POST -H 'Content-Type: application/json' -d '{"key2":"value2"}' http://localhost:9000/update
 a.3) curl -X POST -H 'Content-Type: application/json' -d '{"key3":"value3"}' http://localhost:9000/update

What we want is for haproxy to append a serial number to them.
They would then become:

 b.1) curl -X POST -H 'Content-Type: application/json' -d '{"key1":"value1"}' http://localhost:9000/update/10001
 b.2) curl -X POST -H 'Content-Type: application/json' -d '{"key2":"value2"}' http://localhost:9000/update/10002
 b.3) curl -X POST -H 'Content-Type: application/json' -d '{"key3":"value3"}' http://localhost:9000/update/10003

 Here, 10001, 10002 and 10003 are the serial numbers appended
to the original URL. Our backend servers will consume these serial numbers.

 I did some googling but failed to find any reference about what we
want to achieve. I am guessing we may need to tweak the source code a
little bit to achieve the goal.
 So here are my questions:
 1) Is there any configuration change that can achieve our goal?
 2) If 1) is not possible, what part of the code (which file/function)
do I need to look at to implement the hack? I guess there might be some
global variables related to this issue.
 3) If we do 2), what level of performance overhead should be expected?

Thanks so much,
--Zuoning


Re: Adding Serial Number to POST Requests

2014-07-10 Thread Baptiste
On Thu, Jul 10, 2014 at 11:27 PM, Zuoning Yin zuon...@graphsql.com wrote:
 [...]


Hi Zuoning,

You can create a stick table in which you store URLs and count the
number of hits per URL.
Then, when forwarding the POST to the server, you can append the hit
count for that particular URL.
Note that after a reload/restart of HAProxy, the number will come back to 0.
Another way could be to use the unique-id feature; I'll let you read the
documentation about it.

That said, I'm not a fan of integrating application logic in the LB layer...

Baptiste
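
The stick-table approach suggested here could be sketched roughly as
follows. This is an untested illustration only: it assumes HAProxy 1.6 or
newer (for `http-request set-path`), and the backend and server names are
invented:

```
backend app
    # per-URL request counting: track each request's path in a
    # stick table storing the cumulative request count for that key
    stick-table type string len 64 size 1m expire 1h store http_req_cnt
    http-request track-sc0 path if METH_POST
    # append the per-URL hit count to the path before forwarding
    http-request set-path %[path]/%[sc0_http_req_cnt] if METH_POST
    server s1 127.0.0.1:9001
```

As Baptiste notes, the counts live only in memory, so they restart from
zero after a reload unless peers are used to share the table.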



Re: Adding Serial Number to POST Requests

2014-07-10 Thread Zuoning Yin
Hi Baptiste,
 Thanks so much for the reply. It is good to know a stick table can help
with my case. I tried some further googling but couldn't find a similar
example that I could follow. Sorry, I am still new to HAProxy.
 It would be highly appreciated if you could provide a sample config
snippet for this particular issue.
 Before that, please allow me to reiterate the goal that we want to
achieve (I omitted some details in my previous post).
 We want a global counter for POST requests (I guess we can use gpc0
here). For every POST request, we need to increase the counter by 1. We
will also have GET requests, but we don't do anything with the counter for
GET. Then, when we forward requests to the backends, we want to append this
global counter to the request URL.
 For example, assume the value of the current global counter is 1001; we
need:

 curl -X POST -H 'Content-Type: application/json' -d '{"key1":"value1"}' http://localhost:9000/update
 ==
 curl -X POST -H 'Content-Type: application/json' -d '{"key1":"value1"}' http://localhost:9000/update/1001

 curl -X GET http://localhost:9000/query
 ==
 curl -X GET http://localhost:9000/query/1001

 I guess I need to define a counter in a stick table, then define some
ACLs to increase the counter, and then some rewrite rules to use this
counter. However, I just don't know how to write the config for these
tasks.

Thanks,
--Zuoning
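
A gpc0-based variant of the idea described above might look like the
following. This is an untested sketch, not a verified answer: it assumes
HAProxy 1.6 or newer (for `http-request sc-inc-gpc0` and `set-path`), and
all section, server and port names are invented:

```
frontend fe
    bind :9000
    default_backend app

backend app
    # a single-entry stick table acting as one global counter
    stick-table type integer size 1 store gpc0
    # track every POST against the same constant key, then bump gpc0
    http-request track-sc0 int(1) if METH_POST
    http-request sc-inc-gpc0(0) if METH_POST
    # append the current counter value to the path before forwarding
    http-request set-path %[path]/%[sc0_get_gpc0] if METH_POST
    server s1 127.0.0.1:9001
```

Note that this counts per process and resets on reload; it does not give
the strict, gap-free serialization the application may ultimately need.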

On Thu, Jul 10, 2014 at 6:16 PM, Baptiste bed...@gmail.com wrote:

 [...]



Re: POST with x-www-form-urlencoded Content-Type

2014-07-10 Thread Daniel Dubovik
Hi Willy,

I built a new package with the patch, and my test cases are passing now.

Just wanted to say thanks for the super quick turnaround on this issue!

Thanks!
Dan Dubovik
Senior Linux Systems Engineer
480-505-8800 x4257





On 7/10/14 10:34 AM, Willy Tarreau w...@1wt.eu wrote:

Hi Dan,

On Thu, Jul 10, 2014 at 05:20:18PM +0200, Willy Tarreau wrote:
 Hi Dan,
 
 On Wed, Jul 09, 2014 at 07:13:33PM +, Daniel Dubovik wrote:
  Hello all,
  
  I am attempting to balance traffic to a number of backend instances.
  I am balancing based off the Host header, and for the most part
  everything is working.  When testing a bit more today, I came across
  some weird behavior, and am hoping someone can help out.  When POSTing
  to a site, if it is done using the Content-Type
  application/x-www-form-urlencoded, and has actual data, HAProxy falls
  back to a roundrobin balancing scheme.  POSTing using a Content-Type
  of multipart/form-data, however, works just fine.  Oddly,
  application/x-www-form-urlencoded with no actual data also works as
  expected.
 
 (...)
 
 There's indeed a bug: the amount of data forwarded is not deduced
 correctly to rewind the buffer. I'm even wondering if it's expected
 that we let them pass at this point. I'm investigating, thanks for
 your report!

OK, I could fix it. The patch is very small but it required some extra
care, because that's a sensitive area that I had already fixed in dev23,
but not enough. Other balancing algorithms are affected, and worse,
http-send-name-header was bogus as well in this case.

I've applied the fix; I'm attaching it here. It applies both to 1.5 and
to 1.6.

Thanks for your report, that was a nasty one and I'm glad we got rid of
it early!

Willy
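
For context, the kind of configuration affected by this bug looked roughly
like the following (an illustrative sketch with invented names, not the
reporter's actual config):

```
frontend fe
    bind :8080
    default_backend app

backend app
    # hash on the Host header: before the fix, a POST carrying a
    # non-empty application/x-www-form-urlencoded body silently fell
    # back to round-robin instead of honoring this hash
    balance hdr(Host)
    server s1 192.168.0.10:80 check
    server s2 192.168.0.11:80 check
```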





Re: POST with x-www-form-urlencoded Content-Type

2014-07-10 Thread Willy Tarreau
Hi Dan,

On Fri, Jul 11, 2014 at 01:09:32AM +, Daniel Dubovik wrote:
 Hi Willy,
 
 I built a new package with the patch, and my test cases are passing now.
 
 Just wanted to say thanks for the super quick turnaround on this issue!

Great, thank you for the quick feedback!

Willy