problems with TLS offload using HaProxy & TLS VPN (ocserv, ~ Cisco VPN)

2015-12-09 Thread Eugene Istomin
Hello,

we have a problem with TLS offload using HAProxy & a TLS VPN (ocserv, ~ Cisco VPN).

#ocserv debug log:
ocserv[64521]: worker[vpn_name]: [SOME_IP] sending 440 byte(s)
ocserv[64521]: worker[vpn_name]: [SOME_IP] sending 56 byte(s)
ocserv[64521]: worker[vpn_name]: [SOME_IP] sending 440 byte(s)
ocserv[64521]: worker[vpn_name]: [SOME_IP] sending 56 byte(s)
ocserv[64521]: worker[vpn_name]: [SOME_IP] received 60 byte(s) (TLS)
ocserv[64521]: worker[vpn_name]: [SOME_IP] writing 52 byte(s) to TUN
ocserv[64521]: worker[vpn_name]: [SOME_IP] received 1070 byte(s) (TLS)
ocserv[64521]: worker[vpn_name]: [SOME_IP] unexpected CSTP length (have 52, 
should be 1062)
ocserv[64521]: worker[vpn_name]: [SOME_IP] worker-vpn.c:1094: error parsing 
CSTP data
ocserv[64521]: worker[vpn_name]: [SOME_IP] sending message 'sm: cli stats' to 
secmod
ocserv[64521]: worker[vpn_name]: [SOME_IP] sent periodic stats (in: 52, out: 
1984) to sec-mod

It happens on the first connection, after ~30-50 packets.
Everything is OK if we switch off TLS offload (haproxy TCP mode & server 
"localhost:4443").

Comments from ocserv developers:
"My understanding is that haproxy breaks a TLS packet received
(with 1062 bytes of payload) into multiple writes to ocserv socket.
That's a bummer. Because ocserv doesn't attempt to reconstruct the
packet (in the TLS case it is not necessary as the TLS boundaries are
sufficient), this error occurs. Is there a way to instruct haproxy to
pass the full packet received rather than doing multiple writes?
Otherwise we may need some reconstruction logic for that situation."



Here are the configurations:

##ocserv.conf
...
listen-clear-file = /var/lib/haproxy/oc_vpn
listen-proxy-proto = true   
tcp-port = 4443 
udp-port = 4443
... 

 


#TLS offloaded
## haproxy.conf
...
defaults
mode http
timeout connect 10s
timeout http-request 10s
timeout http-keep-alive 15s
timeout client 300s
timeout server 300s
timeout queue 90s
timeout tunnel 1500s 


frontend http 
  bind 0.0.0.0:443 tfo npn http/1.1 ssl crt /etc/ssl/server.both force-tlsv12
  reqadd X-Forwarded-Proto:\ https
  acl is_vpn_prefix path_beg -i /hebs-tln
  reqirep POST\ /hebs-tln POST\ / if is_vpn_prefix
  default_backend vpn_http

backend vpn_http
  server socket unix@oc_vpn send-proxy-v2



## Working HaProxy configuration
## no TLS offload
..
frontend tcp 
mode tcp
  bind 0.0.0.0:443 tfo npn http/1.1 
  default_backend vpn_tcp

backend vpn_tcp
mode tcp
  server  localhost:4443 localhost:4443 send-proxy-v2

---
Best regards,
Eugene Istomin


signature.asc
Description: This is a digitally signed message part.


Re: HAProxy DDOS and attack resilient configuration

2015-05-12 Thread Eugene Istomin
Mathias,

thanks a lot! Cool stuff!

-- 
Best regards,
Eugene Istomin
IT Architect


On Tuesday, May 12, 2015 10:48:52 AM Mathias Bogaert wrote:

Hi,


Just writing to let you know I've open sourced my HAProxy setup. It's a DDOS 
and attack resilient configuration, to be used behind CloudFlare:


https://github.com/analytically/haproxy-ddos



One can use it inside or outside of a Docker container. Enjoy!


Mathias




Re: [ANNOUNCE] haproxy-1.5.7

2014-11-01 Thread Eugene Istomin
Thanks, I'll try to use on-the-fly base64 conversion.

---
Best regards,
Eugene Istomin


 Hello Eugene,
 
 On Fri, Oct 31, 2014 at 11:42:40AM +0200, Eugene Istomin wrote:
  Hello Willy,
  
  thanks to ssl_c_der! Can you implement ssl_c_pem like in nginx
  (ssl_client_raw_cert) ?
 
 From the information we got here, nginx seems to require an incorrect
 header encoding that's explicitly forbidden by the HTTP standard
 (cf rfc7230 #3.2.4), and that recipients are required to reject or
 to fix. Thus if you have something like this which works in production,
 it's very likely that your recipient already consumes a fixed version
 of the header. Would you care to check how the recipient consumes that
 field, and/or to test if it accepts the standard base64 representation ?
 From what I'm seeing in questions on the net, it seems that a number of
 consumers simply remove the begin/end lines, all spaces, then pass this
 to openssl, so it's likely that the original representation should
 already be in the expected format :
 
   http-request set-header x-ssl-cert %[ssl_c_der,base64]
 
 Regards,
 Willy



Re: [ANNOUNCE] haproxy-1.5.7

2014-10-31 Thread Eugene Istomin
Hello Willy,

thanks to ssl_c_der! Can you implement ssl_c_pem like in nginx 
(ssl_client_raw_cert) ?

---
Best regards,
Eugene Istomin


 Hi all!
 
 At last, a release before the end of the week so that those of us with
 a bad weather have something to do on Friday and something to fear for
 the week-end :-)
 
 Just as for 1.5.6 two weeks ago, we have a small bunch of fixes for 1.5.7.
   - A nasty bug reported by Dmitry Sivachenko can cause haproxy to die 
in
 some rare cases when a monitoring system issues a lot of show 
sess
 commands on the CLI and aborts them in the middle of a transfer. The
 probability to hit it is so low that it has existed since v1.4 and was
 only noticed now.
 
   - Cyril Bonté fixed a bug causing wrong flags to be sometimes reported
 in the logs for keep-alive requests.
 
   - A bug where the PROXY protocol is used with a banner protocol 
causes
 an extra 200ms delay for the request to leave, slowing down 
connection
 establishment to SMTP or FTP servers. I think this won't change 
anything
 for such users given that those connections are generally quite long.
 
   - Christian Ruppert found and fixed a bug in the way regex are compiled
 when HAProxy is built with support for PCRE_JIT but the libpcre is built
 without.
 
   - The way original connection addresses are detected on a system 
where
 connections are NAT'd by Netfilter was fixed so that we wouldn't report
 IPv4 destination addresses for v6-mapped v4 addresses. This used to
 cause the PROXY protocol to emit UNKNOWN as the address families 
differed
 for the source and destination!
 
   - John Leach reported an interesting bug in the way SSL certificates 
were
 loaded : if a certificate with an invalid subject (no parsable CN) is
 loaded as the first in the list, its context will not be updated with
 the bind line arguments, resulting in such a certificate to accept SSLv3
 despite the no-sslv3 keyword. That was diagnosed and fixed by Emeric.
 
   - Emeric also implemented the global ssl-default-bind-options and
 ssl-default-server-options keywords, and implemented ssl_c_der 
and
 ssl_f_der to pass the full raw certificate to the server if needed.
 I've backported them from 1.6-dev to 1.5 because I feel a general 
demand
 for making SSL safe and easy to configure.
 
 And that's all for this version! Nothing critical again, but we're just
 trying to keep a fast pace to eliminate each and every bug and try to 
react
 quickly to bug reports.
 
 BTW I have a few patches pending for 1.4 and Cyril reminded me that we
 still have this awful http-send-name-header which is partially broken
 there and that we aren't absolutely sure how to definitely fix correctly
 without risking to break something else :-( There are features I wish
 I had never merged in certain versions :-/
 
 Concerning 1.6, I'm still working on enumerating the changes needed to
 support HTTP/2. At the moment I'm working with two lists in parallel : the
 shortest path and the durable one. What's sad is that it seems they're 
very
 close to each other. But the good thing is that I think it should be 
doable
 for the 1.6 timeframe. Since that's only paper work and code review for
 now, it explains why there is very little activity on the code base for
 now. Let's hope it'll take off soon :-)
 
 Here's the full changelog for 1.5.7 :
 
 - BUG/MEDIUM: regex: fix pcre_study error handling
 - BUG/MINOR: log: fix request flags when keep-alive is enabled
 - MINOR: ssl: add fetchs 'ssl_c_der' and 'ssl_f_der' to return DER
 formatted certs - MINOR: ssl: add statement to force some ssl options in
 global. - BUG/MINOR: ssl: correctly initialize ssl ctx for invalid
 certificates - BUG/MEDIUM: http: don't dump debug headers on 
MSG_ERROR
 - BUG/MAJOR: cli: explicitly call cli_release_handler() upon error
 - BUG/MEDIUM: tcp: fix outgoing polling based on proxy protocol
 - BUG/MEDIUM: tcp: don't use SO_ORIGINAL_DST on non-AF_INET 
sockets
 
 Usual URLs below :
   Site index   : http://www.haproxy.org/
   Sources  : http://www.haproxy.org/download/1.5/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.5.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.5.git
   Changelog: 
http://www.haproxy.org/download/1.5/src/CHANGELOG
   Cyril's HTML doc :
 http://cbonte.github.com/haproxy-dconv/configuration-1.5.html
 
 Willy
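
For reference, a minimal global-section sketch using the backported defaults keywords mentioned above (the option values are only illustrative and the cipher list is a placeholder):

global
    ssl-default-bind-options no-sslv3
    ssl-default-server-options no-sslv3
    ssl-default-bind-ciphers <your-cipher-list>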



Re: SNI in logs

2014-10-12 Thread Eugene Istomin
Thanks!

I missed this part of the doc: "If a variable is named between square brackets
('[' .. ']') then it is used as a sample expression rule".
---
Best regards,
Eugene Istomin

On Sunday, October 12, 2014 05:24:36 PM Baptiste wrote:
 On Fri, Oct 10, 2014 at 5:54 AM, Eugene Istomin e.isto...@edss.ee wrote:
  Hello,
  
  
  
  can we log SNI headers (req_ssl_sni) or generally, SNI availability
  (ssl_fc_has_sni) the same way we log SSL version (%sslv)?
  
  ---
  
  Best regards,
  
  Eugene Istomin
 
 Hi Eugene,
 
 You can log sni information using the following sample fetch on a
 log-format directive: %[ssl_fc_sni]
 
 Baptiste
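
For reference, a minimal sketch combining the SSL version and SNI in one log-format line (the field selection is only illustrative; spaces are backslash-escaped as the 1.5 log-format syntax expects):

  log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %ST\ %B\ %sslv\ sni=%[ssl_fc_sni]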




SNI in logs

2014-10-09 Thread Eugene Istomin
Hello,

can we log SNI headers (req_ssl_sni) or generally, 
SNI availability (ssl_fc_has_sni) the same way we 
log SSL version (%sslv)?
---
Best regards,
Eugene Istomin



Re: Connect to SNI-only server (haproxy as a client)

2014-10-09 Thread Eugene Istomin
Hello,

yesterday we were looking for a way to set a custom client-side SNI string for one of
our clients and chose stunnel (as outbound TLS termination) for two
reasons:
1) ability to send a client certificate (client mode)
2) ability to send a custom SNI value in client mode

We have used haproxy as our main L7 router for years, with a little bit of stunnel for
client cert auth.
Do you have any plans to add these features in 1.6?

Thanks.
---
Best regards,
Eugene Istomin


 On Mon, Aug 18, 2014 at 05:46:14PM +0200, Baptiste wrote:
  On Mon, Aug 18, 2014 at 2:40 PM, Willy Tarreau w...@1wt.eu wrote:
   Hi Benedikt,
   
   On Mon, Aug 18, 2014 at 10:17:02AM +0200, Benedikt Fraunhofer 
wrote:
   Hello List,
   
   I'm trying to help an java6-app that can't connect to a server which
   seems to support SNI-only.
   
    I thought I could just add some frontend and backend stanzas
   
   and include the sni-only server as a server in the backend-section 
like so:
  server a 1.2.3.4:443 ssl verify none force-tlsv12
   
   (I had verify set, just removed it to keep it simple and rule it out)
   
   But it seems the server in question insists on SNI, whatever force-* 
I
   use and the connection is tcp-reset by the server (a) right after 
the
   Client-Hello from haproxy.
   
   Is there a way to specify the TLS SNI field haproxy should use for
   these outgoing connections?
   
   Not yet. We identified multiple needs for this field which a single
   constant in the configuration will not solve. While some users will
   only need a constant value (which seems to be your case), others
   need to forward the SNI they got on the other side, or to build one
   from a Host header field.
   
   So it's likely that we'll end up with a sample expression instead of
   a constant. Additionally that means that for health checks we need 
an
   extra setting (likely a constant this time).
   
   But for now, the whole solution is not designed yet, let alone
    implemented.
 
 Btw is this something you're actively looking at, to design/implement?
 
 People on the list should be able to provide feedback about the planned
 expression to set the SNI field for client connections..
   regards,
   Willy
  
  Hi,
  
  Microsoft Lync seems to have the same requirement for SNI...
  We need it in both traffic and health checks.
 
 OK, good to know.
 
 
 Thanks,
 
 -- Pasi
 
  Baptiste
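
For illustration only, a sketch of the per-server, expression-based SNI setting discussed above, written with the syntax that appeared in later releases (this keyword did not exist at the time of this thread; the address and hostname are placeholders):

backend sni_only
  # forward the SNI received on the frontend connection...
  server a 1.2.3.4:443 ssl verify none force-tlsv12 sni ssl_fc_sni
  # ...or send a constant name via a string expression, e.g. sni str(backend.example.com)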



Re: Recommended SSL ciphers and settings

2014-09-10 Thread Eugene Istomin
Hello,

we merged all the necessary SSL-related parameters; this leads to A+ without HSTS errors:

1) Use secure ciphers
bind   no-sslv3 ciphers  
EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:+RC4:RC4

2) Mark all cookies as secure if sent over SSL
rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if { ssl_fc }

3) Add the HSTS header with a 1 year max-age
rspadd Strict-Transport-Security:\ max-age=31536000 if { ssl_fc }



some non-SSL security related:

4) Add HTTPS headers to backends
reqadd X-Forwarded-Proto:\ https if { ssl_fc }
reqadd X-Proto:\ SSL if { ssl_fc }

5) Methods
acl methods_strict method HEAD GET PUT POST UPGRADE
acl methods_avoid  method TRACE CONNECT

acl hosts_methods-ext.edss hdr(host) SOME_SITED_WITH_EXTENDED_METHODS

http-request allow if !hosts_methods-ext.edss methods_strict 
http-request allow if hosts_methods-ext.edss !methods_avoid 
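
For reference, a minimal frontend sketch combining the pieces above (the bind address, certificate path, cipher list and backend name are placeholders):

frontend https_in
  bind 0.0.0.0:443 ssl crt /etc/ssl/site.pem no-sslv3 ciphers <cipher-list-from-step-1>
  reqadd X-Forwarded-Proto:\ https if { ssl_fc }
  rspadd Strict-Transport-Security:\ max-age=31536000 if { ssl_fc }
  rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if { ssl_fc }
  acl methods_strict method HEAD GET PUT POST UPGRADE
  http-request deny if !methods_strict
  default_backend app
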
---
Best regards,
Eugene Istomin


On Wednesday, September 10, 2014 09:00:48 AM Thomas Heil wrote:

Hi,

On 09.09.2014 15:08, pablo platt wrote:

rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubDomains if 
ssl-proxy


Do I need to add it to the frontend or backend?

It is a response header, so it is better to add it in the backend, but it will work in the
frontend too.
Will it break raw TLS (not HTTPS)?


I am not sure what you are asking. Try it and check it via ssllabs?


Thanks



cheers
thomas

On Tue, Sep 9, 2014 at 1:25 PM, Thomas Heil h...@terminal-consulting.de wrote:

Hi,


On 09.09.2014 11:43, pablo platt wrote:

I've tried both options and I'm still not getting A+.


Unfortunately, I can't ask the user what the error is.

If I'll run into this again, I'll try to get this info.

To reach A+ you need 

rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubDomains 
if ssl-proxy
ssl-proxy means here the connection is ssl.
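
A minimal sketch of how such an ACL can be declared (assuming ssl_fc is used as the "connection is SSL" test), to be placed before the rspadd above:

acl ssl-proxy ssl_fc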

and a cipher list like
--
EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:
  
EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
--

Together it should work. 

As you can see, we no longer have RC4 ciphers.

cheers
thomas 



Thanks



On Mon, Sep 8, 2014 at 9:46 AM, Jarno Huuskonen jarno.huusko...@uef.fi wrote:

Hi,

On Sun, Sep 07, pablo platt wrote:
 Hi,

 I'm using haproxy to terminate SSL and it works for most of my users.
 I have alphassl wildcard certificate.
 I'm using SSL to improve WebSockets and RTMP connections of port 443.
 I don't have sensitive data or e-commerce.

 I have one user that sees a warning in Chrome and can't use my website.

Do you know what warning chrome gives to that user ?

 Is it possible that the warning is because an antivirus is not happy
 with the default ciphers or other ssl settings?

 When running a test https://sslcheck.globalsign.com/en_US I'm getting:
 Sessions may be vulnerable to BEAST attack
 Server has not enabled HTTP Strict-Transport-Security
 Server has SSL v3 enabled
 Server is using RC4-based ciphersuites which have known vulnerabilities
 Server configuration does not meet FIPS guidelines
 Server does not have OCSP stapling configured
 Server has not yet upgraded to a Extended Validation certificate
 Server does not have SPDY enabled

 I found one suggestion:
 bind 10.0.0.9:443 name https ssl crt /path/to/domain.pem ciphers
 RC4:HIGH:!aNULL:!MD5
 http://blog.haproxy.com/2013/01/21/mitigating-the-ssl-beast-attack-using-the-aloha-load-balancer-haproxy/

 And another:
 bind 0.0.0.0:443 ssl crt /etc/cert.pem nosslv3 prefer-server-ciphers
 ciphers RC4-SHA:AES128-SHA:AES256-SHA

 Both give me other warnings.

What other warnings ? (Does haproxy give you warnings/errors or client
browsers) ?

Perhaps you could try ciphersuite from:
https://wiki.mozilla.org/Security/Server_Side_TLS

for example in global:
ssl-default-bind-ciphers ...

or on bind:
bind 0.0.0.0:443 ssl crt /path/to/crt ciphers ...

To enable ocsp stapling see haproxy config:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.1-crt
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-set%20ssl%20ocsp-response
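
For the crt-based approach, a minimal sketch (assuming the companion-file convention described at the first link, where an .ocsp file stored next to the certificate is picked up automatically):

bind 0.0.0.0:443 ssl crt /path/to/crt
# haproxy also loads /path/to/crt.ocsp, if present, as the stapled OCSP response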

-Jarno

--
Jarno Huuskonen








Re: [ANNOUNCE] haproxy-1.5.2

2014-07-13 Thread Eugene Istomin
 The maximum syslog line length is now configurable, both at build time
 (MAX_SYSLOG_LEN) and per logger using the optional len argument.

Thanks a lot! =)

---
Best regards,
Eugene Istomin
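
For reference, a minimal sketch of the per-logger form (the address, length and facility are only illustrative):

global
    log 127.0.0.1:514 len 2048 local0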



Re: Increasing MAX_SYSLOG_LEN

2014-06-24 Thread Eugene Istomin
Ping :)
---
Best regards,
Eugene Istomin




Increasing MAX_SYSLOG_LEN

2014-06-21 Thread Eugene Istomin
Hello,

right now MAX_SYSLOG_LEN is hardcoded in log.h. We have JSON-based
logging with rsyslog parsing; some logs are more than 1024 bytes and, I think,
fit in 2048.

Do you have plans to make syslog_len a configuration-file variable?

In February, Herve had an idea about this: 
http://comments.gmane.org/gmane.comp.web.haproxy/15099[1] 

Thanks.
---
Best regards,
Eugene Istomin


[1] http://comments.gmane.org/gmane.comp.web.haproxy/15099


SSL_CLIENT_CERT header (SSLExportClientCertificate)

2013-12-02 Thread Eugene Istomin
Hello,

is there any possibility to export the client certificate to an HTTP header (like
Apache with the SSLExportClientCertificates option)?

We are testing a solution with client auth; the server application logic needs users'
certificates to authenticate them internally.
Thanks.
-- 
Best regards,
Eugene Istomin
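
For reference, a sketch of what later became possible with the ssl_c_der fetch (see the 1.5.7 announcement thread above; the header name is only illustrative):

http-request set-header X-SSL-Client-Cert %[ssl_c_der,base64]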




Re: make haproxy notice that backend server ip has changed

2013-11-23 Thread Eugene Istomin
Willy,

such a great idea! We can help in testing.
-- 
Best regards,
Eugene Istomin



 Hi Pawel,
 
 On Fri, Nov 22, 2013 at 07:54:18PM -0800, Pawel Veselov wrote:
  Hi.
  
  There has been a heated discussion on this about 2 years back, so 
sorry
  for
  reopening any wounds. Also sorry for long winded intro.
  
  My understanding is that neither 1.4 nor 1.5 are planned to have any
  support for resolving any server addresses during normal operations; 
i.e.
  such are always resolved at start-up.
 
 There have been some improvements in this area. The ongoing 
connection
 rework that's being done for the server-side keep-alive is designed with
 this in mind so that we'll be able to perform DNS resolving during health
 checks and change the server's address on the fly without causing 
trouble
 to pending connections. Right now the session does not need the 
server's
 address until one exact instant which is just prior to connecting. And
 this address is immediately copied into the connection and not reused.
 So that is compatible with the ability to change an address on the fly,
 even possibly from the CLI. I find it reasonable to check the DNS for
 each health check since a health check defines the update period 
you're
 interested in.
 
 This will also help people running in environments like EC2 where
 everything changes each time you sneeze. But that's not done :-)
 
  One of the ways I would like to use ha-proxy, is to to become a pure 
TCP
  proxy to a database server that is provides fail-over through DNS.
 
 Indeed it could also work for such use cases.
 
  The problem with connection the application directly to such database 
is
  that when the database does go down, previous IP address effectively 
goes
  dark, and I don't even get TCP connections reset on previously
  established connections.
 
 That's not exact because you have on-marked-down shutdown-
sessions for
 this exact purpose.
 
 (...)
 
  I tried using ha-proxy for this. The idea was - if ha-proxy determines
  that
  the server is down, it will quickly snip both previously established, or
  newly established connections, so I won't have to incur blocks 
associated
  with those. So, ha-proxy is a perfect tool to prevent unreachable 
server
  problem from the perspective of the application. This actually worked
  great
  in my test: once I simulated database failure, there was absolutely no
  blocks on database operations (sure, there were failures to connect 
to it,
  but that's fine).
 
 That's already present :-)
 
  What remains a problem - is that because the fail-over changes the IP
  address behind the server name, ha-proxy is not able to pick up the 
new
  address. It would really be perfect if it could, otherwise, that 
backend
  just never recovers.
  
  Now, I have no control over this fail-over implementation. I have no
  control over network specifics and application framework either. I can
  fiddle with the JDBC driver, but it will probably be more tedious and
  throw-away than the following.
  
  Would anybody be interested in an optional parameter address 
modifier, say
  @chk:n:m as a suffix to a host name, to enable ha-proxy to re-
check
  the
  specified name each n seconds, past initial resolution? Say, also
  agreeing to mark server as down if a name fails to resolve after m
  checks. If n is 0, then no checks are performed past initial 
resolution,
  which is default and is that now. Having m of 0 to mean to not fail 
on
  resolution errors.
 
 I'd rather have the DNS servers defined in backends and inherited from
 defaults, so that its possible to specify it once. Also I think your M
 parameter above is more related to the DNS servers themselves and is 
just
 a cache duration, so I'd put that as one of their settings. The N 
parameter
 should probably be covered by the server's check interval. It will also
 have the benefit of respecting the fastinter and downinter values so 
that
 we don't resolve DNS too fast when the server is down. I'd also add 
support
 for preventing the resolving from being made too fast and enforcing the
 cached value to be kept for a configurable amount of time (eg: min-
cache).
 
 Thus we could even have dedicated resolver sections just like we have
 peers. It would also help putting some static information later. It
 could look like this :
 
 resolver local-dns
 server dns1 192.168.0.1 cache 1m min-cache 10s
 server dns2 192.168.0.2 cache 1m min-cache 10s
 
 backend foo
 use-resolver local-dns   # (could also be put in defaults)
 server s1 name1.local:80 resolve check
 server s2 name2.local:80 resolve check
 
 Thinking about it a bit more, I'd rather have the ability to specify
 the resolver to use on each server line, so that when you LB between
 local and remote servers, you can use different resolvers :
 
 resolver private-dns
 server dns1 192.168.0.1 cache 1m min-cache 10s
 server dns2 192.168.0.2 cache 1m min

Re: -sf/-st not working

2013-02-07 Thread Eugene Istomin
Thanks for the answer,

as written in http://www.mgoff.in/2010/04/18/haproxy-reloading-your-config-with-
minimal-service-impact/
"The end-result is a reload of the configuration file which is not visible by
the customer."

But in our case it leads to unbinding from all ports and the haproxy process
exiting.
Can this issue be related to RPM build options? The RPM build log is
https://build.opensuse.org/package/rawlog?arch=x86_64&package=haproxy-1.5&project=server%3Ahttp&repository=openSUSE_12.2
 

-- 
Best regards,
Eugene Istomin


On Thursday 07 February 2013 07:28:17 Willy Tarreau wrote:
 Hello Eugene,
 
 On Wed, Feb 06, 2013 at 08:29:33PM +0200, Eugene Istomin wrote:
  Hello,
  
  We have problem with reload/HUP:
  if i run #/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p
  /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)  - haproxy process is
  shutting down and exit
 
 This is the intended behaviour, it unbinds from its ports so that the new
 process can bind, then waits for all existing connections to terminate
 and leaves. Isn't it what you're observing ? What would you have expected
 instead ?
 
 Willy

Re: -sf/-st not working

2013-02-07 Thread Eugene Istomin
I think the main problem is in systemd:

- from the command line, -sf works as expected
- from sysvinit, -sf works as expected
- from systemd, -sf only stops the process.

I tried both init.d & systemd scripts on a systemd-based Linux - all results are
the same:

  Loaded: loaded (/lib/systemd/system/haproxy.service; disabled)
  Active: failed (Result: signal) since Thu, 07 Feb 2013 17:18:43 +0200; 12s 
ago
  Process: 28125 ExecReload=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -
p /var/run/haproxy.pid -sf $MAINPID (code=exited, status=0/SUCCESS)
  Process: 28118 ExecStart=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p 
/var/run/haproxy.pid (code=exited, status=0/SUCCESS)
  Process: 28115 ExecStartPre=/usr/sbin/haproxy -c -q -f 
/etc/haproxy/haproxy.cfg (code=exited, status=0/SUCCESS)
Main PID: 28126 (code=killed, signal=KILL)
  CGroup: name=systemd:/system/haproxy.service


systemd script:
[Unit]
Description=HAProxy For TCP And HTTP Based Applications
After=network.target

[Service]
Type=forking
PIDFile=/var/run/haproxy.pid
ExecStartPre=/usr/sbin/haproxy -c -q -f /etc/haproxy/haproxy.cfg
ExecStart=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p 
/var/run/haproxy.pid
ExecReload=/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p 
/var/run/haproxy.pid -sf $MAINPID

[Install]
WantedBy=multi-user.target

-- 
Best regards,
Eugene Istomin


On Thursday 07 February 2013 14:07:44 Baptiste wrote:
 You should have a new HAProxy process started using the new
 configuration and binding the ports...
 
 cheers
 
 On 2/7/13, Eugene Istomin e.isto...@edss.ee wrote:
  Thanks for the answer,
  
  as written in
  http://www.mgoff.in/2010/04/18/haproxy-reloading-your-config-with-
  minimal-service-impact/
  The end-result is a reload of the configuration file which is not visible
  by
  the customer
  
  But in our case it leads to unbinding from all ports and finishing haproxy
  process.
  Can this issue related to rpm build options? RPM build log is
  https://build.opensuse.org/package/rawlog?arch=x86_64package=haproxy-1.5;
  project=server%3Ahttprepository=openSUSE_12.2
  
  
  --
  Best regards,
  Eugene Istomin
  
  On Thursday 07 February 2013 07:28:17 Willy Tarreau wrote:
  Hello Eugene,
  
  On Wed, Feb 06, 2013 at 08:29:33PM +0200, Eugene Istomin wrote:
   Hello,
   
   We have problem with reload/HUP:
   if i run #/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p
   /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)  - haproxy process
   is
   shutting down and exit
  
  This is the intended behaviour, it unbinds from its ports so that the new
  process can bind, then waits for all existing connections to terminate
  and leaves. Isn't it what you're observing ? What would you have expected
  instead ?
  
  Willy

-sf/-st not working

2013-02-06 Thread Eugene Istomin
Hello,

We have a problem with reload/HUP:
if I run #/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p
/var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) - the haproxy process
shuts down and exits.

from strace:
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
setsockopt(4, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(4, SOL_SOCKET, 0xf /* SO_??? */, [1], 4) = -1 ENOPROTOOPT (Protocol 
not available)
bind(4, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr(0.0.0.0)}, 
16) = -1 EADDRINUSE (Address already in use)
close(4)= 0
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
setsockopt(4, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(4, SOL_SOCKET, 0xf /* SO_??? */, [1], 4) = -1 ENOPROTOOPT (Protocol 
not available)
bind(4, {sa_family=AF_INET, sin_port=htons(443), 
sin_addr=inet_addr(0.0.0.0)}, 16) = -1 EADDRINUSE (Address already in use)
close(4)= 0
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
setsockopt(4, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(4, SOL_SOCKET, 0xf /* SO_??? */, [1], 4) = -1 ENOPROTOOPT (Protocol 
not available)
setsockopt(4, SOL_TCP, TCP_DEFER_ACCEPT, [1], 4) = 0
bind(4, {sa_family=AF_INET, sin_port=htons(6000), 
sin_addr=inet_addr(0.0.0.0)}, 16) = -1 EADDRINUSE (Address already in use)
close(4) 

.

select(0, NULL, NULL, NULL, {0, 1}) = 0 (Timeout)
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
setsockopt(4, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(4, SOL_SOCKET, 0xf /* SO_??? */, [1], 4) = -1 ENOPROTOOPT (Protocol 
not available)
bind(4, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr(0.0.0.0)}, 
16) = 0
listen(4, 2000) = 0
setsockopt(4, SOL_TCP, TCP_QUICKACK, [0], 4) = 0
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 5
fcntl(5, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
setsockopt(5, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(5, SOL_SOCKET, 0xf /* SO_??? */, [1], 4) = -1 ENOPROTOOPT (Protocol 
not available)
bind(5, {sa_family=AF_INET, sin_port=htons(443), 
sin_addr=inet_addr(0.0.0.0)}, 16) = 0
listen(5, 2000)

.

rt_sigaction(SIGTTOU, {0x459110, [TTOU], SA_RESTORER|SA_RESTART, 
0x7ff38ba41da0}, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGTTIN, {0x459110, [TTIN], SA_RESTORER|SA_RESTART, 
0x7ff38ba41da0}, {SIG_DFL, [], 0}, 8) = 0
unlink(/var/run/haproxy.pid)  = 0
open(/var/run/haproxy.pid, O_WRONLY|O_CREAT|O_TRUNC, 0644) = 11
kill(22708, SIGUSR1)= 0
getrlimit(RLIMIT_NOFILE, {rlim_cur=40036, rlim_max=40036}) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, 
child_tidptr=0x7ff38cc549d0) = 22716
write(11, 22716\n, 6) = 6
close(11)   = 0
exit_group(0)   = ?
+++ exited with 0 +++


#haproxy -V
HA-Proxy version 1.5-dev17 2012/12/28

-- 
Best regards,
Eugene Istomin

Re: HTTP redirect using domain extract from original request

2012-09-10 Thread Eugene Istomin
Hello Guillaume,

we use nginx's perl module for this (to redirect to www., for example):

HAProxy:
acl host_non_www hdr_reg(host) ^([\w\-]+)\.([\w\-]+)$
use_backend localhost:83 if host_non_www

Nginx:

server {
listen 83;
ssl  off;   
rewrite ^/(.*) http://$host_with_www/$1 permanent;

location /nstat {
stub_status on;
}
   }


perl_set $host_with_www 'sub{
    my ($r) = @_;
    return $r->variable("host") =~ /^([\w\-]+)\.([\w\-]+)$/
        ? "www.$1.$2"
        : "_undefined_host_";
}';
-- 
Best regards,
Eugene Istomin


On Monday 10 September 2012 21:44:44 Guillaume Castagnino wrote:
 Le lundi 10 septembre 2012 21:19:40 Baptiste a écrit :
  Hi Guillaume,
  
  You're right, this is not doable with HAProxy, unfortunately.
  The only way you could do that is through redirect with hardcoded
  hostname + acl, as you mentionned in your mail.
 
 Thanks Baptiste,
 
 So that means one acl + one redirect rule per vhost, as I fear. I think
 I will keep my nginx redirect for now, since I want to upgrade *all*
 virtualhosts, preferably without bothering to list all of them :)
 Ideally, I would like to keep haproxy vhost agnostic.
 
 Thanks !

Problem with multipart/form-data

2012-02-27 Thread Eugene Istomin
Hello,

We have a problem with Haproxy - OTRS helpdesk system.
The problem is related to some POST requests that use a Content-Type of
multipart/form-data.

HAProxy log:
0008::80.clireq[000a:]: POST /otrs/index.pl HTTP/1.1
0008::80.clihdr[000a:]: Host: 
0008::80.clihdr[000a:]: User-Agent: Mozilla/5.0 (X11; Linux x86_64; 
rv:10.0) Gecko/20100101 Firefox/10.0
0008::80.clihdr[000a:]: Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
0008::80.clihdr[000a:]: Accept-Language: en-us,en;q=0.5
0008::80.clihdr[000a:]: Accept-Encoding: gzip, deflate
0008::80.clihdr[000a:]: Connection: keep-alive
0008::80.clihdr[000a:]: Referer: 
http://support.edss.ee/otrs/index.pl?Action=AgentTicketClose;TicketID=1Session=10cf240e18426b457568beab5c3624a02b
0008::80.clihdr[000a:]: Cookie: 
__utma=3742.1316239773.1328797409.1328797409.1328797409.1; 
__utmz=3742.1328797409.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); 
Session=10cf240e18426b457568beab5c3624a02b
0008::80.clihdr[000a:]: Content-Type: multipart/form-data; 
boundary=---1477711025172414033487090087
0008::80.clihdr[000a:]: Content-Length: 1829


As you can see, no srvrep or srvhdr response lines appear.
This is an ordinary POST request that uses the application/x-www-form-urlencoded
Content-Type:

0029::80.clireq[0011:]: POST /otrs/index.pl HTTP/1.1
0029::80.clihdr[0011:]: Host: 
0029::80.clihdr[0011:]: User-Agent: Mozilla/5.0 (X11; Linux x86_64; 
rv:10.0) Gecko/20100101 Firefox/10.0
0029::80.clihdr[0011:]: Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
0029::80.clihdr[0011:]: Accept-Language: en-us,en;q=0.5
0029::80.clihdr[0011:]: Accept-Encoding: gzip, deflate
0029::80.clihdr[0011:]: Connection: keep-alive
0029::80.clihdr[0011:]: Referer: 
http://support.edss.ee/otrs/index.pl?Action=AgentTicketClose
0029::80.clihdr[0011:]: Cookie: 
__utma=3742.1316239773.1328797409.1328797409.1328797409.1; 
__utmz=3742.1328797409.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); 
Session=
0029::80.clihdr[0011:]: Content-Type: application/x-www-form-urlencoded
0029::80.clihdr[0011:]: Content-Length: 111
0029:we.edss.ee:80.srvrep[0011:0016]: HTTP/1.1 302 Found
0029:we.edss.ee:80.srvhdr[0011:0016]: Date: Mon, 27 Feb 2012 15:01:40 GMT
0029:we.edss.ee:80.srvhdr[0011:0016]: Server: Apache/2.2.22 (Linux/SUSE)
0029:we.edss.ee:80.srvhdr[0011:0016]: Set-Cookie: 
Session=1093c96c0f0c1d793eb20b36efc9a94fc8; path=/
0029:we.edss.ee:80.srvhdr[0011:0016]: Location: 
/otrs/index.pl?Action=AgentTicketCloseSession=1093c96c0f0c1d793eb20b36efc9a94fc8
0029:we.edss.ee:80.srvhdr[0011:0016]: Connection: close
0029:we.edss.ee:80.srvhdr[0011:0016]: Transfer-Encoding: chunked
0029:we.edss.ee:80.srvhdr[0011:0016]: Content-Type: text/html; 
charset=utf-8;

The exact form code:
<form action="/otrs/index.pl" method="post" enctype="multipart/form-data"
name="compose" id="Compose" class="Validate PreventMultipleSubmits">
<input type="hidden" name="ChallengeToken" value="337e7b347a6c3665b5804a95c3f939f2"/>
<input type="hidden" name="ChallengeToken" value="337e7b347a6c3665b5804a95c3f939f2"/>
<input type="hidden" name="Action" value="AgentTicketClose"/>
<input type="hidden" name="Subaction" value="Store"/>
<input type="hidden" name="TicketID" value="2"/>
<input type="hidden" name="Expand" id="Expand" value=""/>
<input type="hidden" name="FormID" value="1330356031.2808030.57680829"/>

Can you confirm that this problem is related to multipart/form-data 
content-type?
Thanks!
-- 
Best regards,
Eugene Istomin
System Administrator
EDS Systems
e.isto...@edss.ee
Work: +372-640-96-01
Cell: +372-522-92-11


Re: Problem with multipart/form-data

2012-02-27 Thread Eugene Istomin
Hi,

tcpdump on the OTRS side shows that there is no TCP flow on the server side when I
press the Submit button.
The haproxy log says that a new client packet is received, but this packet is not
routed to any server, even if I use a simple listen section with no backends/frontends.

The haproxy version is the latest 1.4 (1.4.19), but we tried 1.5 - the same
strange behavior.
-- 
Best regards,
Eugene Istomin
System Administrator
EDS Systems
e.isto...@edss.ee
Work: +372-640-96-01
Cell: +372-522-92-11


On Monday 27 February 2012 16:56:52 Baptiste wrote:
 Hi,
 
 what do your log says about this?
 There may be some errors triggered which may help diagnose the issue.
 
 which version of haproxy by the way?
 
 
 cheers
 
 On Mon, Feb 27, 2012 at 4:23 PM, Eugene Istomin e.isto...@edss.ee wrote:
  Hello,
  
  
  
  We have a problem with Haproxy - OTRS helpdesk system.
  
  The problem is related to some POST requests that uses a Content-Type of
  multipart/form-data
  
  
  
  HAProxy log:
  
  0008::80.clireq[000a:]: POST /otrs/index.pl HTTP/1.1
  
  0008::80.clihdr[000a:]: Host: 
  
  0008::80.clihdr[000a:]: User-Agent: Mozilla/5.0 (X11; Linux
  x86_64; rv:10.0) Gecko/20100101 Firefox/10.0
  
  0008::80.clihdr[000a:]: Accept:
  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  
  0008::80.clihdr[000a:]: Accept-Language: en-us,en;q=0.5
  
  0008::80.clihdr[000a:]: Accept-Encoding: gzip, deflate
  
  0008::80.clihdr[000a:]: Connection: keep-alive
  
  0008::80.clihdr[000a:]: Referer:
  http://support.edss.ee/otrs/index.pl?Action=AgentTicketClose;TicketID=1;
  Session=10cf240e18426b457568beab5c3624a02b
  
  0008::80.clihdr[000a:]: Cookie:
  __utma=3742.1316239773.1328797409.1328797409.1328797409.1;
  __utmz=3742.1328797409.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(n
  one); Session=10cf240e18426b457568beab5c3624a02b
  
  0008::80.clihdr[000a:]: Content-Type: multipart/form-data;
  boundary=---1477711025172414033487090087
  
  0008::80.clihdr[000a:]: Content-Length: 1829
  
  
  
  
  
  As you can see, none of srvrep or srvhdr is answered.
  
  This is ordinary POST request that use
  application/x-www-form-urlencoded Content-Type:
  
  
  
  0029::80.clireq[0011:]: POST /otrs/index.pl HTTP/1.1
  
  0029::80.clihdr[0011:]: Host: 
  
  0029::80.clihdr[0011:]: User-Agent: Mozilla/5.0 (X11; Linux
  x86_64; rv:10.0) Gecko/20100101 Firefox/10.0
  
  0029::80.clihdr[0011:]: Accept:
  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  
  0029::80.clihdr[0011:]: Accept-Language: en-us,en;q=0.5
  
  0029::80.clihdr[0011:]: Accept-Encoding: gzip, deflate
  
  0029::80.clihdr[0011:]: Connection: keep-alive
  
  0029::80.clihdr[0011:]: Referer:
  http://support.edss.ee/otrs/index.pl?Action=AgentTicketClose
  
  0029::80.clihdr[0011:]: Cookie:
  __utma=3742.1316239773.1328797409.1328797409.1328797409.1;
  __utmz=3742.1328797409.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(n
  one); Session=
  
  0029::80.clihdr[0011:]: Content-Type:
  application/x-www-form-urlencoded
  
  0029::80.clihdr[0011:]: Content-Length: 111
  
  0029:we.edss.ee:80.srvrep[0011:0016]: HTTP/1.1 302 Found
  
  0029:we.edss.ee:80.srvhdr[0011:0016]: Date: Mon, 27 Feb 2012
  15:01:40
  GMT
  
  0029:we.edss.ee:80.srvhdr[0011:0016]: Server: Apache/2.2.22
  (Linux/SUSE)
  
  0029:we.edss.ee:80.srvhdr[0011:0016]: Set-Cookie:
  Session=1093c96c0f0c1d793eb20b36efc9a94fc8; path=/
  
  0029:we.edss.ee:80.srvhdr[0011:0016]: Location:
  /otrs/index.pl?Action=AgentTicketCloseSession=1093c96c0f0c1d793eb20b36e
  fc9a94fc8
  
  0029:we.edss.ee:80.srvhdr[0011:0016]: Connection: close
  
  0029:we.edss.ee:80.srvhdr[0011:0016]: Transfer-Encoding: chunked
  
  0029:we.edss.ee:80.srvhdr[0011:0016]: Content-Type: text/html;
  charset=utf-8;
  
  
  
  The exact form code:
  
  form action=/otrs/index.pl method=post
  enctype=multipart/form-data
  name=compose id=Compose
  
  class=Validate PreventMultipleSubmitsinput type=hidden
  name=ChallengeToken
  
  value=337e7b347a6c3665b5804a95c3f939f2/input type=hidden
  name=ChallengeToken value=337e7b347a6c3665b5804a95c3f939f2/
  
  input type=hidden name=Action value=AgentTicketClose/
  
  input type=hidden name=Subaction value=Store/
  
  input type=hidden name=TicketID value=2/
  
  input type=hidden name=Expand id=Expand value=/
  
  input type=hidden name=FormID
  value=1330356031.2808030.57680829/ 
  Can you confirm that this problem is related to multipart/form-data
  content-type?
  
  Thanks!
  
  --
  
  Best regards,
  
  Eugene Istomin
  
  System Administrator
  
  EDS Systems
  
  e.isto...@edss.ee
  
  Work: +372-640-96-01
  
  Cell: +372-522-92-11