unique-id-header logged twice?

2016-08-23 Thread Jakov Sosic

Hi guys,

I have the following settings in my haproxy:

defaults
# populate X-Unique-ID header
unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
unique-id-header X-Unique-ID
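# field legend, per the haproxy docs: %{+X}o renders the numeric fields
# in hexadecimal; %ci:%cp = client ip:port, %fi:%fp = frontend ip:port,
# %Ts = timestamp, %rt = request counter, %pid = process id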



Later I log it in Apache with a custom log format:

LogFormat "%a %l %u [%{%d/%b/%Y:%T}t,%{msec_frac}t %{%z}t] \"%r\" %>s %b 
\"%{Referer}i\" \"%{User-Agent}i\" \"%{X-Unique-ID}i\"" combined_uniqueid



But lately I've noticed - very rarely, but it still happens - a request 
which logged two unique IDs.


After verifying the IPs and ports, I concluded that the first request has:

SRC: 192.168.50.200 [client_ip]
DST: 192.168.50.99  [haproxy_ip]

The second one, though, has:

SRC: 192.168.50.99  [haproxy_ip]
DST: 192.168.50.99  [haproxy_ip]


An example:

[Fri Aug 19 12:20:13.468461 2016] [-:error] [pid 9390] [client 
192.168.1.99:53393] [request_id: 
C0A801C8:DE3E_C0A80163:0050_573D9359_39BE5:6408, 
C0A80163:CBB9_C0A80163:0050_573D935C_39BE8:6408] .
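
(For reference, the hex fields decode with plain shell arithmetic - a
sketch, assuming bash and GNU date. Notably, in the second ID the client
and frontend addresses decode to the same host, i.e. haproxy itself:)

  # decode the second ID: C0A80163:CBB9_C0A80163:0050_573D935C_39BE8:6408
  printf '%d.%d.%d.%d\n' 0xC0 0xA8 0x01 0x63   # C0A80163 -> 192.168.1.99
  printf '%d\n' 0xCBB9                         # source port -> 52153
  date -u -d @"$((16#573D935C))"               # %Ts as a readable date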



Any ideas what could have happened here?



Sporadic 503 errors in logs

2015-05-26 Thread Jakov Sosic

Hi guys,

I've noticed a lot of this kind of error lately:

May 26 23:03:32 localhost haproxy[22628]: CLIENT_IP:50399 
[26/May/2015:23:03:27.909] FRONTEND BACKEND/SERVER 
4344/0/-1/-1/4345 503 212 - - CCNN 271/75/13/4/0 0/0 
{static.example.com|Mozilla/5.0 (Windows NT 6.1; WOW64) 
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 
Safari/537.36|http://www.example.com/bs|} {||} GET 
/images/logos/logo-example.com-bs.png HTTP/1.1 UNIQUE_ID


There is no matching log line on the backend server (Apache).


Looking at the backend log (Apache), I can see the same request served 
normally a little later (up to 2-3 seconds), as part of another client 
request (the matching log line is also available in haproxy):


CLIENT_IP - - [26/May/2015:23:04:33,271 +0200] GET 
/images/logos/logo-example.com-bs.png HTTP/1.1 200 4304 
http://www.example.com/bs; Mozilla/5.0 (Windows NT 6.1; WOW64) 
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 
Safari/537.36 UNIQUE_ID_2



The interesting part of the (original) 503 log line is the timer section:

   Tq / Tw / Tc / Tr / Tt
 4344 /  0 / -1 / -1 / 4345

In all these requests, Tq is always equal to or 1 ms smaller than Tt, 
while Tc and Tr are always -1.
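
(For what it's worth, a quick way to group these 503s by termination
state - a sketch: the awk field positions assume the exact syslog +
httplog layout shown above, and "haproxy.log" is a placeholder path:)

  # count 503 responses per termination state ($11 = status, $15 = flags)
  awk '$11 == 503 { print $15 }' haproxy.log | sort | uniq -c | sort -rn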



Any ideas what's going on here?



Stop new connections from hitting nodes

2014-12-05 Thread Jakov Sosic

Hi guys

I'm wondering: what are the ways of stopping *new* connections from 
hitting backend nodes?


1) From the docs:

https://github.com/haproxy/haproxy-1.5/blob/master/doc/architecture.txt

under section '4.2 Soft-stop using backup servers', there is a 
suggestion to set up an iptables PREROUTING redirection from some other 
ports and to define the same servers as backups (with the same cookie names).


Since the docs are pretty old, I'm wondering: is this way of taking 
servers down without killing sessions still viable?


If it is, I have a cleaner solution - run the backend servers on 2 ports 
(for example Listen *:80 and Listen *:81), and just use the firewall to 
drop connections when we want to take a server down.
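
Something like this is what I have in mind (a rough sketch only - server
names, addresses and cookie values are made up):

listen webfarm ip_addr:80
    cookie SRV insert nocache
    # primary entries, used while :80 is reachable on the node
    server web1  web1_ip:80 cookie A check
    server web2  web2_ip:80 cookie B check
    # the same nodes on :81, declared as backups with the same cookie
    # values, so persistent sessions survive once :80 is firewalled off
    server web1b web1_ip:81 cookie A check backup
    server web2b web2_ip:81 cookie B check backup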


Other than that, I don't quite understand the principle behind this. 
Backup servers should be used only when all nodes are down, so if only 
one node is down, how come backupA gets used for connections with cookie 
A? And if the backup server is fully activated, how come it doesn't 
serve new incoming connections, which would make this exercise totally 
pointless?



2) Maintenance mode

Turning a node to maintenance mode through the socket is the second 
solution. But just to make sure, I wanted to ask - will that kill 
sessions, or leave them alive until they finish/expire?
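
(For reference, the kind of command I mean - a sketch that assumes the
stats socket is configured at an admin-capable level, with made-up
backend/server names:)

  echo "disable server webfarm/web1" | socat stdio /var/lib/haproxy/stats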



3) Rejecting new packets?

I know this is a kludge, but just for the sake of argument, if I run:

  iptables -I INPUT -p tcp --dport 80 -m state --state NEW -j REJECT

From my understanding, this would mark the webserver as down in haproxy, 
which would then stop sending new connections to this specific node, but 
what happens after that? Would it forcefully redispatch existing 
connections? How do option redispatch and option persist play into this?




Re: TCP balancing with http check?

2014-06-10 Thread Jakov Sosic

On 06/07/2014 09:13 AM, Marco Corte wrote:


Hi,

Yes it is possible.
Look in the documentation for option httpchk:
...

This option does not necessarily require an HTTP backend, it also works
with
plain TCP backends. This is particularly useful to check simple scripts
bound
to some dedicated ports using the inetd daemon.
...

.marcoc



Yup, it seems it's just a matter of setting up a tcp listen section and 
backends with httpchk.
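
E.g. something along these lines (a sketch only - names, ports and the
check URI are made up):

listen mysql_farm *:3306
    mode tcp
    balance roundrobin
    # traffic is plain TCP, but health is probed over HTTP on a side port
    option httpchk GET /health HTTP/1.0
    server db1 db1_ip:3306 check port 8080
    server db2 db2_ip:3306 check port 8080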





TCP balancing with http check?

2014-06-06 Thread Jakov Sosic

Hi,

is it possible to set up TCP balancing, but to check backend servers 
with http checks?




Re: Lot of 503 errors - cannot explain why...

2014-05-31 Thread Jakov Sosic

On 05/27/2014 08:36 PM, Willy Tarreau wrote:

I don't see why GoogleBot would see them since they should only affect
the offending clients.


Is it by any chance possible that my ISP is somehow screwing up
connections? Because I see these kinds of aborts/503s even from regular
clients fetching regular stuff?


Could be possible, but that sounds really strange. You could easily check
though, if you own a machine somewhere outside your ISP's network. Simply
send a request from there to your site and sniff at both ends. You'll see
if the trace matches or not. It could be possible that the ISP is running
a misconfigured transparent proxy which systematically closes the request
path after sending the request (as haproxy used to do with option forceclose
in early 1.1 versions, 12 years ago). Or maybe it's part of an IDS or anti-DDoS
mechanism that's automatically enabled when they run into trouble.


I've talked to the ISP technicians, and what they told me is that the 
company has a bandwidth cap at XYZ Mbit/s; once that limit is reached, 
additional packets are simply dropped.


So, packet drops at peak times seem promising as an explanation for some 
of the behaviour we have observed...


Will investigate more, and report back.




Re: Lot of 503 errors - cannot explain why...

2014-05-31 Thread Jakov Sosic

On 05/27/2014 08:32 PM, Willy Tarreau wrote:

It could really be a site sucker. It first establishes connection pools
and distributes requests for objects found in pages across these pre-
established connections, then sends the request and shutdown(SHUT_WR).
It is also possible that there is a high packet loss rate on the client
and that it took 7 seconds for it to manage to send its request after
establishing the connection! That could also be the consequence of a
bot saturating the client's link with many such requests.


I've turned off 'abortonclose' and I'm hardly seeing any 503s with CCVN 
any more.


I probably have an increase in the number of sessions, but to be certain 
I'd have to set up haproxy monitoring.




Re: Lot of 503 errors - cannot explain why...

2014-05-27 Thread Jakov Sosic

On 05/27/2014 08:09 PM, Lukas Tribus wrote:

 Are all your backends up, running and stable? How are your timeouts
 configured?


The first server in my backend is permanently down (hw failure) and is 
marked as down on the haproxy stats page. That shouldn't be a problem, 
should it?


Other backends seem fine.

I'm running CentOS x86_64, and HaProxy from CentOS base repo (1.4.24-2.el6).

Unpacking the RPM, I don't see any patches, nor any seds or awks in the 
SPEC file, so I presume it's a clean 1.4.24.
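
(A sketch of that check, assuming the source RPM is at hand - the file
name is illustrative:)

  # patch files would be listed in the source RPM contents
  rpm -qpl haproxy-1.4.24-2.el6.src.rpm | grep -i patch
  # and build-time rewrites would show up in the spec file
  rpm2cpio haproxy-1.4.24-2.el6.src.rpm | cpio -i --to-stdout '*.spec' | grep -iE 'sed|awk'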





I suggest you post your configuration (what you can share, most importantly
your timeout values) and the exact haproxy release?



I will strip my config later and post it.



Re: Lot of 503 errors - cannot explain why...

2014-05-27 Thread Jakov Sosic

On 05/27/2014 08:21 PM, Willy Tarreau wrote:


What is happening here is simple : the client disconnected before the
connection to the server managed to complete (CC flags), and you're
running with option abortonclose which allows haproxy to kill a pending
connection to the server.

Given how short the request lasted, I guess that it's a script that sent
this connection. It's basically sending the request and closing the output
channel immediately, waiting for the response. You can get this behaviour
using printf $request | nc $site 80. It's very likely a bot sucking your
site, as browsers never ever do that.

Using halog to sort them by IP will probably reveal that most of them
come from a few IP addresses. For this you can run halog -hs 503 -ic < log.


Yeah, I suspected it was the client closing the connection, and I was 
even planning on commenting out abortonclose later tonight during off 
hours (I'm running a European UTC+1/CEST based web site) :) So, a great 
catch, Willy!


What started all this is that I have an error rate of around 3-4% from 
GoogleBot (Googlebot can't access your site), and the bosses/devs want 
to lower/eliminate that; the culprit was found in HaProxy's 503 errors.




Is it by any chance possible that my ISP is somehow screwing up 
connections? Because I see these kinds of aborts/503s even from regular 
clients fetching regular stuff?




Re: Queue is going rampage?

2012-07-20 Thread Jakov Sosic
On 07/17/2012 08:53 PM, Baptiste wrote:

 My advice: read the documentation about minconn, maxconn and fullconn
 and review your configuration.
 Second advice: you should do content switching, routing your static
 content outside your dynamic farm.
 Queueing makes sense for dynamic content, but is almost useless for
 static content (only a maxconn value is enough for such servers; they
 write from memory directly to the network interface and they should
 never require a queue).

OK, raising minconn from 9 to 40 on the servers really solved the
issues... What puzzles me is that I haven't touched haproxy.cfg for at
least a few months, and the problems arose suddenly.



Re: Queue is going rampage?

2012-07-20 Thread Jakov Sosic
On 07/20/2012 02:44 PM, Baptiste wrote:

 Hi,
 
 Maybe your servers answered faster before, or you now have much more traffic.
 HAProxy will maintain the number of open connections it needs for
 the amount of requests it has to send to a single server.
 If your servers are slow to answer, the connections remain open
 longer, so you could reach the minconn (and maxconn) faster.
 
 The same if you have more users: you have more chances to reach the
 minconn and to use the queues.

As far as I understand, HaProxy will use more than minconn sessions on
backend servers if the traffic is higher, is that right?

If so, maybe the statistics gathered through the first months of usage
raised the number (minconn) of active processes on the backend, because
the problems didn't arise until I restarted haproxy (after an OS upgrade)?

Because, judging from our monitoring system (Zabbix), I don't think the
traffic rose that much; it's pretty much constant.

Anyway, the problem is solved, so this is just a debate out of curiosity.




Queue is going rampage?

2012-07-17 Thread Jakov Sosic
Hi.

I'm using the latest and greatest HaProxy 1.4.21, and I have been
experiencing a weird problem for the last few days.

Pages are loading slowly, but the Apache backends are loaded even less
than usual. When I take a look at the haproxy stats page, I can see the
queue statistics going wild: the queue rises to the maximum (40 out of
40) on all servers, then drops to 0, and so on. I cannot get to the
bottom of this one.

I'm using RedHat Enterprise Linux 5.8, with Apache+PHP backends. Here is
my haproxy.cfg:

global
log 127.0.0.1 local2 notice
log 127.0.0.1 local3
chroot  /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 2000
user    haproxy
group   haproxy
daemon
stats socket /var/lib/haproxy/stats

defaults
mode    http
log global
option  httplog
option  dontlognull
option  http-server-close
option  forwardfor except 127.0.0.0/8
option  redispatch
option  abortonclose
retries 3
timeout http-request    20s
timeout queue   1m
timeout connect 10s
timeout client  1m
timeout server  3m
timeout http-keep-alive 2s
timeout check   10s
maxconn 3000


listen webfarm ip_addr:80
    source ip_addr
    stats enable
    stats auth user:pass
    balance roundrobin
    cookie MYSITECOOKIE insert nocache
    option httpchk HEAD /test.txt HTTP/1.0
    server web1 web1_ip:80 weight 3 cookie web1 check inter 15000 \
        minconn 6 maxconn 100 maxqueue 40
    server web2 web2_ip:80 weight 2 cookie web1 check inter 15000 \
        minconn 6 maxconn 100 maxqueue 40
    server web3 web3_ip:80 weight 3 cookie web1 check inter 15000 \
        minconn 6 maxconn 100 maxqueue 40
    server web4 web4_ip:80 weight 3 cookie web1 check inter 15000 \
        minconn 6 maxconn 100 maxqueue 40


I can make screenshots of the haproxy stats page, or CSV output, in the
situations where the queue fills up while the backend servers are almost
idle. The backends also don't fluctuate - they are constantly up, and I
see no errors in the haproxy logs...

I'm really puzzled - how do I look into this one? The strange thing is
that this started out of nowhere, without any software updates for weeks
or any major configuration changes (only the PHP app changes daily). Even
if it is the application's fault, how do I trace why the queue suddenly
fills up while sessions are in single-digit numbers across all servers?







Re: Queue is going rampage?

2012-07-17 Thread Jakov Sosic
On 07/17/2012 12:39 PM, Jakov Sosic wrote:

 [cut]

Some more info...


This is an example from the stats page for server4:

        Queue             Session rate      Sessions
        Cur  Max  Limit   Cur  Max  Limit   Cur  Max  Limit
        22   40   40      23   185  -       9    27   200

As you can see, the queue is at 22, the session rate is 23, and there
are only 9 sessions out of 200 on this server.

Any idea why this is happening?





Re: Queue is going rampage?

2012-07-17 Thread Jakov Sosic
On 07/17/2012 08:53 PM, Baptiste wrote:
 Hi,
 
 To track down where your issue could come from, have a look in your logs.
 You will find information about server response time.
 Second, your minconn parameter seems very low.
 As soon as you reach the minconn, HAProxy will start queueing on your
 server.
 The logs should also tell you how long the request remained in the
 queue and how many connections were in the queue before the request
 was processed.
 
 So sharing your log and a screen shot would help as well.
 
 You have not set up the fullconn parameter, so HAProxy calculated it
 automatically as 300 (10% of the sum of the maxconn of the frontends
 pointing to your backend).
 It allows you to have a dynamic maxconn value to forward more requests
 to servers when the load increases.
 
 My advice: read the documentation about minconn, maxconn and fullconn
 and review your configuration.
 Second advice: you should do content switching, routing your static
 content outside your dynamic farm.
 Queueing makes sense for dynamic content, but is almost useless for
 static content (only a maxconn value is enough for such servers; they
 write from memory directly to the network interface and they should
 never require a queue).

Thank you very much!

I misunderstood the meaning of minconn the first time I read the
documentation... I will try raising it and see how it goes. I will
report back with the new configuration if it works out.
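
(For the record, how I now read the minconn/maxconn/fullconn interplay -
a sketch with made-up numbers:)

listen webfarm ip_addr:80
    fullconn 300
    # the effective per-server limit floats between minconn and maxconn:
    # roughly maxconn scaled by backend load relative to fullconn, and
    # never below minconn; past the limit, requests wait in the queue
    server web1 web1_ip:80 minconn 40 maxconn 100 maxqueue 40 check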





Sending request to a specific backend?

2011-04-04 Thread Jakov Sosic
Hi.

Is it possible to somehow force haproxy to forward a request to a
specific backend?

For example, I have two backends, and if I want to run some tests only
on the first backend, how do I force haproxy not to load balance those
requests but to forward them to a specific host?

Is it possible?
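
(Something like this is what I'm after - a sketch with made-up names and
a made-up tester address:)

frontend www
    bind ip_addr:80
    # send requests coming from the tester's address to one fixed backend
    acl from_tester src 10.0.0.5
    use_backend backend1 if from_tester
    default_backend all_backends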


-- 
Jakov Sosic
www.srce.hr



haproxy gives 502 on links with utf-8 chars?!

2010-11-19 Thread Jakov Sosic
Hi.


I have haproxy doing load balancing between two Apache servers which
have mod_jk. The application is on a JBoss application server. The
problem I have noticed is that if a link contains some UTF-8 characters
(Croatian language characters), then haproxy gives a 502 error. Here is
an example from the log:


Nov 19 12:40:24 porat haproxy[28047]: aaa.bbb.ccc.ddd:port
[19/Nov/2010:12:40:24.040] www www/backend-srv1 0/0/0/-1/135 502 1833 -
- PHVN 1/1/1/0/0 0/0 GET
/pithos/rest/usern...@domain/files/folder%C4%8Di%C4%87/ HTTP/1.1

Nov 19 12:40:34 porat haproxy[28047]: aaa.bbb.ccc.ddd:port
[19/Nov/2010:12:40:34.710] www www/backend-srv1 0/0/0/-1/82 502 1061 - -
PHVN 5/5/5/4/0 0/0 GET
/pithos/rest/usern...@domain/files/%C4%8D%C4%87%C5%A1%C4%91%C5%BE/
HTTP/1.1

The problem only occurs for links with those specific characters.

The interesting thing is that haproxy is the cause of these errors,
because when I try to fetch those same links directly from the backend
servers, they work without a problem...

Here is a log line from the Apache backend:

aaa.bbb.ccc.ddd - - [19/Nov/2010:12:40:24 +0100] GET
/pithos/rest/usern...@domain/files/folder%C4%8Di%C4%87/ HTTP/1.1 200
1185
http://somethin.somedomain/pithos/A5707EF1550DF3AECFB3F1CB7B89E240.cache.html;
Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.10 (KHTML, like
Gecko) Chrome/7.0.536.2 Safari/534.10



Any ideas? Is there any way I can debug this?



-- 
Jakov Sosic



Re: haproxy gives 502 on links with utf-8 chars?!

2010-11-19 Thread Jakov Sosic
On 11/19/2010 01:47 PM, Willy Tarreau wrote:
 echo show errors | socat stdio unix-connect:/var/run/haproxy.sock

# echo show errors | socat stdio unix-connect:/var/run/haproxy.sock

[19/Nov/2010:15:01:56.646] backend www (#1) : invalid response
  src aaa.bbb.ccc.ddd, session #645, frontend www (#1), server
backend-srv1 (#1)
  response length 857 bytes, error at position 268:

  0  HTTP/1.1 200 OK\r\n
  00017  Date: Fri, 19 Nov 2010 14:01:56 GMT\r\n
  00054  Server: Apache/2.2.3 (CentOS)\r\n
  00085  X-Powered-By: Servlet 2.5; JBoss-5.0/JBossWeb-2.1\r\n
  00136  Expires: -1\r\n
  00149  X-GSS-Metadata:
{creationDate:1290002859579,createdBy:ngara...@sr
  00219+
ce.hr,modifiedBy:usern...@domain,name:a\r\x07\x11~,owner:
  00282+
usern...@domain,modificationDate:1290002859579,deleted:false}\r
  00350+ \n
  00351  Content-Length: 418\r\n
  00372  Connection: close\r\n
  00391  Content-Type: application/json;charset=UTF-8\r\n
  00437  \r\n
  00439
{files:[],creationDate:1290002859579,createdBy:usern...@domain
  00509+
,modifiedBy:usern...@domain,readForAll:false,name:\xC5\xA1
  00572+
\xC4\x8D\xC4\x87\xC4\x91\xC5\xBE,permissions:[{modifyACL:true,wr
  00618+
ite:true,read:true,user:usern...@domain}],owner:usern...@domain
  00688+ ce.hr,parent:{name:User User,uri:http://server/p
  00758+
ithos/rest/usern...@domain/files/},folders:[],modificationDate:1
  00828+ 290002859579,deleted:false}



Hmmm, what to do with this output now? Where is the error? :)


-- 
Jakov Sosic



Re: haproxy gives 502 on links with utf-8 chars?!

2010-11-19 Thread Jakov Sosic
On 11/19/2010 03:07 PM, German Gutierrez :: OLX Operation Center wrote:
 Looks like the field
 
  X-GSS-Metadata:
 
 has UTF-8 encoded characters; I don't know if that's valid or not, I
 think not.

From wikipedia:
http://en.wikipedia.org/wiki/List_of_HTTP_header_fields

Accept-Charset  Character sets that are acceptable  Accept-Charset: utf-8


So I guess I need to somehow force the server to set this HTTP header?




-- 
Jakov Sosic



Re: HATop - Interactive ncurses client for HAProxy

2010-08-16 Thread Jakov Sosic
On 08/16/2010 09:19 PM, John Feuerstein wrote:
 I'm happy to announce the first full-featured release of HATop:

Nice!

If you're interested, I can build RPMs for RHEL v4.x and v5.x (x86_64
and i386) and publish them in my company's repo, and you could also put
them on your website...




-- 
|Jakov Sosic|ICQ: 28410271|   PGP: 0x965CAE2D   |
=
| start fighting cancer - http://www.worldcommunitygrid.org/   |