Admin socket ACLs

2017-01-25 Thread Alexey Zilber
Hi All,

 Is there a way to do something like this from the admin socket:

acl bad_ip src 184.66.248.33

tcp-request connection reject if bad_ip
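
One way to get this effect is to load the ACL from a file, which gives it a runtime handle on the stats socket. A sketch, assuming hypothetical file and socket paths:

```
# haproxy.cfg: a file-backed ACL can be modified at runtime
frontend www
    acl bad_ip src -f /etc/haproxy/blacklist.lst
    tcp-request connection reject if bad_ip

# then, over the admin socket (enabled with "stats socket ... level admin"):
$ echo "add acl /etc/haproxy/blacklist.lst 184.66.248.33" | socat stdio /var/run/haproxy.sock
$ echo "show acl /etc/haproxy/blacklist.lst" | socat stdio /var/run/haproxy.sock
```

Note that entries added this way live only in the running process's memory; they are not written back to the file, so persistent entries still need to go into the file itself.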


Thanks!

Alex


Status code -1 with HA-Proxy version 1.5.15

2016-12-22 Thread Alexey Zilber
Hi All,

I'm seeing the 'status code' as -1 in haproxy logs, whereas the
documentation specifies:

"The status code is always 3-digit."

I do see the 'normal' result codes, but I also see a lot of -1's.


Any idea what causes this?
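
For what it's worth, the status field appears to be logged as -1 when HAProxy never produced or relayed a response at all, e.g. the connection was aborted or the request was invalid, so -1 can show up alongside the normal 3-digit codes. A sketch of pulling the field out of a hypothetical log line:

```python
# Hypothetical httplog line: the client aborted before any response
# existed, so the status field is logged as -1 instead of a 3-digit code.
line = ('10.10.10.10:51184 [21/Nov/2014:22:45:50.323] www www/<NOSRV> '
        '-1/-1/-1/-1/0 -1 0 - - CR-- 445/445/0/0/0 0/0 "<BADREQ>"')

fields = line.split()
status = int(fields[5])  # 6th space-separated field: the status code
if status == -1:
    print("no HTTP response was produced (aborted or invalid request)")
```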


-Alex


subscribe

2016-12-22 Thread Alexey Zilber
Systems Architect
SFG Media Group, LLC


Re: Round Robin not very random

2015-01-15 Thread Alexey Zilber
Hi Vivek,

  You're correct.  I think the situation was that there was a huge influx
of traffic, and some servers went past the tipping point of what they
could handle quickly.  This caused connections to stack up as those servers
choked.  Would leastconn give the same performance as roundrobin?  I
noticed in the haproxy docs that it's not recommended for http, which is
what we're using.  Would it be an issue to use leastconn for http?
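
For the record, the docs merely steer leastconn toward long-lived sessions (LDAP, SQL, and the like) rather than forbidding it for HTTP, and trying it is a one-line change. A hypothetical backend sketch:

```
# Hypothetical backend: switching the algorithm is a one-line change.
# leastconn picks the server with the fewest active connections, which
# helps when some requests are much heavier than others.
backend app
    balance leastconn
    server app1 10.1.1.6:85 check maxconn 16384
    server app2 10.1.1.7:85 check maxconn 16384
```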

-Alex

On Thu, Jan 15, 2015 at 9:41 PM, Vivek Malik vivek.ma...@gmail.com wrote:

 I see roundrobin working perfectly over here. Look at 'sessions total' and
 see how it is the same for every server.

 It seems that not all your requests are the same workload. Some servers or
 some requests are taking longer to fulfill, increasing load on those servers.
 Have you tried using leastconn instead of roundrobin?

 That might give a fairer distribution of load in this situation.

 Regards,
 Vivek
 On Jan 14, 2015 11:45 PM, Alexey Zilber alexeyzil...@gmail.com wrote:

 Hi All,

   We got hit with a bit of traffic and we saw haproxy dump most of the
 traffic to 3-4 app servers, sometimes even just one and driving load on
 there to 90.  We were running 1.5.9, I upgraded to 1.5.10 and the same
 problem remained.  Currently traffic is low so everything is load balanced
 evenly, but we expect another spike in a few hours and I expect the issue
 to return.


 Here's what haproxy-status looked like:

 [haproxy-status screenshot not preserved in the archive]

 Do I need to switch to maybe add a weight and tracking?  We have 12
 frontend appservers load balancing to 28.  All run haproxy and the app
 server software.

 Thanks!
 Alex





Round Robin not very random

2015-01-14 Thread Alexey Zilber
Hi All,

  We got hit with a bit of traffic and saw haproxy dump most of the
traffic to 3-4 app servers, sometimes even just one, driving the load
there to 90.  We were running 1.5.9; I upgraded to 1.5.10 and the same
problem remained.  Currently traffic is low, so everything is load balanced
evenly, but we expect another spike in a few hours and I expect the issue
to return.


Here's what haproxy-status looked like:

[haproxy-status screenshot not preserved in the archive]

Do I need to switch algorithms, or maybe add weights and tracking?  We have 12
frontend app servers load balancing to 28 backend servers.  All run haproxy
and the app server software.
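
If some of the app servers are beefier than others, weights let roundrobin skew traffic toward them. A hypothetical sketch:

```
backend app
    balance roundrobin
    # weight is relative: app1 receives twice app2's share of requests
    server app1 10.1.1.6:85 weight 20 check
    server app2 10.1.1.7:85 weight 10 check
```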

Thanks!
Alex


Re: Significant number of 400 errors..

2014-11-27 Thread Alexey Zilber
That's part of what I'm trying to figure out: where are the junk bytes
coming from?  Is it the client, the server, haproxy, or a networking issue?
  I'd rather not be passing any kind of junk to the backend, but the
changes I made reduced the errors by a factor of about 500.  I'm guessing the
issue may have been the header size, and the errors we're seeing now are
legitimate junk, although I have no idea where it's coming from.  I may
want to remove option accept-invalid-http-request and option
accept-invalid-http-response.
So the point is not to switch to TCP to mask the issue, but to figure out
where this 'junk' is coming from.
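
One way to make the 'show errors' captures readable while hunting for the source: the cookie payloads are percent-encoded JSON, so decoding a fragment shows what the client actually sent. A minimal sketch (the fragment below is shortened from the dumps in this thread):

```python
from urllib.parse import unquote

# Shortened fragment of a captured cookie from a "show errors" dump
raw = '_vis=%7B%22visitor_id%22%3A%225108882918757752765%22%7D'
decoded = unquote(raw)
print(decoded)  # -> _vis={"visitor_id":"5108882918757752765"}
```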

There are 30 PHP servers, by the way; I figured it would be redundant to list
all of them and then mask their IPs.  They all have the same config.

Thanks!
-Alex

On Thu, Nov 27, 2014 at 4:06 AM, Bryan Talbot bryan.tal...@playnext.com
wrote:

 There are clearly a lot of junk bytes in those URIs which are not allowed
 by the HTTP specs. If you really want to be passing unencoded binary
 control characters, spaces, and nulls to your backends in HTTP request and
 header lines, then HTTP mode is probably not going to work for you.

 TCP mode will allow them to get through but if your backends actually
 expect the requests to be valid HTTP, you will likely be opening up a huge
 can of worms and exposing your apps to a host of protocol level attacks.

 Also, your connection limits seem pretty ambitious if there really are 2
 php servers in that backend and not 2000.

 -Bryan



 On Mon, Nov 24, 2014 at 10:22 PM, Alexey Zilber alexeyzil...@gmail.com
 wrote:

 Hi Willy and Lukas,

   Here's snippets of the new config:


 -

 global
     maxconn 645000
     maxpipes 645000
     ulimit-n 645120
     user haproxy
     group haproxy
     tune.bufsize 49152
     spread-checks 10
     daemon
     quiet
     stats socket /var/run/haproxy.sock level admin
     pidfile /var/run/haproxy.pid

 defaults
     log global
     mode http
     option accept-invalid-http-request
     option accept-invalid-http-response
     option httplog
     option dontlognull
     option dontlog-normal
     option log-separate-errors
     option http-server-close
     option tcp-smart-connect
     option tcp-smart-accept
     option forwardfor except 127.0.0.1
     retries 3
     option redispatch
     maxconn 200
     contimeout 5000
     clitimeout 6
     srvtimeout 6

 listen www 0.0.0.0:80
     mode http
     capture response header Via len 20
     capture response header Content-Length len 10
     capture response header Cache-Control len 8
     capture response header Location len 40
     balance roundrobin
     # Haproxy status page
     stats uri /haproxy-status
     stats auth fb:phoo
     # when cookie persistence is required
     cookie SERVERID insert indirect nocache
     # When internal servers support a status page
     option httpchk GET /xyzzyx.php
     bind 0.0.0.0:443 ssl crt /etc/lighttpd/ssl_certs/.co.pem
     http-request add-header X-FORWARDED-PROTO https if { ssl_fc }
     server app1 10.1.1.6:85 check inter 4 rise 2 fall 3 maxconn 16384
     server app2 10.1.1.7:85 check inter 4 rise 2 fall 3 maxconn 16384

 -


 The old config did NOT have the following items, and had about 500x more
 errors:
 -
     tune.bufsize 49152
     option accept-invalid-http-request
     option accept-invalid-http-response
 -

 Here's what the 'show errors' shows on a sampling of the server.  It
 looks like 90% of the errors are the second error
 (25/Nov/2014:00:06:30.753):




 Total events captured on [24/Nov/2014:23:31:52.468] : 151



 [22/Nov/2014:21:55:56.597] frontend www (#2): invalid request

   backend www (#2), server NONE (#-1), event #150

   src 166.137.247.239:8949, session #3883610, session flags 0x0080

   HTTP msg state 26, msg flags 0x, tx flags 0x

   HTTP chunk len 0 bytes, HTTP body len 0 bytes

   buffer flags 0x00808002, out 0 bytes, total 1183 bytes

   pending 1183 bytes, wrapping at 49152, error at position 227:



   0
 sited%22%3A1416713764%2C%22times_visited%22%3A6%2C%22device%22%3A%22We

   00070+
 b%22%2C%22lastsource%22%3A%22bottomxpromo%22%2C%22language%22%3A%22en%

   00140+
 22%2C%22extra%22%3A%22%7B%5C%22tr%5C%22%3A%5C%22en%5C%22%7D%22%2C%22di

   00210+ d_watch%22%3A1%7D; yX=5167136038811769837;
 _vhist=%7B%22visito

   00280+
 r_id%22%3A%225024165909427731336%22%2C%22seen_articles%22%3A%22%7B%5C%

   00350+
 22950%5C%22%3A1402416590%2C%5C%22685%5C%22%3A1402416675%2C%5C%22799%5C

   00420+
 %22%3A1402416789%2C%5C%22954%5C%22%3A1402416997%2C%5C%22939%5C%22

Re: Significant number of 400 errors..

2014-11-24 Thread Alexey Zilber
 157.55.39.106:29900, session #4147419, session flags 0x0080

  HTTP msg state 27, msg flags 0x, tx flags 0x

  HTTP chunk len 0 bytes, HTTP body len 0 bytes

  buffer flags 0x00808002, out 0 bytes, total 304 bytes

  pending 304 bytes, wrapping at 49152, error at position 15:



  0  GET
/?id=551&sr\xC3\x83\xC2\xAF\xC3\x82\xC2\xBF\xC3\x82\xC2\xBD=share_

  00034+ fb_new_551 HTTP/1.1\r\n

  00055  Cache-Control: no-cache\r\n

  00080  Connection: Keep-Alive\r\n

  00104  Pragma: no-cache\r\n

  00122  Accept: */*\r\n

  00135  Accept-Encoding: gzip, deflate\r\n

  00167  From: bingbot(at)microsoft.com\r\n

  00199  Host: xx.co\r\n

  00217  User-Agent: Mozilla/5.0 (compatible; bingbot/2.0; +
http://www.bing.com

  00287+ /bingbot.htm)\r\n

  00302  \r\n



Total events captured on [25/Nov/2014:00:11:16.030] : 557



[25/Nov/2014:00:08:19.250] frontend www (#2): invalid request

  backend www (#2), server NONE (#-1), event #556

  src 199.59.148.211:40148, session #14688643, session flags 0x0080

  HTTP msg state 26, msg flags 0x, tx flags 0x8400

  HTTP chunk len 0 bytes, HTTP body len 0 bytes

  buffer flags 0x00808002, out 0 bytes, total 140 bytes

  pending 140 bytes, wrapping at 49152, error at position 5:



  0  GET /\x00/?id=16278&src=shorturl_16278 HTTP/1.1\r\n

  00046  Host: xx.com\r\n

  00065  User-Agent: Twitterbot/1.0\r\n

  00093  Accept-Encoding: gzip, deflate\r\n

  00125  Accept: */*\r\n

  00138  \r\n


-

Total events captured on [25/Nov/2014:00:12:57.050] : 577



[25/Nov/2014:00:09:20.036] frontend www (#2): invalid request

  backend www (#2), server NONE (#-1), event #576

  src 173.209.211.205:51013, session #15108367, session flags 0x0080

  HTTP msg state 26, msg flags 0x, tx flags 0x

  HTTP chunk len 0 bytes, HTTP body len 0 bytes

  buffer flags 0x00808002, out 0 bytes, total 60 bytes

  pending 60 bytes, wrapping at 49152, error at position 6:



  0  action=video_ready&value=1&time_on_page=140.87&article=18539



Total events captured on [25/Nov/2014:00:14:54.259] : 526



[25/Nov/2014:00:08:22.991] frontend www (#2): invalid request

  backend www (#2), server NONE (#-1), event #525

  src 67.231.32.73:59810, session #15100093, session flags 0x0080

  HTTP msg state 26, msg flags 0x, tx flags 0x

  HTTP chunk len 0 bytes, HTTP body len 0 bytes

  buffer flags 0x00808002, out 0 bytes, total 1460 bytes

  pending 1460 bytes, wrapping at 49152, error at position 6:



  0  Cookie:
_vis=%7B%22visitor_id%22%3A%225108882918757752765%22%2C%22sour

  00070+
ce%22%3A%22fbfan_2081%22%2C%22joined%22%3A1410888291%2C%22last_visited

  00140+
%22%3A1412459722%2C%22times_visited%22%3A23%2C%22language%22%3A%22en%2

  00210+
2%2C%22extra%22%3A%22%7B%5C%22tr%5C%22%3A%5C%22en%5C%22%7D%22%2C%22dev

  00280+
ice%22%3A%22Web%22%2C%22lastsource%22%3A%22fbfan_15040%22%2C%22did_wat

  00350+ ch%22%3A1%7D; yX=5108882918757752765; _vhist=%7B%22visitor_id%

  00420+
22%3A%225108882918757752765%22%2C%22seen_articles%22%3A%22%7B%5C%22208

  00490+
1%5C%22%3A1410888291%2C%5C%2213102%5C%22%3A1410888488%2C%5C%2213225%5C

  00560+
%22%3A1410956287%2C%5C%2213369%5C%22%3A1410996782%2C%5C%2212535%5C%22%

  00630+
3A1410997026%2C%5C%223020%5C%22%3A1410997306%2C%5C%222913%5C%22%3A1410

  00700+
997471%2C%5C%222426%5C%22%3A1410997631%2C%5C%2213208%5C%22%3A141099779

  00770+
0%2C%5C%222350%5C%22%3A1410997918%2C%5C%2212995%5C%22%3A1410998005%2C%

  00840+
5C%2212777%5C%22%3A1410998345%2C%5C%222996%5C%22%3A1410998730%2C%5C%22

  00910+
3471%5C%22%3A1410999158%2C%5C%222778%5C%22%3A1410999435%2C%5C%221840%5

  00980+
C%22%3A1410999880%2C%5C%222662%5C%22%3A1411000322%2C%5C%222655%5C%22%3

  01050+
A1411000534%2C%5C%222777%5C%22%3A1411000586%2C%5C%222398%5C%22%3A14110

  01120+
00887%2C%5C%223787%5C%22%3A1411001258%2C%5C%2212977%5C%22%3A1411001402

  01190+
%2C%5C%222447%5C%22%3A1411001612%2C%5C%222450%5C%22%3A1411001901%2C%5C

  01260+
%222463%5C%22%3A1411002161%2C%5C%2213213%5C%22%3A1411029331%2C%5C%2236

  01330+
83%5C%22%3A1411029512%2C%5C%22655%5C%22%3A1411029963%2C%5C%17%5C%2

  01400+ 2%3A1411030200%2C%5C%222464%5C%22%3A1411030432%2C%5C%222767%


-


Thanks!
-Alex



On Mon, Nov 24, 2014 at 12:52 AM, Willy Tarreau w...@1wt.eu wrote:

 On Sun, Nov 23, 2014 at 11:57:50PM +0800, Alexey Zilber wrote:
  Hi Willy,
 
I already do option dontlognull.  These get logged anyway.  We've
  actually seen a high amount of 400 errors happening.  I'll post the
 configs
  in a day or two (the pre/post) configs.  I've been able to knock down the
  400 and 502 errors to about 10%, with 400 errors being around .6% of
 total
  connections.

 Then there are data received over these connections, so please follow
 Lukas' advice and issue show errors on the stats socket, you'll see
 what is sent there. Maybe some bogus browsers sending an SSL hello

Re: Significant number of 400 errors..

2014-11-23 Thread Alexey Zilber
Hi Willy,

  I already do option dontlognull.  These get logged anyway.  We've
actually seen a high amount of 400 errors happening.  I'll post the configs
in a day or two (the pre/post) configs.  I've been able to knock down the
400 and 502 errors to about 10%, with 400 errors being around .6% of total
connections.

-Alex

On Sun, Nov 23, 2014 at 7:53 PM, Willy Tarreau w...@1wt.eu wrote:

 Hi guys,

 On Sat, Nov 22, 2014 at 09:06:57PM +0100, Lukas Tribus wrote:
   Hi Lukas,
  
 I had decoded the error message and it didn't make sense.  There is
 no
   connection limit reached, there are no filters.  If you look at the
   rest of the log line, there were no cookies.  In fact, the last part a
   security check which detected and blocked a dangerous error in server
   response which might have caused information leak is very ambiguous.
   Is there any detailed explanation?
 
  Since this error as-is can indeed have a lot of different causes, we will
  only be able to provide a more detailed explanation when we have the
  info from show errors and equally important, the configuration.

 Given the high timeouts, I suspect that these are simply aborted
 pre-connects
 from browsers who finally decide they won't use their connection. Alex,
 just
 add option dontlognull and they'll go away if that's it. This option
 ensures
 that connections without any data exchange do not produce any log.

 Regards,
 Willy




Re: Significant number of 400 errors..

2014-11-22 Thread Alexey Zilber
All,

  I've tripled the default buffer size, doubled maxconn, and added
accept-invalid-http-request/-response for client and server.  This got rid
of a large number of the 400s, but not all.  Any ideas what it could be?
There's nothing else specific in the logs and haproxy-status is all good.

-Alex
On Nov 22, 2014 1:06 PM, Alexey Zilber alexeyzil...@gmail.com wrote:

 Hi All,

   I'm running v1.5.4 and I'm seeing a large amount of 400 BADREQ errors:

 Nov 21 22:46:06 srvr1 haproxy[28293]: 10.10.10.10:51184
 [21/Nov/2014:22:45:50.323] www www/NOSRV -1/-1/-1/-1/16350 400 187 - -
 PRNN 445/445/0/0/3 0/0 {|||} BADREQ

 Nov 21 22:47:46 srvr1 haproxy[28293]: 10.10.10.10:54924
 [21/Nov/2014:22:47:43.572] www www/NOSRV -1/-1/-1/-1/2680 400 187 - -
 PRNN 366/366/0/0/3 0/0 {|||} BADREQ

 Nov 21 22:48:20 srvr1 haproxy[28293]: 10.10.10.10:47761
 [21/Nov/2014:22:48:20.707] www www/NOSRV -1/-1/-1/-1/0 400 187 - - PRNN
 417/417/0/0/3 0/0 {|||} BADREQ

 Nov 21 22:52:55 srvr1 haproxy[28457]: 10.10.10.10:58249
 [21/Nov/2014:22:52:51.616] www www/NOSRV -1/-1/-1/-1/3536 400 187 - -
 PRNN 534/534/2/0/3 0/0 {|||} BADREQ

 Nov 21 22:55:09 srvr1 haproxy[28457]: 10.10.10.10:49728
 [21/Nov/2014:22:55:06.827] www www/NOSRV -1/-1/-1/-1/2819 400 187 - -
 PRNN 381/381/1/0/3 0/0 {|||} BADREQ

 Nov 21 22:55:12 srvr1 haproxy[28457]: 10.10.10.10:49727
 [21/Nov/2014:22:55:06.828] www www/NOSRV -1/-1/-1/-1/5766 400 187 - -
 PRNN 368/368/1/0/3 0/0 {|||} BADREQ

 Nov 21 23:00:01 srvr1 haproxy[28457]: 10.10.10.10:64153
 [21/Nov/2014:22:59:08.964] www www/NOSRV -1/-1/-1/-1/52680 400 187 - -
 PRNN 409/409/1/0/3 0/0 {|||} BADREQ



 We're seeing about 15 per minute.  Legitimate users are reporting this
 error as well, so it doesn't seem to be a malicious user.  Any ideas?


 -Alex



Re: Significant number of 400 errors..

2014-11-22 Thread Alexey Zilber
Hi Lukas,

 I had decoded the error message and it didn't make sense.  There is no
connection limit reached, and there are no filters.  If you look at the rest
of the log line, there were no cookies.  In fact, the last part, "a security
check which detected and blocked a dangerous error in server response which
might have caused information leak", is very ambiguous.  Is there any
detailed explanation?
 Thanks for the links, btw; I completely missed the socket info, and that it
is possible to get more detail on the errors via the sockets.  I'm going
to dig deeper with that and will post a followup.

-Alex

On Sat, Nov 22, 2014 at 9:37 PM, Lukas Tribus luky...@hotmail.com wrote:

 Hi Alexey,



  All,
 
  I've tripled the default buffer size, doubled maxconn and added
  accept invalid http request from client and server. This got rid of a
  large number of the 400 ' s but not all. Any ideas what it could be?
  There's nothing else specific in the logs and haproxy-status is all
  good.


 First of all, let's decode the original error message.

 In the log we see PRNN, which means (according to the docs [1]):
 P : the session was prematurely aborted by the proxy, because of a
 connection limit enforcement, because a DENY filter was matched,
 because of a security check which detected and blocked a dangerous
 error in server response which might have caused information leak
 (eg: cacheable cookie).


 Subsequent characters (RNN) don't really matter at that point.

 To find out why exactly the proxy decided to close the connection, I
 suggest you enable the unix admin socket and provide show errors [3]
 output from there. This will tell us more about the last error.
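
 The two steps Lukas describes boil down to one config line and one command;
 a sketch assuming the socket path:

```
# in haproxy.cfg, global section:
stats socket /var/run/haproxy.sock level admin

# then, after a reload:
$ echo "show errors" | socat stdio /var/run/haproxy.sock
```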

 Also, you need to provide your configuration, especially timeout, acls
 maxconn values etc. Feel free to obfuscate hostnames and IP addresses,
 but make sure that everything else remains intact.



 Regards,

 Lukas



 [1] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.5
 [2] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2
 [3]
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-show%20errors





Re: haproxy 1.5.8 segfault

2014-11-21 Thread Alexey Zilber
Hi Willy,
  Any ETA for the next release that incorporates this bugfix, or should I
build from current source?

Thanks!
Alex

On Fri, Nov 21, 2014 at 2:35 PM, Willy Tarreau w...@1wt.eu wrote:

 Hi Godbach!

 On Fri, Nov 21, 2014 at 11:02:52AM +0800, Godbach wrote:
  Hi Willy,
 
  On 2014/11/19 2:31, Willy Tarreau wrote:
  On Tue, Nov 18, 2014 at 08:23:57PM +0200, Denys Fedoryshchenko wrote:
  Thanks! Seems working for me :) Will test more tomorrow.
  
  There's no reason it would not, otherwise we'd have a different bug.
  When I'm unsure I ask for testing before committing, but here there
  was no doubt once the issue was understood :-)
  
  Willy
  
  
 
  Such a quick fix. Cool! :-)
 
  In fact, I have also experienced this kind of issue before. Of course it was
  not caused by the original HAProxy code but by my own code added to HAProxy.
  However, the real reason is the same as this issue: the memory allocated
  from the pool was not reset properly.

 And that's intended. pool_alloc2() works exactly like malloc() : the caller
 is responsible for initializing it if needed.

  So I have an idea for this kind of issue: how about having HAProxy reset
  the memory allocated from the pool directly in pool_alloc2()?
 
  If we worry that performance may be decreased by calling memset() in each
  pool_alloc2(), a new option allowing the user to enable or disable the
  memset() in pool_alloc2() could be added to HAProxy.

 We only do that (partially) when using memory poisoning/debugging (to
 reproduce issues more easily). Yes, performance suffers a lot when doing so,
 especially when using large buffers, and people using large buffers are
 the ones who care the most about performance.

 I'd agree to slightly change pool_alloc2() to *always* memset the area
 when memory poisoning is in place, so that developers can more easily
 detect if they missed something. But I don't want to use memset all the
 time as a brown paper bag so that developers don't have to be careful.
 We're missing some doc of course, and people can get trapped from time
 to time (and I do as well), so this is what we must improve, rather than
 having the code hide the bugs.
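
 The point here, that a pool hands back recycled memory which the caller must
 initialize, can be sketched outside of C with a toy free-list pool
 (illustrative only, not HAProxy's actual pool API):

```python
class Pool:
    """Toy free-list pool: get() recycles released objects as-is."""
    def __init__(self, factory):
        self.factory = factory
        self.free = []

    def get(self):
        # Like pool_alloc2()/malloc(): no reset of recycled storage
        return self.free.pop() if self.free else self.factory()

    def put(self, obj):
        self.free.append(obj)

class Txn:
    def __init__(self):
        self.captures = None

pool = Pool(Txn)
t1 = pool.get()
t1.captures = ["stale capture"]
pool.put(t1)

t2 = pool.get()
# t2 is the recycled t1: it still carries the old captures, exactly the
# class of bug described above; the caller must reset the fields itself.
assert t2.captures == ["stale capture"]
t2.captures = None
```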

 What is really needed is that each field of session/transaction is
 documented : who uses it, when, and who's responsible for initializing
 it. Here with the capture, I missed the fact that the captures are part
 of a transaction, thus were initialized by the HTTP code, so when using
 tcp without http, there's an issue... A simple comment like
/* initialized by http_init_txn() */
 in front of the capture field in the struct would have been enough to
 avoid this. This is what must be improved. We also need to write
 developer guidelines to remind people to update the doc/comments when
 modifying the API. I know it's not easy, I miss a number of them as
 well.

 Cheers,
 Willy





Significant number of 400 errors..

2014-11-21 Thread Alexey Zilber
Hi All,

  I'm running v1.5.4 and I'm seeing a large amount of 400 BADREQ errors:

Nov 21 22:46:06 srvr1 haproxy[28293]: 10.10.10.10:51184
[21/Nov/2014:22:45:50.323] www www/NOSRV -1/-1/-1/-1/16350 400 187 - -
PRNN 445/445/0/0/3 0/0 {|||} BADREQ

Nov 21 22:47:46 srvr1 haproxy[28293]: 10.10.10.10:54924
[21/Nov/2014:22:47:43.572] www www/NOSRV -1/-1/-1/-1/2680 400 187 - -
PRNN 366/366/0/0/3 0/0 {|||} BADREQ

Nov 21 22:48:20 srvr1 haproxy[28293]: 10.10.10.10:47761
[21/Nov/2014:22:48:20.707] www www/NOSRV -1/-1/-1/-1/0 400 187 - - PRNN
417/417/0/0/3 0/0 {|||} BADREQ

Nov 21 22:52:55 srvr1 haproxy[28457]: 10.10.10.10:58249
[21/Nov/2014:22:52:51.616] www www/NOSRV -1/-1/-1/-1/3536 400 187 - -
PRNN 534/534/2/0/3 0/0 {|||} BADREQ

Nov 21 22:55:09 srvr1 haproxy[28457]: 10.10.10.10:49728
[21/Nov/2014:22:55:06.827] www www/NOSRV -1/-1/-1/-1/2819 400 187 - -
PRNN 381/381/1/0/3 0/0 {|||} BADREQ

Nov 21 22:55:12 srvr1 haproxy[28457]: 10.10.10.10:49727
[21/Nov/2014:22:55:06.828] www www/NOSRV -1/-1/-1/-1/5766 400 187 - -
PRNN 368/368/1/0/3 0/0 {|||} BADREQ

Nov 21 23:00:01 srvr1 haproxy[28457]: 10.10.10.10:64153
[21/Nov/2014:22:59:08.964] www www/NOSRV -1/-1/-1/-1/52680 400 187 - -
PRNN 409/409/1/0/3 0/0 {|||} BADREQ



We're seeing about 15 per minute.  Legitimate users are reporting this
error as well, so it doesn't seem to be a malicious user.  Any ideas?


-Alex


Re: Spam to this list?

2014-09-05 Thread Alexey Zilber
I think most of us are getting soft being gmail users.   I haven't seen the
amount of spam that hit my inbox today in probably years.   Personally, I
would go with an automated sub/unsub list.  People who need help will
usually take the effort to subscribe to a list, so I'm not sure what the
advantage is of having an open list.  It's like having an open SMTP relay.
  I think at the very least implementing a DNSBL might help (at least for the
spam received today) if you decide not to go the subscription route...

-Alex


On Fri, Sep 5, 2014 at 11:19 PM, Willy Tarreau w...@1wt.eu wrote:

 On Fri, Sep 05, 2014 at 04:32:55PM +0200, Ghislain wrote:
  hi,
 
   this is not spam but the bad behavior of a person who is
  subscribing this mailing list's address to newsletters just to annoy
  people.
   This guy must be laughing like mad about what a loser he is, but
  no spam filter will prevent this; there is no filter against human
  stupidity that is legal in our country.

 That's precisely the point unfortunately :-/

 And the other annoying part is those recurring claims from people who
 know better than anyone else and pretend that they can magically run
 a mail server with no spam. That's simply nonsense and utterly false. Or
 they have such aggressive filters that they can't even receive the
 complaints
 from their users when mails are eaten. Everyone can do that, it's enough
 to alias haproxy@formilux.org to /dev/null to get the same effect!

 But the goal of the ML is not to block the maximum amount of spam but to
 ensure optimal delivery to its subscribers. As soon as you add some level
 of filtering, you automatically get some increasing amount of false
 positive.

 We even had to drop one filter a few months ago because some gmail users
 could not post anymore.

 I'm open to suggestions, provided that :

1) it doesn't add *any* burden on our side (scalability means that the
   processing should be distributed, not centralized)

2) it doesn't block any single valid e-mail, even from non-subscriber

3) it doesn't require anyone to resubscribe nor change their ingress
   filters to get the mails into the same box.

4) it doesn't add extra delays to posts (eg: no grey-listing) because
   that's really painful for people who post patches and are impatient
   to see them reach the ML.

 I'm always amazed at how annoyed people are with spam in 2014. Spam is part
 of the internet experience and is so ubiquitous that I think these people
 have been living under a rock. Probably those people consider that we
 should also run blood tests on people who want to jump onto a bus, to
 ensure that they don't come in with any minor disease, in the hope that all
 diseases will finally disappear. I'm instead in the camp of those who
 consider that training the population is the best resistance, and I think
 the history of all living beings has already proved me right.

 I probably received 5 more spams while writing this, and who cares!

 Best regards,
 Willy





Re: Spam to this list?

2014-09-05 Thread Alexey Zilber
On Sat, Sep 6, 2014 at 7:09 AM, Willy Tarreau w...@1wt.eu wrote:

 snip
 Now please let me remind me something important : this list is provided
 for free to make it easier for developers and users to exchange together.
 It's managed on spare time, it's fast and free to subscribe just like it's
 fast and free to unsubscribe. Some users do indeed subscribe, participate
 to a thread then unsubscribe. There have been about 3 times more
 subscriptions than current subscribers, many of which coming back from
 time to time. So for people for whom this amount of spam is a terrible
 experience, there are a lot of options. However when you're on the service
 side of things, options to fight spam *always* come with the extra burden of
 dealing with false positives. So the situation is clearly far from being
 perfect, but it used to be reasonably well balanced for 7 years now. Only
 very recently
 we started to get subscribed to several lists and probably the address has
 better circulated to spammers resulting in an increase in the amount of
 spam.
 But I certainly won't spend as much time dealing with anti-spam problems as
 I already spent in this sterile thread.

 Willy



Willy,

   Please don't take our issues with the spam personally.   Everyone
appreciates the list and the help it's provided, so keep in mind people are
just voicing their opinions (and most of us have strong opinions) regarding
spam.
  Going over the previous threads, IMHO if you don't have greylisting,
maybe that's the way to go?  While my opinion still stands that the only
real way to combat this spam is to make it a subscribe-to-post list, if
you're using greylisting, why not build it so that unsubscribed users have
a weighted value, so users who post a lot are automatically whitelisted
(before passing their email through SpamAssassin or another such spam
scanner)?

Thanks Willy!
-Alex


Filing bugs.. found a bug in 1.5.1 (http-send-name-header is broken)

2014-07-07 Thread Alexey Zilber
Hey guys,

  I couldn't find a bug tracker for HAProxy, and I found a serious bug in
1.5.1 that may be a harbinger of other broken things in header manipulation.

 The bug is:

  I added 'http-send-name-header sfdev1' under the defaults section of
haproxy.cfg.

When we would do a POST with that option enabled, we would get 'sf'
injected into a random variable.   When posting with a time field like
'07/06/2014 23:43:01' we would get back '07/06/2014 23:43:sf' consistently.
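
For anyone hitting the same thing: http-send-name-header takes the name of a header to add, with the target server's name as its value, so a hypothetical minimal config to reproduce would look like this (header name and addresses are illustrative; per the report above, the 1.5.1 corruption showed up on requests with bodies, i.e. POSTs):

```
backend app
    http-send-name-header X-Target-Server
    server sfdev1 10.0.0.1:80 check
```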

Thanks,
-Alex