
Show: h-app-proxy – Application server inside haproxy

2018-05-11 Thread Tim Düsterhus
Hi list,

I recently experimented with the Lua API to check out its capabilities
and wanted to show off the results:

I implemented a very simple short URL service entirely in haproxy, with
Redis as its backend. No backend service needed :-)

Thanks to Thierry for his Redis Connection Pool implementation:
http://blog.arpalert.org/2018/02/haproxy-lua-redis-connection-pool.html

Thierry, note that you made a small typo in your pool: r.release(conn)
in renew should read r:release(conn).

Blog post  : https://bl.duesterhus.eu/20180511/
GitHub : https://github.com/TimWolla/h-app-roxy
Live Demo  : https://bl.duesterhus.eu/20180511/demo/DWhxJf2Gpt
Hacker News: https://news.ycombinator.com/item?id=17049715

Best regards
Tim Düsterhus

PS: Don't use this at home or at work even :-)
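
For list readers new to the Lua API, a service of this kind is registered
roughly like this (a hypothetical sketch in the style of h-app-roxy, not its
actual code; the service name and redirect target are illustrative, and a real
implementation would look the key up in Redis):

```lua
-- Hypothetical sketch of a haproxy Lua service, loaded with "lua-load".
-- Names and logic are illustrative only, not the h-app-roxy code.
core.register_service("shorturl", "http", function(applet)
    local key = applet.path:sub(2)  -- strip the leading "/"
    -- a real implementation would resolve the key via Redis here
    applet:set_status(302)
    applet:add_header("Location", "https://example.com/" .. key)
    applet:start_response()
    applet:send("")
end)
```

The service is then wired up with `http-request use-service lua.shorturl`
in a frontend, the same way the stats example later in this digest does it.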



[PATCH] BUG/MINOR: lua: Socket.send threw runtime error: 'close' needs 1 arguments.

2018-05-11 Thread sada
The function `hlua_socket_close` expected exactly one argument on the Lua
stack, but when it was called from `hlua_socket_write_yield` the Lua stack
held 3 arguments, so `hlua_socket_close` threw the exception with the
message "'close' needs 1 arguments".

This patch introduces a new helper function, `hlua_socket_close_helper`,
which drops the Lua stack argument-count check and only checks that the
first argument is a socket.

This fix should be backported to 1.8, 1.7 and 1.6.
---
 src/hlua.c | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/src/hlua.c b/src/hlua.c
index d07e8d67..8cc30513 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -1629,14 +1629,12 @@ __LJMP static int hlua_socket_gc(lua_State *L)
 /* The close function send shutdown signal and break the
  * links between the stream and the object.
  */
-__LJMP static int hlua_socket_close(lua_State *L)
+__LJMP static int hlua_socket_close_helper(lua_State *L)
 {
struct hlua_socket *socket;
struct appctx *appctx;
struct xref *peer;
 
-   MAY_LJMP(check_args(L, 1, "close"));
-
socket = MAY_LJMP(hlua_checksocket(L, 1));
 
/* Check if we run on the same thread than the xreator thread.
@@ -1659,6 +1657,14 @@ __LJMP static int hlua_socket_close(lua_State *L)
return 0;
 }
 
+/* The close function calls close_helper.
+ */
+__LJMP static int hlua_socket_close(lua_State *L)
+{
+   MAY_LJMP(check_args(L, 1, "close"));
+   return hlua_socket_close_helper(L);
+}
+
 /* This Lua function assumes that the stack contain three parameters.
  *  1 - USERDATA containing a struct socket
  *  2 - INTEGER with values of the macro defined below
@@ -1990,7 +1996,7 @@ static int hlua_socket_write_yield(struct lua_State 
*L,int status, lua_KContext
if (len == -1)
s->req.flags |= CF_WAKE_WRITE;
 
-   MAY_LJMP(hlua_socket_close(L));
+   MAY_LJMP(hlua_socket_close_helper(L));
lua_pop(L, 1);
lua_pushinteger(L, -1);
xref_unlock(&socket->xref, peer);
-- 
2.17.0
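
The shape of the fix — a public entry point that validates the argument
count, delegating to a helper that skips the check — can be sketched outside
the Lua C API like this (illustrative names and simplified stack handling;
the real functions operate on a `lua_State`):

```c
/* Illustrative sketch of the refactoring in this patch: the public,
 * script-facing function enforces the argument count, while the helper
 * does the actual work and may be called internally regardless of how
 * many values happen to be on the stack at that moment. */
int socket_close_helper(void)
{
    /* ...shut the connection down; no argument-count check here... */
    return 0;
}

int socket_close(int nargs)
{
    if (nargs != 1)
        return -1;  /* the real code raises "'close' needs 1 arguments" */
    return socket_close_helper();
}
```

This is why `hlua_socket_write_yield`, which runs with 3 values on the
stack, can now close the socket without tripping the check.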








Re: 1.8.8 & 1.9dev, lua, xref_get_peer_and_lock hang / 100% cpu usage after restarting haproxy a few times

2018-05-11 Thread PiBa-NL

Hi Thierry,

Okay, I found a simple reproduction with tcploop, with a 6-second delay in
there and a short sleep before calling kqueue.


./tcploop 81 L W N20 A R S:"response1\r\n" R P6000 S:"response2\r\n" R [ 
F K ]


 gettimeofday(&before_poll, NULL);
+    usleep(100);
 status = kevent(kqueue_fd[tid], // int kq

Together with the attached config the issue is reproduced every time the 
/myapplet url is requested.


Output as below:
:stats.clihdr[0007:]: Accept-Language: 
nl-NL,nl;q=0.9,en-US;q=0.8,en;q=0.7

[info] 130/195936 (76770) : Wait for it..
[info] 130/195937 (76770) : Wait response 2..
  xref_get_peer_and_lock xref->peer == 1

Hope this helps to come up with a solution..

Thanks in advance,
PiBa-NL (Pieter)

Op 9-5-2018 om 19:47 schreef PiBa-NL:

Hi Thierry,

Op 9-5-2018 om 18:30 schreef Thierry Fournier:
It seems a dead lock, but you observe a loop. 
Effectively it is a deadlock, it keeps looping over these few lines of 
code below from xref.h 
.. 
The XCHG just swaps the 2 values (both are '1') and continues on, then 
the local==BUSY check is true it loops and swaps 1 and 1 again, and 
the circle continues..


Thanks for looking into it :) I'll try to get a 'simpler' reproduction
with some well-placed sleep() calls as you suggest.

Regards,
PiBa-NL

http://git.haproxy.org/?p=haproxy.git;a=blob;f=include/common/xref.h;h=6dfa7b62758dfaebe12d25f66aaa858dc873a060;hb=29d698040d6bb56b29c036aeba05f0d52d8ce94b 
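
The spin-on-XCHG behaviour described above can be demonstrated with plain
C11 atomics. This is a simplified sketch of the xref locking idea, not the
haproxy code: the loop is bounded so the livelock is observable instead of
hanging, and `XREF_BUSY` and the function name are illustrative:

```c
#include <stdatomic.h>
#include <stdint.h>

#define XREF_BUSY ((uintptr_t)1)

/* Simplified sketch of xref_get_peer_and_lock(): lock by atomically
 * swapping XREF_BUSY into the slot. If the slot already contains
 * XREF_BUSY - e.g. it was left at 1, as in the bug report - the XCHG
 * swaps 1 for 1 on every iteration and the caller spins forever. This
 * version gives up after max_spins so the livelock can be observed. */
int bounded_lock(atomic_uintptr_t *slot, int max_spins)
{
    for (int i = 0; i < max_spins; i++) {
        uintptr_t local = atomic_exchange(slot, XREF_BUSY);
        if (local != XREF_BUSY)
            return 1;   /* locked; 'local' holds the previous value */
        /* local == XREF_BUSY: swapped 1 for 1, loop and try again */
    }
    return 0;           /* the real code would never return here */
}
```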



function myapplet(applet)

  core.Info("Wait for it..")
  core.sleep(1)

  local result = ""
con = core.tcp()
con:settimeout(1)
con:connect("127.0.0.1",81)
con:send("Test1\r\n")
r = con:receive("*l")
result = result .. tostring(r)
con:send("Test\r\n")
  core.Info("Wait response 2..")
r2 = con:receive("*l")
result = result .. tostring(r2)
  core.Info("close..")
con:close()
  core.Info("DONE")

response = "Finished"
applet:add_header("Server", "haproxy/webstats")
applet:add_header("Content-Type", "text/html")
applet:start_response()
applet:send(response)

end

core.register_service("myapplet", "http", myapplet)
global
  lua-load /root/haproxytest/hang_timeout_close.lua

defaults
mode http
timeout connect 5s
timeout client 30s
timeout server 60s
  
frontend stats
bind *:80
stats enable
stats admin if TRUE
stats refresh 1s

  acl myapplet path -m beg /myapplet
  http-request use-service lua.myapplet if myapplet

 include/common/xref.h | 4 
 src/ev_kqueue.c   | 1 +
 2 files changed, 5 insertions(+)

diff --git a/include/common/xref.h b/include/common/xref.h
index 6dfa7b6..e6905a1 100644
--- a/include/common/xref.h
+++ b/include/common/xref.h
@@ -25,6 +25,10 @@ static inline void xref_create(struct xref *xref_a, struct 
xref *xref_b)
 
 static inline struct xref *xref_get_peer_and_lock(struct xref *xref)
 {
+   if (xref->peer == 1) {
+   printf("  xref_get_peer_and_lock xref->peer == 1 \n");
+   }
+
struct xref *local;
struct xref *remote;
 
diff --git a/src/ev_kqueue.c b/src/ev_kqueue.c
index bf7f666..732f20d 100644
--- a/src/ev_kqueue.c
+++ b/src/ev_kqueue.c
@@ -145,6 +145,7 @@ REGPRM2 static void _do_poll(struct poller *p, int exp)
 
fd = global.tune.maxpollevents;
gettimeofday(&before_poll, NULL);
+   usleep(100);
status = kevent(kqueue_fd[tid], // int kq
NULL,  // const struct kevent *changelist
0, // int nchanges


Re: Eclipse 403 access denied

2018-05-11 Thread PiBa-NL

Hi Norman,

Op 11-5-2018 om 19:36 schreef Norman Branitsky:


After upgrading to the latest version of Eclipse and installing our 
custom Eclipse Plugin,


my developers are now being blocked by HAProxy.

Here’s a sample of the problem:

May 11 15:03:37 localhost haproxy[13089]: 66.192.142.9:43041 
[11/May/2018:15:03:37.932] main_ssl~ 
ssl_backend-etkdev/i-09120e3b
0/0/1/24/25 200 436 - - --NN 52/52/0/0/0 0/0 "GET 
/entellitrak/private/api/workspaces/query/current HTTP/1.1"


May 11 15:03:38 localhost haproxy[13089]: 66.192.142.9:56417 
[11/May/2018:15:03:38.117] main_ssl~ main_ssl/
0/-1/-1/-1/0 403 188 - - PR-- 50/50/0/0/0 0/0 "POST 
/entellitrak/private/api/packages/query/workspace/t.jx HTTP/1.1"




" PR   The proxy blocked the client's HTTP request, either because of an
  invalid HTTP syntax, in which case it returned an HTTP 400 error to
  the client, or because a deny filter matched, in which case it
  returned an HTTP 403 error."


So, is the 403 because the backend server is unknown in the 2nd request?

Or is the backend server unknown because of the 403?

This is the beginning of the JSON payload in the POST statement:

ID: 24

Address: 
https://etkdev.wisits.org/entellitrak/private/api/packages/query/workspace/thomas.jackson


Http-Method: POST

Content-Type: application/json

Headers: {Authorization=[Basic dGhvbWFzLmphY2tzb246UGFzc3dvcmQxIQ==], 
Content-Type=[application/json], Accept=[application/json]}



Could it be that the 'Host' header is missing? It is required by HTTP/1.1.
Also, the Authorization header above can be decoded; be careful what
internal/secure information is posted to the list.
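
As noted, Basic auth is base64, not encryption. A minimal standalone decoder
(a sketch with no error handling; real code should use a vetted library)
shows that such a header yields `username:password` in clear text:

```c
#include <string.h>

/* Decode a base64 string into out (which must be large enough); returns
 * the number of bytes written. Padding '=' ends the input; characters
 * outside the alphabet are skipped. Sketch only - no error handling. */
int b64_decode(const char *in, unsigned char *out)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    unsigned int buf = 0;
    int bits = 0, n = 0;

    for (; *in && *in != '='; in++) {
        const char *p = strchr(tbl, *in);
        if (!p)
            continue;               /* skip whitespace / invalid chars */
        buf = (buf << 6) | (unsigned int)(p - tbl);
        bits += 6;
        if (bits >= 8) {            /* a full byte is available */
            bits -= 8;
            out[n++] = (unsigned char)((buf >> bits) & 0xff);
        }
    }
    out[n] = '\0';
    return n;
}
```

Running it on the `Authorization: Basic` value from the log above reveals the
account name and password, which is exactly why posting such headers to a
public list is risky.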


Payload: 
["package.fileServer.c0413431-1236-4825-90f1-5f5be131a237","package.rfWorkflowParameterJavascript.a227ee0b-6b59-4643-b1f8-1ff203948a24",


HAProxy version info:

[WIIRIS-LB-240]# /usr/local/sbin/haproxy -vv

HA-Proxy version 1.7.9 2017/08/18

Copyright 2000-2017 Willy Tarreau 

Build options :

  TARGET  = linux2628

  CPU = generic

  CC  = gcc

  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv


  OPTIONS = USE_SLZ=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :

  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes

Built with libslz for stateless compression.

Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")


Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013

Running on OpenSSL version : OpenSSL 1.0.2l  25 May 2017 (VERSIONS 
DIFFER!)



P.S. Running against a different OpenSSL version than the one used at build time is a bad thing..

Regards,

PiBa-NL



Eclipse 403 access denied

2018-05-11 Thread Norman Branitsky
After upgrading to the latest version of Eclipse and installing our custom 
Eclipse Plugin,
my developers are now being blocked by HAProxy.
Here's a sample of the problem:

May 11 15:03:37 localhost haproxy[13089]: 66.192.142.9:43041 
[11/May/2018:15:03:37.932] main_ssl~ ssl_backend-etkdev/i-09120e3b
0/0/1/24/25 200 436 - - --NN 52/52/0/0/0 0/0 "GET 
/entellitrak/private/api/workspaces/query/current HTTP/1.1"

May 11 15:03:38 localhost haproxy[13089]: 66.192.142.9:56417 
[11/May/2018:15:03:38.117] main_ssl~ main_ssl/
0/-1/-1/-1/0 403 188 - - PR-- 50/50/0/0/0 0/0 "POST 
/entellitrak/private/api/packages/query/workspace/t.jx HTTP/1.1"

So, is the 403 because the backend server is unknown in the 2nd request?
Or is the backend server unknown because of the 403?

This is the beginning of the JSON payload in the POST statement:

ID: 24

Address: 
https://etkdev.wisits.org/entellitrak/private/api/packages/query/workspace/thomas.jackson

Http-Method: POST

Content-Type: application/json

Headers: {Authorization=[Basic dGhvbWFzLmphY2tzb246UGFzc3dvcmQxIQ==], 
Content-Type=[application/json], Accept=[application/json]}

Payload: 
["package.fileServer.c0413431-1236-4825-90f1-5f5be131a237","package.rfWorkflowParameterJavascript.a227ee0b-6b59-4643-b1f8-1ff203948a24",

HAProxy version info:

[WIIRIS-LB-240]# /usr/local/sbin/haproxy -vv

HA-Proxy version 1.7.9 2017/08/18

Copyright 2000-2017 Willy Tarreau 



Build options :

  TARGET  = linux2628

  CPU = generic

  CC  = gcc

  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv

  OPTIONS = USE_SLZ=1 USE_OPENSSL=1 USE_PCRE=1



Default settings :

  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200



Encrypted password support via crypt(3): yes

Built with libslz for stateless compression.

Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")

Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013

Running on OpenSSL version : OpenSSL 1.0.2l  25 May 2017 (VERSIONS DIFFER!)


[PATCH 0/2] Re: Priority based queuing

2018-05-11 Thread Patrick Hemmer
Ok, so here is the full submission for priority based queuing.

Notes since previous update:

I wasn't really able to optimize the tree search any further. I tried a few
things, but nothing made a measurable performance difference.

I added a warning message and documentation making clear the issues with
timestamp wrapping.
Though one thing that might not be completely obvious is that even if
the user does not configure `set-priority-offset` at all, they're still
susceptible to the wrapping issue as the priority is the queue key
whether priority is adjusted or not.

The implementation of the %sq (srv_queue) and %bq (backend_queue) log
fields was changed to keep the description accurate. The description is
"number of requests which were processed before this one". The previous
implementation just stored the size of the queue at the time the
connection was queued. Since we can inject a connection into the middle
of the queue, this no longer works. Now we keep a count of dequeued
connections, and take the difference between when the connection was
queued, and then dequeued. This also means the value will be slightly
different even for users who don't use priority, as the previous method
would have included connections which closed without being processed.

I added sample fetches for retrieving the class/offset of the current
transaction. I think it might be beneficial to add some other fetches
for tracking the health of the queue, such as average class/offset, or
an exponential moving average of the class/offset for requests added to
the queue, requests processed, and requests which closed/timed out. But
this is just more stuff the code would have to store, so unsure if
they're worth it.


I wasn't convinced the 64-bit key was a bad idea, so I implemented the
idea with a 12/52 split and an absolute timestamp.  On my system (which
is 64-bit) the performance is about 20% faster. The code is much
simpler. And it also solves the limitations and issues with wrapping.
The patch for this is included in case it's of interest.

-Patrick
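
The 12/52 split can be sketched as follows (my reading of the idea, not the
patch's actual code: the biased priority class in the top 12 bits, an
absolute millisecond timestamp in the low 52, so a plain unsigned compare
orders first by class, then by time):

```c
#include <stdint.h>

/* Sketch of a 64-bit queue key with a 12/52 split: class in the top
 * 12 bits (biased so lower classes sort first), absolute timestamp in
 * milliseconds in the low 52 bits. 2^52 ms is on the order of 140,000
 * years, which is why no wrapping logic is needed, unlike the 32-bit
 * 12/20 scheme of the first patch. */
uint64_t make_key(int prio_class, uint64_t abs_ms)
{
    uint64_t c = (uint64_t)(prio_class + 2048) & 0xfff;   /* -2048..2047 */
    return (c << 52) | (abs_ms & 0xfffffffffffffULL);     /* 52-bit mask */
}
```

With this layout the tree walk degenerates to a plain `eb64_first`, which is
why the code gets so much simpler.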


Patrick Hemmer (2):
  MEDIUM: add set-priority-class and set-priority-offset
  use a 64-bit int with absolute timestamp for priority-offset

 doc/configuration.txt  |  38 +++
 doc/lua-api/index.rst  |  18 
 include/types/proxy.h  |   3 +-
 include/types/queue.h  |   2 +-
 include/types/server.h |   3 +-
 include/types/stream.h |   7 +-
 src/cfgparse.c |  15 +++
 src/hlua.c |  69 +
 src/log.c  |   4 +-
 src/proto_http.c   |   4 +-
 src/proxy.c|   2 +-
 src/queue.c| 261 +
 src/server.c   |   2 +-
 src/stream.c   |  10 +-
 14 files changed, 366 insertions(+), 72 deletions(-)

-- 
2.16.3



[PATCH 2/2] use a 64-bit int with absolute timestamp for priority-offset

2018-05-11 Thread Patrick Hemmer
---
 include/types/queue.h |   2 +-
 src/hlua.c|   5 --
 src/queue.c   | 144 +++---
 3 files changed, 33 insertions(+), 118 deletions(-)


diff --git a/include/types/queue.h b/include/types/queue.h
index 03377da69..5f4693942 100644
--- a/include/types/queue.h
+++ b/include/types/queue.h
@@ -35,7 +35,7 @@ struct pendconn {
 	struct stream *strm;
 	struct proxy  *px;
 	struct server *srv;/* the server we are waiting for, may be NULL */
-	struct eb32_node node;
+	struct eb64_node node;
 	__decl_hathreads(HA_SPINLOCK_T lock);
 };
 
diff --git a/src/hlua.c b/src/hlua.c
index 6e727648d..dd7311ff8 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -5351,11 +5351,6 @@ __LJMP static int hlua_txn_set_priority_offset(lua_State *L)
 	htxn = MAY_LJMP(hlua_checktxn(L, 1));
 	offset = MAY_LJMP(luaL_checkinteger(L, 2));
 
-	if (offset < -0x7ffff)
-		offset = -0x7ffff;
-	else if (offset > 0x7ffff)
-		offset = 0x7ffff;
-
 	htxn->s->priority_offset = offset;
 
 	return 0;
diff --git a/src/queue.c b/src/queue.c
index cf445f97d..0f86fdb36 100644
--- a/src/queue.c
+++ b/src/queue.c
@@ -14,7 +14,7 @@
 #include 
 #include 
 #include 
-#include <eb32tree.h>
+#include <eb64tree.h>
 
 #include 
 #include 
@@ -26,23 +26,6 @@
 #include 
 
 
-#define NOW_OFFSET_BOUNDARY() (now_ms - (TIMER_LOOK_BACK >> 12) & 0xfffff)
-#define KEY_CLASS(key) (key & 0xfff00000)
-#define KEY_OFFSET(key) (key & 0x000fffff)
-#define KEY_CLASS_BOUNDARY(key) (KEY_CLASS(key) | NOW_OFFSET_BOUNDARY())
-
-static u32 key_incr(u32 key) {
-	u32 key_next = key + 1;
-
-	if (KEY_CLASS(key_next) != KEY_CLASS(key))
-		key_next = KEY_CLASS(key_next);
-	else if (key_next == KEY_CLASS_BOUNDARY(key))
-		key_next += 0x100000;
-
-	return key_next;
-}
-
-
 struct pool_head *pool_head_pendconn;
 
 /* perform minimal intializations, report 0 in case of error, 1 if OK. */
@@ -100,57 +83,7 @@ static void pendconn_unlink(struct pendconn *p)
 		p->px->nbpend--;
 	}
	HA_ATOMIC_SUB(&p->px->totpend, 1);
-	eb32_delete(&p->node);
-}
-
-/* Retrieve the next pending connection from the given pendconns ebtree with
- * key >= min.
- *
- * See pendconn_add for an explanation of the key & queue behavior.
- *
- * This function handles all the cases where due to the timestamp wrapping
- * the first node in the tree is not the highest priority.
- */
-static struct pendconn *pendconn_next(struct eb_root *pendconns, u32 min) {
-	struct eb32_node *node, *node2 = NULL;
-	u32 max;
-
-	// min is inclusive
-	// max is exclusive
-	max = KEY_CLASS_BOUNDARY(min);
-
-	node = eb32_lookup_ge(pendconns, min);
-
-	if (node) {
-		if (node->key < max || (max <= min && KEY_CLASS(node->key) == KEY_CLASS(min)))
-			return eb32_entry(node, struct pendconn, node);
-		if (KEY_CLASS(node->key) != KEY_CLASS(min))
-			node2 = node;
-		if (max > min)
-			goto class_next;
-	}
-
-	if (max <= min)
-		node = eb32_lookup_ge(pendconns, KEY_CLASS(min));
-	if (!node)
-		return NULL;
-	if (node->key < max && node->key < min)
-		return eb32_entry(node, struct pendconn, node);
-
-class_next:
-	if (node2) {
-		min = KEY_CLASS_BOUNDARY(node2->key);
-		if (node2->key >= min)
-			return eb32_entry(node2, struct pendconn, node);
-	} else
-		min = KEY_CLASS_BOUNDARY(min) + 0x100000;
-	node = eb32_lookup_ge(pendconns, min);
-	if (node && KEY_CLASS(node->key) == KEY_CLASS(min))
-		return eb32_entry(node, struct pendconn, node);
-	if (node2)
-		return eb32_entry(node2, struct pendconn, node);
-
-	return NULL;
+	eb64_delete(&p->node);
 }
 
 /* Process the next pending connection from either a server or a proxy, and
@@ -174,8 +107,8 @@ class_next:
 static int pendconn_process_next_strm(struct server *srv, struct proxy *px)
 {
 	struct pendconn *p = NULL, *pp = NULL;
+	struct eb64_node *node;
 	struct server   *rsrv;
-	u32 pkey, ppkey;
 	int remote;
 
 	rsrv = srv->track;
@@ -183,18 +116,26 @@ static int pendconn_process_next_strm(struct server *srv, struct proxy *px)
 		rsrv = srv;
 
 	if (srv->nbpend) {
-		for (p = pendconn_next(&srv->pendconns, NOW_OFFSET_BOUNDARY());
-		 p;
-		 p = pendconn_next(&srv->pendconns, key_incr(p->node.key)))
+		for (node = eb64_first(&srv->pendconns);
+		 node;
+		 node = eb64_lookup_ge(&srv->pendconns, node->key + 1)) {
+			p = eb64_entry(node, struct pendconn, node);
			if (!HA_SPIN_TRYLOCK(PENDCONN_LOCK, &p->lock))
 break;
+		}
+		if (!node)
+			p = NULL;
 	}
 	if (px->nbpend) {
-		for (pp = pendconn_next(&px->pendconns, NOW_OFFSET_BOUNDARY());
-		 pp;
-		 pp = pendconn_next(&px->pendconns, key_incr(pp->node.key)))
+		for (node = eb64_first(&px->pendconns);
+		 node;
+		 node = eb64_lookup_ge(&px->pendconns, node->key + 1)) {
+			pp = eb64_entry(node, struct pendconn, node);
			if (!HA_SPIN_TRYLOCK(PENDCONN_LOCK, &pp->lock))
 break;
+		}
+		if (!node)
+			pp = NULL;
 	}
 
 	if (!p && !pp)
@@ -206,23 +147,7 @@ static int pendconn_process_next_strm(struct server *srv, struct proxy *px)
 		p = pp;
 		goto pendconn_found;
 	}
-	if (KEY_CLASS(p->node.key) < KEY_CLASS(pp->node.key)) {
-		

[PATCH 1/2] MEDIUM: add set-priority-class and set-priority-offset

2018-05-11 Thread Patrick Hemmer

This adds the set-priority-class and set-priority-offset actions to
http-request and tcp-request content.
The priority values are used when connections are queued to determine
which connections should be served first. The lowest priority class is
served first. When multiple requests from the same class are found, the
earliest (according to queue_time + offset) is served first.
---
 doc/configuration.txt  |  38 ++
 doc/lua-api/index.rst  |  18 +++
 include/types/proxy.h  |   3 +-
 include/types/queue.h  |   2 +-
 include/types/server.h |   3 +-
 include/types/stream.h |   7 +-
 src/cfgparse.c |  15 +++
 src/hlua.c |  74 ---
 src/log.c  |   4 +-
 src/proto_http.c   |   4 +-
 src/proxy.c|   2 +-
 src/queue.c| 345
++---
 src/server.c   |   2 +-
 src/stream.c   |  10 +-
 14 files changed, 453 insertions(+), 74 deletions(-)


diff --git a/doc/configuration.txt b/doc/configuration.txt
index cbea3309d..7ec010811 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -3911,6 +3911,7 @@ http-request { allow | auth [realm ] | redirect  | reject |
   replace-value|
   set-method  | set-path  | set-query  |
   set-uri  | set-tos  | set-mark  |
+  set-priority-class  | set-priority-offset 
   add-acl()  |
   del-acl()  |
   del-map()  |
@@ -4107,6 +4108,24 @@ http-request { allow | auth [realm ] | redirect  | reject |
   downloads). This works on Linux kernels 2.6.32 and above and requires
   admin privileges.
 
+- "set-priority-class" is used to set the queue priority class of the
+  current request. The value must be a sample expression which converts to
+  an integer in the range -2047..2047. Results outside this range will be
+  truncated. The priority class determines the order in which queued
+  requests are processed. Lower values have higher priority.
+
+- "set-priority-offset" is used to set the queue priority timestamp offset
+  of the current request. The value must be a sample expression which
+  converts to an integer in the range -524287..524287. Results outside this
+  range will be truncated. When a request is queued, it is ordered first by
+  the priority class, then by the current timestamp adjusted by the given
+  offset in milliseconds. Lower values have higher priority.
+  Note that the resulting timestamp is only tracked with enough precision
+  for 524,287ms (8m44s287ms). If the request is queued long enough that
+  the adjusted timestamp exceeds this value, it will be misidentified as
+  highest priority. Thus it is important to set "timeout queue" to a value
+  which, when combined with the offset, does not exceed this limit.
+
 - "add-acl" is used to add a new entry into an ACL. The ACL must be loaded
   from a file (even a dummy empty file). The file name of the ACL to be
   updated is passed between parentheses. It takes one argument: ,
@@ -9446,6 +9465,7 @@ tcp-request content  [{if | unless} ]
 - accept : the request is accepted
 - reject : the request is rejected and the connection is closed
 - capture : the specified sample expression is captured
+- set-priority-class  | set-priority-offset 
 - { track-sc0 | track-sc1 | track-sc2 }  [table ]
 - sc-inc-gpc0()
 - sc-inc-gpc1()
@@ -9507,6 +9527,24 @@ tcp-request content  [{if | unless} ]
   The "unset-var" is used to unset a variable. See above for details about
   .
 
+  The "set-priority-class" is used to set the queue priority class of the
+  current request. The value must be a sample expression which converts to an
+  integer in the range -2047..2047. Results outside this range will be
+  truncated. The priority class determines the order in which queued requests
+  are processed. Lower values have higher priority.
+
+  The "set-priority-offset" is used to set the queue priority timestamp offset
+  of the current request. The value must be a sample expression which converts
+  to an integer in the range -524287..524287. Results outside this range will be
+  truncated. When a request is queued, it is ordered first by the priority
+  class, then by the current timestamp adjusted by the given offset in
+  milliseconds. Lower values have higher priority.
+  Note that the resulting timestamp is only tracked with enough precision for
+  524,287ms (8m44s287ms). If the request is queued long enough that the
+  adjusted timestamp exceeds this value, it will be misidentified as highest
+  priority. Thus it is important to set "timeout queue" to a value which,
+  when combined with the offset, does not exceed this limit.
+
   The "send-spoe-group" is used to trigger sending of a group of SPOE
   messages. To do so, the SPOE engine used to send messages must be defined, as
   well as the SPOE 

Re: Cannot handle more than 1,000 clients / s

2018-05-11 Thread Marco Colli
>
> Do you get better results if you'll use http instead of https ?


I already tested it yesterday and the results are pretty much the same
(only a very small improvement, which is expected, but not a substantial
change).

Running top / htop should show if userspace uses all cpu.


 During the test the CPU usage is this:


%Cpu0  : 65.1 us,  5.0 sy,  0.0 ni, 29.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 49.0 us,  6.3 sy,  0.0 ni, 30.3 id,  0.0 wa,  0.0 hi, 14.3 si,  0.0 st
%Cpu2  : 67.7 us,  4.0 sy,  0.0 ni, 24.8 id,  0.0 wa,  0.0 hi,  3.6 si,  0.0 st
%Cpu3  : 72.1 us,  6.0 sy,  0.0 ni, 21.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st


Also note that when I increase the number of CPUs and HAProxy processes I
don't get any benefit on performance (and the CPU usage is much lower).
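
For reference, a multi-process setup of the kind being discussed is
configured along these lines in 1.7/1.8 (an illustrative fragment with
made-up paths, not Marco's actual configuration):

```
global
    nbproc 4
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3

frontend fe
    bind :443 ssl crt /path/to/cert.pem process 1-4
```

If adding processes does not raise throughput while CPU usage stays low, the
bottleneck is usually elsewhere (TLS handshakes, the load generator, or
network limits) rather than haproxy's event loop.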


On Fri, May 11, 2018 at 5:45 PM, Jarno Huuskonen 
wrote:

> Hi,
>
> On Fri, May 11, Marco Colli wrote:
> > Hope that this is the right place to ask.
> >
> > We have a website that uses HAProxy as a load balancer and nginx in the
> > backend. The website is hosted on DigitalOcean (AMS2).
> >
> > The problem is that - no matter the configuration or the server size - we
> > cannot achieve a connection rate higher than 1,000 new connections / s.
> > Indeed we are testing using loader.io and these are the results:
> > - for a session rate of 1,000 clients per second we get exactly 1,000
> > responses per second
> > - for session rates higher than that, we get long response times (e.g.
> 3s)
> > and only some hundreds of responses per second (so there is a bottleneck)
> > https://ldr.io/2I5hry9
>
> Is your load tester using https connections or http (probably https,
> since you have redirect scheme https if !{ ssl_fc }) ? If https and each
> connection renegotiates tls then there's a chance you are testing how
> fast your VM can do tls negot.
>
> Running top / htop should show if userspace uses all cpu.
>
> Do you get better results if you'll use http instead of https ?
>
> -Jarno
>
> --
> Jarno Huuskonen
>


Re: Cannot handle more than 1,000 clients / s

2018-05-11 Thread Marco Colli
>
> Maybe you want to disable it


Thanks for the reply! I have already tried that and it doesn't help.

 Maybe you can run a "top" showing each CPU usage, so we can see how much
> time is spent in SI and in userland


During the test the CPU usage is pretty constant and the values are these:


%Cpu0  : 65.1 us,  5.0 sy,  0.0 ni, 29.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 49.0 us,  6.3 sy,  0.0 ni, 30.3 id,  0.0 wa,  0.0 hi, 14.3 si,  0.0 st
%Cpu2  : 67.7 us,  4.0 sy,  0.0 ni, 24.8 id,  0.0 wa,  0.0 hi,  3.6 si,  0.0 st
%Cpu3  : 72.1 us,  6.0 sy,  0.0 ni, 21.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st


I saw you're doing http-server-close. Is there any good reason for that?


I need to handle different requests from different clients (I am not
interested in keep-alive, since clients usually make just 1 or 2 requests).
So I think that http-server-close doesn't matter, because it only applies to
multiple requests *from the same client*.

The maxconn on your frontend seem too low too compared to your target
> traffic (despite the 5000 will apply to each process).


It is 5,000 * 4 = 20,000, which should be enough for a test with 2,000
clients. In any case I have also tried to increase it to 25,000 per process
and the performance is the same in the load tests.

Last, I would create 4 bind lines, one per process, like this in your
> frontend:
>   bind :80 process 1
>   bind :80 process 2
>

Do you mean bind-process? The HAProxy docs say that when bind-process is
not present it is the same as bind-process all, so I think that it is useless
to write it explicitly.
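For reference, Baptiste's suggestion uses the per-bind `process` keyword (distinct from the proxy-level `bind-process` directive Marco mentions). A sketch, with the frontend name assumed and nbproc 4 as in Marco's setup:

```
# Sketch (nbproc 4 assumed): one listening socket per process, so the
# kernel spreads accepted connections evenly instead of all processes
# competing to accept from a single shared socket.
global
    nbproc 4

frontend fe_web
    maxconn 25000        # applies per process, so 4 x 25,000 overall
    bind :80 process 1
    bind :80 process 2
    bind :80 process 3
    bind :80 process 4
```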


On Fri, May 11, 2018 at 4:58 PM, Baptiste  wrote:

> Hi Marco,
>
> I see you enabled compression in your HAProxy configuration. Maybe you
> want to disable it and re-run a test just to see (though I don't expect any
> improvement since you seem to have some free CPU cycles on the machine).
> Maybe you can run a "top" showing each CPU usage, so we can see how much
> time is spent in SI and in userland.
> I saw you're doing http-server-close. Is there any good reason for that?
> The maxconn on your frontend seem too low too compared to your target
> traffic (despite the 5000 will apply to each process).
> Last, I would create 4 bind lines, one per process, like this in your
> frontend:
>   bind :80 process 1
>   bind :80 process 2
>   ...
>
> Maybe one of your processes is being saturated and you don't see it. The
> configuration above will ensure an even load distribution of the incoming
> connections across the HAProxy processes.
>
> Baptiste
>
>
> On Fri, May 11, 2018 at 4:29 PM, Marco Colli 
> wrote:
>
>> how many connections you have opened on the private side
>>
>>
>> Thanks for the reply! What should I do exactly? Can you see it from
>> HAProxy stats? I have taken two screenshots (see attachments) during the
>> load test (30s, 2,000 client/s)
>>
>> here are not closing fast enough and you are reaching the limit.
>>
>>
>> What can I do to improve that?
>>
>>
>>
>>
>> On Fri, May 11, 2018 at 3:30 PM, Mihai Vintila  wrote:
>>
>>> Check how many connections you have opened on the private side(i.e.
>>> between haproxy and nginx), i'm thinking that there are not closing fast
>>> enough and you are reaching the limit.
>>>
>>> Best regards,
>>> Mihai
>>>
>>> On 5/11/2018 4:26 PM, Marco Colli wrote:
>>>
>>> Another note: each nginx server in the backend can handle 8,000 new
>>> clients/s: http://bit.ly/2Kh86j9 (tested with keep alive disabled and
>>> with the same http request)
>>>
>>> On Fri, May 11, 2018 at 2:02 PM, Marco Colli 
>>> wrote:
>>>
 Hello!

 Hope that this is the right place to ask.

 We have a website that uses HAProxy as a load balancer and nginx in the
 backend. The website is hosted on DigitalOcean (AMS2).

 The problem is that - no matter the configuration or the server size -
 we cannot achieve a connection rate higher than 1,000 new connections / s.
 Indeed we are testing using loader.io and these are the results:
 - for a session rate of 1,000 clients per second we get exactly 1,000
 responses per second
 - for session rates higher than that, we get long response times (e.g.
 3s) and only some hundreds of responses per second (so there is a
 bottleneck) https://ldr.io/2I5hry9

 Note that if we use a long http keep alive in HAProxy and the same
 browser makes multiple requests we get much better results: however the
 problem is that in the reality we need to handle many different clients
 (which make 1 or 2 requests on average), not many requests from the same
 client.

 Currently we have this configuration:
 - 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the
 result is the same)
 - system / process limits and HAProxy configuration:
 

Re: Cannot handle more than 1,000 clients / s

2018-05-11 Thread Jarno Huuskonen
Hi,

On Fri, May 11, Marco Colli wrote:
> Hope that this is the right place to ask.
> 
> We have a website that uses HAProxy as a load balancer and nginx in the
> backend. The website is hosted on DigitalOcean (AMS2).
> 
> The problem is that - no matter the configuration or the server size - we
> cannot achieve a connection rate higher than 1,000 new connections / s.
> Indeed we are testing using loader.io and these are the results:
> - for a session rate of 1,000 clients per second we get exactly 1,000
> responses per second
> - for session rates higher than that, we get long response times (e.g. 3s)
> and only some hundreds of responses per second (so there is a bottleneck)
> https://ldr.io/2I5hry9

Is your load tester using https connections or http (probably https,
since you have redirect scheme https if !{ ssl_fc }) ? If https and each
connection renegotiates tls then there's a chance you are testing how
fast your VM can do tls negot.

Running top / htop should show if userspace uses all cpu.

Do you get better results if you'll use http instead of https ?

-Jarno

-- 
Jarno Huuskonen



Re: Cannot handle more than 1,000 clients / s

2018-05-11 Thread Marco Colli
>
> Solution is to have more than one ip on the backend and a round robin when
> sending to the backends.


What do you mean exactly? I already use round robin (as you can see in the
config file linked previously) and in the backend I have 10 different
servers with 10 different IPs

sysctl net.ipv4.ip_local_port_range


Currently I have ~30,000 ports available... they should be enough for 2,000
clients / s. Note that the number during the test is kept constant at 2,000
clients (the number of connected clients is not cumulative / does not
increase during the test).
In any case I have also tested increasing the number of ports to 64k and
run a load test, but nothing changes.

You are probably keeping it opened for around 60 seconds and thus the limit


No, on the backend side I use http-server-close. On the client side the
number is constant at 2k clients during the test, and in any case I have the
http keep-alive timeout set to 500ms.
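Roughly, the knobs discussed in this exchange (server-side close, a short client keep-alive window, and Mihai's idea of multiplying the ephemeral-port space with extra IPs) would look like this in a config. This is only a sketch with assumed names and addresses, not Marco's actual file:

```
# Sketch only; backend name, server addresses and source IPs are assumptions.
defaults
    mode http
    option http-server-close          # close the server-side connection per response
    timeout http-keep-alive 500ms     # short client-side keep-alive window

backend be_nginx
    balance roundrobin
    # each distinct (source IP, destination IP) pair gets its own ~28k-64k
    # ephemeral ports, so extra local "source" addresses raise the ceiling
    server ngx1 10.0.0.11:80 source 10.0.0.2
    server ngx2 10.0.0.12:80 source 10.0.0.3
```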


On Fri, May 11, 2018 at 4:51 PM, Mihai Vintila  wrote:

> You can not have too many open ports . Once a new connections comes to
> haproxy on the backend it'll initiate a new connection to the nginx. Each
> new connections opens a local port, and ports are limited by sysctl
> net.ipv4.ip_local_port_range . So even if you set it to 1024 65535 you
> still have only ~ 64000 sessions. Solution is to have more than one ip on
> the backend and a round robin when sending to the backends. This way you'll
> have for each backend ip on the haproxy 64000 sessions. Alternatively make
> sure that you are not keeping the connections opened for too long . You are
> probably keeping it opened for around 60 seconds and thus the limit. As you
> can see you have 61565 sessions in the screenshots provided. Other limit
> could be the file descriptors but seems that this is set to 200k
>
> Best regards,
> Mihai Vintila
>
> On 5/11/2018 5:29 PM, Marco Colli wrote:
>
> how many connections you have opened on the private side
>
>
> Thanks for the reply! What should I do exactly? Can you see it from
> HAProxy stats? I have taken two screenshots (see attachments) during the
> load test (30s, 2,000 client/s)
>
> here are not closing fast enough and you are reaching the limit.
>
>
> What can I do to improve that?
>
>
>
>
> On Fri, May 11, 2018 at 3:30 PM, Mihai Vintila  wrote:
>
>> Check how many connections you have opened on the private side(i.e.
>> between haproxy and nginx), i'm thinking that there are not closing fast
>> enough and you are reaching the limit.
>>
>> Best regards,
>> Mihai
>>
>> On 5/11/2018 4:26 PM, Marco Colli wrote:
>>
>> Another note: each nginx server in the backend can handle 8,000 new
>> clients/s: http://bit.ly/2Kh86j9 (tested with keep alive disabled and
>> with the same http request)
>>
>> On Fri, May 11, 2018 at 2:02 PM, Marco Colli 
>> wrote:
>>
>>> Hello!
>>>
>>> Hope that this is the right place to ask.
>>>
>>> We have a website that uses HAProxy as a load balancer and nginx in the
>>> backend. The website is hosted on DigitalOcean (AMS2).
>>>
>>> The problem is that - no matter the configuration or the server size -
>>> we cannot achieve a connection rate higher than 1,000 new connections / s.
>>> Indeed we are testing using loader.io and these are the results:
>>> - for a session rate of 1,000 clients per second we get exactly 1,000
>>> responses per second
>>> - for session rates higher than that, we get long response times (e.g.
>>> 3s) and only some hundreds of responses per second (so there is a
>>> bottleneck) https://ldr.io/2I5hry9
>>>
>>> Note that if we use a long http keep alive in HAProxy and the same
>>> browser makes multiple requests we get much better results: however the
>>> problem is that in the reality we need to handle many different clients
>>> (which make 1 or 2 requests on average), not many requests from the same
>>> client.
>>>
>>> Currently we have this configuration:
>>> - 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the result
>>> is the same)
>>> - system / process limits and HAProxy configuration:
>>> https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
>>> - 10x nginx backend servers with 2 vCPU each
>>>
>>> What can we improve in order to handle more than 1,000 different new
>>> clients per second?
>>>
>>> Any suggestion would be extremely helpful.
>>>
>>> Have a nice day
>>> Marco Colli
>>>
>>>
>>
>


Re: Cannot handle more than 1,000 clients / s

2018-05-11 Thread Baptiste
Hi Marco,

I see you enabled compression in your HAProxy configuration. Maybe you want
to disable it and re-run a test just to see (though I don't expect any
improvement since you seem to have some free CPU cycles on the machine).
Maybe you can run a "top" showing each CPU usage, so we can see how much
time is spent in SI and in userland.
I saw you're doing http-server-close. Is there any good reason for that?
The maxconn on your frontend seems too low compared to your target
traffic (even though the 5000 will apply to each process).
Last, I would create 4 bind lines, one per process, like this in your
frontend:
  bind :80 process 1
  bind :80 process 2
  ...

Maybe one of your processes is being saturated and you don't see it. The
configuration above will ensure an even load distribution of the incoming
connections across the HAProxy processes.

Baptiste


On Fri, May 11, 2018 at 4:29 PM, Marco Colli  wrote:

> how many connections you have opened on the private side
>
>
> Thanks for the reply! What should I do exactly? Can you see it from
> HAProxy stats? I have taken two screenshots (see attachments) during the
> load test (30s, 2,000 client/s)
>
> here are not closing fast enough and you are reaching the limit.
>
>
> What can I do to improve that?
>
>
>
>
> On Fri, May 11, 2018 at 3:30 PM, Mihai Vintila  wrote:
>
>> Check how many connections you have opened on the private side(i.e.
>> between haproxy and nginx), i'm thinking that there are not closing fast
>> enough and you are reaching the limit.
>>
>> Best regards,
>> Mihai
>>
>> On 5/11/2018 4:26 PM, Marco Colli wrote:
>>
>> Another note: each nginx server in the backend can handle 8,000 new
>> clients/s: http://bit.ly/2Kh86j9 (tested with keep alive disabled and
>> with the same http request)
>>
>> On Fri, May 11, 2018 at 2:02 PM, Marco Colli 
>> wrote:
>>
>>> Hello!
>>>
>>> Hope that this is the right place to ask.
>>>
>>> We have a website that uses HAProxy as a load balancer and nginx in the
>>> backend. The website is hosted on DigitalOcean (AMS2).
>>>
>>> The problem is that - no matter the configuration or the server size -
>>> we cannot achieve a connection rate higher than 1,000 new connections / s.
>>> Indeed we are testing using loader.io and these are the results:
>>> - for a session rate of 1,000 clients per second we get exactly 1,000
>>> responses per second
>>> - for session rates higher than that, we get long response times (e.g.
>>> 3s) and only some hundreds of responses per second (so there is a
>>> bottleneck) https://ldr.io/2I5hry9
>>>
>>> Note that if we use a long http keep alive in HAProxy and the same
>>> browser makes multiple requests we get much better results: however the
>>> problem is that in the reality we need to handle many different clients
>>> (which make 1 or 2 requests on average), not many requests from the same
>>> client.
>>>
>>> Currently we have this configuration:
>>> - 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the result
>>> is the same)
>>> - system / process limits and HAProxy configuration:
>>> https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
>>> - 10x nginx backend servers with 2 vCPU each
>>>
>>> What can we improve in order to handle more than 1,000 different new
>>> clients per second?
>>>
>>> Any suggestion would be extremely helpful.
>>>
>>> Have a nice day
>>> Marco Colli
>>>
>>>
>>
>


Re: Cannot handle more than 1,000 clients / s

2018-05-11 Thread Mihai Vintila
Check how many connections you have opened on the private side (i.e.
between haproxy and nginx); I'm thinking that they are not closing fast
enough and you are reaching the limit.


Best regards,
Mihai

On 5/11/2018 4:26 PM, Marco Colli wrote:
Another note: each nginx server in the backend can handle 8,000 new 
clients/s: http://bit.ly/2Kh86j9 (tested with keep alive disabled and 
with the same http request)


On Fri, May 11, 2018 at 2:02 PM, Marco Colli wrote:


Hello!

Hope that this is the right place to ask.

We have a website that uses HAProxy as a load balancer and nginx
in the backend. The website is hosted on DigitalOcean (AMS2).

The problem is that - no matter the configuration or the server
size - we cannot achieve a connection rate higher than 1,000 new
connections / s. Indeed we are testing using loader.io and these are the results:
- for a session rate of 1,000 clients per second we get exactly
1,000 responses per second
- for session rates higher than that, we get long response times
(e.g. 3s) and only some hundreds of responses per second (so there
is a bottleneck) https://ldr.io/2I5hry9 

Note that if we use a long http keep alive in HAProxy and the same
browser makes multiple requests we get much better results:
however the problem is that in the reality we need to handle many
different clients (which make 1 or 2 requests on average), not
many requests from the same client.

Currently we have this configuration:
- 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the
result is the same)
- system / process limits and HAProxy configuration:
https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195

- 10x nginx backend servers with 2 vCPU each

What can we improve in order to handle more than 1,000 different
new clients per second?

Any suggestion would be extremely helpful.

Have a nice day
Marco Colli




Re: Cannot handle more than 1,000 clients / s

2018-05-11 Thread Marco Colli
Another note: each nginx server in the backend can handle 8,000 new
clients/s: http://bit.ly/2Kh86j9 (tested with keep alive disabled and with
the same http request)

On Fri, May 11, 2018 at 2:02 PM, Marco Colli  wrote:

> Hello!
>
> Hope that this is the right place to ask.
>
> We have a website that uses HAProxy as a load balancer and nginx in the
> backend. The website is hosted on DigitalOcean (AMS2).
>
> The problem is that - no matter the configuration or the server size - we
> cannot achieve a connection rate higher than 1,000 new connections / s.
> Indeed we are testing using loader.io and these are the results:
> - for a session rate of 1,000 clients per second we get exactly 1,000
> responses per second
> - for session rates higher than that, we get long response times (e.g. 3s)
> and only some hundreds of responses per second (so there is a bottleneck)
> https://ldr.io/2I5hry9
>
> Note that if we use a long http keep alive in HAProxy and the same browser
> makes multiple requests we get much better results: however the problem is
> that in the reality we need to handle many different clients (which make 1
> or 2 requests on average), not many requests from the same client.
>
> Currently we have this configuration:
> - 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the result
> is the same)
> - system / process limits and HAProxy configuration:
> https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
> - 10x nginx backend servers with 2 vCPU each
>
> What can we improve in order to handle more than 1,000 different new
> clients per second?
>
> Any suggestion would be extremely helpful.
>
> Have a nice day
> Marco Colli
>
>


Re: [PATCH] BUG/MEDIUM: pollers/kqueue: use incremented position in event list

2018-05-11 Thread Olivier Houchard
On Fri, May 11, 2018 at 02:09:43PM +0200, Willy Tarreau wrote:
> Hi guys,
> 
> On Fri, May 11, 2018 at 01:57:10PM +0200, Olivier Houchard wrote:
> > Hi Pieter,
> > 
> > On Thu, May 10, 2018 at 01:12:40AM +0200, PiBa-NL wrote:
> > > Hi Olivier,
> > > 
> > > Please take a look at attached patch. When adding 2 fd's the second
> > > overwrote the first one.
> > > Tagged it medium as haproxy just didn't work at all. (with kqueue.). 
> > > Though
> > > it could perhaps also be minor, as the commit has only been done 
> > > recently.?.
> > > Anyhow.. This seems to fix it :). Tried to go with a 'int reference 
> > > pointer'
> > > but couldn't find the best syntax to do it that way, and this seems clean
> > > enough as well.. Though if you think a different approach (thread local 
> > > int 
> > > ?) or any other way is better please change it or advice :)
> > > 
> > 
> > Oops you're right, I'm an idiot,
> 
> Why did you think you were hired ? :-)
> 

I thought it was because I was good looking, but that may actually prove your
point.

> > I think your patch is fine, Willy can you please commit it ?
> 
> Sure, now done. Thanks!

Thanks !

Olivier



Re: [PATCH] BUG/MEDIUM: pollers/kqueue: use incremented position in event list

2018-05-11 Thread Willy Tarreau
Hi guys,

On Fri, May 11, 2018 at 01:57:10PM +0200, Olivier Houchard wrote:
> Hi Pieter,
> 
> On Thu, May 10, 2018 at 01:12:40AM +0200, PiBa-NL wrote:
> > Hi Olivier,
> > 
> > Please take a look at attached patch. When adding 2 fd's the second
> > overwrote the first one.
> > Tagged it medium as haproxy just didn't work at all. (with kqueue.). Though
> > it could perhaps also be minor, as the commit has only been done recently.?.
> > Anyhow.. This seems to fix it :). Tried to go with a 'int reference pointer'
> > but couldn't find the best syntax to do it that way, and this seems clean
> > enough as well.. Though if you think a different approach (thread local int 
> > ?) or any other way is better please change it or advice :)
> > 
> 
> Oops you're right, I'm an idiot,

Why did you think you were hired ? :-)

> I think your patch is fine, Willy can you please commit it ?

Sure, now done. Thanks!
Willy



Cannot handle more than 1,000 clients / s

2018-05-11 Thread Marco Colli
Hello!

Hope that this is the right place to ask.

We have a website that uses HAProxy as a load balancer and nginx in the
backend. The website is hosted on DigitalOcean (AMS2).

The problem is that - no matter the configuration or the server size - we
cannot achieve a connection rate higher than 1,000 new connections / s.
Indeed we are testing using loader.io and these are the results:
- for a session rate of 1,000 clients per second we get exactly 1,000
responses per second
- for session rates higher than that, we get long response times (e.g. 3s)
and only some hundreds of responses per second (so there is a bottleneck)
https://ldr.io/2I5hry9

Note that if we use a long http keep alive in HAProxy and the same browser
makes multiple requests we get much better results: however the problem is
that in the reality we need to handle many different clients (which make 1
or 2 requests on average), not many requests from the same client.

Currently we have this configuration:
- 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the result is
the same)
- system / process limits and HAProxy configuration:
https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
- 10x nginx backend servers with 2 vCPU each

What can we improve in order to handle more than 1,000 different new
clients per second?

Any suggestion would be extremely helpful.

Have a nice day
Marco Colli


Re: [PATCH] BUG/MEDIUM: pollers/kqueue: use incremented position in event list

2018-05-11 Thread Olivier Houchard
Hi Pieter,

On Thu, May 10, 2018 at 01:12:40AM +0200, PiBa-NL wrote:
> Hi Olivier,
> 
> Please take a look at attached patch. When adding 2 fd's the second
> overwrote the first one.
> Tagged it medium as haproxy just didn't work at all. (with kqueue.). Though
> it could perhaps also be minor, as the commit has only been done recently.?.
> Anyhow.. This seems to fix it :). Tried to go with a 'int reference pointer'
> but couldn't find the best syntax to do it that way, and this seems clean
> enough as well.. Though if you think a different approach (thread local int 
> ?) or any other way is better please change it or advice :)
> 

Oops you're right, I'm an idiot, I think your patch is fine, Willy can you
please commit it ?

Thanks !

Olivier



Re: req.body_param([])

2018-05-11 Thread Jarno Huuskonen
Hi,

On Wed, May 09, Simon Schabel wrote:
> We use the req.body_param([]) setting to retrieve body
> parameter from the incoming HTTP queries and place them into the
> logs.
> 
> Unfortunately this only works with HTTP POST requests. In our case
> we need to extract the parameter from PUT requests as well.
> 
> Would it be an option to use req.body_param([]) on any HTTP
> method type instead of restricting it to HTTP POST?

Can you share your haproxy -vv and the logging config ?

I just tested with haproxy-ss-20180507 and this minimal test
seems to get req.body_param(log) to stick table on both POST/PUT requests
(tested w/curl -X PUT|POST):
...
frontend test
option  http-buffer-request
bind ipv4@127.0.0.1:8080
http-request track-sc2 req.body_param(log),lower table test_be if METH_POST || METH_PUT

default_backend test_be

backend test_be
stick-table type string len 48 size 64 expire 240s store http_req_cnt
server wp2 some.ip.add.ress:80 id 2
...

curl -X PUT -d'log=user1' http://127.0.0.1:8080/
curl -X POST -d'log=user2' http://127.0.0.1:8080/

-Jarno

-- 
Jarno Huuskonen



Re: HAProxy keepalived

2018-05-11 Thread Marcello Lorenzi
Actually we have this configuration on memcached port where the keepalive
is visibile from netstat output

tcp0  0 127.0.0.1:11233   0.0.0.0:*   LISTEN
  9250/memcached   keepalive (0.05/0/0)

We started the haproxy with this configuration:

global
    log 127.0.0.1:514 local5
    chroot  /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 65536
    user haproxy
    group haproxy

defaults
    log global
    retries 3
    mode tcp
    option tcpka
    option clitcpka
    option srvtcpka
    option dontlognull
    option redispatch
    option log-separate-errors
    timeout connect 5000
    timeout client  6h
    timeout server  6h

frontend frontend-memcached
    bind 0.0.0.0:11211
    mode tcp
    option tcplog
    option socket-stats
    default_backend backend-test

backend backend-memcached
    mode tcp
    balance roundrobin
    server test 127.0.0.1:11233 maxconn 1 check inter 5000 fastinter 2000 downinter 2000 rise 3 fall 3

If we check the netstat output, the keepalive is not present.

tcp0  0 0.0.0.0:112110.0.0.0:*
 LISTEN  -off (0.00/0/0)

Marcello

On Fri, May 11, 2018 at 12:20 PM, Jarno Huuskonen 
wrote:

> Hi,
>
> On Fri, May 11, Marcello Lorenzi wrote:
> > we are checking the possibility to balance some memcached instances with
> an
> > HAProxy 1.8 instance but we need to implement the keepalive on TCP Listen
> > port. If we use the command "netstat -ano" we noticed that memcached
> > configure the keepalive on the connection but not HAProxy. Could you help
> > us to verify if it's possible to configure the same behavior?
>
> Does option clitcpka / option srvtcpka do what you need ?
> (https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-option%
> 20clitcpka
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%
> 20srvtcpka
> )
>
> Can you describe your use case ? Maybe memcached proxy (twemproxy,
> mcrouter etc.) is an alternative solution ?
>
> -Jarno
>
> --
> Jarno Huuskonen
>


Re: HAProxy keepalived

2018-05-11 Thread Jarno Huuskonen
Hi,

On Fri, May 11, Marcello Lorenzi wrote:
> we are checking the possibility to balance some memcached instances with an
> HAProxy 1.8 instance but we need to implement the keepalive on TCP Listen
> port. If we use the command "netstat -ano" we noticed that memcached
> configure the keepalive on the connection but not HAProxy. Could you help
> us to verify if it's possible to configure the same behavior?

Does option clitcpka / option srvtcpka do what you need ?
(https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-option%20clitcpka
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%20srvtcpka
)

Can you describe your use case ? Maybe memcached proxy (twemproxy,
mcrouter etc.) is an alternative solution ?

-Jarno

-- 
Jarno Huuskonen



Re: Haproxy support for handling concurrent requests from different clients

2018-05-11 Thread Mihir Shirali
Thanks Aleksandar for the help!
I did look up some examples for returning 503 - but all of them (as you've
indicated) seem to be based on src ip or src header. I'm guessing this is
more suitable for a DOS/DDOS attack? In our deployment, the likelihood of
getting one request each from many clients is higher than that of multiple
requests from a single client.
As an update, the rate-limit directive has helped. However, the only problem
is that the client does not know that the server is busy and *could* time
out. It would be great if it were possible to somehow send a 503 out, so
the clients could retry after a random time.

With respect to the update - we are evaluating this and have run into some
issues since we need to host 2 different certificates on the port (served
based on the cipher). We should be able to fix this on our own though.

On Fri, May 11, 2018 at 11:41 AM, Aleksandar Lazic 
wrote:

> Hi Mihir.
>
> Am 11.05.2018 um 05:57 schrieb Mihir Shirali:
> > Hi Aleksandar,
> >
> > Why do you add http header for a tftp service?
> > Do you really mean https://de.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
> > [Mihir]>>This TFTP is a custom application written by us. The http
> headers also
> > have custom attributes which are used by the backend application.
> >
> > haproxy version is
> > HA-Proxy version 1.5.11 2015/01/31
>
> Could you try to update at least to the latest 1.5 or better to 1.8?
> https://www.haproxy.org/bugs/bugs-1.5.11.html
>
> > https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-rate-limit%20sessions
> >
> > [Mihir]>>I believe this only queues the packets right? Is there a way we
> could
> > tell the client to back off and retry after a bit (like a 503). This
> decision
> > based on the high number of requests.
>
> Yes it's possible but I haven't done it before.
> I would try this, but I hope that someone with more experience in this
> topic steps forward and shows us a working solution.
>
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-http-request
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.3-src_conn_rate
>
> http-request connection track-sc0 src
> http-request deny deny_status 503 if { src_conn_rate gt 10 }
>
> These lines are shamelessly copied from the examples in
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-tcp-request%20connection
>
> Regards
> Aleks
>
> > On Fri, May 11, 2018 at 1:58 AM, Aleksandar Lazic wrote:
> >
> > Am 10.05.2018 um 18:27 schrieb Mihir Shirali:
> > > Hi Team,
> > >
> > > We have haproxy installed on a server which is being used primarily
> > > for front ending TLS. After session establishment it sets certain
> > > headers in the http request and forwards it to the application in the
> > > backend. The back end application is a tftp server and hence it can
> > > receive requests from a large number of clients.
> >
> > Why do you add http header for a tftp service?
> > Do you really mean
> > https://de.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
> > 
> >
> > > What we observe on our server is that when we have large number of
> > > clients haproxy gets quite busy and the CPU clocks pretty high. Since
> > > both haproxy and our backend application run on the same server - this
> > > combined CPU can get close to the limit.
> > > What we'd like to know is if there is a way to throttle the number of
> > > requests per second. All the searches so far - seem to indicate that
> > > we could rate limit based on src ip or http header. However, since our
> > > client ips will be different in the real world we wont be able to use
> > > that (less recurrence)
> > > Could you please help? Is this possible?
> >
> > What's the output of haproxy -vv ?
> > There were some issues about high CPU usage so maybe you will need to
> > update.
> >
> > Could this be a option?
> > https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-rate-limit%20sessions
> > https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.3-src_updt_conn_cnt
> >
> > What's 'less recurrence' , hours, days?
> >
> > Regards
> > Aleks
> >
> >
> >
> >
> > --
> > Regards,
> > Mihir
>
>


-- 
Regards,
Mihir


HAProxy keepalived

2018-05-11 Thread Marcello Lorenzi
Hi All,
we are checking the possibility to balance some memcached instances with an
HAProxy 1.8 instance but we need to implement the keepalive on TCP Listen
port. If we use the command "netstat -ano" we noticed that memcached
configure the keepalive on the connection but not HAProxy. Could you help
us to verify if it's possible to configure the same behavior?

Thanks,
Marcello


Re: Haproxy support for handling concurrent requests from different clients

2018-05-11 Thread Aleksandar Lazic
Hi Mihir.

Am 11.05.2018 um 05:57 schrieb Mihir Shirali:
> Hi Aleksandar,
> 
> Why do you add http header for a tftp service?
> Do you really mean 
> https://de.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
> 
> [Mihir]>>This TFTP is a custom application written by us. The http headers
> also have custom attributes which are used by the backend application.
> 
> haproxy version is 
> HA-Proxy version 1.5.11 2015/01/31

Could you try to update at least to the latest 1.5 or better to 1.8?
https://www.haproxy.org/bugs/bugs-1.5.11.html

> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-rate-limit%20sessions
> 
> 
> [Mihir]>>I believe this only queues the packets, right? Is there a way we could
> tell the client to back off and retry after a bit (like a 503)? This decision
> would be based on the high number of requests.

Yes, it's possible, but I haven't done it before.
I would try this, but I hope that someone with more experience in this topic
steps forward and shows us a working solution.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-http-request
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.3-src_conn_rate

tcp-request connection track-sc0 src
http-request deny deny_status 503 if { src_conn_rate gt 10 }

These lines were shamelessly copied from the examples in
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-tcp-request%20connection
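Tracking src_conn_rate only works against a stick-table, so a fuller (untested)
sketch would have to declare one; the frontend name, backend name, table size
and the 10-connections-per-10s threshold below are all assumptions:

    frontend fe_app
        bind :8080
        # count per-source-IP connection rate over a 10s window
        stick-table type ip size 100k expire 60s store conn_rate(10s)
        tcp-request connection track-sc0 src
        # tell overly aggressive clients to back off with a 503
        http-request deny deny_status 503 if { src_conn_rate gt 10 }
        default_backend be_app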

Regards
Aleks

> On Fri, May 11, 2018 at 1:58 AM, Aleksandar Lazic wrote:
> 
> Am 10.05.2018 um 18:27 schrieb Mihir Shirali:
> > Hi Team,
> > 
> > We have haproxy installed on a server which is being used primarily for
> > front-ending TLS. After session establishment it sets certain headers in the
> > http request and forwards it to the application in the backend. The back end
> > application is a tftp server and hence it can receive requests from a large
> > number of clients.
> 
> Why do you add http header for a tftp service?
> Do you really mean
> https://de.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
> 
> 
> > What we observe on our server is that when we have large number of clients
> > haproxy gets quite busy and the CPU clocks pretty high. Since both haproxy and
> > our backend application run on the same server - this combined CPU can get close
> > to the limit.
> > What we’d like to know is if there is a way to throttle the number of requests
> > per second. All the searches so far - seem to indicate that we could rate limit
> > based on src ip or http header. However, since our client ips will be different
> > in the real world we won't be able to use that (less recurrence)
> > Could you please help? Is this possible?
> 
> What's the output of haproxy -vv ?
> There were some issues about high CPU usage, so maybe you will need to update.
>
> Could this be an option?
>
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-rate-limit%20sessions
>
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.3-src_updt_conn_cnt
>
> What's 'less recurrence', hours, days?
> 
> Regards
> Aleks
> 
> 
> 
> 
> -- 
> Regards,
> Mihir