Hi All,
We are checking the possibility of balancing some memcached instances with an
HAProxy 1.8 instance, but we need to enable keepalive on the TCP listen
port. Using the command "netstat -ano" we noticed that memcached
configures keepalive on the connection but HAProxy does not. Could you help?
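For TCP keepalive on both sides of a proxied connection, HAProxy has dedicated options; a minimal sketch (listener name, addresses and server names are hypothetical):

```
listen memcached
    # clitcpka enables TCP keepalive on the client-facing connection,
    # srvtcpka on the connection HAProxy opens to the memcached server.
    bind :11211
    mode tcp
    option clitcpka
    option srvtcpka
    server mc1 127.0.0.1:11233 check
```

With these options set, the SO_KEEPALIVE flag should then show up on HAProxy's sockets in netstat/ss output as well.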
Thanks Aleksandar for the help!
I did look up some examples for setting 503 - but all of them (as you've
indicated) seem based on src ip or src header. I'm guessing this is more
suitable for a DoS/DDoS attack? In our deployment, the likelihood of
getting one request from multiple clients is more t
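If the goal is to return 503 when the backend as a whole is saturated rather than per source address, one approach (a sketch for HAProxy 1.8; backend name and threshold are hypothetical) is to key the ACL on backend load instead of `src`:

```
frontend fe_main
    # Deny with a 503 when the total connection count on the backend,
    # not the individual client's source, crosses a limit.
    acl be_busy be_conn(be_app) gt 500
    http-request deny deny_status 503 if be_busy
    default_backend be_app
```

`be_conn` looks at the backend's current connections, so every client sees the 503 once the limit is crossed, which matches a "few clients, many requests" deployment better than src-based tracking.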
Hi,
On Fri, May 11, Marcello Lorenzi wrote:
> we are checking the possibility to balance some memcached instances with an
> HAProxy 1.8 instance but we need to implement the keepalive on TCP Listen
> port. If we use the command "netstat -ano" we noticed that memcached
> configure the keepalive on
Actually we have this configuration on the memcached port, where keepalive
is visible in the netstat output:
tcp        0      0 127.0.0.1:11233   0.0.0.0:*   LISTEN   9250/memcached   keepalive (0.05/0/0)
We started the haproxy with this configuration:
global
log 127.0.0
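For comparison, SO_KEEPALIVE is a per-socket option that each application must set itself, which is why memcached's sockets show the keepalive timer while HAProxy's do not unless it is told to. A minimal Python sketch of what memcached effectively does:

```python
import socket

# Enable TCP keepalive on a socket, then read the flag back to confirm.
# HAProxy only does the equivalent when "option clitcpka"/"option srvtcpka"
# (or "option tcpka") is present in its configuration.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
enabled = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
s.close()
print(enabled != 0)  # True
```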
Hi,
On Wed, May 09, Simon Schabel wrote:
> We use the req.body_param([]) setting to retrieve body
> parameter from the incoming HTTP queries and place them into the
> logs.
>
> Unfortunately this only works with HTTP POST requests. In our case
> we need to extract the parameter from PUT requests
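Since body-based fetches need the request body to be buffered before the rules run, one hedged sketch (parameter name and capture length are hypothetical, and this assumes `option http-buffer-request` makes the body available to `req.body_param` for non-POST methods as well):

```
frontend fe_api
    # Buffer the request body so body fetches can see it.
    option http-buffer-request
    # Capture a body parameter into the logs.
    http-request capture req.body_param(user_id) len 32
```

If the fetch still returns nothing for PUT, the limitation is in the fetch itself rather than in the buffering.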
Hi Pieter,
On Thu, May 10, 2018 at 01:12:40AM +0200, PiBa-NL wrote:
> Hi Olivier,
>
> Please take a look at attached patch. When adding 2 fd's the second
> overwrote the first one.
Tagged it medium as haproxy just didn't work at all (with kqueue). Though
> it could perhaps also be minor, as t
Hello!
Hope that this is the right place to ask.
We have a website that uses HAProxy as a load balancer and nginx in the
backend. The website is hosted on DigitalOcean (AMS2).
The problem is that - no matter the configuration or the server size - we
cannot achieve a connection rate higher than 1
Hi guys,
On Fri, May 11, 2018 at 01:57:10PM +0200, Olivier Houchard wrote:
> Hi Pieter,
>
> On Thu, May 10, 2018 at 01:12:40AM +0200, PiBa-NL wrote:
> > Hi Olivier,
> >
> > Please take a look at attached patch. When adding 2 fd's the second
> > overwrote the first one.
> > Tagged it medium as ha
On Fri, May 11, 2018 at 02:09:43PM +0200, Willy Tarreau wrote:
> Hi guys,
>
> On Fri, May 11, 2018 at 01:57:10PM +0200, Olivier Houchard wrote:
> > Hi Pieter,
> >
> > On Thu, May 10, 2018 at 01:12:40AM +0200, PiBa-NL wrote:
> > > Hi Olivier,
> > >
> > > Please take a look at attached patch. When
Another note: each nginx server in the backend can handle 8,000 new
clients/s: http://bit.ly/2Kh86j9 (tested with keep-alive disabled and with
the same http request)
On Fri, May 11, 2018 at 2:02 PM, Marco Colli wrote:
> Hello!
>
> Hope that this is the right place to ask.
>
> We have a website t
Check how many connections you have open on the private side (i.e.
between haproxy and nginx); I'm thinking that they are not closing fast
enough and you are reaching the limit.
Best regards,
Mihai
On 5/11/2018 4:26 PM, Marco Colli wrote:
Another note: each nginx server in the backend can ha
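A quick way to check this is to tally socket states on the haproxy machine (e.g. from `ss -ant` output) and see whether TIME-WAIT sockets toward the backends pile up. A small sketch over a hypothetical sample of such output:

```python
from collections import Counter

# Hypothetical `ss -ant`-style lines: the state is the first field.
# On a real box you would feed in the actual command output instead.
sample = """ESTAB 0 0 10.0.0.2:44310 10.0.0.10:80
TIME-WAIT 0 0 10.0.0.2:44312 10.0.0.10:80
TIME-WAIT 0 0 10.0.0.2:44314 10.0.0.11:80"""

states = Counter(line.split()[0] for line in sample.splitlines())
print(states["TIME-WAIT"])  # 2
```

Tens of thousands of TIME-WAIT entries toward the nginx IPs would point at connection churn on the private side as the bottleneck.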
Hi Marco,
I see you enabled compression in your HAProxy configuration. Maybe you want
to disable it and re-run a test just to see (though I don't expect any
improvement since you seem to have some free CPU cycles on the machine).
Maybe you can run a "top" showing each CPU usage, so we can see how
>
> Solution is to have more than one ip on the backend and a round robin when
> sending to the backends.
What do you mean exactly? I already use round robin (as you can see in the
config file linked previously), and in the backend I have 10 different
servers with 10 different IPs.
sysctl net.ipv4
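For reference, the setup being described corresponds to a backend along these lines (names and addresses are hypothetical):

```
backend be_nginx
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
    # ... web3 through web10, each on its own IP
```

With distinct IPs per server, the ephemeral-port range is already multiplied across 10 destination addresses, so source-port exhaustion toward a single backend IP should not be the limit here.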
Hi,
On Fri, May 11, Marco Colli wrote:
> Hope that this is the right place to ask.
>
> We have a website that uses HAProxy as a load balancer and nginx in the
> backend. The website is hosted on DigitalOcean (AMS2).
>
> The problem is that - no matter the configuration or the server size - we
>
>
> Maybe you want to disable it
Thanks for the reply! I have already tried that and it doesn't help.
> Maybe you can run a "top" showing each CPU usage, so we can see how much
> time is spent in SI and in userland
During the test the CPU usage is pretty constant and the values are these:
%Cpu0
>
> Do you get better results if you'll use http instead of https ?
I already tested it yesterday and the results are pretty much the same
(only a very small improvement, which is expected, but not a substantial
change).
Running top / htop should show if userspace uses all cpu.
During the tes
This adds the set-priority-class and set-priority-offset actions to
http-request and tcp-request content.
The priority values are used when connections are queued to determine
which connections should be served first. The lowest priority class is
served first. When multiple requests from the same
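As a sketch of how the new actions would be used (ACL names are hypothetical, and the exact expression syntax is taken from the proposal, not a released manual):

```
frontend fe_main
    # Checkout traffic gets a better (lower) priority class,
    # crawler traffic a worse (higher) one.
    tcp-request content set-priority-class int(-5) if is_checkout
    tcp-request content set-priority-class int(5)  if is_crawler
```

When the backend queue fills, requests in the lowest class are dequeued first, with the offset fine-tuning ordering within a class.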
Ok, so here is the full submission for priority based queuing.
Notes since previous update:
Wasn't really able to optimize the tree search any. Tried a few things,
but nothing made a measurable performance difference.
I added a warning message and documentation making clear the issues with
times
---
 include/types/queue.h |   2 +-
 src/hlua.c            |   5 --
 src/queue.c           | 144 +++---
 3 files changed, 33 insertions(+), 118 deletions(-)
diff --git a/include/types/queue.h b/include/types/queue.h
index 03377da69..5f4693942 100644
---
After upgrading to the latest version of Eclipse and installing our custom
Eclipse Plugin,
my developers are now being blocked by HAProxy.
Here's a sample of the problem:
May 11 15:03:37 localhost haproxy[13089]: 66.192.142.9:43041
[11/May/2018:15:03:37.932] main_ssl~ ssl_backend-etkdev/i-0912nn
Hi Norman,
On 11-5-2018 at 19:36, Norman Branitsky wrote:
After upgrading to the latest version of Eclipse and installing our
custom Eclipse Plugin,
my developers are now being blocked by HAProxy.
Here’s a sample of the problem:
May 11 15:03:37 localhost haproxy[13089]: 66.192.142.9:43041
Hi Thierry,
Okay, found a simple reproduction with tcploop, with a 6 second delay in
there and a short sleep before calling kqueue:
./tcploop 81 L W N20 A R S:"response1\r\n" R P6000 S:"response2\r\n" R [ F K ]
    gettimeofday(&before_poll, NULL);
+   usleep(100);
    status = kevent(kque
Function `hlua_socket_close` expected exactly one argument on the Lua stack.
But when `hlua_socket_close` was called from `hlua_socket_write_yield`, the
Lua stack had 3 arguments, so `hlua_socket_close` threw the exception with
message "'close' needs 1 arguments".
Introduced new helper function `hlua_s
Connection Pool implementation:
http://blog.arpalert.org/2018/02/haproxy-lua-redis-connection-pool.html
Thierry, note that you made a small typo in your pool: r.release(conn)
in renew should read r:release(conn).
Blog post : https://bl.duesterhus.eu/20180511/
GitHub : https://github.com/TimWolla/h-app
On Fri, 11 May 2018 8:01 pm Mihir Shirali wrote:
> Thanks Aleksandar for the help!
> I did look up some examples for setting 503 - but all of them (as you've
> indicated) seem based on src ip or src header. I'm guessing this is more
> suitable for a DOS/DDOS attack? In our deployment, the likeli