On Sun, Sep 30, 2018 at 10:16:55PM +0200, PiBa-NL wrote:
> Hi Willy,
> On 30-9-2018 at 20:38, Willy Tarreau wrote:
> > On Sun, Sep 30, 2018 at 08:22:23PM +0200, Willy Tarreau wrote:
> > > On Sun, Sep 30, 2018 at 07:59:34PM +0200, PiBa-NL wrote:
> > > > Indeed it works with 1.8, so in that regard I 'think' the test itself is correct.
On Mon, Oct 01, 2018 at 02:00:16AM +0200, Lukas Tribus wrote:
> "boolean" may confuse users into thinking they need to provide
> additional arguments, like false or true. This is a simple option
> like many others, so lets not confuse the users with internals.
>
> Also fixes an additional typo.
>
"boolean" may confuse users into thinking they need to provide
additional arguments, like false or true. This is a simple option
like many others, so lets not confuse the users with internals.
Also fixes an additional typo.
Should be backported to 1.8 and 1.7.
---
doc/configuration.txt | 4 ++--
Hi Willy,
On 30-9-2018 at 20:38, Willy Tarreau wrote:
On Sun, Sep 30, 2018 at 08:22:23PM +0200, Willy Tarreau wrote:
On Sun, Sep 30, 2018 at 07:59:34PM +0200, PiBa-NL wrote:
Indeed it works with 1.8, so in that regard I 'think' the test itself is
correct. Also when disabling threads, or running only 1 client, it still works.
On Sun, Sep 30, 2018 at 08:22:23PM +0200, Willy Tarreau wrote:
> On Sun, Sep 30, 2018 at 07:59:34PM +0200, PiBa-NL wrote:
> > Indeed it works with 1.8, so in that regard I 'think' the test itself is
> > correct. Also when disabling threads, or running only 1 client, it still
> > works. Then both CumConns and CumReq show 11 for the first stats result.
On Sun, Sep 30, 2018 at 07:59:34PM +0200, PiBa-NL wrote:
> Indeed it works with 1.8, so in that regard I 'think' the test itself is
> correct. Also when disabling threads, or running only 1 client, it still
> works. Then both CumConns and CumReq show 11 for the first stats result.
Hmmm, for me it fails
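For anyone reproducing this, the counters under discussion come from the
stats socket's "show info" output; a minimal sketch, assuming a stats
socket at /var/run/haproxy.sock (the path is an assumption):

    # read the cumulative connection/request counters from the stats socket
    echo "show info" | socat stdio /var/run/haproxy.sock \
        | grep -E '^Cum(Conns|Req):'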
On Sun, Sep 30, 2018 at 07:15:59PM +0200, PiBa-NL wrote:
> > on a simple config, the CumConns always matches the CumReq, and when
> > running this test I'm seeing random values there in the output, but I
> > also see that they are retrieved before all connections are closed
> But CurrConns is 0, so
Hi Willy,
On 30-9-2018 at 7:46, Willy Tarreau wrote:
Hi Pieter,
On Sun, Sep 30, 2018 at 12:05:14AM +0200, PiBa-NL wrote:
Hi Willy,
I thought, let's give those reg-tests another try :) as it's easy to run and
dev3 just came out.
All tests pass on my FreeBSD system, except this one, new reg-test a
Hi Willy,
On 30-9-2018 at 7:56, Willy Tarreau wrote:
On Sun, Sep 30, 2018 at 07:46:24AM +0200, Willy Tarreau wrote:
Well, at least it works fine on 1.8 and not on 1.9-dev3 so I think you
spotted a regression that we have to analyse. However, I'd like to merge
the fix before merging the regtest.
On Sun, Sep 30, 2018 at 03:54:14PM +0200, Fabrice Fontaine wrote:
> OK, thanks for your quick review, see attached patch, I made two variables
> PCRE_CONFIG and PCRE2_CONFIG.
Thank you, now applied.
Willy
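For reference, building with the new variables would look something like
this (a sketch: the variable names come from the patch, but the TARGET
value and paths here are only examples):

    make TARGET=linux2628 USE_PCRE=1 \
         PCRE_CONFIG=/usr/local/bin/pcre-config
    # or, with PCRE2:
    make TARGET=linux2628 USE_PCRE2=1 \
         PCRE2_CONFIG=/opt/pcre2/bin/pcre2-config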
Dear Willy,
On Sun., Sep. 30, 2018 at 14:38, Willy Tarreau wrote:
> Hello Fabrice,
>
> On Sun, Sep 30, 2018 at 12:20:55PM +0200, Fabrice Fontaine wrote:
> > Dear all,
> >
> > I added haproxy to buildroot and to do so, I added a way of configuring the
> > path of pcre-config and pcre2-config.
Hello Fabrice,
On Sun, Sep 30, 2018 at 12:20:55PM +0200, Fabrice Fontaine wrote:
> Dear all,
>
> I added haproxy to buildroot and to do so, I added a way of configuring the
> path of pcre-config and pcre2-config.
This looks OK, however I think from a user's perspective it would be
better to
On Sun, Sep 30, 2018 at 02:35:24PM +0300, Ciprian Dorin Craciun wrote:
> One question about this: if the client gradually reads from the
> (server-side) buffer, but doesn't completely clear it, would having
> `TCP_USER_TIMEOUT` configured still consider this connection "live"?
Yes, that's it.
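For context, haproxy exposes TCP_USER_TIMEOUT through the "tcp-ut" bind
keyword (value in milliseconds); a minimal sketch, where the file path,
port and timeout value are made up:

    # append a test frontend whose listening socket sets TCP_USER_TIMEOUT
    cat >> /tmp/haproxy-test.cfg <<'EOF'
    frontend fe_test
        bind :8443 tcp-ut 30000
    EOF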
On Sun, Sep 30, 2018 at 2:22 PM Willy Tarreau wrote:
> > As seen, the timeout which I believe is the culprit is `timeout
> > client 30s`, which I guess is quite enough.
>
> I tend to consider that if the response starts to be sent,
> then the most expensive part was done and it'd better be completed
On Sun, Sep 30, 2018 at 12:23:20PM +0300, Ciprian Dorin Craciun wrote:
> On Sun, Sep 30, 2018 at 12:12 PM Willy Tarreau wrote:
> > > Anyway, why am I trying to configure the sending buffer size: if I
> > > have large downloads and I have (some) slow clients, and as a
> > > consequence HAProxy times out
Dear all,
I added haproxy to buildroot and to do so, I added a way of configuring the
path of pcre-config and pcre2-config. So, please find attached a patch. As
this is my first contribution to haproxy, please excuse me if I made any
mistakes.
Best Regards,
Fabrice
From f3dcdf6c9ffea4d9b89dca9706a4
On Sun, Sep 30, 2018 at 12:12 PM Willy Tarreau wrote:
> > If so then by not setting it the kernel should choose the default
> > value, which according to:
> >
> > > sysctl net.ipv4.tcp_wmem
> > net.ipv4.tcp_wmem = 4096 16384 4194304
> >
> > , should be 16384.
>
> No, it *starts* at 16384; the kernel then grows the buffer dynamically up to the max.
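In other words, the three tcp_wmem values are min / initial / max; a short
illustration (the values shown are the usual Linux defaults):

    $ sysctl net.ipv4.tcp_wmem
    net.ipv4.tcp_wmem = 4096 16384 4194304    # min / initial / max, in bytes
    # a fixed SO_SNDBUF (what tune.sndbuf.client sets) disables this autotuning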
Hi Willy.
On 30.09.2018 at 11:05, Willy Tarreau wrote:
> Hi Aleks,
>
> On Sun, Sep 30, 2018 at 10:38:20AM +0200, Aleksandar Lazic wrote:
>> Do you have any release date for 1.9, as I plan to launch some new site and
>> thought to use 1.9 from the beginning, because it sounds like 1.9 will be
>> able to handle h2 to the backend.
On Sun, Sep 30, 2018 at 12:01:54PM +0300, Ciprian Dorin Craciun wrote:
> On Sun, Sep 30, 2018 at 11:41 AM Ciprian Dorin Craciun
> wrote:
> > > - tune.sndbuf.client 16384 allows you to have 16384 bytes "on-the-fly",
> > > meaning unacknowledged. 16384 / 0.16 sec = roughly 100 KB/s
> > > - do the math with your value of 131072 and you will get your ~800 KB/s.
Hi Aleks,
On Sun, Sep 30, 2018 at 10:38:20AM +0200, Aleksandar Lazic wrote:
> Do you have any release date for 1.9, as I plan to launch some new site and
> thought to use 1.9 from the beginning, because it sounds like 1.9 will be able
> to handle h2 to the backend.
It's initially planned for en
On Sun, Sep 30, 2018 at 11:41 AM Ciprian Dorin Craciun
wrote:
> > - tune.sndbuf.client 16384 allows you to have 16384 bytes "on-the-fly",
> > meaning unacknowledged. 16384 / 0.16 sec = roughly 100 KB/s
> > - do the math with your value of 131072 and you will get your ~800
> > KB/s.
Ho
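The arithmetic here is simply the throughput ceiling of a fixed window:
rate ≈ send buffer / RTT. A quick check with the thread's 160 ms latency:

    # throughput ceiling for a fixed send buffer over a 160 ms RTT
    awk 'BEGIN { printf "%.0f KB/s\n",  16384 / 0.16 / 1024 }'   # ~100 KB/s
    awk 'BEGIN { printf "%.0f KB/s\n", 131072 / 0.16 / 1024 }'   # ~800 KB/s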
On Sun, Sep 30, 2018 at 11:33 AM Mathias Weiersmüller
wrote:
> Sorry for the extremely brief answer:
> - you mentioned you have 160 ms latency.
Yes, I mentioned this because I've read somewhere (I don't
remember where now) that the `SO_SNDBUF` socket option also
impacts the TCP window size.
On 29.09.2018 at 20:41, Willy Tarreau wrote:
> Subject: [ANNOUNCE] haproxy-1.9-dev3
> To: haproxy@formilux.org
>
> Hi,
>
> Now that Kernel Recipes is over (it was another awesome edition), I'm back
> to my haproxy activities. Well, I was pleased to see that my coworkers
> reserved a nice surprise for me
However the bandwidth behaviour is exactly the same:
* no `tune.sndbuf.client`, bandwidth goes up to 11 MB/s for a large download;
* with `tune.sndbuf.client 16384` it goes up to ~110 KB/s;
* with `tune.sndbuf.client 131072` it goes up to ~800 KB/s;
* with `tune.sndbuf.client 262144` it goes up to
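A hypothetical way to reproduce per-setting measurements like these (the
URL is made up):

    # average download speed for one large object
    curl -o /dev/null -w 'avg: %{speed_download} bytes/s\n' \
        http://example.test/big.bin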
On Sun, Sep 30, 2018 at 10:35 AM Willy Tarreau wrote:
> Note that these are not fragments but segments. And as Matti suggested,
> it's indeed due to GSO, you're seeing two TCP frames sent at once through
> the stack, and they will be segmented by the NIC.
I have disabled all offloading features:
On Sun, Sep 30, 2018 at 10:35 AM Willy Tarreau wrote:
> On Sun, Sep 30, 2018 at 10:20:06AM +0300, Ciprian Dorin Craciun wrote:
> > I was just trying to replicate the issue I've seen yesterday, and for
> > a moment (in initial tests) I was able to. However on repeated tests
> > it seems that the `
On Sun, Sep 30, 2018 at 10:20:06AM +0300, Ciprian Dorin Craciun wrote:
> On Sun, Sep 30, 2018 at 10:06 AM Mathias Weiersmüller
> wrote:
> > I am pretty sure you have TCP segmentation offload enabled. The TCP/IP
> > stack therefore sends bigger-than-allowed TCP segments towards the NIC,
> > which in turn takes care of the proper segmentation.
On Sun, Sep 30, 2018 at 07:06:29AM +0000, Mathias Weiersmüller wrote:
> I am pretty sure you have TCP segmentation offload enabled. The TCP/IP stack
> therefore sends bigger-than-allowed TCP segments towards the NIC, which in
> turn takes care of the proper segmentation.
>
> You want to check the output of "ethtool -k eth0" and the values of:
On Sun, Sep 30, 2018 at 10:06 AM Mathias Weiersmüller
wrote:
> I am pretty sure you have TCP segmentation offload enabled. The TCP/IP stack
> therefore sends bigger-than-allowed TCP segments towards the NIC, which in
> turn takes care of the proper segmentation.
I was just trying to replicate the issue I've seen yesterday, and for a
moment (in initial tests) I was able to.
I am pretty sure you have TCP segmentation offload enabled. The TCP/IP stack
therefore sends bigger-than-allowed TCP segments towards the NIC, which in
turn takes care of the proper segmentation.
You want to check the output of "ethtool -k eth0" and the values of:
tcp-segmentation-offload
generic-segmentation-offload
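For a clean packet capture you can check and temporarily disable these
offloads (the interface name is an example; requires root):

    # list the relevant offload settings
    ethtool -k eth0 | grep -E '(tcp|generic)-segmentation-offload|generic-receive-offload'
    # turn them off while capturing
    ethtool -K eth0 tso off gso off gro off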