Let Us Pay You to Post Great Content!

2020-05-14 Thread Sarah
Hello again,

This is just a quick follow-up to check if you received my first email (see 
below) regarding contributing to your site?

Talk soon,

Sarah

On Thu, Mar 26, 2020 at 5:12 PM, Sarah  
wrote:

 
Hello,

My name is Sarah and I am in charge of partnerships with great sites like yours.


We are looking to add sites to our publisher database that are interested in 
receiving content from our clients for publication on your sites!


Of course we would only send relevant content, and we are looking for do-follow 
link placements.


In return, we will pay you for each and every post you publish from us!


We have sites making several thousand dollars per month by posting our content. 
Good content and we pay for your time to post.


If this sounds interesting to you, please share the URL of any sites you own 
and would like to have considered for addition to our database.


We will reach out with a few additional questions and a payment offer for your 
site shortly thereafter.


Thank you and talk to you soon,


Sarah


Re: stable-bot: Bugfixes waiting for a release 2.1 (27), 2.0 (24)

2020-05-14 Thread Tim Düsterhus
Hi List,

Am 13.05.20 um 02:00 schrieb stable-...@haproxy.com:
> Last release 2.1.4 was issued on 2020-04-02.  There are currently 27 patches 
> in the queue cut down this way:
> - 10 MEDIUM, first one merged on 2020-05-01
> - 17 MINOR, first one merged on 2020-04-02
> 
> Thus the computed ideal release date for 2.1.5 would be 2020-04-30, which was 
> two weeks ago.
> 
> Last release 2.0.14 was issued on 2020-04-02.  There are currently 24 patches 
> in the queue cut down this way:
> - 12 MEDIUM, first one merged on 2020-05-07
> - 12 MINOR, first one merged on 2020-04-02
> 
> Thus the computed ideal release date for 2.0.15 would be 2020-04-30, which 
> was two weeks ago.

Is there any date planned for 2.1.5? I'm still running 2.1.3 on one
machine, because I use Dovecot.

Best regards
Tim Düsterhus



raise() on HAProxy 2.0

2020-05-14 Thread Olivier D
Hello,

I'm spamming a lot these days :)

I found a strange crash on HAProxy 2.0.14 that started a few days ago
for no apparent reason. It's not a plain segfault but an abort() via raise().

Stacktrace :

#0  0x7fde8c9f8495 in raise () from /lib64/libc.so.6
#1  0x7fde8c9f9c75 in abort () from /lib64/libc.so.6
#2  0x7fde8ca363a7 in __libc_message () from /lib64/libc.so.6
#3  0x7fde8ca3bdee in malloc_printerr () from /lib64/libc.so.6
#4  0x7fde8ca3ec3d in _int_free () from /lib64/libc.so.6
#5  0x0047a885 in ssl_sock_free_ssl_conf () at src/ssl_sock.c:3740
#6  0x0047bdd2 in ssl_sock_free_all_ctx () at src/ssl_sock.c:5063
#7  0x0047c301 in ssl_sock_destroy_bind_conf () at
src/ssl_sock.c:5095
#8  0x0050c8fb in deinit () at src/haproxy.c:2533
#9  0x0050dc3f in main () at src/haproxy.c:3449


This seems to happen when issuing the following command:
echo "set ssl ocsp-response xxx" | socat stdio /var/run/haproxy.sock

This is the first time I have seen such behaviour :/

I can provide a "bt full" output privately if needed.

HAProxy build options:
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_THREAD=0 USE_STATIC_PCRE=1 USE_OPENSSL=1 USE_LUA=1
USE_ZLIB=1 USE_NS=

Built with OpenSSL version : OpenSSL 1.1.1g  21 Apr 2020



Olivier


minor typo?

2020-05-14 Thread Olaf Buitelaar
Dear HaProxy folks,

I'm not sure, but I think there is a "d" missing in this line in the
performance section of the homepage:
optimized *HTTP header analysis* : headers are parsed an interpreted on the
fly,
I think it should be:
optimized *HTTP header analysis* : headers are parsed an*d* interpreted on
the fly,
Sorry for nitpicking... or if I'm wrong.

Best Olaf


Re: [External] Re: [RFC PATCH] Improved moving averages

2020-05-14 Thread Marcin Deranek
Hi Willy,

On Wed, May 13, 2020 at 11:25 AM Willy Tarreau  wrote:

> Hi Marcin!
>
> On Tue, May 12, 2020 at 12:17:19PM +0200, Marcin Deranek wrote:
> > Hi,
> >
> > Not a long ago while investigating some issues with one of the services I
> > stumbled across Connect Time (based on ctime metric) graphs (see
> > ctime-problem.png). It turned out that metrics were nowhere near reality
> -
> > they were trying to reach real average value, but never got there as each
> > reload reset it back to 0. Keep in mind this was happening on a
> > multi-process HAProxy setup for a service with relatively low/medium
> amount
> > of traffic. I did a bit of investigation, tested different scenarios /
> > algorithms, selected the most optimal one and deployed it on one of our
> > load balancers (see low-traffic-compare.png and
> > medium-traffic-compare.png). ctime represents current algorithm and
> ctime2
> > represents new algorithm for calculating moving averages. As you can see,
> > differences can be dramatic over long periods of time.
>
> The improvements are indeed really nice!
>

Glad to hear that!


> > Drops of ctime
> > metric represent reloads of an HAProxy instance. Spikes of ctime2 are due
> > to http-reuse option - after reloading a new instance cannot reuse
> existing
> > connections, so it has to establish new connections, so timing goes up
> > (this is expected).
> > See a proposed patch. Few comments about the patch:
> > - The patch changes behaviour of moving average metrics generation (eg.
> > ctime, ttime) in a way that metric is not generated if there is no data.
> > Currently metrics start with 0 and it's not possible to distinguish if
> > latency is 0 (or close to 0) or there is no data. You can always check
> > req_tot or lbconn (depending on mode), but that makes things much more
> > complicated thus I decided to only expose those metrics if there is data
> > (at least 1 request has been made). Gaps on low-traffic-compare.png graph
> > indicate that during that period there were no requests and thus we
> return
> > no data.
>
> In fact it's a different metric (though very useful). I've had the same
> needs recently. The current ctime reports the avg time experienced by the
> last 1024 *requests* and is documented as such, so when you want to think
> in terms of user experience it's the one to consider. For example, if you
> have a 99% reuse rate and 1% connect rate, even a DC far away at 100ms
> will only add 1ms on average to the request time, because 99% of the time,
> the connect time really is zero for the request. Ditto if you're using
> the cache. Your timer reports the average connect time per *connection*,
> and there it's much more suitable to analyse the infrastructure's impact
> on performance. Both are equally useful but do not report the same metric.
>

It was not my intention to change that. In my mind they both report the
very same thing, with one major difference: the current implementation produces
misleading results before it has processed at least TIME_STATS_SAMPLES samples
(it always assumes at least TIME_STATS_SAMPLES samples have already been
processed), whereas the new implementation dynamically scales the n value
depending on the number of samples processed so far. In fact, up to
TIME_STATS_SAMPLES samples it should produce an exact moving average. The new
implementation takes the reuse logic into account too, but after a reload there
are no connections to be reused, which is why latency sharply goes up (I
actually looked at the timings reported in log entries to confirm that). ctime,
ttime etc. are also produced for tcp mode (which makes sense in my mind), so
the documentation might not be accurate in this matter (I just tested it).


> I'd be in favor of creating a new one for yours. Anyway we're still missing
> a number of other ones like the average TLS handshake cost on each side,
> which should also be useful both per connection and per request. I'm saying
> this in case that helps figuring a pattern to find a name for this one :-)
>

I don't mind creating additional metrics if they make sense, of course. One
of them (which could help avoid exceptions) would be req_tot for a
server (it's already available for backends). To dynamically scale n, req_tot
is much more suitable than lbconn for http mode (lbconn is incremented much
earlier than req_tot, creating a time window during which metrics can be
incorrectly calculated, leading to wrong averages). Of course, if you still
think it's a good idea to separate them (personally I'm not convinced, unless
you don't want to change the behaviour of the existing metric) I can create new
metrics, e.g. ctime_avg. Keep in mind that I already decided (as you
suggested) to create new functions.
What is your take on not exposing the metric if there is no data? There
were no requests and we don't know what the connect time is. Shall we report 0
or nothing (an empty value)?


> > - I haven't changed a similar swrate_add_scaled function as it's not used
> > yet and the description feels a bit misleading 

Re: [PATCH] DOC: retry-on can only be used with mode http

2020-05-14 Thread William Lallemand
On Wed, May 13, 2020 at 09:59:38PM +0200, Jerome Magnin wrote:
> Hi,
> 
> with github issue #627 we've had at least one report of someone using retry-on
> with mode tcp. Olivier fixed the crash, and I propose the attached patch to
> better document that retry-on is only valid when used with mode http and
> ignored otherwise.
> 
> Jérôme

> From e030ea97758cc8b6af5f655637137230e9a1791f Mon Sep 17 00:00:00 2001
> From: Jerome Magnin 
> Date: Wed, 13 May 2020 20:09:57 +0200
> Subject: [PATCH] DOC: retry-on can only be used with mode http

Thanks, merged.

-- 
William Lallemand



Re: Logging captured payload not working

2020-05-14 Thread Christopher Faulet

Le 14/05/2020 à 07:38, Tom a écrit :

Hi

I'm still searching a way for capturing the request payload with
haproxy-2.1.3 as it was possible with haproxy-1.8.x.

config-snippet:

---

declare capture request len 9
declare capture response len 9
log-format "srcip=%ci:%cp feip=%fi:%fp(%f,%ft,%fc) beip=%bi:%bp(%b,%bc)
serverip=%si:%sp(%s) "%r" %ac/%fc/%bc/%sc/%rc %sq/%bq requests=%rt
resptime=%Tr bytesread=%B status=%ST tsc=%tsc sslv=%sslv ms=%ms
request=%hr response=%hs"
http-request capture req.payload(0,0) id 0
http-response capture res.payload(0,0) id 0

---

The log shows only "{#D0?}" instead of the requested payload:

...sslv=TLSv1.3 ms=325 request={#D0?} response={#D0?}


Any hints for this? Or another way configuring this?



Hi,

The internal representation of HTTP messages changed in 1.9. Before, the 
message was stored in its raw form. But to fully support H2 and future HTTP 
versions, we decided to change it to a structured representation, called HTX. 
In 1.9 and 2.0, both representations are available. In 1.9, the old one is the 
default; in 2.0, the new one is. For both of these versions, you can set or 
unset the http-use-htx option in your configuration to change the HTTP message 
representation.
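For reference, the toggle looks like this (a sketch only; place it in your
defaults or proxy section as appropriate):

```
defaults
    # 1.9: enable the new HTX representation explicitly
    option http-use-htx
    # 2.0: HTX is the default; revert to the legacy representation with:
    # no option http-use-htx
```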


But in 2.1, the old HTTP representation was dropped. req.payload() and 
res.payload() are L6 sample fetches, so they are not aware of the HTX 
representation. It is no longer possible to get the HTTP message using these 
sample fetches. I must fix that one way or another.


For now, you may use req.hdrs() to get all request headers. Unfortunately, 
res.hdrs() was only introduced recently, in 2.2. I guess it could be backported. 
In the same way, the request body can be retrieved using req.body. Here too, 
res.body was introduced in 2.2 and could probably be backported.
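A request-side capture along those lines could look like this (an untested
sketch; frontend/backend names are illustrative, and option http-buffer-request
is needed so req.body sees the buffered body):

```
defaults
    mode http
    # buffer the whole request so req.body has the body available
    option http-buffer-request

frontend fe
    bind :8080
    declare capture request len 9
    http-request capture req.body id 0
    # the captured slot is then available as %hr or capture.req.hdr(0)
    default_backend be
```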


--
Christopher Faulet