Re: how to write to a file safely in haproxy

2021-06-10 Thread Willy Tarreau
Hello,

On Fri, Jun 11, 2021 at 11:38:38AM +0530, reshma r wrote:
> Hello all,
> I had a follow-up query; sorry if it is an obvious question. Once I
> implement the socket call as a sidecar process, say as a Lua script that
> reads the new configuration from the portal into a variable, how would I
> then update this variable within haproxy so it can be used by the Lua
> script loaded with the lua_load directive? I understand we can do it
> through the stats socket via the CLI and socat, but is there a way to do
> it from within the Lua script (the one running as the sidecar process)
> itself, so that I can check and update the portal config variable every
> 5 minutes automatically?

There are various ways. Your Lua script could periodically connect outside
to retrieve some data and set local variables, for example. If you'd prefer
to control this from outside, you could also write an HTTP service in Lua
that receives a certain URL after appropriate controls, and extracts info
from the query string, payload or headers, and sets variables as well. I
don't have many examples to point you to, though.
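
That said, a very rough sketch of the CLI-driven sidecar variant you
mentioned (the portal URL, socket path and map file below are only
placeholders I'm assuming, not a tested setup):

    #!/bin/sh
    # Poll the portal every 5 minutes and push the value into a map
    # that haproxy already loaded at boot, so "set map" finds the key.
    while true; do
        val=$(curl -sf https://portal.example.com/config) || val=""
        if [ -n "$val" ]; then
            # a single-token value is assumed; spaces would need encoding
            echo "set map /etc/haproxy/portal.map cfg $val" \
                | socat stdio unix-connect:/var/run/haproxy.sock
        fi
        sleep 300
    done

Your configuration or Lua code can then read the current value back
through the map converter or the Map class, without touching any file at
run time.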

Hoping this helps,
Willy



Re: how to write to a file safely in haproxy

2021-06-10 Thread reshma r
Hello all,
I had a follow-up query; sorry if it is an obvious question. Once I
implement the socket call as a sidecar process, say as a Lua script that
reads the new configuration from the portal into a variable, how would I
then update this variable within haproxy so it can be used by the Lua
script loaded with the lua_load directive? I understand we can do it
through the stats socket via the CLI and socat, but is there a way to do
it from within the Lua script (the one running as the sidecar process)
itself, so that I can check and update the portal config variable every
5 minutes automatically?


Thank you

On Thu, 27 May 2021, 17:37 reshma r wrote:

> Hi, thank you for the detailed and informative reply! It definitely helped
> clarify things.
> Indeed I had disabled chroot to read the files at runtime, but have since
> switched to reading during the init phase. Thank you for the tips on the
> architecture aspect as well. I will study and explore these options.
>
> Thanks,
> Reshma
>
> On Thu, May 27, 2021 at 11:50 AM Willy Tarreau wrote:
>
>> Hi,
>>
>> On Wed, May 26, 2021 at 10:43:17PM +0530, reshma r wrote:
>> > Hi Tim, thanks a lot for the reply. I am not familiar with what a sidecar
>> > process is. I will look into it. If it is specific to haproxy, could you
>> > point me to some relevant documentation? That would be helpful.
>>
>> It's not specific to haproxy, it's a general principle consisting of
>> having another process deal with certain tasks. See it as an assistant
>> if you want. We can draw a parallel at lower layers to make it clearer.
>> Your kernel deals with routing tables, yet it never manipulates files by
>> itself, nor does it stop processing packets to dump a routing table
>> update into a file. Instead it's up to separate processes to perform
>> such slow tasks and keep everything in sync.
>>
>> > > I am making a
>> > > socket call which periodically checks whether the portal has been
>> > > changed (from within a haproxy action).
>> >
>> > Leaving aside the writing-to-file bit for a moment, is it otherwise okay
>> > to do the above within haproxy alone and read the config fetched from
>> > the portal into a global variable instead of saving to a file? Or is it
>> > not an advisable solution? Actually this is what I am doing at present
>> > and I have not observed any issues with performance...
>>
>> What you must never, ever do is read/write files at run time, as this is
>> extremely slow and will pause your traffic. Loading a file during boot
>> from Lua, for example, can be supported, since at that point there is no
>> network processing in progress. We only slightly discourage doing so
>> because most often the code starts by reading during init, and two months
>> later it ends up being done at runtime (and people start disabling
>> chroots and permission drops in order to do this).
>>
>> It's possible to read/set process-wide variables from the CLI, so maybe
>> you can send some events there. Also, as Tim mentioned, it's possible to
>> read/set maps from the CLI, and these are also readable from Lua. That
>> may be another option for passing live info between an external process
>> and your haproxy config or Lua code. In fact, nowadays plenty of people
>> are (ab)using maps as dynamic routing tables or to store dynamic
>> thresholds. What is convenient with them is that they're loaded during
>> boot, and you can feed the whole file over the CLI at runtime to pass
>> updates. Maybe that can match your needs.
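>>
>> For example, refreshing a boot-loaded map from an external process could
>> look roughly like this (the socket and file paths are just placeholders,
>> and it pays one CLI round trip per entry, which is fine for small maps):
>>
>>     # the map must be referenced in the config, e.g. map(/etc/haproxy/portal.map)
>>     echo "clear map /etc/haproxy/portal.map" \
>>         | socat stdio unix-connect:/var/run/haproxy.sock
>>     while read -r key value; do
>>         echo "add map /etc/haproxy/portal.map $key $value" \
>>             | socat stdio unix-connect:/var/run/haproxy.sock
>>     done < portal.map.new
>>
>> Note the map is briefly empty between the clear and the re-adds; if that
>> matters, "set map" on existing keys avoids the gap.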
>>
>> Last point, as a general architecture rule: as soon as you're using
>> multiple components (portal, LB, agents, etc.), it's critically important
>> to define who's authoritative over the others. Once you do that, you need
>> to make sure that the other components can be sacrificed and restarted.
>> In your case I suspect the authority is the portal and that the rest can
>> be killed and restarted at any time. This means that the trust you put in
>> such components must always be lower than the trust you put in the
>> authority (the portal, I presume).
>>
>> Thus these components must not play games like dumping files by
>> themselves. At best they could be consulted to retrieve the current state
>> to be reused in case of a reload, but your portal should be the one
>> imposing its desired state on the others. For example, imagine you face a
>> bug, a crash, an out-of-memory condition or whatever situation where your
>> haproxy dies in the middle of a dump to this file. Your file is ruined
>> and you cannot reuse it. Possibly you can't even restart the service
>> anymore because the corrupted file causes startup errors.
>>
>> This means you'll necessarily have to make sure that a fresh new copy
>> can be instantly delivered by the portal just to cover this unlikely
>> case. If you implement this capability in your portal, then it should
>> become the standard way to produce that file (you don't want the file
>> to come from two different sources, do you?). Then you can simply have
>> a sidecar process dedicated to c

Re: Upgrading from 1.8 to 2.4, getting warning I can't figure out

2021-06-10 Thread Shawn Heisey

On 6/8/2021 1:47 AM, Remi Tricot-Le Breton wrote:
OCSP stapling won't work on any version that shows this warning (for
this specific response). But apart from that, everything else should
work fine; that's why you only get a warning when parsing the
configuration file. If you are positive that your OCSP response is
valid, we may indeed have a bug on our side.


The warning was completely valid: I was getting a bad OCSP response
with my script running "openssl ocsp".


The problem turned out to be that I had the wrong issuer certificate. I 
updated that .pem file and now it works.
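
For reference, a minimal version of such a refresh looks roughly like
this (paths and the responder URL are placeholders, not my actual
setup); the -issuer file is the one that had to be corrected here:

    # fetch a fresh OCSP response for the certificate, verified against
    # the issuer certificate, and store it where haproxy loads it at
    # startup (next to the certificate, with a .ocsp suffix)
    openssl ocsp -issuer issuer.pem \
        -cert /etc/haproxy/certs/site.pem \
        -url http://ocsp.example.net -no_nonce \
        -respout /etc/haproxy/certs/site.pem.ocsp

The new response is picked up on reload, or can be pushed at runtime
with "set ssl ocsp-response" on the CLI.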


Thanks for the assist, and I think there probably is no bug.

Shawn



Re: Speeding up opentracing build in CI ?

2021-06-10 Thread Tim Düsterhus

William,

On 6/10/21 5:48 PM, William Lallemand wrote:

Looks fine to me, but from what I remember when debugging some reg-tests
there was only one CPU available; I hope I'm wrong.



GitHub-provided Action runners are 2-core VMs:

https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources

Best regards
Tim Düsterhus



Re: Speeding up opentracing build in CI ?

2021-06-10 Thread Илья Шипицин
I was mistaken. LibreSSL does not like a parallel install:

"libressl fails on `make -j4 install`", Issue #461, libressl-portable/portable:
https://github.com/libressl-portable/portable/issues/461
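
The usual workaround seems to be to parallelize only the build and keep
the install step serial, something like:

    make -j"$(nproc)"
    make install    # a serial install avoids the race above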



Anyway, if the CI works, I'm OK with the changes.

On Thu, 10 Jun 2021 at 20:49, William Lallemand wrote:

> On Thu, Jun 10, 2021 at 07:52:23AM +0200, Willy Tarreau wrote:
> > Subject: Re: Speeding up opentracing build in CI ?
> >
> > On Thu, Jun 10, 2021 at 07:19:37AM +0200, Willy Tarreau wrote:
> > > On Thu, Jun 10, 2021 at 10:15:46AM +0500, Илья Шипицин wrote:
> > > > OT takes about 30 sec (it is built with almost everything disabled).
> > > > The biggest time eater is openssl-3.0.0.
> > >
> > > Maybe that one could be sped up too; I haven't checked if it uses
> > > parallel builds.
> >
> > So I checked. Good news, it wasn't parallel either, and this alone:
> >
> > --- a/scripts/build-ssl.sh
> > +++ b/scripts/build-ssl.sh
> > @@ -21,7 +21,8 @@ build_openssl_linux () {
> >  (
> >  cd "openssl-${OPENSSL_VERSION}/"
> >  ./config shared --prefix="${HOME}/opt" --openssldir="${HOME}/opt" -DPURIFY
> > -make all install_sw
> > +make -j$(nproc) all
> > +make install_sw
> >  )
> >  }
> >
> > is enough to drop from 4:52 to 1:28 on my machine. About 1/4 of this
> > time is used to build man and HTML pages that we don't use. Instead of
> > the "all" target, we should use "build_sw":
> >
> > --- a/scripts/build-ssl.sh
> > +++ b/scripts/build-ssl.sh
> > @@ -21,7 +21,8 @@ build_openssl_linux () {
> >  (
> >  cd "openssl-${OPENSSL_VERSION}/"
> >  ./config shared --prefix="${HOME}/opt" --openssldir="${HOME}/opt" -DPURIFY
> > -make all install_sw
> > +make -j$(nproc) build_sw
> > +make install_sw
> >  )
> >  }
> >
> > This further brings the time down to 1:09, hence more than 4 times
> > faster than the initial one. It should probably be tested on macOS to be
> > certain it's OK there as well, and I don't know how to get the CPU count
> > there (or maybe we could just force it to a low value like 2 or 4).
> >
> > Willy
> >
>
> Looks fine to me, but from what I remember when debugging some reg-tests
> there was only one CPU available; I hope I'm wrong.
>
> --
> William Lallemand
>


Re: Speeding up opentracing build in CI ?

2021-06-10 Thread William Lallemand
On Thu, Jun 10, 2021 at 07:52:23AM +0200, Willy Tarreau wrote:
> Subject: Re: Speeding up opentracing build in CI ?
>
> On Thu, Jun 10, 2021 at 07:19:37AM +0200, Willy Tarreau wrote:
> > On Thu, Jun 10, 2021 at 10:15:46AM +0500, Илья Шипицин wrote:
> > > OT takes about 30 sec (it is built with almost everything disabled). The
> > > biggest time eater is openssl-3.0.0.
> > 
> > Maybe that one could be sped up too; I haven't checked if it uses parallel
> > builds.
> 
> So I checked. Good news, it wasn't parallel either, and this alone:
> 
> --- a/scripts/build-ssl.sh
> +++ b/scripts/build-ssl.sh
> @@ -21,7 +21,8 @@ build_openssl_linux () {
>  (
>  cd "openssl-${OPENSSL_VERSION}/"
>  ./config shared --prefix="${HOME}/opt" --openssldir="${HOME}/opt" -DPURIFY
> -make all install_sw
> +make -j$(nproc) all
> +make install_sw
>  )
>  }
> 
> is enough to drop from 4:52 to 1:28 on my machine. About 1/4 of this time
> is used to build man and HTML pages that we don't use. Instead of the "all"
> target, we should use "build_sw":
> 
> --- a/scripts/build-ssl.sh
> +++ b/scripts/build-ssl.sh
> @@ -21,7 +21,8 @@ build_openssl_linux () {
>  (
>  cd "openssl-${OPENSSL_VERSION}/"
>  ./config shared --prefix="${HOME}/opt" --openssldir="${HOME}/opt" -DPURIFY
> -make all install_sw
> +make -j$(nproc) build_sw
> +make install_sw
>  )
>  }
> 
> This further brings the time down to 1:09, hence more than 4 times faster
> than the initial one. It should probably be tested on macOS to be certain
> it's OK there as well, and I don't know how to get the CPU count there (or
> maybe we could just force it to a low value like 2 or 4).
> 
> Willy
> 

Looks fine to me, but from what I remember when debugging some reg-tests
there was only one CPU available; I hope I'm wrong.

-- 
William Lallemand