Networking

2022-04-29 Thread Nick Owen
So I am pretty new to networking and I am not quite sure how to set up the 
config file correctly. I just want a simple reverse proxy, and I have created a 
diagram to show you how I'd like it configured. If you have any sites or 
examples that could point me in the right direction, that'd be great.
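A minimal reverse-proxy sketch in haproxy's own config syntax; all names, the port, and the backend address below are placeholders, not taken from the diagram:

```
# Listen on port 80 and forward everything to one backend server.
global
    maxconn 2000

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind :80
    default_backend app

backend app
    # Placeholder address; point this at the real application server.
    server app1 127.0.0.1:8080 check
```

With something like this saved as /etc/haproxy/haproxy.cfg, `haproxy -c -f /etc/haproxy/haproxy.cfg` validates the file before you reload.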




Re: PEM Certificates for HAproxy

2022-04-29 Thread Nicolas CARPi
On 29 Apr, Shawn Heisey wrote:
> I know that a fresh install can be instantly operational with TLS,
> suggesting that it is not generating them on the fly ... so I really wonder
> how secure the default params are.  I wonder what is being used when there
> are no params in the cert file. Does it get something hardcoded and use that
> until params generated in the background can be swapped in?
You'll want to have a look at this issue:
https://github.com/haproxy/haproxy/issues/1604

Indeed, HAProxy has default ones, and by reading the issue and Lukas's 
comments you'll understand why DH params are a thing of the past (if you 
use modern ciphers), and why generating them yourself is not even that 
great to begin with.
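As an illustration of the "modern ciphers" point: if the bind lines only ever negotiate ECDHE key exchange, classic DHE is never used, so DH params become irrelevant. A sketch only; the cipher string is an example and should be checked against your OpenSSL build:

```
global
    # Only ECDHE key exchange for TLS <= 1.2; DHE is never negotiated,
    # so DH parameters are never consulted.
    ssl-default-bind-ciphers ECDHE+AESGCM:ECDHE+CHACHA20
    ssl-default-bind-options ssl-min-ver TLSv1.2
```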

(I'm the author of the issue btw)

Best,
~Nico




Re: PEM Certificates for HAproxy

2022-04-29 Thread Shawn Heisey

On 4/29/22 12:42, Branitsky, Norman wrote:


If you include the following in your HAProxy configuration global 
section you don't need to include DH Params in the certificate:


tune.ssl.default-dh-param 2048
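For context, the directive quoted above lives in the global section; a sketch:

```
global
    # Fallback DH parameter size used when a certificate file
    # carries no DH params of its own.
    tune.ssl.default-dh-param 2048
```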



It takes several minutes to generate params, so I doubt that with that 
option there would be different params for each certificate.  It is 
my understanding that when they are included in the cert file, each cert 
can have different params.  Part of my automated cert renewal process 
includes generating brand new dh params.


I know that a fresh install can be instantly operational with TLS, 
suggesting that it is not generating them on the fly ... so I really 
wonder how secure the default params are.  I wonder what is being used 
when there are no params in the cert file. Does it get something 
hardcoded and use that until params generated in the background can be 
swapped in?


Thanks,
Shawn




Re: PEM Certificates for HAproxy

2022-04-29 Thread Shawn Heisey

On 4/29/22 11:16, Henning Svane wrote:

I have tried to build a PEM Certificate, but with no luck.

What should it include and in which order?



I use certs issued by LetsEncrypt.

My certificate file that I use for haproxy and most other software doing 
TLS has four PEM-encoded items in it:


Server cert
LetsEncrypt Issuing cert
Private Key
DH Params

The file is owned by root and has 600 permissions.

The only ordering that is likely to matter is having the server cert 
before the issuing cert.
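The four-item bundle described above can be assembled with plain `cat`. The sketch below creates a throwaway self-signed cert so it is self-contained; with LetsEncrypt you would concatenate `fullchain.pem` (server cert plus issuing cert) and `privkey.pem` instead. The `-dsaparam` flag is only there to keep the demo fast; drop it when generating real DH params:

```shell
# Stand-in for the server + issuing certs: a throwaway self-signed cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.test" -keyout privkey.pem -out cert.pem
# Generate DH params (-dsaparam trades quality for speed, demo only).
openssl dhparam -dsaparam -out dhparam.pem 2048
# Order: cert(s) first, then key, then DH params.
cat cert.pem privkey.pem dhparam.pem > site.pem
chmod 600 site.pem
```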


You do not normally need to include the CA's root certificate in the 
file -- the browser already has root certificates for any authority that 
it trusts ... that is how trust is established. Unless you created the 
cert yourself, what you want to have in your file is certs for the 
entire trust chain *EXCEPT* for the root cert.


Most software will ignore DH Params in the certificate file.  It is my 
understanding that haproxy actually uses it.  So each cert file that I 
employ gets its own 4096 bit DH Params.  My cert is also 4096 bit.


Thanks,
Shawn




PEM Certificates for HAproxy

2022-04-29 Thread Henning Svane
Hi

I have tried to build a PEM Certificate, but with no luck.
What should it include and in which order?

The PEM file from the Exchange Server includes Attributes blocks; should these 
be removed from the private PEM file?
Here are all the certificates I have.
Also, from DigiCert, which certificates should I include?

  *   Intermediate Certificate
  *   Root Certificate
From the Private Certificate I have

  *   Private Certificate
  *   Public Certificate

Here is the private certificate with the mentioned Attributes blocks:
Bag Attributes
Microsoft Local Key set: 
localKeyID: 01 00 00 00
friendlyName: xx-xx----
Microsoft CSP Name: Microsoft RSA SChannel Cryptographic Provider
Key Attributes
X509v3 Key Usage: 10
-----BEGIN PRIVATE KEY-----
(Private certificate has been removed)
-----END PRIVATE KEY-----
Bag Attributes
localKeyID: 01 00 00 00
friendlyName: "friendly Name"
subject=C = DK, L = Copenhagen, O = "Company name", CN = "Common name"

issuer=C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1

-----BEGIN CERTIFICATE-----
(Certificate has been removed)
-----END CERTIFICATE-----
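On the Attributes question in general terms: the text around the PEM blocks is pkcs12-export metadata that TLS software ignores, and it can be stripped by re-emitting the blocks through openssl. A self-contained sketch — a throwaway key and cert stand in for the Exchange export, and all file names are examples:

```shell
# Throwaway key/cert standing in for the Exchange export.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo.test" -keyout raw-key.pem -out raw-cert.pem
# Simulate the pkcs12-style "Bag Attributes" text around the blocks.
{ echo "Bag Attributes"; echo "    friendlyName: demo"; cat raw-key.pem; } > exported-key.pem
{ echo "Bag Attributes"; echo "    friendlyName: demo"; cat raw-cert.pem; } > exported-cert.pem
# Re-emitting keeps only the PEM block and drops the surrounding metadata.
openssl pkey -in exported-key.pem -out privkey.pem
openssl x509 -in exported-cert.pem -out cert.pem
```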

Regards
Henning


Re: Thoughts on QUIC/HTTP3

2022-04-29 Thread Shawn Heisey

On 4/25/22 10:55, Shawn Heisey wrote:
I was testing with the master branch from 
https://github.com/haproxy/haproxy.git. Just pulled down the latest 
changes, built it, and installed it.  Now I am sometimes seeing 
different behavior on the large POST.  It will load a page quickly 
sometimes, returning to the same page with blank fields, just as it 
would when first going there.  Another time, apache returned a 504 
error, which is very weird.  When haproxy got the 504, it sent its own 
error page. 


I did a build and install this morning; there were a bunch of quic-related 
changes in it.  Now everything seems to be working on my paste site.  Large 
pastes work, and I can reload the page a ton of times without it hanging 
until browser restart.


I changed the URL of my paste website; now that everything seems to 
be working, it is still using http3:


https://stikked.elyograg.org/

Thanks,
Shawn




[ANNOUNCE] haproxy-2.3.20

2022-04-29 Thread Christopher Faulet

Hi,

HAProxy 2.3.20 was released on 2022/04/29. It added 41 new commits
after version 2.3.19.

The 2.3 branch was planned to be EOL last quarter, and there are no longer bug
reports for this specific branch. Thus, this is probably the last 2.3
release. Unless critical bugs show up in the next few weeks, no further
release should be expected. You should have no reason to deploy it anymore
in a production environment; use 2.4 instead. No further support should be
expected for 2.3.

Here are the main changes for this release, copied from the 2.4.16 announcement:

 * An internal issue leading to truncated messages was fixed. When data were
   mixed with an error report, connection errors could be handled too early
   by the stream-interface. Now connection errors are only considered by the
   stream-interface during the connection establishment. After that, it
   relies on the conn-stream to be notified of any error.

 * An issue in the pass-through multiplexer, exposed by the previous fix,
   and that may lead to a loop at 100% CPU was fixed. Connection error was
   not properly reported to the conn-stream on the sending path.

 * An issue with the FCGI multiplexer when the response is compressed was
   fixed. The FCGI application was rewriting the response headers by modifying
   HTX flags while the compression filter was doing so by modifying the HTTP
   message flags. Thus some modifications performed on one side were not
   detected by the other, leading to invalid responses. Now, the
   flags of both structures are systematically updated.

 * An issue with responses to HEAD requests sent to FCGI servers was fixed.
   A "Content-Length: 0" header was erroneously added to bodyless
   responses when it should not be. Indeed, if the expected payload size is
   not specified by the server, HAProxy must not add this header because it
   cannot know it. In addition, still in the FCGI multiplexer, the parsing
   of headers and trailers was fixed to properly handle parsing errors.

 * Two issues in the H1 multiplexer were fixed. First, a connection error was
   reported too early, when there were still pending data for the
   stream. Because of this bug, the last pending data could be truncated. Now
   the connection error is reported only if there is no pending data. The
   second issue is a problem with full buffer detection during trailers
   parsing. Because of this bug, it was possible to block the message
   parsing until the timeout expired. The same bug was fixed in the
   processing of the EOM block.

 * Some issues in the H2 multiplexers were fixed. First the GOAWAY frame is
   no longer sent if SETTINGS were not sent. Then, as announced, the
   "timeout http-keep-alive" and "timeout http-request" are now respected
   and work as documented, so that it will finally be possible to force such
   connections to be closed when no request comes even if they're seeing
   control traffic such as PING frames. This can typically happen in some
   server-to-server communications whereby the client application makes use
   of PING frames to make sure the connection is still alive.

 * A crash of HAProxy was fixed. It happened when HAProxy was compiled
   without PCRE/PCRE2 support and tried to replace part of the URI
   while the path was invalid or not specified.

 * An issue with the url_enc() converter, which could corrupt HTTP headers,
   was fixed.

 * Expired entries were displayed in "show cache" output. These entries are
   now evicted instead of being listed.

Thanks everyone for your help and your contributions !

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Documentation: http://docs.haproxy.org/
   Wiki : https://github.com/haproxy/wiki/wiki
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Sources  : http://www.haproxy.org/download/2.3/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.3.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.3.git
   Changelog: http://www.haproxy.org/download/2.3/src/CHANGELOG
   Pending bugs : http://www.haproxy.org/l/pending-bugs
   Reviewed bugs: http://www.haproxy.org/l/reviewed-bugs
   Code reports : http://www.haproxy.org/l/code-reports


---
Complete changelog :
Christopher Faulet (14):
  BUG/MEDIUM: mux-fcgi: Properly handle return value of headers/trailers 
parsing
  BUG/MEDIUM: mux-h1: Properly detect full buffer cases during message 
parsing
  BUG/MEDIUM: mux-h1: Properly detect full buffer cases when adding EOM 
block
  BUG/MINOR: fcgi-app: Don't add C-L header on response to HEAD requests
  BUG/MEDIUM: http-conv: Fix url_enc() to not crush const samples
  BUG/MEDIUM: http-act: Don't replace URI if path is not found or invalid
  BUG/MEDIUM: mux-h1: Don't request more room on partial trail

[ANNOUNCE] haproxy-2.4.16

2022-04-29 Thread Christopher Faulet

Hi,

HAProxy 2.4.16 was released on 2022/04/29. It added 65 new commits
after version 2.4.15.

This release is pretty similar to the 2.5.6 released earlier in the
week. Thus, here is a copy of the relevant parts:

 * An internal issue leading to truncated messages was fixed. When data were
   mixed with an error report, connection errors could be handled too early
   by the stream-interface. Now connection errors are only considered by the
   stream-interface during the connection establishment. After that, it
   relies on the conn-stream to be notified of any error.

 * An issue in the idle connections management code was fixed. It's
   extremely hard to hit but it could randomly crash the process under high
   contention on the server side due to a missing lock.

 * An issue in the pass-through multiplexer, exposed by the previous fix,
   and that may lead to a loop at 100% CPU was fixed. Connection error was
   not properly reported to the conn-stream on the sending path.

 * An issue with the FCGI multiplexer when the response is compressed was
   fixed. The FCGI application was rewriting the response headers by modifying
   HTX flags while the compression filter was doing so by modifying the HTTP
   message flags. Thus some modifications performed on one side were not
   detected by the other, leading to invalid responses. Now, the
   flags of both structures are systematically updated.

 * An issue with responses to HEAD requests sent to FCGI servers was fixed.
   A "Content-Length: 0" header was erroneously added to bodyless
   responses when it should not be. Indeed, if the expected payload size is
   not specified by the server, HAProxy must not add this header because it
   cannot know it. In addition, still in the FCGI multiplexer, the parsing
   of headers and trailers was fixed to properly handle parsing errors.

 * Two issues in the H1 multiplexer were fixed. First, a connection error was
   reported too early, when there were still pending data for the
   stream. Because of this bug, the last pending data could be truncated. Now
   the connection error is reported only if there is no pending data. The
   second issue is a problem with full buffer detection during trailers
   parsing. Because of this bug, it was possible to block the message
   parsing until the timeout expired.

 * A design issue with the HTX was fixed. When the EOM HTX block was replaced
   by a flag, we tried hard to make sure the flag was always set on the last
   HTX block.  It works pretty well for all messages received from a client
   or a server. But for internal messages it was not always true,
   especially for messages produced by applets. Some workarounds were found
   to fix this design issue on stable versions, but a more elegant solution
   must be found for 2.6. The Prometheus exporter, the stats applet and Lua
   HTTP applets were affected.

 * Some issues in the H2 multiplexers were fixed. First the GOAWAY frame is
   no longer sent if SETTINGS were not sent. Then, as announced, the
   "timeout http-keep-alive" and "timeout http-request" are now respected
   and work as documented, so that it will finally be possible to force such
   connections to be closed when no request comes even if they're seeing
   control traffic such as PING frames. This can typically happen in some
   server-to-server communications whereby the client application makes use
   of PING frames to make sure the connection is still alive.

 * A crash of HAProxy was fixed. It happened when HAProxy was compiled
   without PCRE/PCRE2 support and tried to replace part of the URI
   while the path was invalid or not specified.

 * An issue with the url_enc() converter, which could corrupt HTTP headers,
   was fixed.

 * Expired entries were displayed in "show cache" output. These entries are
   now evicted instead of being listed.

 * The server queue management was made much more scalable with threads. Until
   now, dequeuing would wake up the next pending entry, which could run on a
   different thread, resulting in a lot of entries in the shared run queue
   when many threads were running. This caused a lot of contention on the
   scheduler's lock, slowing down the dequeuing and adding in turn
   contention on the queue's lock, to the point that a few users were seeing
   similar performance with N threads as with a single thread when queues
   were highly solicited. A small change was made both in the scheduler and
   in the dequeuing code to bypass this locking and completely address the
   issue.

 * Support for MQTT 3.1 was added.

 * An improvement not related to the code: with the precious help of Tim and
   Cyril, we could finally set up automatic generation of the HTML
   documentation. It is performed daily and published on GitHub Pages at
   http://docs.haproxy.org.

Thanks everyone for your help and your contributions!

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
  

Re: valgrind follow up

2022-04-29 Thread Илья Шипицин
On Fri, 29 Apr 2022 at 17:39, Willy Tarreau wrote:

> Hi Ilya,
>
> On Fri, Apr 29, 2022 at 04:35:03PM +0500, Илья Шипицин wrote:
> > Hello,
> >
> > I added sample in my branch: CI: github actions: add valgrind smoke
> tests ·
> > chipitsine/haproxy@7cd7f4a
> > <
> https://github.com/chipitsine/haproxy/commit/7cd7f4ae1ba7751a6b1157b69c273735792aee91
> >
> >
> > here's its run:
> >
> > VTest · chipitsine/haproxy@7cd7f4a (github.com)
> > <
> https://github.com/chipitsine/haproxy/runs/6227296166?check_suite_focus=true
> >
>
> Thanks. Those are interesting, of course, but we must not add to the
> CI something that we already know will report errors, otherwise it
> will end up being quickly disabled by masking other ones. The deinit
> stuff are not real problems, they're just indications of what could
> be cleaned up in case some parts would turn more dynamic in the future.
>
> However once we manage to get rid of all of them, it would be interesting
> to enable them in the CI so that new regressions can be caught. But until
> this happens, it would only be reports for known failures.
>
> Anyway your test is useful in that it reported quite a significant number
> of entries at once, we rarely see so many, so it will be a good starting
> point about new locations to look for.
>

That was the idea behind it:
to report the findings ... and enable it in the CI once ready.


>
> Thanks,
> Willy
>


Re: valgrind follow up

2022-04-29 Thread Willy Tarreau
On Fri, Apr 29, 2022 at 02:43:24PM +0200, Tim Düsterhus wrote:
> > Anyway your test is useful in that it reported quite a significant number
> > of entries at once, we rarely see so many, so it will be a good starting
> > point about new locations to look for.
> 
> Those in Ilya's test are "false positives" insofar as `-cc` currently
> does not yet use deinit_and_exit, but only exit. So there's a huge number of
> live allocations we can already clean.

Ah yes you're right, you even asked me to use deinit_and_exit() in your
last reproducer!

Thanks,
Willy



Re: valgrind follow up

2022-04-29 Thread Tim Düsterhus

Willy,

On 4/29/22 14:39, Willy Tarreau wrote:

However once we manage to get rid of all of them, it would be interesting
to enable them in the CI so that new regressions can be caught. But until
this happens, it would only be reports for known failures.


I agree and I planned to propose that once I've worked through the 
backlog for my production config.



Anyway your test is useful in that it reported quite a significant number
of entries at once, we rarely see so many, so it will be a good starting
point about new locations to look for.


Those in Ilya's test are "false positives" insofar as `-cc` currently 
does not yet use deinit_and_exit, but only exit. So there's a huge 
number of live allocations we can already clean.


Currently a deinit only happens for:

- haproxy -vv
- haproxy -c (if the check is successful, i.e. exit 0).
- SIGUSR1

Best regards
Tim Düsterhus



Re: valgrind follow up

2022-04-29 Thread Willy Tarreau
Hi Ilya,

On Fri, Apr 29, 2022 at 04:35:03PM +0500, Илья Шипицин wrote:
> Hello,
> 
> I added sample in my branch: CI: github actions: add valgrind smoke tests ·
> chipitsine/haproxy@7cd7f4a
> 
> 
> here's its run:
> 
> VTest · chipitsine/haproxy@7cd7f4a (github.com)
> 

Thanks. Those are interesting, of course, but we must not add to the
CI something that we already know will report errors, otherwise it
will end up being quickly disabled by masking other ones. The deinit
stuff are not real problems, they're just indications of what could
be cleaned up in case some parts would turn more dynamic in the future.

However once we manage to get rid of all of them, it would be interesting
to enable them in the CI so that new regressions can be caught. But until
this happens, it would only be reports for known failures.

Anyway your test is useful in that it reported quite a significant number
of entries at once, we rarely see so many, so it will be a good starting
point about new locations to look for.

Thanks,
Willy



valgrind follow up

2022-04-29 Thread Илья Шипицин
Hello,

I added sample in my branch: CI: github actions: add valgrind smoke tests ·
chipitsine/haproxy@7cd7f4a


here's its run:

VTest · chipitsine/haproxy@7cd7f4a (github.com)



Ilya