Not sure if my mails to haproxy mailing lists are being blocked.

2020-09-08 Thread Badari Prasad
Hi Admin,
Need help here, not sure if my mails to the mailing lists are being
blocked. Can you kindly check?

regards
  badari


stable-bot: Bugfixes waiting for a release 2.1 (23), 2.0 (16), 1.8 (8)

2020-09-08 Thread stable-bot
Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable 
release!  One such e-mail is sent periodically once patches are waiting in the 
last maintenance branch, and an ideal release date is computed based on the 
severity of these fixes and their merge date.  Responses to this mail must be 
sent to the mailing list.


Last release 2.1.8 was issued on 2020-07-31.  There are currently 23 patches in 
the queue cut down this way:
- 1 MAJOR, first one merged on 2020-09-07
- 6 MEDIUM, first one merged on 2020-08-05
- 16 MINOR, first one merged on 2020-08-11

Thus the computed ideal release date for 2.1.9 would be 2020-09-04, which was 
within the last week.

Last release 2.0.17 was issued on 2020-07-31.  There are currently 16 patches 
in the queue cut down this way:
- 1 MAJOR, first one merged on 2020-09-07
- 6 MEDIUM, first one merged on 2020-08-05
- 9 MINOR, first one merged on 2020-08-11

Thus the computed ideal release date for 2.0.18 would be 2020-10-04, which is 
in four weeks or less.

Last release 1.8.26 was issued on 2020-08-03.  There are currently 8 patches in 
the queue cut down this way:
- 2 MEDIUM, first one merged on 2020-08-05
- 6 MINOR, first one merged on 2020-08-03

Thus the computed ideal release date for 1.8.27 would be 2020-10-26, which is 
in seven weeks or less.
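The scheduling rule above amounts to taking, over all pending fixes, the earliest of the per-severity deadlines. The exact delays are internal to stable-bot; the values in this sketch are assumptions for illustration only (30 days for a MEDIUM fix happens to reproduce the 2020-09-04 date computed for 2.1.9):

```python
from datetime import date, timedelta

# Assumed per-severity delays, in days. These are illustrative values only;
# the real delays are internal to stable-bot.
ASSUMED_DELAY = {"MAJOR": 7, "MEDIUM": 30, "MINOR": 90}

def ideal_release_date(patches):
    """patches: list of (severity, merge_date) pairs.

    Each pending fix asks for a release within some delay of its merge date,
    shorter for higher severities; the ideal date is the earliest deadline.
    """
    return min(merged + timedelta(days=ASSUMED_DELAY[severity])
               for severity, merged in patches)

print(ideal_release_date([("MEDIUM", date(2020, 8, 5)),
                          ("MINOR", date(2020, 8, 11))]))  # → 2020-09-04
```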

The current list of patches in the queue is:
 - 2.0, 2.1  - MAJOR   : contrib/spoa-server: Fix unhandled 
python call leading to memory leak
 - 2.0, 2.1  - MEDIUM  : mux-h1: Refresh H1 connection timeout 
after a synchronous send
 - 1.8, 2.0  - MEDIUM  : mux-h2: Don't fail if nothing is 
parsed for a legacy chunk response
 - 2.0, 2.1  - MEDIUM  : doc: Fix replace-path action 
description
 - 2.0, 2.1  - MEDIUM  : htx: smp_prefetch_htx() must always 
validate the direction
 - 2.0, 2.1  - MEDIUM  : contrib/spoa-server: Fix ipv4_address 
used instead of ipv6_address
 - 2.1   - MEDIUM  : ssl: memory leak of ocsp data at 
SSL_CTX_free()
 - 1.8, 2.0, 2.1 - MEDIUM  : map/lua: Return an error if a map is 
loaded during runtime
 - 2.1   - MINOR   : http-rules: Replace path and 
query-string in "replace-path" action
 - 1.8, 2.0, 2.1 - MINOR   : lua: Check argument type to convert it 
to IPv4/IPv6 arg validation
 - 2.0, 2.1  - MINOR   : snapshots: leak of snapshots on 
deinit()
 - 1.8   - MINOR   : dns: ignore trailing dot
 - 2.1   - MINOR   : ssl: fix memory leak at OCSP loading
 - 1.8, 2.0, 2.1 - MINOR   : startup: haproxy -s cause 100% cpu
 - 1.8, 2.0, 2.1 - MINOR   : lua: Check argument type to convert it 
to IP mask in arg validation
 - 1.8, 2.0, 2.1 - MINOR   : stats: use strncmp() instead of 
memcmp() on health states
 - 2.0, 2.1  - MINOR   : contrib/spoa-server: Ensure ip address 
references are freed
 - 2.1   - MINOR   : lua: Duplicate lua strings in sample 
fetches/converters arg array
 - 2.0, 2.1  - MINOR   : contrib/spoa-server: Do not free 
reference to NULL
 - 1.8, 2.0, 2.1 - MINOR   : reload: do not fail when no socket is 
sent
 - 2.1   - MINOR   : http-rules: Replace path and 
query-string in "replace-path" action
 - 2.1   - MINOR   : arg: Fix leaks during arguments 
validation for fetches/converters
 - 2.0, 2.1  - MINOR   : contrib/spoa-server: Updating 
references to free in case of failure
 - 2.1   - MINOR   : converters: Store the sink in an arg 
pointer for debug() converter
 - 2.1   - MINOR   : lua: Duplicate map name to load it 
when a new Map object is created

-- 
The haproxy stable-bot is freely provided by HAProxy Technologies to help 
improve the quality of each HAProxy release.  If you have any issue with these 
emails or if you want to suggest some improvements, please post them on the 
list so that the solutions suiting the most users can be found.



Re: Haproxy 2.2.3 source

2020-09-08 Thread Alex Evonosky
Correct, this is ARM-based on my side as well.



Sent from my Pixel 3XL


On Tue, Sep 8, 2020, 5:47 PM Vincent Bernat  wrote:

>  ❦  8 September 2020 16:13 -04, Alex Evonosky:
>
> > Just compiling 2.2.3 and getting this reference:
> >
> >
> > /haproxy-2.2.3/src/thread.c:212: undefined reference to
> > `_Unwind_Find_FDE'
>
> I am getting the same issue on armhf only. Other platforms don't get
> this issue. On this platform, we only get:
>
>   w   DF *UND*    GLIBC_2.4   __gnu_Unwind_Find_exidx
> 000165d0 gDF .text  000c  GCC_3.0 _Unwind_DeleteException
> d1f6 gDF .text  0002  GCC_3.0 _Unwind_GetTextRelBase
> 00016e1c gDF .text  0022  GCC_4.3.0   _Unwind_Backtrace
> 00016df8 gDF .text  0022  GCC_3.0 _Unwind_ForcedUnwind
> 00016dd4 gDF .text  0022  GCC_3.3 _Unwind_Resume_or_Rethrow
> d1f0 gDF .text  0006  GCC_3.0 _Unwind_GetDataRelBase
> 0001662c gDF .text  0036  GCC_3.5 _Unwind_VRS_Set
> 00016db0 gDF .text  0022  GCC_3.0 _Unwind_Resume
> 000169d8 gDF .text  02ba  GCC_3.5 _Unwind_VRS_Pop
> 00017178 gDF .text  000a  GCC_3.0 _Unwind_GetRegionStart
> 000165cc gDF .text  0002  GCC_3.5 _Unwind_Complete
> 00017184 gDF .text  0012  GCC_3.0
>  _Unwind_GetLanguageSpecificData
> 000165dc gDF .text  0036  GCC_3.5 _Unwind_VRS_Get
> 000164f0 gDF .text  0004  GCC_3.3 _Unwind_GetCFA
> 00016d8c gDF .text  0022  GCC_3.0 _Unwind_RaiseException
>
> So, older symbols are:
>
> 000165d0 gDF .text  000c  GCC_3.0 _Unwind_DeleteException
> d1f6 gDF .text  0002  GCC_3.0 _Unwind_GetTextRelBase
> 00016df8 gDF .text  0022  GCC_3.0 _Unwind_ForcedUnwind
> d1f0 gDF .text  0006  GCC_3.0 _Unwind_GetDataRelBase
> 00016db0 gDF .text  0022  GCC_3.0 _Unwind_Resume
> 00017178 gDF .text  000a  GCC_3.0 _Unwind_GetRegionStart
> 00017184 gDF .text  0012  GCC_3.0
>  _Unwind_GetLanguageSpecificData
> 00016d8c gDF .text  0022  GCC_3.0 _Unwind_RaiseException
>
> Moreover, the comment says _Unwind_Find_FDE doesn't take arguments, but the
> signature I have in glibc is:
>
> fde *
> _Unwind_Find_FDE (void *pc, struct dwarf_eh_bases *bases)
> --
> Don't sacrifice clarity for small gains in "efficiency".
> - The Elements of Programming Style (Kernighan & Plauger)
>


Re: Haproxy 2.2.3 source

2020-09-08 Thread Vincent Bernat
 ❦  8 September 2020 16:13 -04, Alex Evonosky:

> Just compiling 2.2.3 and getting this reference:
>
>
> /haproxy-2.2.3/src/thread.c:212: undefined reference to
> `_Unwind_Find_FDE'

I am getting the same issue on armhf only. Other platforms don't get
this issue. On this platform, we only get:

  w   DF *UND*    GLIBC_2.4   __gnu_Unwind_Find_exidx
000165d0 gDF .text  000c  GCC_3.0 _Unwind_DeleteException
d1f6 gDF .text  0002  GCC_3.0 _Unwind_GetTextRelBase
00016e1c gDF .text  0022  GCC_4.3.0   _Unwind_Backtrace
00016df8 gDF .text  0022  GCC_3.0 _Unwind_ForcedUnwind
00016dd4 gDF .text  0022  GCC_3.3 _Unwind_Resume_or_Rethrow
d1f0 gDF .text  0006  GCC_3.0 _Unwind_GetDataRelBase
0001662c gDF .text  0036  GCC_3.5 _Unwind_VRS_Set
00016db0 gDF .text  0022  GCC_3.0 _Unwind_Resume
000169d8 gDF .text  02ba  GCC_3.5 _Unwind_VRS_Pop
00017178 gDF .text  000a  GCC_3.0 _Unwind_GetRegionStart
000165cc gDF .text  0002  GCC_3.5 _Unwind_Complete
00017184 gDF .text  0012  GCC_3.0 _Unwind_GetLanguageSpecificData
000165dc gDF .text  0036  GCC_3.5 _Unwind_VRS_Get
000164f0 gDF .text  0004  GCC_3.3 _Unwind_GetCFA
00016d8c gDF .text  0022  GCC_3.0 _Unwind_RaiseException

So, older symbols are:

000165d0 gDF .text  000c  GCC_3.0 _Unwind_DeleteException
d1f6 gDF .text  0002  GCC_3.0 _Unwind_GetTextRelBase
00016df8 gDF .text  0022  GCC_3.0 _Unwind_ForcedUnwind
d1f0 gDF .text  0006  GCC_3.0 _Unwind_GetDataRelBase
00016db0 gDF .text  0022  GCC_3.0 _Unwind_Resume
00017178 gDF .text  000a  GCC_3.0 _Unwind_GetRegionStart
00017184 gDF .text  0012  GCC_3.0 _Unwind_GetLanguageSpecificData
00016d8c gDF .text  0022  GCC_3.0 _Unwind_RaiseException

Moreover, the comment says _Unwind_Find_FDE doesn't take arguments, but the
signature I have in glibc is:

fde *
_Unwind_Find_FDE (void *pc, struct dwarf_eh_bases *bases)
-- 
Don't sacrifice clarity for small gains in "efficiency".
- The Elements of Programming Style (Kernighan & Plauger)



Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-08 Thread Willy Tarreau
On Tue, Sep 08, 2020 at 07:14:10PM +0200, William Lallemand wrote:
> On Tue, Sep 08, 2020 at 06:59:07PM +0200, William Lallemand wrote:
> > On Tue, Sep 08, 2020 at 05:51:31PM +0200, Willy Tarreau wrote:
> > > On Tue, Sep 08, 2020 at 05:21:34PM +0200, William Lallemand wrote:
> > > > Also, when reading the RFC about the 304, I notice that they impose to
> > > > remove some of the entity headers in the case of the weak etag, so the
> > > > output is not exactly the same as the HEAD.
> > > > https://tools.ietf.org/html/rfc2616#section-10.3.5
> > > 
> > > Warning, 2616 is totally outdated and must really die. Please use 7234
> > > for caching, 7231 for semantics and 7230 for messaging.
> > > 
> > > Willy
> > 
> > Sorry, I checked on 7230, but somehow I pasted the wrong one :-)
> > 
> > Thanks,
> > 
> > 
> 
> I definitely checked the wrong one, as the RFC now states a SHOULD
> NOT for this, so that's not a requirement:
> 
> https://tools.ietf.org/html/rfc7232#section-4.1

Actually, among the numerous changes that happened between 2616 and 723x,
an important one was the removal of the distinction between entity headers,
message headers, representation headers, etc., which had become quite
confusing since plenty got added in between. The only things that remain are
the hop-by-hop headers (as opposed to end-to-end ones), and these are
designated by being referenced in the Connection header.
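That hop-by-hop designation can be sketched in a few lines; this is an illustration of the RFC 7230 rule, not HAProxy code:

```python
# The well-known connection-level headers, plus whatever names the Connection
# header itself lists, must not be forwarded end-to-end (RFC 7230, section 6.1).
HOP_BY_HOP = {"connection", "keep-alive", "proxy-connection",
              "te", "transfer-encoding", "upgrade"}

def end_to_end(headers):
    """Drop hop-by-hop headers from a list of (name, value) pairs."""
    drop = set(HOP_BY_HOP)
    for name, value in headers:
        if name.lower() == "connection":
            # e.g. "Connection: close, X-Trace" makes X-Trace hop-by-hop too
            drop.update(tok.strip().lower() for tok in value.split(","))
    return [(n, v) for n, v in headers if n.lower() not in drop]

print(end_to_end([("Connection", "close, X-Trace"),
                  ("X-Trace", "abc"), ("Host", "example.org")]))
# → [('Host', 'example.org')]
```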

Willy



Re: Dynamic Googlebot identification via lua?

2020-09-08 Thread Aleksandar Lazic

On 08.09.20 22:54, Tim Düsterhus wrote:

Reinhard,
Björn,

On 08.09.20 at 21:39, Björn Jacke wrote:

the only official supported way to identify a google bot is to run a
reverse DNS lookup on the accessing IP address and run a forward DNS
lookup on the result to verify that it points to accessing IP address
and the resulting domain name is in either googlebot.com or google.com
domain.
...


thanks for asking this again, I brought this up earlier this year and I
got no answer:

https://www.mail-archive.com/haproxy@formilux.org/msg37301.html

I would expect that this is something that most sites would actually
want to check and I'm surprised that there is no solution for this or at
least none that is obvious to find.


The usually recommended solution for this kind of check is either Lua
or the SPOA, running the actual logic out of process.

For Lua my haproxy-auth-request script is a batteries-included solution
to query an arbitrary HTTP service:
https://github.com/TimWolla/haproxy-auth-request. It comes with the
drawback that Lua runs single-threaded within HAProxy, so you might not
want to use this if the checks need to run in the hot path, handling
thousands of requests per second.

It should be possible to cache the results of the script using a stick
table or a map.

Back in nginx times I used nginx' auth_request to query a local service
that checked whether the client IP address was a Tor exit node. It
worked well.

For SPOA there's this random IP reputation service within the HAProxy
repository:
https://github.com/haproxy/haproxy/tree/master/contrib/spoa_example. I
never used the SPOA feature, so I can't comment on whether that example
generally works and how hard it would be to extend it. It certainly
comes with the restriction that you are limited to C or Python (or a
manual implementation of the SPOA protocol) vs anything that speaks HTTP.


In addition to Tim's answer, you can also try to use spoa_server, which
supports `-n `.
https://github.com/haproxy/haproxy/tree/master/contrib/spoa_server


Best regards
Tim Düsterhus


Regards
Aleks



Re: Dynamic Googlebot identification via lua?

2020-09-08 Thread Tim Düsterhus
Reinhard,
Björn,

On 08.09.20 at 21:39, Björn Jacke wrote:
>> the only official supported way to identify a google bot is to run a
>> reverse DNS lookup on the accessing IP address and run a forward DNS
>> lookup on the result to verify that it points to accessing IP address
>> and the resulting domain name is in either googlebot.com or google.com
>> domain.
>> ...
> 
> thanks for asking this again, I brought this up earlier this year and I
> got no answer:
> 
> https://www.mail-archive.com/haproxy@formilux.org/msg37301.html
> 
> I would expect that this is something that most sites would actually
> want to check and I'm surprised that there is no solution for this or at
> least none that is obvious to find.

The usually recommended solution for this kind of check is either Lua
or the SPOA, running the actual logic out of process.

For Lua my haproxy-auth-request script is a batteries-included solution
to query an arbitrary HTTP service:
https://github.com/TimWolla/haproxy-auth-request. It comes with the
drawback that Lua runs single-threaded within HAProxy, so you might not
want to use this if the checks need to run in the hot path, handling
thousands of requests per second.

It should be possible to cache the results of the script using a stick
table or a map.
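The caching idea can also be sketched out of process; here is a minimal TTL cache in Python standing in for what a stick table or map entry would do (the TTL value and the injectable clock are assumptions for illustration):

```python
import time

class TTLCache:
    """Remember each verification verdict for a while, the way an HAProxy
    stick-table or map entry would. Entries expire after `ttl` seconds."""

    def __init__(self, ttl=3600.0, clock=time.monotonic):
        self.ttl, self.clock = ttl, clock
        self._store = {}

    def get(self, key):
        hit = self._store.get(key)
        if hit is None:
            return None
        verdict, stored_at = hit
        if self.clock() - stored_at > self.ttl:
            del self._store[key]   # expired: caller falls through to a fresh check
            return None
        return verdict

    def put(self, key, verdict):
        self._store[key] = (verdict, self.clock())
```

A caller would consult `get(ip)` first and only run the expensive DNS check on a miss, then `put(ip, verdict)` the result.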

Back in nginx times I used nginx' auth_request to query a local service
that checked whether the client IP address was a Tor exit node. It
worked well.

For SPOA there's this random IP reputation service within the HAProxy
repository:
https://github.com/haproxy/haproxy/tree/master/contrib/spoa_example. I
never used the SPOA feature, so I can't comment on whether that example
generally works and how hard it would be to extend it. It certainly
comes with the restriction that you are limited to C or Python (or a
manual implementation of the SPOA protocol) vs anything that speaks HTTP.

Best regards
Tim Düsterhus



Haproxy 2.2.3 source

2020-09-08 Thread Alex Evonosky
Hello Haproxy group-

Just compiling 2.2.3 and getting this reference:


/haproxy-2.2.3/src/thread.c:212: undefined reference to `_Unwind_Find_FDE'


Is there a new lib thats required?


Thank you!


Cloudflare Using Companies

2020-09-08 Thread Sophia Lillis
Hello,

Are you looking for ways to enhance your marketing and sales efforts? We at 
Fortune Data Services offer niche marketers the most accurate and up to date 
Cloudflare Customers' List in the market.

We also provide Cisco Umbrella, Amazon Route 53, AWS WAF, Google Cloud CDN, 
NGINX, Amazon CloudFront, Imperva CDN customers list.

I appreciate your time, and look forward to hearing from you.

Warm regards,
Sophia Lillis | Go-To-Market Coordinator

If you don't want to be included in our mailing list, please reply with 
"unsubscribe" as the subject line


Re: Dynamic Googlebot identification via lua?

2020-09-08 Thread Björn Jacke
Hi Reinhard,

On 08.09.20 21:20, Reinhard Vicinus wrote:
> the only official supported way to identify a google bot is to run a
> reverse DNS lookup on the accessing IP address and run a forward DNS
> lookup on the result to verify that it points to accessing IP address
> and the resulting domain name is in either googlebot.com or google.com
> domain.
> ...

thanks for asking this again, I brought this up earlier this year and I
got no answer:

https://www.mail-archive.com/haproxy@formilux.org/msg37301.html

I would expect that this is something that most sites would actually
want to check and I'm surprised that there is no solution for this or at
least none that is obvious to find.

Björn



signature.asc
Description: OpenPGP digital signature


Dynamic Googlebot identification via lua?

2020-09-08 Thread Reinhard Vicinus
Hi,

the only officially supported way to identify a Google bot is to run a
reverse DNS lookup on the accessing IP address and a forward DNS
lookup on the result, to verify that it points back to the accessing IP
address and that the resulting domain name is in either the googlebot.com
or google.com domain.
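For reference, the double lookup itself is straightforward when done out of process; a Python sketch follows (the resolver functions are injectable so the logic can be exercised without live DNS, and error handling is minimal):

```python
import socket

GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def is_google_host(hostname):
    """True if the reverse-resolved name sits in googlebot.com or google.com."""
    host = hostname.rstrip(".").lower()
    return any(host == suffix.lstrip(".") or host.endswith(suffix)
               for suffix in GOOGLE_SUFFIXES)

def verify_googlebot(ip, reverse=socket.gethostbyaddr,
                     forward=socket.gethostbyname_ex):
    """Reverse lookup, domain check, then forward lookup confirming the IP."""
    try:
        host = reverse(ip)[0]          # reverse DNS on the accessing address
    except OSError:
        return False
    if not is_google_host(host):       # name must be under google's domains
        return False
    try:
        addrs = forward(host)[2]       # forward DNS on the resulting name
    except OSError:
        return False
    return ip in addrs                 # must round-trip to the same address
```

In production the verdicts should be cached per IP, as discussed below in the thread.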

As far as I understand the Lua API documentation, it is not possible in
Lua to perform DNS requests in runtime mode, so the only solution would
be to use an external service to do the actual checking of an accessing
IP address, and to use Lua to query the external service and cache the
result per IP to improve performance.

As I am not that experienced in Lua programming, my question is whether
this is feasible or whether I am missing something. Also, if there are
other solutions I am not aware of, I would be thankful for pointers.

Thanks in advance
Reinhard Vicinus




Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-08 Thread William Lallemand
On Tue, Sep 08, 2020 at 05:48:40PM +0200, Tim Düsterhus wrote:
> William,
> 
> Unfortunately that very much sounds like it is above my "pay grade" as a
> community contributor to HAProxy. I *do not* plan on implementing this,
> because I don't expect to get this right without investing much
> effort.
> 

I understand; that allowed us to check what the requirements for
this are, so it was useful!

> Feel free to take the first 3 patches if you want. I believe they are
> good and useful for whoever is going to implement this. I'm going to
> link the mailing list thread in the issue for reference.
> 

Okay, thanks!

-- 
William Lallemand



Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-08 Thread William Lallemand
On Tue, Sep 08, 2020 at 06:59:07PM +0200, William Lallemand wrote:
> On Tue, Sep 08, 2020 at 05:51:31PM +0200, Willy Tarreau wrote:
> > On Tue, Sep 08, 2020 at 05:21:34PM +0200, William Lallemand wrote:
> > > Also, when reading the RFC about the 304, I notice that they impose to
> > > remove some of the entity headers in the case of the weak etag, so the
> > > output is not exactly the same as the HEAD.
> > > https://tools.ietf.org/html/rfc2616#section-10.3.5
> > 
> > Warning, 2616 is totally outdated and must really die. Please use 7234
> > for caching, 7231 for semantics and 7230 for messaging.
> > 
> > Willy
> 
> Sorry, I checked on 7230, but somehow I pasted the wrong one :-)
> 
> Thanks,
> 
> 

I definitely checked the wrong one, as the RFC now states a SHOULD
NOT for this, so that's not a requirement:

https://tools.ietf.org/html/rfc7232#section-4.1

-- 
William Lallemand



Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-08 Thread William Lallemand
On Tue, Sep 08, 2020 at 05:51:31PM +0200, Willy Tarreau wrote:
> On Tue, Sep 08, 2020 at 05:21:34PM +0200, William Lallemand wrote:
> > Also, when reading the RFC about the 304, I notice that they impose to
> > remove some of the entity headers in the case of the weak etag, so the
> > output is not exactly the same as the HEAD.
> > https://tools.ietf.org/html/rfc2616#section-10.3.5
> 
> Warning, 2616 is totally outdated and must really die. Please use 7234
> for caching, 7231 for semantics and 7230 for messaging.
> 
> Willy

Sorry, I checked on 7230, but somehow I pasted the wrong one :-)

Thanks,

-- 
William Lallemand



Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-08 Thread Willy Tarreau
On Tue, Sep 08, 2020 at 05:21:34PM +0200, William Lallemand wrote:
> Also, when reading the RFC about the 304, I notice that they impose to
> remove some of the entity headers in the case of the weak etag, so the
> output is not exactly the same as the HEAD.
> https://tools.ietf.org/html/rfc2616#section-10.3.5

Warning, 2616 is totally outdated and must really die. Please use 7234
for caching, 7231 for semantics and 7230 for messaging.

Willy



Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-08 Thread Tim Düsterhus
William,

On 08.09.20 at 17:21, William Lallemand wrote:
>> Yes, this generally makes sense to me. Unfortunately the code frankly
>> is a foreign language to me here.
>>
>> I have not the slightest idea what steps I would need to perform to get
>> the headers of the cached response within 'http_action_req_cache_use'.
>> That's the primary reason why I implemented the logic within
>> 'http_cache_io_handler' and not in 'http_action_req_cache_use'.
> 
> Indeed it's kind of complicated in the current state of the cache:
> because the data are chunked in the shctx you can't use the htx
> functions on it, so the headers need to be dumped beforehand.
> 
>> I would also say that doing this in 'http_cache_io_handler' is
>> logically the correct place, because we are already dealing with the
>> response here.  However I understand that needing to malloc() the
>> if-none-match is non-ideal.
>>
>> Do you have any advice how I can efficiently retrieve the ETag header
>> from the cached response in 'http_action_req_cache_use'?
> 
> 
> I think it's better to handle the headers in the action, like it's done
> for the store_cache where the action store the headers and the filters
> stores the body.
> 
> So what could be done is to dump the headers directly from the action,
> so the applet is created only when there is a body to dump; that
> could even improve performance.
> 
> It will be a requirement in the future if we want to handle the
> Vary header properly, because we will need to choose the right object
> depending on the header before setting up the appctx.

Unfortunately that very much sounds like it is above my "pay grade" as a
community contributor to HAProxy. I *do not* plan on implementing this,
because I don't expect to get this right without investing much
effort.

Feel free to take the first 3 patches if you want. I believe they are
good and useful for whoever is going to implement this. I'm going to
link the mailing list thread in the issue for reference.

> Also, when reading the RFC about the 304, I notice that they impose to
> remove some of the entity headers in the case of the weak etag, so the
> output is not exactly the same as the HEAD.
> https://tools.ietf.org/html/rfc2616#section-10.3.5

Oh, yes, indeed. For a weak ETag some headers need to be dropped, good
catch.

Best regards
Tim Düsterhus



Bid Writing Workshops Via Zoom

2020-09-08 Thread NFP Workshops


NFP WORKSHOPS
18 Blake Street, York YO1 8QG   01133 280988
Affordable Training Courses for Charities, Schools & Public Sector 
Organisations 




This email has been sent to haproxy@formilux.org
CLICK TO UNSUBSCRIBE FROM LIST
Alternatively send a blank e-mail to unsubscr...@nfpmail2001.co.uk quoting 
haproxy@formilux.org in the subject line.
Unsubscribe requests will take effect within seven days. 




Bid Writing: The Basics
Online via ZOOM 

COST £95.00

TOPICS COVERED

Do you know the most common reasons for rejection? Are you gathering the right 
evidence? Are you making the right arguments? Are you using the right 
terminology? Are your numbers right? Are you learning from rejections? Are you 
assembling the right documents? Do you know how to create a clear and concise 
standard funding bid?

Are you communicating with people or just excluding them? Do you know your own 
organisation well enough? Are you thinking through your projects carefully 
enough? Do you know enough about your competitors? Are you answering the 
questions funders will ask themselves about your application? Are you 
submitting applications correctly?

PARTICIPANTS  

Staff members, volunteers, trustees or board members of charities, schools, not 
for profits or public sector organisations who intend to submit grant funding 
applications to charitable grant making trusts and foundations. People who 
provide advice to these organisations are also welcome.

BOOKING DETAILS   

Participants receive full notes and sample bids by e-mail after the workshop. 
The workshop consists of talk, questions and answers. There are no power points 
or audio visuals used. All places must be booked through the online booking 
system using a debit card, credit card or paypal. We do not issue invoices or 
accept bank or cheque payments. If you do not have a payment card from your 
organisation please use a personal one and claim reimbursement using the 
booking confirmation e-mail as proof of purchase.

BOOKING TERMS

Workshop bookings are non-cancellable and non-refundable. If you are unable to 
participate on the booked date you may allow someone else to log on in your 
place. There is no need to contact us to let us know that there will be a 
different participant. Bookings are non-transferable between dates unless an 
event is postponed. If an event is postponed then bookings will be valid on any 
future scheduled date for that workshop.
   
QUESTIONS

If you have a question please e-mail questi...@nfpmail2001.co.uk You will 
usually receive a response within 24 hours. Due to our training commitments we 
are unable to accept questions by phone. 
Bid Writing: Advanced
Online via ZOOM 

COST £95.00

TOPICS COVERED

Are you applying to the right trusts? Are you applying to enough trusts? Are 
you asking for the right amount of money? Are you applying in the right ways? 
Are your projects the most fundable projects? 

Are you carrying out trust fundraising in a professional way? Are you 
delegating enough work? Are you highly productive or just very busy? Are you 
looking for trusts in all the right places? 

How do you compare with your competitors for funding? Is the rest of your 
fundraising hampering your bids to trusts? Do you understand what trusts are 
ideally looking for?

Dates & Booking Links
BID WRITING: THE BASICS
Mon 14 Sep 2020, 10.00 to 12.30 (Booking Link)
Mon 28 Sep 2020, 10.00 to 12.30 (Booking Link)
Mon 12 Oct 2020, 10.00 to 12.30 (Booking Link)
Mon 26 Oct 2020

Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-08 Thread William Lallemand
On Tue, Sep 08, 2020 at 04:11:40PM +0200, Tim Düsterhus wrote:
> William,
> 
> [Did you leave out the list intentionally?]
>
Oops, no sorry, I'll bounce my previous mail on the list.

> On 08.09.20 at 14:40, William Lallemand wrote:
> >> diff --git a/include/haproxy/applet-t.h b/include/haproxy/applet-t.h
> >> index 60f30c56f..7cccec977 100644
> >> --- a/include/haproxy/applet-t.h
> >> +++ b/include/haproxy/applet-t.h
> >> @@ -113,6 +113,7 @@ struct appctx {
> >>unsigned int offset;/* start offset of 
> >> remaining data relative to beginning of the next block */
> >>unsigned int rem_data;  /* Remaining bytes for the 
> >> last data block (HTX only, 0 means process next block) */
> >>struct shared_block *next;  /* The next block of data 
> >> to be sent for this cache entry. */
> >> +  struct ist if_none_match;   /* The if-none-match 
> >> request header. */
> >>} cache;
> > 
> > In my opinion we only need a flag here, because the validation must be
> > done near the lookup. It's not useful to do a copy of the header string.
> 
> Yes, this generally makes sense to me. Unfortunately the code frankly
> is a foreign language to me here.
> 
> I have not the slightest idea what steps I would need to perform to get
> the headers of the cached response within 'http_action_req_cache_use'.
> That's the primary reason why I implemented the logic within
> 'http_cache_io_handler' and not in 'http_action_req_cache_use'.

Indeed it's kind of complicated in the current state of the cache:
because the data are chunked in the shctx you can't use the htx
functions on it, so the headers need to be dumped beforehand.

> I would also say that doing this in 'http_cache_io_handler' is
> logically the correct place, because we are already dealing with the
> response here.  However I understand that needing to malloc() the
> if-none-match is non-ideal.
>
> Do you have any advice how I can efficiently retrieve the ETag header
> from the cached response in 'http_action_req_cache_use'?


I think it's better to handle the headers in the action, like it's done
for the store_cache where the action store the headers and the filters
stores the body.

So what could be done is to dump the headers directly from the action,
so the applet is created only when there is a body to dump; that
could even improve performance.

It will be a requirement in the future if we want to handle the
Vary header properly, because we will need to choose the right object
depending on the header before setting up the appctx.

Also, when reading the RFC about the 304, I notice that they impose to
remove some of the entity headers in the case of the weak etag, so the
output is not exactly the same as the HEAD.
https://tools.ietf.org/html/rfc2616#section-10.3.5
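For illustration, the conditional-request decision being discussed can be sketched as follows, using the weak comparison of RFC 7232, where a W/ prefix is ignored when matching If-None-Match against the cached ETag. This is an illustration of the protocol logic, not HAProxy's actual cache code:

```python
def opaque_tag(tag):
    """Strip the weakness prefix: W/"abc" and "abc" compare equal (weakly)."""
    tag = tag.strip()
    return tag[2:] if tag.startswith("W/") else tag

def if_none_match_matches(if_none_match, etag):
    """Weak comparison of an If-None-Match header against a response ETag."""
    if if_none_match.strip() == "*":
        return True
    return any(opaque_tag(candidate) == opaque_tag(etag)
               for candidate in if_none_match.split(","))

def cache_status(if_none_match, etag):
    """Return 304 when the client's validator matches, else 200."""
    if if_none_match and if_none_match_matches(if_none_match, etag):
        return 304
    return 200

print(cache_status('W/"abc"', '"abc"'))  # → 304
```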

Regards,

-- 
William Lallemand



[ANNOUNCE] haproxy-2.2.3

2020-09-08 Thread Willy Tarreau
Hi,

HAProxy 2.2.3 was released on 2020/09/08. It added 59 new commits
after version 2.2.2, in about 5 weeks.

There were not that many issues, but they were grouped by subsystem and
likely affect some users.

First, a number of issues were addressed on SSL. The negative filters in
crt-lists didn't work, and the SNI lookups were incomplete when a pair of
certificates wouldn't cover the same server names using all algorithms. A
few other less important issues such as occasional memory leaks in OCSP
were addressed.

Second, there were issues in the DNS code in the way servers learned from
SRV records were updated. If multiple pieces of information changed at once
(e.g. weight and address), not all of them were applied, and a server could
for example continue to use its old address or even never recover.

Third, Lua calls to the native fetch functions and converters didn't always
map arguments correctly to their target types, often resulting in leaks of
allocated strings. Other limitations in the way arguments were handled used
to limit the number of sample fetch and converter keywords to those taking
no argument or trivial arguments. This caused a few of them to disappear
from Lua when they were slightly extended (such as date() and http_date()).
All of this was reworked so that they're now all passed as strings, parsed
and processed on the fly, meaning that all keywords are now available again.

Fourth, there were some issues around replace-path, which was documented as
replacing the query string while its siblings (set-path, path, etc.) did not
use it, according to the terminology used in HTTP. But in practice
replace-path didn't act on it either. Given that the action was only fairly
recently introduced and the "fix" would have added a lot more confusion, it
was preferred, as an exception, to fix the doc instead of risking breaking
working setups, and to provide a new pair of actions "set-pathq" and
"replace-pathq" and a new sample fetch function "pathq", which all act on
both the path and the query string.
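For illustration, a minimal sketch of these new keywords (the regex, header
name and backend name are made up for the example):

```haproxy
frontend fe_web
    bind :80
    # "pathq" returns the path with the query-string appended
    http-request set-header X-Orig-Pathq %[pathq]
    # replace-pathq matches and rewrites path + query-string in one go,
    # unlike replace-path which leaves the query-string untouched
    http-request replace-pathq ^/old(/[^?]*)?(\?.*)?$ /new\1\2
    default_backend be_app
```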

Fifth, the reference spoa-server in the contrib directory received a number
of fixes, as it was apparently severely affected by memory leaks and by
freeing the wrong elements. Those who wrote their own agents based on it may
want to double-check or rebase their work.

Aside from this, 100-continue responses were needlessly delayed on output,
causing some slowdowns for small POST requests, and HTTP/1 send timeouts
were not always updated when performing synchronous sends, occasionally
resulting in connections aborted in the middle of a transfer. A bug in the
command-line parser caused "haproxy -s" to spin at 100% CPU on startup while
parsing the command line. An occasional crash on deinit(), due to the
impossibility for libpthread to access libgcc_s.so from within a chroot, was
worked around. Various harmless memory leaks on deinit() were addressed. The
rest is pretty minor.

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Wiki : https://github.com/haproxy/wiki/wiki
   Sources  : http://www.haproxy.org/download/2.2/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.2.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.2.git
   Changelog: http://www.haproxy.org/download/2.2/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Baptiste Assmann (2):
  CLEANUP: dns: typo in reported error message
  BUG/MAJOR: dns: disabled servers through SRV records never recover

Christopher Faulet (23):
  BUG/MEDIUM: mux-h1: Refresh H1 connection timeout after a synchronous send
  BUG/MEDIUM: map/lua: Return an error if a map is loaded during runtime
  MINOR: arg: Add an argument type to keep a reference on opaque data
  BUG/MINOR: converters: Store the sink in an arg pointer for debug() converter
  BUG/MINOR: lua: Duplicate map name to load it when a new Map object is created
  BUG/MINOR: arg: Fix leaks during arguments validation for fetches/converters
  BUG/MINOR: lua: Check argument type to convert it to IPv4/IPv6 arg validation
  BUG/MINOR: lua: Check argument type to convert it to IP mask in arg validation
  MINOR: hlua: Don't needlessly copy lua strings in trash during args validation
  BUG/MINOR: lua: Duplicate lua strings in sample fetches/converters arg array
  MEDIUM: lua: Don't filter exported fetches and converters
  BUG/MEDIUM: http-ana: Don't wait to send 1xx responses received from servers
  MINOR: http-htx: Add an option to eval query-string when the path is replaced
  BUG/MINOR: http-rules: Replace path and query-string in "replace-path" action
  Revert "BUG/MINOR: http-rules: Replace path and query-string in "replace-path" action"
  BUG/MEDIUM: doc: Fix 

Re: [PATCH] DOC: ssl-load-extra-files only applies to certificates on bind lines.

2020-09-08 Thread William Lallemand
On Mon, Sep 07, 2020 at 12:15:11PM +0200, Jerome Magnin wrote:
> Hi,
> 
> this is a small doc patch for ssl-load-extra-files.
> I will create a feature request to support separating the key from the
> certificate when used on server lines, as discussed privately with
> William.
> 
> -- 
> Jérôme

> From 01cfd0dcd2f7efbb90a25bd2f72053bdbd5f559c Mon Sep 17 00:00:00 2001
> From: Jerome Magnin 
> Date: Mon, 7 Sep 2020 11:55:57 +0200
> Subject: [PATCH] DOC: ssl-load-extra-files only applies to certificates on
>  bind lines.
> 
> Be explicit about ssl-load-extra-files not applying to certificates
> referenced with the crt keyword on server lines.
> ---
>  doc/configuration.txt | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/configuration.txt b/doc/configuration.txt
> index a8242793a..c1f6f8219 100644
> --- a/doc/configuration.txt
> +++ b/doc/configuration.txt
> @@ -1373,7 +1373,8 @@ ssl-dh-param-file 
>  
>  ssl-load-extra-files *
>This setting alters the way HAProxy will look for unspecified files during
> -  the loading of the SSL certificates.
> +  the loading of the SSL certificates associated to "bind" lines. It does not
> +  apply to certificates used for client authentication on "server" lines.
>  
>By default, HAProxy discovers automatically a lot of files not specified in
>the configuration, and you may want to disable this behavior if you want to
> -- 
> 2.28.0
> 

Thanks, applied.


-- 
William Lallemand



Re: [RFC PATCH] MAJOR: ssl: Support for validating backend certificates with URI SANs (subjectAltName)

2020-09-08 Thread Teo Klestrup Röijezon
Hey Willy, sorry about the delay; I managed to get sick right after that stuff.

> I don't understand what you mean here in that it does not make sense to
> you. Actually it's not even about overriding verifyhost, it's more that
> we match that the requested host (if any) is indeed supported by the
> presented certificate. The purpose is to make sure that the connection
> is not served by a server presenting a valid cert which doesn't match
> the authority we're asking for. And if we don't send any servername,
> then we can still enforce the check against a hard-coded servername
> presented in verifyhost.

To my mind, `verifyhost` is more or less an acknowledgement that "no, this 
isn't quite set up perfectly, but we can at least verify with some caveats". 
Otherwise, the host could just be taken from the address in the `connect` 
keyword before SNI?
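For readers following along, this is roughly the existing model being
discussed; a minimal sketch with purely illustrative names and addresses:

```haproxy
backend be_secure
    # "sni" sets the servername sent to the server, while "verifyhost"
    # pins the name checked against the certificate; verifyhost still
    # applies even when no servername is sent at all
    server s1 10.0.0.1:443 ssl verify required ca-file /etc/ssl/internal-ca.pem sni str(svc.example.internal) verifyhost svc.example.internal
```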

> How is this used, usually ? I had never heard of such features with these
> URIs being advertised instead of server names (but I must not be taken as
> a reference about TLS as I'm mostly clueless about it). I'm seeing that
> in your patch you're checking a type against GEN_URI, so I guess that there
> is an explicit type for such alt names. Thus we could imagine having a new
> keyword to enumerate a list of valid URIs to match against. 
> But do these have to be hard-coded or may they be determined dynamically
> (from SNI or anything else for example) ?

SPIFFE (and Istio in particular) uses URIs to identify the "service account" 
associated with the service[0], rather than the endpoint itself. For example, 
the SPIFFE ID spiffe://cluster.local/ns/myns/sa/myuser identifies the 
ServiceAccount `myuser`, in the Namespace `myns`, in the Kubernetes cluster 
`cluster.local`.

For our use-case, the plan is to run a separate service that maintains a 
mapping of Host/SNI -> SPIFFE IDs in HAProxy's config, based on our Kubernetes 
cluster's state. This wouldn't strictly need to depend on SNI, since we could 
just generate a backend section per SNI, but it would be nice to be able to 
keep this consolidated in order to avoid duplicate health checks and keep the 
stats page clean.

That said, there also seem to be other definitions of how to use the URI SAN 
field. RFC6125 proposes using it as a tuple of (protocol, DNS name)[1], in 
which case it would need to depend on the SNI.
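To make the discussion concrete, here is what such a check could look like in
a configuration. Note that "verifyuri" is a purely hypothetical keyword made
up for this sketch; it does not exist in any HAProxy release, and the RFC
patch under discussion may well name or shape it differently:

```haproxy
backend be_istio
    # HYPOTHETICAL: "verifyuri" is invented for illustration only; it
    # sketches matching the SPIFFE ID in the peer certificate's URI SAN
    # instead of (or in addition to) a DNS name
    server s1 10.0.0.1:443 ssl verify required ca-file /etc/ssl/istio-ca.pem verifyuri spiffe://cluster.local/ns/myns/sa/myuser
```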

> This would completely eliminate the risk to match against other fields by 
accident.

This concern was (at least partially) a goof on my side. Of course, any 
wildcard would be in the certificate, not the match statement.

> If you could provide an example sequence where this is used to illustrate
> the use case, it would be useful for those who, like me, are not aware of
> this feature. Also, one thing which is not clear to me is, should this check
> be necessary or sufficient to validate a cert ? I.e. if we have a matching
> servername, and a configured URI, should we check that this URI is valid in
> addition to the servername, or as an alternative ?

Mentioned an example above. In all use-cases I can see, it should be sufficient 
for validation, since it either replaces the existing validation model 
(SPIFFE) or acts as a more restrictive version of it (RFC6125).

[0]: https://spiffe.io/docs/latest/spiffe/concepts/#spiffe-id
[1]: https://www.rfc-editor.org/rfc/rfc6125#section-6.3





Re: Backend servers backup setup

2020-09-08 Thread Artur
Hello,

I didn't see any answer or comment on my inquiry.
I suppose someone will say either that it's not possible, that there is a
miracle solution, or that it could become a new feature. :)
Could you please tell me which hypothesis is the right one?

Le 01/09/2020 à 11:08, Artur a écrit :
> Hello,
>
> I need your help on configuring servers backup in a backend.
> This is my current (simplified) backend setup :
>
> backend ws_be
>     mode http
>     option redispatch
>     cookie c insert indirect nocache attr "SameSite=Lax"
>     balance roundrobin
>     server s1 1.2.3.3:1234 cookie s1 check
>     server s2 1.2.3.3:2345 cookie s2 check
>     option allbackups
>     server sb1 2.3.4.5:3456 cookie s1 check backup
>     server sb2 2.3.4.5:4567 cookie s2 check backup
>
> FYI, the servers of this backend are node.js processes (dynamic content
> and websockets).
> Case 1 : If s1 or s2 is DOWN, all the connections are redispatched to the
> remaining UP server (s1 or s2).
> Case 2 : If s1 AND s2 are DOWN, all the connections are redispatched to
> the sb1 and sb2 backup servers.
>
> In the second case, the global application performance is similar to the
> normal situation where all main servers are UP.
> However, in case 1, the application performance can be degraded because
> there is only one server serving requests instead of two (and the backup
> servers are inactive).
> I would like to modify the current setup so that if a main server is
> down, it is at once replaced by a backup server and all the connections
> are redispatched from the DOWN server to a backup server.
> Of course, there may be variations :
> - 1 main server DOWN -> Corresponding backup server activated
> - 1 main server DOWN -> all backup servers activated
> - 1 main server DOWN -> some backup servers activated
>
> Any idea on how to achieve this ?
>
-- 
Best regards,
Artur