Re: Stick table counter not working after upgrade to 2.2.11

2021-03-22 Thread Bren
‐‐‐ Original Message ‐‐‐

On Monday, March 22nd, 2021 at 3:06 PM, Sander Klein wrote:

> Hi,
>
> I have upgraded to haproxy 2.2.11 today and it seems like my stick table 
> counter is not working anymore.

I was going to upgrade to 2.2.11 soon, so I tested this quickly and can confirm 
that counters no longer decrement over time. I tested using the 
haproxy:2.2.11 Docker image and a standard stick table:

```
frontend fe-test
  http-request track-sc0 src table be-test

backend be-test
  stick-table type ipv6 size 1m expire 24h store http_req_rate(2s)
```

Bren



Stick table counter not working after upgrade to 2.2.11

2021-03-22 Thread Sander Klein

Hi,

I have upgraded to haproxy 2.2.11 today and it seems like my stick table 
counter is not working anymore. It only increases on every hit and never 
decreases. Downgrading back to 2.2.10 fixes this issue.


The setup is a replicated stick table, declared in a peers section (lb1), like:

```
table apikey type ipv6 size 1m expire 24h store http_req_rate(2s)
```

And in my frontend I use:

```
acl has_apiKey url_param(apiKey) -m found
acl is_apiabuser src_http_req_rate(lb1/apikey) gt 10
acl is_rejectapi src_http_req_rate(lb1/apikey) gt 20

http-request track-sc0 src table lb1/apikey if has_apiKey !in_picturae_ip

http-request deny deny_status 429 if is_rejectapi
http-request lua.delay_request if is_apiabuser
```

Is this a known issue? I didn't find anything on GitHub.

Regards,

Sander



Re: [ANNOUNCE] haproxy-1.6.16

2021-03-22 Thread Lukas Tribus
Hello Willy,


On Sat, 20 Mar 2021 at 10:09, Willy Tarreau wrote:
> > 1.6 was EOL last year, I don't understand why there is a last release.
>
> There were some demands late last year and early this year to issue a
> last one with pending fixes to "flush the pipe" but it was terribly
> difficult to find enough time to go through the whole chain with the
> other nasty bugs that kept us busy.
>
> > Both 1.6 and 1.7 are marked for critical fixes but many fixes are pushed
> > in it. The risk is to introduce a late regression in this last version.
>
> There's always such a risk when doing backports unfortunately and it's
> always difficult to set the boundary between what is needed and what
> not. A lot of the issues I'm seeing there are crash-related, and
> others address not-so-funny recent changes in compilers behaviors
> leading to bad code generation. There are also some that were possibly
> not strictly necessary, but then they're obvious fixes (like the one
> on the timer that's incorrectly compared), and whose possible
> consequences are not always trivial to imagine (such as checks looping
> at 100% CPU every 24 days maybe depending on the tick's sign).

I agree that finding the sweet spot can be difficult, but I have to
say I share Vincent's concerns. I do feel like currently we backport
*a lot*, especially on those near-EOLed trees. When looking at the
list of backported patches, I don't feel like the age and remaining
lifetime is taken into consideration.

I don't want to be the Monday morning quarterback, but in 1.7 we have
853926a9ac ("BUG/MEDIUM: ebtree: use a byte-per-byte memcmp() to
compare memory blocks") and I quote from the commit message:

> This is not correct and definitely confuses address sanitizers. In real
> life the problem doesn't have visible consequences.
> [...]
> there is no way such a multi-byte access will cross a page boundary
> and end up reading from an unallocated zone. This is why it was
> never noticed before.

This sounds like a backport candidate for "warm" stable branches
(maybe), but 1.7 and 1.8 feel too "cold" for this, even 8 - 9 months
ago.

This backport specifically causes a build failure of 1.7.13 on musl
(#760) - because of a missing backport, but that's just an example. 39
"MINOR" patches made it into 1.6.16, 62 patches in 1.7.13. While it is
true that a lot of "MINOR" tagged patches are actually important, this
is still a large number for a tree that is supposed to die so soon.
Very rarely do users build from source from such old trees anyway (and
those users would be especially conservative, definitely not
interested in generic, non-important improvements).


> But with this in mind, there were two options:
>   - not releasing the latest fixes

You are talking about whether to publish a release or not for tree
X.Y, with the backports that are already in the tree. I don't think
that's the issue.

I think the discussion should be about what commits land in those old
trees in the first place. And I don't believe it is scalable to make
those decisions during your backporting sessions. Instead I believe we
should be more conservative when suggesting backports in the commit
message. Currently, we say "should/can be backported to X.Y" based on
whether it is *technically* possible to do so for supported trees, not
if it makes sense considering the age and lifetime of the suggested
tree. This is why I'm proposing a commit author should make such
considerations when suggesting backports. Avoiding backports to cold
trees of no-impact improvements and minor fixes for rare corner cases
should be a goal.

Unless we insist every single bug needs to be fixed on every single
supported release branch.


lukas



Re: [ANNOUNCE] haproxy-1.6.16

2021-03-22 Thread Willy Tarreau
Hi Lukas,

On Tue, Mar 23, 2021 at 12:21:02AM +0100, Lukas Tribus wrote:
> I agree that finding the sweet spot can be difficult, but I have to
> say I share Vincent's concerns. I do feel like currently we backport
> *a lot*, especially on those near-EOLed trees. When looking at the
> list of backported patches, I don't feel like the age and remaining
> lifetime is taken into consideration.

It usually is, but let's face it, we're all humans, and when you spend
a whole day doing backports from 2.3 to 1.6, it's not always trivial
to decide whether it's better to drop certain fixes, especially when
you remember having worked on them and know the trouble the bug they
fix can cause.

> I don't want to be the monday morning quarterback,

No, feel free to!

> but in 1.7 we have
> 853926a9ac ("BUG/MEDIUM: ebtree: use a byte-per-byte memcmp() to
> compare memory blocks") and I quote from the commit message:
> 
> > This is not correct and definitely confuses address sanitizers. In real
> > life the problem doesn't have visible consequences.
> > [...]
> > there is no way such a multi-byte access will cross a page boundary
> > and end up reading from an unallocated zone. This is why it was
> > never noticed before.
> 
> This sounds like a backport candidate for "warm" stable branches
> (maybe), but 1.7 and 1.8 feel too "cold" for this, even 8 - 9 months
> ago.

I agree in principle, but I'm pretty sure we've later met a situation
where it resulted in a read past the end of a node. I mean, given that
commits cannot be amended after they're merged, it's quite common to
later figure that a fix for a somewhat innocent bug used to fix a more
important one. This is exactly one of the problems that GregKH explained
in his talk about issues caused by CVE (https://youtu.be/HeeoTE9jLjM).
Normally we address this by adding a personal comment to the commit
during the backport.

> This backport specifically causes a build failure of 1.7.13 on musl
> (#760) - because of a missing backport, but that's just an example.

Yes but this is a good example. We are particularly careful about
such issues and sometimes wait longer to pick a series at once in
order to get a fix and its own fixes. But this does not always work
that well :-/  And we're trying to force ourselves not to backport
too fast to older versions. However, we know that backporting a patch
to 5 versions is roughly the same amount of work as doing it for 2 as
long as it's still hot in the developer's head, while it would be 5
times that if done with a delay, so there's a sweet spot to find there.

We've even thought about using the -next branch in stable repos to
queue delayed fixes and wait for their own fixes, but that implies
doing other non-trivial changes to our workflow.

> 39
> "MINOR" patches made it into 1.6.16, 62 patches in 1.7.13. While it is
> true that a lot of "MINOR" tagged patches are actually important, this
> is still a large number for a tree that is supposed to die so soon.

I agree. But on the other hand many times we've found that we've left a
nasty bug because we didn't backport something. The problem here is in
fact the frequency of such releases. If we managed to emit them more
often with colder fixes, this would be much less of a concern. Right
now, patches are "ejected" once a year or so, and while some of them
are totally cold and safe, others are much more recent and may present
a problem. I'm well accustomed to this issue; I was facing the same with
the extended LTS kernels. In theory everyone would love to see a sliding
window of patches, in practice some fixes are quite important and may
rely on other ones so it's not that easy to keep some out of the way.

> Very rarely do users build from source from such old trees anyway (and
> those users would be especially conservative, definitely not
> interested in generic, non-important improvements).

Sure but the goal is not to bring improvements there but to fix real
issues for which they upgrade. On the other hand we also know that such
users will not update their prod unless they face an issue. And when
you face an issue after 3-4 years of operations, it's rarely one tagged
major because it would have been spotted much earlier. Often it's a
corner case of a minor one that was not identified during its fixing
(e.g. a crash every 3 months because the memcmp() above crosses a page
boundary when comparing a string past its 8th byte and lands in an
unallocated area).

> > But with this in mind, there were two options:
> >   - not releasing the latest fixes
> 
> You are talking about whether to publish a release or not for tree
> X.Y, with the backports that are already in the tree. I don't think
> that's the issue.

That's the concern Vincent initially brought up, which is why I'm
speaking about this option.

> I think the discussion should be about what commits land in those old
> trees in the first place.

Yes I agree.

> And I don't believe it is scalable to make
> those decisions during your backporting sessions.


Re: [2.2.9] 100% CPU usage

2021-03-22 Thread Maciej Zdeb
Hi Christopher,

Thanks! I'm building a patched version and will return with feedback!

Kind regards,

On Fri, 19 Mar 2021 at 16:40, Christopher Faulet wrote:

> On 16/03/2021 at 13:46, Maciej Zdeb wrote:
> > Sorry for the spam. In the last message I said that the old process
> > (after reload) is consuming cpu for lua processing, and that's not
> > true; it is processing other things also.
> >
> > I'll take a break. ;) Then I'll verify whether the issue exists on the
> > 2.3 and maybe the 2.4 branch. For each version I need a week or two to
> > be sure the issue does not occur. :(
> >
> > If 2.3 and 2.4 behave the same way 2.2 does, I'll try to confirm
> > whether there is any relation between the infinite loops and my custom
> > configuration:
> > - lua scripts (mainly used for header generation/manipulation),
> > - spoe (used for sending metadata about each request to an external
> >   service),
> > - peers (we have a cluster of 12 HAProxy servers connected to each
> >   other).
> >
> >
> Hi Maciej,
>
> I've read your backtraces more carefully, and indeed, it seems to be
> related to lua processing. I don't know if the watchdog is triggered
> because of the lua or if it is just a side-effect. But the lua
> execution is interrupted inside the memory allocator, and
> malloc/realloc are not async-signal-safe. Unfortunately, when the lua
> stack is dumped, the same allocator is also used. At this stage,
> because a lock was not properly released, HAProxy enters a deadlock.
>
> On the other threads, we loop in the watchdog, waiting for their turn
> to dump the thread information, and that explains the 100% CPU usage
> you observe.
>
> So, to prevent this situation, the lua stack must not be dumped if it
> was interrupted inside an unsafe part. It is the easiest way we found
> to work around this bug. And because it is pretty rare, it should be
> good enough.
>
> However, I'm unable to reproduce the bug. Could you test the attached
> patches please? I attached patches for the 2.4, 2.3 and 2.2. Because
> you experienced this bug on the 2.2, it is probably easier to test the
> patches for this version.
>
> If fixed, it would be good to figure out why the watchdog is triggered
> on your old processes.
>
> --
> Christopher Faulet
>