On Tuesday, 15 September 2020 9:24:32 AM CEST Willy Tarreau wrote:
> Hi Pavlos!
> 
> On Sat, Sep 12, 2020 at 11:45:12AM +0200, Pavlos Parissis wrote:
> > Hi old friends!,
> > 
> > Is there anything on the roadmap about adding circuit breaking that
> > adapts its settings using real-time data?
> > I believe we discussed this with a group of people at the last
> > HAProxyConf, but I don't remember whether there were any concrete
> > plans to work on it back then.
> > 
> > I know that something similar can be accomplished by using an agent 
> > check, but IMHO it is less than ideal.
> > - watch 
> > https://www.youtube.com/watch?v=CQvmSXlnyeQ&list=PLj6h78yzYM2O1wlsM-Ma-RYhfT5LKq0XC&index=21
> > 
> > - read 
> > https://www.envoyproxy.io/docs/envoy/v1.15.0/configuration/http/http_filters/adaptive_concurrency_filter
> 
> Yep as you say we can indeed already do it using agent checks. I know
> it's not ideal, but what matters to me is that it indicates we already
> have the required core functionalities to do something like this.
> 

This is excellent!
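As a concrete illustration, such an agent can be a tiny TCP responder: HAProxy's agent-check reads a short ASCII reply such as "up", "down", "drain", or a weight percentage, terminated by a newline. The load-to-weight mapping below is purely my own assumption, just a sketch:

```python
# Illustrative agent-check responder: HAProxy periodically connects and
# reads one short ASCII reply ("up", "down", "drain", or a weight
# percentage) terminated by a newline.
import os
import socket

def weight_reply() -> bytes:
    """Map the 1-minute load average to a weight between 10% and 100%."""
    load, _, _ = os.getloadavg()
    ncpu = os.cpu_count() or 1
    weight = int(100 * (1 - load / (2 * ncpu)))  # idle -> 100, saturated -> lower
    return b"%d%%\n" % max(10, min(100, weight))

def serve(host: str = "0.0.0.0", port: int = 9999) -> None:
    """Answer each agent-check connection with the current weight."""
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(weight_reply())
```

A server line would then point at it with something like `agent-check agent-port 9999`.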

> The proper way to implement such a mechanism is by using feedback
> reaction, similar to how MPPT solar panel controllers work: you never
> know if you're sending enough load to keep the servers busy, so you
> constantly need to try to send a bit more to see if any service
> degradation occurs or not. The degradation happens by response time
> suddenly becoming affine to the number of concurrent requests. Then
> the decision of where to stick needs to be made based on the choice
> of lower latency (bottom of curve) or higher bandwidth (top of curve).
> A simpler approach would consist in having a setting indicating how
> much extra response time is acceptable compared to the measured optimal
> one.
> 
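If I follow the idea correctly, a toy version of such a probing loop might look like this (purely an illustrative sketch; the AIMD-style adjustment, the names, and the 20% margin are my own assumptions):

```python
# Toy feedback controller in the spirit described above: keep probing
# with a slightly higher concurrency limit, and back off as soon as
# response time grows past an acceptable margin over the best latency
# observed so far.

class AdaptiveLimit:
    def __init__(self, lo: int, hi: int, margin: float = 0.2):
        self.lo, self.hi = lo, hi      # user-configured safe bounds
        self.margin = margin           # acceptable extra latency (20%)
        self.limit = lo                # current concurrency limit
        self.best_rtt = float("inf")   # lowest response time seen so far

    def observe(self, rtt: float) -> int:
        """Feed one response-time sample; return the new limit."""
        self.best_rtt = min(self.best_rtt, rtt)
        if rtt <= self.best_rtt * (1 + self.margin):
            # No degradation: probe for more (additive increase).
            self.limit = min(self.hi, self.limit + 1)
        else:
            # Latency became affine to concurrency: back off hard
            # (multiplicative decrease), never below the safe floor.
            self.limit = max(self.lo, self.limit // 2)
        return self.limit
```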
> I also think that it remains important to let the user configure lower
> and upper bounds that guarantee safe operations. And that's typically
> what could be done using the minconn and maxconn values. Instead of
> using the backend's fullconn setting we would rely on a response time
> measurement.
> 
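For reference, those knobs already exist today: when minconn is set, the server's effective maxconn becomes dynamic and ramps between minconn and maxconn according to the backend's load relative to fullconn. The idea above would drive that ramp from response-time measurements instead. A typical configuration today (names and addresses are just an example):

```
backend app
    fullconn 1000
    server srv1 10.0.0.1:80 minconn 50 maxconn 200 check
```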
> Or maybe that could be the right opportunity for splitting connection
> concurrency from request concurrency, keeping connections for the hard
> limits and using request concurrency for softer limits.
> 

What do we gain by doing this? IMHO, most applications suffer from
concurrent requests rather than from connections. Having said that, there are
still applications for which handling many SSL connections, idle or not, comes
with performance issues.

> There's no such ongoing work that I'm aware of, but it has always
> been a subject of interest to me (I even wrote down an algorithm to
> compute weights from measured response times using a low-pass filter a
> decade ago, but I lost my notes and never felt like doing the work
> again). So if anyone is interested in this subject, we can continue
> this conversation till we reach something that looks like a possible
> design roadmap.
> 
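Something along those lines could be sketched as follows (illustrative only, certainly not the lost notes; the smoothing factor and the weight scale are assumptions on my part):

```python
# Sketch: smooth each server's measured response time with a low-pass
# filter (an EWMA) and derive load-balancing weights inversely
# proportional to the smoothed latency.

def ewma(prev: float, sample: float, alpha: float = 0.1) -> float:
    """Single-pole low-pass filter over response-time samples."""
    return (1 - alpha) * prev + alpha * sample

def weights_from_rtts(smoothed_rtts: dict[str, float],
                      scale: int = 100) -> dict[str, int]:
    """Weight each server inversely to its smoothed response time,
    normalized so the fastest server gets the full scale."""
    fastest = min(smoothed_rtts.values())
    return {srv: max(1, round(scale * fastest / rtt))
            for srv, rtt in smoothed_rtts.items()}
```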

I can volunteer for testing in my spare time. I don't have a valid use case at
my current job, but I am still very much interested in helping the HAProxy
community support this feature.

Thanks Willy for getting back to me,
Pavlos

> Cheers,
> Willy
> 
