Re: Stale BZ Bug Tracker reports

2018-11-06 Thread William A Rowe Jr
On Fri, Nov 2, 2018, 06:24 Luca Toscano wrote:
> On Thu, Nov 1, 2018 at 22:35, William A Rowe Jr wrote:
> >
> > To keep this thread moving (additional feedback is welcomed and
> appreciated)...
>
> Thanks a lot for this effort, William. Having 1700+ open bugs does not
> look good to any reporter or user unfamiliar with how we work, since it
> is easy to get the impression that bugs are simply left to gather dust
> for ages (which isn't really what happens).
>
>
> >
> > So I'd read this as the bug needs to be reproduced with a "later"
> version of httpd, and is subject to reconsideration "later" on further
> review, but may have already been resolved in a "later" release.
>
> The RESOLVED/FUTURE status seems a bit confusing from my point of view,
> but I don't really have a better suggestion. I would personally stick
> with something that clearly indicates that the bug was closed due to
> being too old.
>
> [Generic text comment proposed]
>
> +1 nice!
>
> > Edits noted, thanks!
>

It seems I should clarify that such a bug may be reevaluated "LATER", and
that we are asking for the reporter's help to close it or reopen it against
a current release. I'll work up a small tweak to the language we've agreed
on.

We've given this enough cycles, so I'll proceed with the initial step,
looking for any tweaks that might let bugs mentioning "2.4." in the
discussion be called out for manual reevaluation.
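Flagging open bugs whose discussion mentions "2.4." could be scripted against
the ASF Bugzilla search interface. A rough sketch that only builds the query
URL (the `longdesc`/`longdesc_type` fields are classic Bugzilla search
parameters, but the exact field names and the product name should be checked
against the tracker's advanced-search form before relying on this):

```python
from urllib.parse import urlencode

# ASF Bugzilla front end; buglist.cgi accepts the advanced-search fields.
BUGZILLA = "https://bz.apache.org/bugzilla/buglist.cgi"

def stale_bug_query(text="2.4.", product="Apache httpd-2"):
    """Build a search URL for open bugs whose comments mention `text`.

    longdesc / longdesc_type search the full comment text; ctype=csv asks
    for machine-readable output suitable for a triage script.
    """
    params = {
        "product": product,
        "bug_status": "NEW",
        "longdesc": text,
        "longdesc_type": "allwordssubstr",
        "ctype": "csv",
    }
    return BUGZILLA + "?" + urlencode(params)
```

The resulting URL can then be fetched and the CSV fed into whatever bulk
comment/close step we settle on.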


Re: Load balancing and load determination

2018-11-06 Thread Jim Jagielski
Which is why we allow for both pre-send checks and out-of-band health checks...

> On Nov 5, 2018, at 10:58 AM, William A Rowe Jr  wrote:
> 
> 
> The last thing we want is the routing headache of contacting an
> ever-changing list of one-or-many potential balancers. And we can't
> rely on a dying lbmember to "check in" that it isn't functional. Since
> the balancer must already send requests to the backend, having that
> backend supplement its responses with its health status is simple.
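Piggybacking a health snapshot on each response, as suggested above, is cheap
to sketch. A minimal illustration as Python WSGI middleware; the header name
`X-Backend-Load` and the metric (1-minute loadavg divided by CPU count, so
roughly 1.0 means saturated) are assumptions for the example, not any agreed
convention:

```python
import os

def load_header_middleware(app):
    """Wrap a WSGI app so every response carries a load snapshot header.

    The balancer could read X-Backend-Load from each proxied response and
    optionally strip it before forwarding to the client.
    """
    def wrapped(environ, start_response):
        def start_with_load(status, headers, exc_info=None):
            # 1-minute load average normalized by CPU count (Unix only).
            load = os.getloadavg()[0] / (os.cpu_count() or 1)
            headers = list(headers) + [("X-Backend-Load", "%.3f" % load)]
            return start_response(status, headers, exc_info)
        return app(environ, start_with_load)
    return wrapped
```

Composing the value is a couple of syscalls per response, which matches the
"next to no traffic and minimal cpu drain" goal.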



Re: Load balancing and load determination

2018-11-06 Thread jean-frederic clere
On 05/11/2018 16:58, William A Rowe Jr wrote:
> On Mon, Nov 5, 2018 at 7:48 AM jean-frederic clere wrote:
> 
> On 30/10/2018 13:53, Jim Jagielski wrote:
> > As some of you know, one of my passions and area of focus is
> > on the use of Apache httpd as a reverse proxy and, as such, load
> > balancing, failover, etc are of vital interest to me.
> >
> > One topic which I have mulling over, off and on, has been the
> > idea of some sort of universal load number, that could be used
> > and agreed upon by web servers. Right now, the reverse proxy
> > "guesses" the load on the backend servers which is OK, and
> > works well enough, but it would be great if it actually "knew"
> > the current loads on those servers. I already have code that
> > shares basic architectural info, such as number of CPUs, available
> > memory, loadavg, etc which can help, of course, but again, all
> > this info can be used to *infer* the current status of those backend
> > servers; it doesn't really provide what the current load actually
> > *is*.
> >
> > So I was thinking maybe some sort of small, simple and "fast"
> > benchmark which could be run by the backends as part of their
> > "status" update to the front-end reverse proxy server... something
> > that shows general capability at that point in time, like Hanoi or
> > something similar. Or maybe some hash function. Some simple code
> > that could be used to create that "universal" load number.
> >
> > Thoughts? Ideas? Comments? Suggestions? :)
> 
> Having the back-ends provide the load they are able to handle as an
> lbfactor (via w_lf or something similar) could work. That requires the
> back-ends to be able to send requests to the httpd balancer-manager
> handler.
> 
> 
> Not really. I'd suggest a response header, travelling with each response
> back to the balancer, which can be composed quickly enough to share
> a play-by-play snapshot of that backend's availability. This adds
> next to no traffic and minimal CPU drain if composed cleanly. And it can
> optionally be stripped by the balancer from the response to the client.

The problem is that if no requests are going to a back-end, the
load-balancer won't know that the back-end is available again after a
load peak.
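This is the argument for an out-of-band probe that runs independently of
client traffic: a drained member receives no requests, so only an active
check can observe that it has recovered. In httpd this bookkeeping lives in
the balancer itself (e.g. mod_proxy_hcheck); the sketch below is only an
illustration of the idea, and the member URLs and `/healthz` path are
assumptions:

```python
import time
import urllib.request

# Hypothetical member table; "up"/"down" is what the balancer consults
# when choosing where to route. Addresses are illustrative.
members = {"http://10.0.0.1:8080": "up", "http://10.0.0.2:8080": "up"}

def probe_once():
    """Probe every member once and record its health, traffic or not."""
    for url in list(members):
        try:
            with urllib.request.urlopen(url + "/healthz", timeout=2) as r:
                ok = 200 <= r.status < 400
        except OSError:  # connection refused, timeout, DNS failure, ...
            ok = False
        members[url] = "up" if ok else "down"

def health_check_loop(interval=10):
    while True:          # runs on its own schedule, not per-request
        probe_once()
        time.sleep(interval)
```

Because the loop fires every `interval` seconds regardless of load, a member
marked "down" during a peak is promoted back to "up" as soon as a probe
succeeds.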

> 
> The last thing we want are the routing headaches of contacting an
> ever-changing list one-or-many potential balancers. And we can't
> rely on a dying lbmember to "check in" that it isn't functional. Since
> the balancer must already start requests to the backend, having that
> backend supplement the responses with its health status is simple.
> 
> 

CPING/CPONG or OPTIONS * allows checking back-end nodes before sending
requests.
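mod_proxy_hcheck wires up exactly this kind of independent probe. A minimal
configuration sketch (member addresses are illustrative; the module also
requires mod_watchdog to be loaded):

```apache
# Consider a member healthy when the probe returns a 2xx/3xx/4xx status.
ProxyHCExpr ok {%{REQUEST_STATUS} =~ /^[234]/}

<Proxy "balancer://backends">
    # hcmethod=OPTIONS sends an OPTIONS probe every hcinterval seconds,
    # independent of client traffic, so recovered members are re-enabled.
    BalancerMember "http://10.0.0.1:8080" hcmethod=OPTIONS hcexpr=ok hcinterval=10
    BalancerMember "http://10.0.0.2:8080" hcmethod=OPTIONS hcexpr=ok hcinterval=10
</Proxy>

ProxyPass "/app" "balancer://backends/app"
```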

-- 
Cheers

Jean-Frederic