Re: svn commit: r1838941 - /httpd/httpd/trunk/modules/proxy/mod_proxy.h

2018-08-24 Thread William A Rowe Jr
MMN major, please?

On Fri, Aug 24, 2018 at 2:46 PM,  wrote:

> Author: jim
> Date: Fri Aug 24 19:46:57 2018
> New Revision: 1838941
>
> URL: http://svn.apache.org/viewvc?rev=1838941&view=rev
> Log:
> better struct layout ...
>
> Modified:
> httpd/httpd/trunk/modules/proxy/mod_proxy.h
>
> Modified: httpd/httpd/trunk/modules/proxy/mod_proxy.h
> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/mod_proxy.h?rev=1838941&r1=1838940&r2=1838941&view=diff
> ==============================================================================
> --- httpd/httpd/trunk/modules/proxy/mod_proxy.h (original)
> +++ httpd/httpd/trunk/modules/proxy/mod_proxy.h Fri Aug 24 19:46:57 2018
> @@ -403,6 +403,9 @@ typedef struct {
>  char  uds_path[PROXY_WORKER_MAX_NAME_SIZE];   /* path to worker's unix domain socket if applicable */
>  char  hcuri[PROXY_WORKER_MAX_ROUTE_SIZE]; /* health check uri */
>  char  hcexpr[PROXY_WORKER_MAX_SCHEME_SIZE];   /* name of condition expr for health check */
> +char  secret[PROXY_WORKER_MAX_SECRET_SIZE]; /* authentication secret (e.g. AJP13) */
> +char  upgrade[PROXY_WORKER_MAX_SCHEME_SIZE];/* upgrade protocol used by mod_proxy_wstunnel */
> +char  hostname_ex[PROXY_RFC1035_HOSTNAME_SIZE];  /* RFC1035 compliant version of the remote backend address */
>  int lbset;  /* load balancer cluster set */
>  int retries;/* number of retries on this worker */
>  int lbstatus;   /* Current lbstatus */
> @@ -438,14 +441,11 @@ typedef struct {
>  apr_size_t  io_buffer_size;
>  apr_size_t  elected;/* Number of times the worker was elected */
>  apr_size_t  busy;   /* busyness factor */
> +apr_size_t  response_field_size; /* Size of proxy response buffer in bytes. */
>  apr_port_t  port;
>  apr_off_t   transferred;/* Number of bytes transferred to remote */
>  apr_off_t   read;   /* Number of bytes read from remote */
>  void*context;   /* general purpose storage */
> -char  secret[PROXY_WORKER_MAX_SECRET_SIZE]; /* authentication secret (e.g. AJP13) */
> -char  upgrade[PROXY_WORKER_MAX_SCHEME_SIZE];/* upgrade protocol used by mod_proxy_wstunnel */
> -char  hostname_ex[PROXY_RFC1035_HOSTNAME_SIZE];  /* RFC1035 compliant version of the remote backend address */
> -apr_size_t   response_field_size; /* Size of proxy response buffer in bytes. */
>  unsigned int keepalive:1;
>  unsigned int disablereuse:1;
>  unsigned int is_address_reusable:1;
> @@ -460,7 +460,7 @@ typedef struct {
>  unsigned int disablereuse_set:1;
>  unsigned int was_malloced:1;
>  unsigned int is_name_matchable:1;
> -unsigned int response_field_size_set:1;
> +unsigned int response_field_size_set:1;
>  } proxy_worker_shared;
>
>  #define ALIGNED_PROXY_WORKER_SHARED_SIZE (APR_ALIGN_DEFAULT(sizeof(
> proxy_worker_shared)))
>
>
>


Re: [Bug 62318] healthcheck

2018-08-24 Thread Jim Jagielski


> On Aug 24, 2018, at 1:18 PM, Eric Covener  wrote:
> 
> 
> I don't think it's hc execution time relative to interval, it's hc
> execution time relative to AP_WD_TM_SLICE.  If it's much more than
> AP_WD_TM_SLICE they'll stack up while one is running.
> For a 1 second response you could queue up 10 in a second even if you
> only have an interval of a minute or an hour.


Hmm... let me do some testing here... if that's the case, then it's borked
for sure.

Re: [Bug 62318] healthcheck

2018-08-24 Thread Christophe JAILLET

Le 24/08/2018 à 19:08, Jim Jagielski a écrit :



On Aug 24, 2018, at 12:53 PM, Christophe JAILLET 
 wrote:

I've only found it in mod_proxy_balancer and, IIUC, the meaning is "slightly" 
different from its use in hcheck! :)
Looks like this 'updated' field was dedicated to recording the time a worker 
was added.

So, my understanding is that, either:
   - hcheck already changed the meaning of this field, and broke the API when 
it was introduced.
or
   - the API only says that 'updated' is "timestamp of last update", without telling 
which kind of update! So why couldn't it be used by hcheck to keep record of the 
"timestamp of last update"... of its check?

It's the latter... recall that health check workers are totally different and 
separate from real workers.

I wasn't aware of that. So +1 for me.
Maybe some code comment (if not already present), should clarify that?





I still think that moving when s->updated is updated (sic!) in hcheck should be 
OK, and wouldn't be an API breakage for me.
I don't think that it can interfere in any way with mod_proxy_balancer, at 
least with the current code.
And we should clarify what is the use of these fields to avoid someone else 
'stealing' them.

Let's stop w/ the idea that the API is broken or stolen :)

But yeah, doing the update = now when started/queued makes sense,
assuming that we understand the issue.






Re: [Bug 62318] healthcheck

2018-08-24 Thread Eric Covener
On Fri, Aug 24, 2018 at 1:05 PM Jim Jagielski  wrote:
>
>
>
> On Aug 24, 2018, at 12:05 PM, Eric Covener  wrote:
>
> On Fri, Aug 24, 2018 at 11:57 AM Christophe JAILLET
>  wrote:
>
>
> Le 24/08/2018 à 16:40, Jim Jagielski a écrit :
>
> I was wondering if someone wanted to provide a sanity check
> on the above PR and what's "expected" by the health check code.
>
> It would be very easy to adjust so that hcinterval was not
> the time between successive checks but the interval between
> the end of one and the start of another, but I'm not sure that
> is as useful. In other words, I think the current behavior
> is right (but think the docs need to be updated), but am
> willing to have my mind changed :)
>
> Hi Jim,
>
> the current behavior is also what I would expect.
> If I configure a check every 10s, I would expect 6 checks each minute,
> even if the test itself takes time to perform.
>
>
>
> Bug describes something else IIUC.  Because the watchdog calls us 10
> times per second, it continuously sees that the worker hasn't been
> health checked within the desired interval and queues up a check, it
> doesn't know one is queued.
>
>
> But that is only an issue, afaict, if the time taken to do the health check is
> greater than the interval chosen... Or am I misunderstanding? That is,
> if the interval is 200ms, and the health check takes 100ms, all is fine, we
> get 5 checks a second.

I don't think it's hc execution time relative to interval, it's hc
execution time relative to AP_WD_TM_SLICE.  If it's much more than
AP_WD_TM_SLICE they'll stack up while one is running.
For a 1 second response you could queue up 10 in a second even if you
only have an interval of a minute or an hour.


Re: [Bug 62318] healthcheck

2018-08-24 Thread Jim Jagielski



> On Aug 24, 2018, at 12:53 PM, Christophe JAILLET 
>  wrote:
> 
> I've only found it in mod_proxy_balancer and, IIUC, the meaning is "slightly" 
> different from its use in hcheck! :)
> Looks like this 'updated' field was dedicated to recording the time a worker 
> was added.
> 
> So, my understanding is that, either:
>    - hcheck already changed the meaning of this field, and broke the API when 
> it was introduced.
> or
>    - the API only says that 'updated' is "timestamp of last update", without 
> telling which kind of update! So why couldn't it be used by hcheck to keep 
> record of the "timestamp of last update"... of its check?

It's the latter... recall that health check workers are totally different and 
separate from real workers.

> 
> I still think that moving when s->updated is updated (sic!) in hcheck should 
> be OK, and wouldn't be an API breakage for me.
> I don't think that it can interfere in any way with mod_proxy_balancer, at 
> least with the current code.
> And we should clarify what is the use of these fields to avoid someone else 
> 'stealing' them.

Let's stop w/ the idea that the API is broken or stolen :)

But yeah, doing the update = now when started/queued makes sense,
assuming that we understand the issue.

Re: [Bug 62318] healthcheck

2018-08-24 Thread Jim Jagielski


> On Aug 24, 2018, at 12:05 PM, Eric Covener  wrote:
> 
> On Fri, Aug 24, 2018 at 11:57 AM Christophe JAILLET
>  wrote:
>> 
>> Le 24/08/2018 à 16:40, Jim Jagielski a écrit :
>>> I was wondering if someone wanted to provide a sanity check
>>> on the above PR and what's "expected" by the health check code.
>>> 
>>> It would be very easy to adjust so that hcinterval was not
>>> the time between successive checks but the interval between
>>> the end of one and the start of another, but I'm not sure that
>>> is as useful. In other words, I think the current behavior
>>> is right (but think the docs need to be updated), but am
>>> willing to have my mind changed :)
>>> 
>> Hi Jim,
>> 
>> the current behavior is also what I would expect.
>> If I configure a check every 10s, I would expect 6 checks each minute,
>> even if the test itself takes time to perform.
> 
> 
> Bug describes something else IIUC.  Because the watchdog calls us 10
> times per second, it continuously sees that the worker hasn't been
> health checked within the desired interval and queues up a check, it
> doesn't know one is queued.

But that is only an issue, afaict, if the time taken to do the health check is
greater than the interval chosen... Or am I misunderstanding? That is,
if the interval is 200ms, and the health check takes 100ms, all is fine, we
get 5 checks a second. 

I guess what we could do is emit a warning when a check is queued and we
already have one queued or in process. This would give some info to the sysadmin.
We could also track the time taken to perform a check and have that available
via mod_status as well. But these all assume that the underlying logic, and
how it's implemented, is sane.

Re: [Bug 62318] healthcheck

2018-08-24 Thread Jim Jagielski
Yes, but the updated field is used differently for the health check workers and
the "real" workers.

> On Aug 24, 2018, at 12:21 PM, Eric Covener  wrote:
> 
> On Fri, Aug 24, 2018 at 12:13 PM Christophe JAILLET
>  wrote:
>> 
>> Le 24/08/2018 à 17:56, Christophe JAILLET a écrit :
>>> Le 24/08/2018 à 16:40, Jim Jagielski a écrit :
 I was wondering if someone wanted to provide a sanity check
 on the above PR and what's "expected" by the health check code.
 
 It would be very easy to adjust so that hcinterval was not
 the time between successive checks but the interval between
 the end of one and the start of another, but I'm not sure that
 is as useful. In other words, I think the current behavior
 is right (but think the docs need to be updated), but am
 willing to have my mind changed :)
 
>>> Hi Jim,
>>> 
>>> the current behavior is also what I would expect.
>>> If I configure a check every 10s, I would expect 6 checks each minute,
>>> even if the test itself takes time to perform.
>>> 
>>> 
>>> 
>>> Not related, but is there any use for 'hc_pre_config()'?
>>> We already have:
>>>   static int tpsize = HC_THREADPOOL_SIZE;
>>> 
>>> Having both looks redundant.
>>> 
>>> CJ
>>> 
>>> 
>> but shouldn't we
>>worker->s->updated = now;
>> when the check is started (in hc_watchdog_callback()) instead of when it
>> is finished (at the end of hc_check())?
> 
> Looks like s->updated is not used elsewhere in HC but is used
> elsewhere in proxy modules and is in the API.
> I don't know if that calls for a 2nd timestamp or just a bit for
> when checks are in progress.  Could be useful in
> the future to keep track of the addl information.



Re: [Bug 62318] healthcheck

2018-08-24 Thread Christophe JAILLET

Le 24/08/2018 à 18:21, Eric Covener a écrit :

On Fri, Aug 24, 2018 at 12:13 PM Christophe JAILLET
 wrote:

Le 24/08/2018 à 17:56, Christophe JAILLET a écrit :

Le 24/08/2018 à 16:40, Jim Jagielski a écrit :

I was wondering if someone wanted to provide a sanity check
on the above PR and what's "expected" by the health check code.

It would be very easy to adjust so that hcinterval was not
the time between successive checks but the interval between
the end of one and the start of another, but I'm not sure that
is as useful. In other words, I think the current behavior
is right (but think the docs need to be updated), but am
willing to have my mind changed :)


Hi Jim,

the current behavior is also what I would expect.
If I configure a check every 10s, I would expect 6 checks each minute,
even if the test itself takes time to perform.



Not related, but is there any use for 'hc_pre_config()'?
We already have:
static int tpsize = HC_THREADPOOL_SIZE;

Having both looks redundant.

CJ



but shouldn't we
 worker->s->updated = now;
when the check is started (in hc_watchdog_callback()) instead of when it
is finished (at the end of hc_check())?

Looks like s->updated is not used elsewhere in HC but is used
elsewhere in proxy modules and is in the API.
I don't know if that calls for a 2nd timestamp or just a bit for
when checks are in progress.  Could be useful in
the future to keep track of the addl information.
I've only found it in mod_proxy_balancer and, IIUC, the meaning is 
"slightly" different from its use in hcheck! :)
Looks like this 'updated' field was dedicated to recording the time a 
worker was added.


So, my understanding is that, either:
   - hcheck already changed the meaning of this field, and broke the 
API when it was introduced.

or
   - the API only says that 'updated' is "timestamp of last update", 
without telling which kind of update! So why couldn't it be used by 
hcheck to keep record of the "timestamp of last update"... of its check?


I still think that moving when s->updated is updated (sic!) in hcheck 
should be OK, and wouldn't be an API breakage for me.
I don't think that it can interfere in any way with mod_proxy_balancer, 
at least with the current code.
And we should clarify what is the use of these fields to avoid someone 
else 'stealing' them.


just my 2c.

CJ



Re: [Bug 62318] healthcheck

2018-08-24 Thread Eric Covener
On Fri, Aug 24, 2018 at 12:13 PM Christophe JAILLET
 wrote:
>
> Le 24/08/2018 à 17:56, Christophe JAILLET a écrit :
> > Le 24/08/2018 à 16:40, Jim Jagielski a écrit :
> >> I was wondering if someone wanted to provide a sanity check
> >> on the above PR and what's "expected" by the health check code.
> >>
> >> It would be very easy to adjust so that hcinterval was not
> >> the time between successive checks but the interval between
> >> the end of one and the start of another, but I'm not sure that
> >> is as useful. In other words, I think the current behavior
> >> is right (but think the docs need to be updated), but am
> >> willing to have my mind changed :)
> >>
> > Hi Jim,
> >
> > the current behavior is also what I would expect.
> > If I configure a check every 10s, I would expect 6 checks each minute,
> > even if the test itself takes time to perform.
> >
> >
> >
> > Not related, but is there any use for 'hc_pre_config()'?
> > We already have:
> >static int tpsize = HC_THREADPOOL_SIZE;
> >
> > Having both looks redundant.
> >
> > CJ
> >
> >
> but shouldn't we
> worker->s->updated = now;
> when the check is started (in hc_watchdog_callback()) instead of when it
> is finished (at the end of hc_check())?

Looks like s->updated is not used elsewhere in HC but is used
elsewhere in proxy modules and is in the API.
I don't know if that calls for a 2nd timestamp or a just a bit for
when checks are in progress.  Could be useful in
the future to keep track of the addl information.


Re: [Bug 62318] healthcheck

2018-08-24 Thread Marion & Christophe JAILLET

Yes, agreed.

CJ


Le 24/08/2018 à 18:05, Eric Covener a écrit :

On Fri, Aug 24, 2018 at 11:57 AM Christophe JAILLET
 wrote:

Le 24/08/2018 à 16:40, Jim Jagielski a écrit :

I was wondering if someone wanted to provide a sanity check
on the above PR and what's "expected" by the health check code.

It would be very easy to adjust so that hcinterval was not
the time between successive checks but the interval between
the end of one and the start of another, but I'm not sure that
is as useful. In other words, I think the current behavior
is right (but think the docs need to be updated), but am
willing to have my mind changed :)


Hi Jim,

the current behavior is also what I would expect.
If I configure a check every 10s, I would expect 6 checks each minute,
even if the test itself takes time to perform.


Bug describes something else IIUC.  Because the watchdog calls us 10
times per second, it continuously sees that the worker hasn't been
health checked within the desired interval and queues up a check, it
doesn't know one is queued.





Re: [Bug 62318] healthcheck

2018-08-24 Thread Christophe JAILLET

Le 24/08/2018 à 17:56, Christophe JAILLET a écrit :

Le 24/08/2018 à 16:40, Jim Jagielski a écrit :

I was wondering if someone wanted to provide a sanity check
on the above PR and what's "expected" by the health check code.

It would be very easy to adjust so that hcinterval was not
the time between successive checks but the interval between
the end of one and the start of another, but I'm not sure that
is as useful. In other words, I think the current behavior
is right (but think the docs need to be updated), but am
willing to have my mind changed :)


Hi Jim,

the current behavior is also what I would expect.
If I configure a check every 10s, I would expect 6 checks each minute, 
even if the test itself takes time to perform.




Not related, but is there any use for 'hc_pre_config()'?
We already have:
   static int tpsize = HC_THREADPOOL_SIZE;

Having both looks redundant.

CJ



but shouldn't we
   worker->s->updated = now;
when the check is started (in hc_watchdog_callback()) instead of when it 
is finished (at the end of hc_check())?


Otherwise, it could be re-triggered before the completion of the first 
one (if slow).


CJ



Re: [Bug 62318] healthcheck

2018-08-24 Thread Eric Covener
On Fri, Aug 24, 2018 at 11:57 AM Christophe JAILLET
 wrote:
>
> Le 24/08/2018 à 16:40, Jim Jagielski a écrit :
> > I was wondering if someone wanted to provide a sanity check
> > on the above PR and what's "expected" by the health check code.
> >
> > It would be very easy to adjust so that hcinterval was not
> > the time between successive checks but the interval between
> > the end of one and the start of another, but I'm not sure that
> > is as useful. In other words, I think the current behavior
> > is right (but think the docs need to be updated), but am
> > willing to have my mind changed :)
> >
> Hi Jim,
>
> the current behavior is also what I would expect.
> If I configure a check every 10s, I would expect 6 checks each minute,
> even if the test itself takes time to perform.


Bug describes something else IIUC.  Because the watchdog calls us 10
times per second, it continuously sees that the worker hasn't been
health checked within the desired interval and queues up a check, it
doesn't know one is queued.


Re: [Bug 62318] healthcheck

2018-08-24 Thread Christophe JAILLET

Le 24/08/2018 à 16:40, Jim Jagielski a écrit :

I was wondering if someone wanted to provide a sanity check
on the above PR and what's "expected" by the health check code.

It would be very easy to adjust so that hcinterval was not
the time between successive checks but the interval between
the end of one and the start of another, but I'm not sure that
is as useful. In other words, I think the current behavior
is right (but think the docs need to be updated), but am
willing to have my mind changed :)


Hi Jim,

the current behavior is also what I would expect.
If I configure a check every 10s, I would expect 6 checks each minute, 
even if the test itself takes time to perform.




Not related, but is there any use for 'hc_pre_config()'?
We already have:
   static int tpsize = HC_THREADPOOL_SIZE;

Having both looks redundant.

CJ



Re: [Bug 62318] healthcheck

2018-08-24 Thread Jim Jagielski
I was wondering if someone wanted to provide a sanity check
on the above PR and what's "expected" by the health check code.

It would be very easy to adjust so that hcinterval was not
the time between successive checks but the interval between
the end of one and the start of another, but I'm not sure that
is as useful. In other words, I think the current behavior
is right (but think the docs need to be updated), but am
willing to have my mind changed :)