Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-09-03 Thread Ruediger Pluem



On 9/3/21 6:46 PM, Yann Ylavic wrote:
> On Fri, Sep 3, 2021 at 2:15 PM Ruediger Pluem  wrote:
>>
>> On 9/3/21 1:01 PM, Yann Ylavic wrote:
>>>

>>
>> Still confused.
> 
> Hope the above helps :/

I guess I get it now. Thanks for your detailed explanations.

Regards

Rüdiger



Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-09-03 Thread Yann Ylavic
On Fri, Sep 3, 2021 at 2:15 PM Ruediger Pluem  wrote:
>
> On 9/3/21 1:01 PM, Yann Ylavic wrote:
> >
> > What I want to avoid in this loop is to handle all the POLLIN|POLLERR
> > or POLLOUT|POLLERR or POLLERR alone which depend on the system.
> > So it's written such that there is no "blank" pass, when no
> > POLLIN|POLLHUP or POLLOUT is returned (likely an error condition) we
> > will issue either a read or a write (in that order of preference) to
> > catch the error at the core filters' level.
> > This was missing the POLLOUT|POLLHUP case when the socket is shutdown
> > for read already, meaning that we will never get POLLIN anymore, and
> > this is fixed by this !(reqevents & APR_POLLIN) case to issue a write
> > (well a shutdown for write exactly here) when no read is to be issued
> > below.
> >
> > Does it make sense?
>
> Just to be sure. We need this because
>
> 1. We requested POLLOUT
> 2. We did not get POLLOUT back, but rtnevents might contain POLLHUP
> and/or POLLIN, and hence the last condition existing before the patch
> (!(pfd->rtnevents & (APR_POLLIN | APR_POLLHUP))) would not trigger.
> The new condition ensures in these cases that we flush stuff even if
> blocking and close down write.

We asked for POLLOUT (only) in reqevents but did not get it back in
rtnevents, instead we got POLLHUP|POLLPRI (in the case of PR 65521)
but it could also have been POLLERR or anything we didn't ask for.
In any case when we ask for POLLIN or POLLOUT or both and get back
none of them, the system tells us there is something "wrong" with the
socket (error, reset or EOF). There might be something wrong with
POLLIN or POLLOUT returned too but at least we know what to do when it
happens, here we have no clue based on the returned events only.

I think this can happen whether the connection was half shutdown or
not, at least POLLERR, see below.

>
> This means we could also use
>
> tc->other->down_in == 1
>
> instead of !(tc->pfd->reqevents & APR_POLLIN)
>
> to get the same results?

This is a more restrictive check that would not catch, for instance,
POLLERR while we asked for POLLOUT (only) on a connection that is
still full duplex.
We might have !(tc->pfd->reqevents & APR_POLLIN) because tc->other's
output filters have pending data (we stop reading tc in this case to
avoid yet more pending data and risk blocking), so it does not
necessarily mean that the connections are half closed (down_in/out).
Still if we get POLLERR we want to catch the error on write rather
than read, because read is shutdown or suspended.

>
> Or does it need to be !(tc->other->pfd->reqevents & APR_POLLIN)
> instead of !(tc->pfd->reqevents & APR_POLLIN)?

No, this is an event on tc (not tc->other), so we need to check
whether tc needs reading and/or writing because we will read from
and/or write to there in this pass.

>
> Still confused.

Hope the above helps :/


Thanks;
Yann.


Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-09-03 Thread Ruediger Pluem



On 9/3/21 1:01 PM, Yann Ylavic wrote:
> On Tue, Aug 31, 2021 at 9:29 AM Ruediger Pluem  wrote:
>>
>> On 8/30/21 8:04 PM, yla...@apache.org wrote:
>>>
>>> +/* We want to write if we asked for POLLOUT and got:
>>> + * - POLLOUT: the socket is ready for write;
>>> + * - !POLLIN: the socket is in error state (POLLERR) so we let
>>> + *   the user know by failing the write and log, OR the socket
>>> + *   is shutdown for read already (POLLHUP) so we have to
>>> + *   shutdown for write.
>>>   */
>>>  if ((tc->pfd->reqevents & APR_POLLOUT)
>>>  && ((pfd->rtnevents & APR_POLLOUT)
>>> +|| !(tc->pfd->reqevents & APR_POLLIN)
>>
> >> Why should we write if we requested POLLOUT, did not get POLLOUT
> >> back, but did not request POLLIN?
> 
> What I want to avoid in this loop is to handle all the POLLIN|POLLERR
> or POLLOUT|POLLERR or POLLERR alone which depend on the system.
> So it's written such that there is no "blank" pass, when no
> POLLIN|POLLHUP or POLLOUT is returned (likely an error condition) we
> will issue either a read or a write (in that order of preference) to
> catch the error at the core filters' level.
> This was missing the POLLOUT|POLLHUP case when the socket is shutdown
> for read already, meaning that we will never get POLLIN anymore, and
> this is fixed by this !(reqevents & APR_POLLIN) case to issue a write
> (well a shutdown for write exactly here) when no read is to be issued
> below.
> 
> Does it make sense?

Just to be sure. We need this because

1. We requested POLLOUT
2. We did not get POLLOUT back, but rtnevents might contain POLLHUP
and/or POLLIN, and hence the last condition existing before the patch
(!(pfd->rtnevents & (APR_POLLIN | APR_POLLHUP))) would not trigger.
The new condition ensures in these cases that we flush stuff even if
blocking and close down write.

This means we could also use

tc->other->down_in == 1

instead of !(tc->pfd->reqevents & APR_POLLIN)

to get the same results?

Or does it need to be !(tc->other->pfd->reqevents & APR_POLLIN)
instead of !(tc->pfd->reqevents & APR_POLLIN)?

Still confused.

Regards

Rüdiger



Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-09-03 Thread Yann Ylavic
On Fri, Sep 3, 2021 at 1:01 PM Yann Ylavic  wrote:
>
> What I want to avoid in this loop is to handle all the POLLIN|POLLERR
> or POLLOUT|POLLERR or POLLERR alone which depend on the system.
> So it's written such that there is no "blank" pass, when no
> POLLIN|POLLHUP or POLLOUT is returned (likely an error condition) we
> will issue either a read or a write (in that order of preference) to
> catch the error at the core filters' level.
> This was missing the POLLOUT|POLLHUP case when the socket is shutdown

This was missing the POLLPRI|POLLHUP (PR 65521) ...

> for read already, meaning that we will never get POLLIN anymore, and
> this is fixed by this !(reqevents & APR_POLLIN) case to issue a write
> (well a shutdown for write exactly here) when no read is to be issued
> below.
>
> Does it make sense?
>
> Regards;
> Yann.


Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-09-03 Thread Yann Ylavic
On Fri, Sep 3, 2021 at 12:58 PM Ruediger Pluem  wrote:
>
>
>
> On 9/3/21 12:27 PM, Yann Ylavic wrote:
> > On Tue, Aug 31, 2021 at 9:29 AM Ruediger Pluem  wrote:
> >>
> >> On 8/30/21 8:04 PM, yla...@apache.org wrote:
> >>> +
> >>> +/* Flush any pending input data now, we don't know when
> >>> + * the next POLLIN will trigger and retaining data might
> >>> + * deadlock the underlying protocol. We don't check for
> >>> + * pending data first with ap_filter_input_pending() since
> >>> + * the read from proxy_tunnel_forward() is nonblocking
> >>> + * anyway and returning OK if there's no data.
> >>> + */
> >>> +rc = proxy_tunnel_forward(tunnel, in);
> >>> +if (rc != OK) {
> >>> +return rc;
> >>> +}
> >>
> >> Don't we do all of this already a few lines above if
> >> ap_filter_input_pending(in->c) == OK?
> >> Why do it again?
> >
> > With whatever tc (client or origin connection), here on tc->rtnevents
> > & POLLOUT we have out == tc and in == tc->other, so we forward
> > potential pending input data from tc->other to tc (without waiting
> > tc->other to become readable).
> >
> > Below with the same tc (same loop), if tc->rtnevents & POLLIN we
> > forward from tc to tc->other, thus the opposite direction.
>
> I still don't get it. Starting at line 4941 in proxy_util.c we have
>
>
>
> /* Flush any pending input data now, we don't know when
>  * the next POLLIN will trigger and retaining data might
>  * block the protocol.
>  */
> if (ap_filter_input_pending(in->c) == OK) {
>     rc = proxy_tunnel_forward(tunnel, in);
>     if (rc != OK) {
>         return rc;
>     }
> }
>
> /* Flush any pending input data now, we don't know when
>  * the next POLLIN will trigger and retaining data might
>  * deadlock the underlying protocol. We don't check for
>  * pending data first with ap_filter_input_pending() since
>  * the read from proxy_tunnel_forward() is nonblocking
>  * anyway and returning OK if there's no data.
>  */
> rc = proxy_tunnel_forward(tunnel, in);
> if (rc != OK) {
>     return rc;
> }
>
>
> We execute the same code
>
>
> rc = proxy_tunnel_forward(tunnel, in);
> if (rc != OK) {
>     return rc;
> }
>
> twice if ap_filter_input_pending(in->c) == OK. Even the comments above
> these two executions start with the same wording. I still ask myself
> why we need to do it twice.

Argh sorry, I thought you were talking about the
proxy_tunnel_forward() in the POLLIN case, because I didn't see that I
had left the old code in (not visible in the diff..).

I think I got it in r1892851 :)

Thanks!


Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-09-03 Thread Yann Ylavic
On Tue, Aug 31, 2021 at 9:29 AM Ruediger Pluem  wrote:
>
> On 8/30/21 8:04 PM, yla...@apache.org wrote:
> >
> > +/* We want to write if we asked for POLLOUT and got:
> > + * - POLLOUT: the socket is ready for write;
> > + * - !POLLIN: the socket is in error state (POLLERR) so we let
> > + *   the user know by failing the write and log, OR the socket
> > + *   is shutdown for read already (POLLHUP) so we have to
> > + *   shutdown for write.
> >   */
> >  if ((tc->pfd->reqevents & APR_POLLOUT)
> >  && ((pfd->rtnevents & APR_POLLOUT)
> > +|| !(tc->pfd->reqevents & APR_POLLIN)
>
> Why should we write if we requested POLLOUT, did not get POLLOUT back,
> but did not request POLLIN?

What I want to avoid in this loop is to handle all the POLLIN|POLLERR
or POLLOUT|POLLERR or POLLERR alone which depend on the system.
So it's written such that there is no "blank" pass, when no
POLLIN|POLLHUP or POLLOUT is returned (likely an error condition) we
will issue either a read or a write (in that order of preference) to
catch the error at the core filters' level.
This was missing the POLLOUT|POLLHUP case when the socket is shutdown
for read already, meaning that we will never get POLLIN anymore, and
this is fixed by this !(reqevents & APR_POLLIN) case to issue a write
(well a shutdown for write exactly here) when no read is to be issued
below.

Does it make sense?

Regards;
Yann.


Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-09-03 Thread Ruediger Pluem



On 9/3/21 12:27 PM, Yann Ylavic wrote:
> On Tue, Aug 31, 2021 at 9:29 AM Ruediger Pluem  wrote:
>>
>> On 8/30/21 8:04 PM, yla...@apache.org wrote:
>>> +
>>> +/* Flush any pending input data now, we don't know when
>>> + * the next POLLIN will trigger and retaining data might
>>> + * deadlock the underlying protocol. We don't check for
>>> + * pending data first with ap_filter_input_pending() since
>>> + * the read from proxy_tunnel_forward() is nonblocking
>>> + * anyway and returning OK if there's no data.
>>> + */
>>> +rc = proxy_tunnel_forward(tunnel, in);
>>> +if (rc != OK) {
>>> +return rc;
>>> +}
>>
>> Don't we do all of this already a few lines above if
>> ap_filter_input_pending(in->c) == OK?
>> Why do it again?
> 
> With whatever tc (client or origin connection), here on tc->rtnevents
> & POLLOUT we have out == tc and in == tc->other, so we forward
> potential pending input data from tc->other to tc (without waiting
> tc->other to become readable).
> 
> Below with the same tc (same loop), if tc->rtnevents & POLLIN we
> forward from tc to tc->other, thus the opposite direction.

I still don't get it. Starting at line 4941 in proxy_util.c we have



/* Flush any pending input data now, we don't know when
 * the next POLLIN will trigger and retaining data might
 * block the protocol.
 */
if (ap_filter_input_pending(in->c) == OK) {
    rc = proxy_tunnel_forward(tunnel, in);
    if (rc != OK) {
        return rc;
    }
}

/* Flush any pending input data now, we don't know when
 * the next POLLIN will trigger and retaining data might
 * deadlock the underlying protocol. We don't check for
 * pending data first with ap_filter_input_pending() since
 * the read from proxy_tunnel_forward() is nonblocking
 * anyway and returning OK if there's no data.
 */
rc = proxy_tunnel_forward(tunnel, in);
if (rc != OK) {
    return rc;
}


We execute the same code


rc = proxy_tunnel_forward(tunnel, in);
if (rc != OK) {
    return rc;
}

twice if ap_filter_input_pending(in->c) == OK. Even the comments above
these two executions start with the same wording. I still ask myself
why we need to do it twice.

Regards

Rüdiger


Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-09-03 Thread Yann Ylavic
On Tue, Aug 31, 2021 at 9:29 AM Ruediger Pluem  wrote:
>
> On 8/30/21 8:04 PM, yla...@apache.org wrote:
> > +
> > +/* Flush any pending input data now, we don't know when
> > + * the next POLLIN will trigger and retaining data might
> > + * deadlock the underlying protocol. We don't check for
> > + * pending data first with ap_filter_input_pending() since
> > + * the read from proxy_tunnel_forward() is nonblocking
> > + * anyway and returning OK if there's no data.
> > + */
> > +rc = proxy_tunnel_forward(tunnel, in);
> > +if (rc != OK) {
> > +return rc;
> > +}
>
> Don't we do all of this already a few lines above if
> ap_filter_input_pending(in->c) == OK?
> Why do it again?

With whatever tc (client or origin connection), here on tc->rtnevents
& POLLOUT we have out == tc and in == tc->other, so we forward
potential pending input data from tc->other to tc (without waiting
tc->other to become readable).

Below with the same tc (same loop), if tc->rtnevents & POLLIN we
forward from tc to tc->other, thus the opposite direction.


Regards;
Yann.


Re: svn commit: r1892740 - in /httpd/httpd/trunk: changes-entries/ap_proxy_tunnel_run.txt modules/proxy/proxy_util.c

2021-08-31 Thread Ruediger Pluem



On 8/30/21 8:04 PM, yla...@apache.org wrote:
> Author: ylavic
> Date: Mon Aug 30 18:04:20 2021
> New Revision: 1892740
> 
> URL: http://svn.apache.org/viewvc?rev=1892740&view=rev
> Log:
> mod_proxy: Fix potential tunneling infinite loop and spurious timeout.
>PRs 65521 and 65519.
> 
> * modules/proxy/proxy_util.c(ap_proxy_tunnel_run):
>   Avoid an infinite loop by shutting down the connection for write when poll()
>   returns POLLHUP and read is already down.  PR 65521.
> 
> * modules/proxy/proxy_util.c(ap_proxy_tunnel_run):
>   When write completion is finished don't check for ap_filter_input_pending()
>   before proxy_tunnel_forward() to flush input data, this is a nonblocking
>   read already which will do the same thing implicitly. ap_filter_input_pending()
>   is broken in 2.4.x without the whole pending data mechanism (not backported
>   yet), so let's align here.  PR 65519.
> 
> 
> Added:
> httpd/httpd/trunk/changes-entries/ap_proxy_tunnel_run.txt
> Modified:
> httpd/httpd/trunk/modules/proxy/proxy_util.c
> 

> Modified: httpd/httpd/trunk/modules/proxy/proxy_util.c
> URL: 
> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/proxy_util.c?rev=1892740&r1=1892739&r2=1892740&view=diff
> ==============================================================================
> --- httpd/httpd/trunk/modules/proxy/proxy_util.c (original)
> +++ httpd/httpd/trunk/modules/proxy/proxy_util.c Mon Aug 30 18:04:20 2021
> @@ -4890,12 +4890,16 @@ PROXY_DECLARE(int) ap_proxy_tunnel_run(p
>  return HTTP_INTERNAL_SERVER_ERROR;
>  }
>  
> -/* Write if we asked for POLLOUT, and got it or POLLERR
> - * alone (i.e. not with POLLIN|HUP). We want the output filters
> - * to know about the socket error if any, by failing the write.
> +/* We want to write if we asked for POLLOUT and got:
> + * - POLLOUT: the socket is ready for write;
> + * - !POLLIN: the socket is in error state (POLLERR) so we let
> + *   the user know by failing the write and log, OR the socket
> + *   is shutdown for read already (POLLHUP) so we have to
> + *   shutdown for write.
>   */
>  if ((tc->pfd->reqevents & APR_POLLOUT)
>  && ((pfd->rtnevents & APR_POLLOUT)
> +|| !(tc->pfd->reqevents & APR_POLLIN)

Why should we write if we requested POLLOUT, did not get POLLOUT back,
but did not request POLLIN?

>  || !(pfd->rtnevents & (APR_POLLIN | APR_POLLHUP)))) {
>  struct proxy_tunnel_conn *out = tc, *in = tc->other;
>  
> @@ -4944,12 +4948,25 @@ PROXY_DECLARE(int) ap_proxy_tunnel_run(p
>  return rc;
>  }
>  }
> +
> +/* Flush any pending input data now, we don't know when
> + * the next POLLIN will trigger and retaining data might
> + * deadlock the underlying protocol. We don't check for
> > + * pending data first with ap_filter_input_pending() since
> + * the read from proxy_tunnel_forward() is nonblocking
> + * anyway and returning OK if there's no data.
> + */
> +rc = proxy_tunnel_forward(tunnel, in);
> +if (rc != OK) {
> +return rc;
> +}

Don't we do all of this already a few lines above if
ap_filter_input_pending(in->c) == OK?
Why do it again?

Regards

Rüdiger