> -----Original Message-----
> From: Joe Orton [mailto:jor...@redhat.com]
> Sent: Monday, 13 August 2012 14:32
> To: dev@httpd.apache.org
> Subject: core filters vs non-blocking socket (was Re: Fix for Windows
> bug#52476)
> 
> On Fri, Aug 10, 2012 at 01:31:07PM -0400, Jeff Trawick wrote:
> > We picked up that apr_socket_opt_set() from the async-dev branch with
> > r327872, though the timeout calls in there were changed subsequently.
> > I wonder if that call is stray and it doesn't get along with the
> > timeout handling on Windows because of the SO_RCVTIMEO/SO_SENDTIMEO
> > use on Windows, in lieu of non-blocking socket + poll like on Unix.
> >
> > At the time it was added, the new code was
> >
> > apr_socket_timeout_set(client_socket, 0)
> > apr_socket_opt_set(client_socket, APR_SO_NONBLOCK, 1)
> >
> > (redundant unless there was some APR glitch at the time)
> 
> Hmmmm, is this right?
> 
> For event, the listening socket, and hence the accepted socket, is
> always set to non-blocking in the MPM.
> 
> For non-event on Unix, the listening socket, and hence the accepted
> socket, is set to non-blocking IFF there are multiple listeners.
> 
> So that line is not redundant in the non-event, single listener
> configuration on Unix... no?

Don't we switch to non-blocking in apr_socket_timeout_set() if we set
the timeout to 0?
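
Something like this is what I have in mind, as a rough sketch only
(public APR API; the "expected" behaviour on Unix is my reading, not
verified against trunk, and set_nonblocking_via_timeout is just a
made-up helper name):

#include <apr_network_io.h>

static apr_status_t set_nonblocking_via_timeout(apr_socket_t *s)
{
    apr_status_t rv;
    apr_int32_t nonblock = 0;

    /* Setting a timeout of 0 should already put the descriptor into
     * non-blocking mode on Unix, which is why the extra
     * apr_socket_opt_set() looked redundant to me.  (On Windows the
     * timeout is mapped to SO_RCVTIMEO/SO_SENDTIMEO instead, as Jeff
     * mentioned, so the two calls may interact differently there.) */
    rv = apr_socket_timeout_set(s, 0);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* On Unix I would expect nonblock == 1 here already. */
    rv = apr_socket_opt_get(s, APR_SO_NONBLOCK, &nonblock);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    return nonblock ? APR_SUCCESS
                    : apr_socket_opt_set(s, APR_SO_NONBLOCK, 1);
}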
And if we do a read with a timeout, don't we do a poll with a timeout,
where it does not matter whether the socket is blocking or non-blocking?
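
Roughly this pattern, I mean (just the shape of it, not the actual APR
internals; recv_with_timeout is a hypothetical helper):

#include <apr_network_io.h>
#include <apr_poll.h>

static apr_status_t recv_with_timeout(apr_socket_t *s, char *buf,
                                      apr_size_t *len,
                                      apr_interval_time_t timeout,
                                      apr_pool_t *p)
{
    apr_pollfd_t pfd;
    apr_int32_t nsds = 0;
    apr_status_t rv;

    /* Poll first with the timeout and only read once data is ready,
     * so it hardly matters whether the descriptor itself is blocking
     * or non-blocking. */
    pfd.p = p;
    pfd.desc_type = APR_POLL_SOCKET;
    pfd.reqevents = APR_POLLIN;
    pfd.desc.s = s;

    rv = apr_poll(&pfd, 1, &nsds, timeout);
    if (rv != APR_SUCCESS) {
        return rv;      /* APR_TIMEUP when nothing arrived in time */
    }
    return apr_socket_recv(s, buf, len);
}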

Or am I now completely confused?

Regards

Rüdiger
