Hi Ben!

On 07/01/2014 03:20 PM, Ben Noordhuis wrote:
> On Mon, Jun 30, 2014 at 9:28 AM, Saúl Ibarra Corretgé
> <[email protected]> wrote:
>> https://gist.github.com/saghul/909502cc7367a25c247a
> 
> Moving the goalposts a little: for consistency, similar changes
> should be considered for uv_listen() and uv_udp_recv_start().
> 

Yep, this was my goal; I just picked a place to start :-)

> uv_udp_recv_start() is the easy one, that could just follow
> uv_read():
> 
> int uv_udp_recv(uv_udp_recv_t* req, uv_udp_t* handle, uv_alloc_cb
> alloc_cb, uv_udp_recv_cb recv_cb);
> 
> Allowing for multiple queued uv_udp_recv_t requests would let
> libuv exploit recvmmsg() on newer Linux systems.  I've observed
> that recvmmsg() isn't unequivocally faster than plain recvmsg() but
> it's good to at least have the option.
> 
> uv_listen() is more interesting.  It takes a callback that in
> turns calls uv_accept() to accept the incoming connection.  A
> problem with the current implementation is that on UNIX platforms,
> the connection has already been accept()'d by the time the callback
> is called; uv_accept() just packages the socket file descriptor in
> a uv_stream_t.
> 
> It makes it difficult to implement throttling well because there's 
> always a connection that ends up in limbo until the application
> starts calling uv_accept() again.  There have been repeated
> requests for a uv_listen_stop() function for exactly that reason.
> 
> Folding uv_listen() and uv_accept() into a single API function
> would resolve that.  I'll dub the new function uv_accept() and it
> would look something like this:
> 
> int uv_accept(uv_accept_t* req, uv_stream_t* server_handle, 
> uv_accept_cb accept_cb);
> 
> Where uv_accept_cb would look like this:
> 
> typedef void (*uv_accept_cb)(uv_stream_t* client_handle, int
> status);
> 
> As long as there are pending accept requests, the listen socket is 
> polled.  When there are none, the socket is removed from the poll
> set. Allowing for multiple pending accept requests lets libuv
> optimize for systems having a (so far hypothetical) acceptv()
> system call.
> 
> One drawback with the suggested API is that it requires that the 
> client handle is allocated and initialized upfront, something that 
> would complicate cleanup for the user on shutdown or error.
> Another potential issue is when the user embeds the handle in a
> larger data structure that until now had an expectation of always
> having a fully initialized handle.
> 

Well, this would definitely be a problem for me in pyuv. I embed
uv_udp_t in the Python object structure, and I can't possibly allocate
it upfront (the user may have subclassed it...)

> Changing it to defer allocation of the handle until there is a 
> connection is an option, of course, but it would in turn make
> other use cases more complicated: for example, using a
> stack-allocated handle would require that the user carries the
> address of the handle around until it's needed.  Tradeoffs...
> 
> Boost.Asio takes the 'commit upfront' approach and I'm leaning
> towards that as well, if only because it lowers the cognitive
> dissonance between the two projects.  Of course, enforcing proper
> cleanup is easier in C++ than it is in C.
> 

Here is one idea that I've had somewhere at the back of my head for a
while: have uv_accept return the fd and push the responsibility of
initializing the handle to the user. AFAICS, the current uv_accept
basically gets the fd and calls uv_*_open with it, so the user could do
that himself. An added benefit is that if someone wants to play with
the fd before handing it over to a libuv handle, he can. (context:
https://github.com/saghul/pyuv/issues/157)

Taking this one step further, let's add early socket allocation to the mix:

uv_tcp_init(loop, handle, family);     /* creates the socket early */
uv_tcp_init_socket(loop, handle, fd);  /* initializes the handle with
                                          the given fd */

So, back to the uv_accept request:

void accept_cb(uv_accept_t *req, int status)
{
    if (status == 0) {
        /* Allocate and initialize the handle only once the
           connection actually exists; req->socket would carry
           the accepted fd. */
        uv_tcp_t *conn = malloc(sizeof(*conn));
        uv_tcp_init_socket(req->loop, conn, req->socket);
    }
}

I haven't looked super-deep, but I think we can achieve the same on
The Other Side (R).

> Last but not least, request-driven accept and receive functionality
> - especially when cancellable - should make life a whole lot easier
> for the people that implement synchronous green threading on top
> of libuv's asynchronous API, like Rust and Julia.
> 

Yes! Slowly going in that direction :-)

- -- 
Saúl Ibarra Corretgé
bettercallsaghul.com

