I tried it yesterday. I'm writing an epoll loop implementation in C#, so 
that is not JS/Node.js or C/libuv. I don't know how the slab allocator works 
in Node, but I think the test results would be similar in C/libuv.

My test scenario was simple: send 5 GB of data using 1 MB writes. At the 
producing end I was using the blocking API in a separate thread.

At the receiving end I used epoll with nonblocking reads. Using a static 
64 KB buffer, it took ~8000 ms on my machine to send and receive that data.

The ioctl call added ~20 ms of overhead. However, the ioctl returned numbers 
bigger than 64 KB; I had assumed that 64 KB was the maximum socket buffer 
size on Linux. So whenever the size suggested by the ioctl was higher than 
my current buffer size, I expanded the buffer.
The result was that I could send those 5 GB of data in ~2050 ms (the 
suggested buffer size was around 4 times bigger than 64 KB, so that roughly 
makes sense). Now, I didn't do anything significant with the data, just 
received it: no parsing, no inspection. So the performance gain is purely on 
the data-shovelling side, on fully saturated lines sending a lot of data.

But the added benefit is that the allocator callback now knows *exactly* 
how much memory it needs to provide, which means less fragmentation when the 
user sends small packets and the callback allocates its memory dynamically. 
That works out to roughly a 1% cost (the ~20 ms of ioctl overhead against 
~2050 ms total) in exchange for a large gain, since the required buffer size 
can be determined exactly.

I will write a test case in c/libuv to show these effects.

-- 
You received this message because you are subscribed to the Google Groups 
"libuv" group.
