On Sun, Nov 25, 2007 at 08:07:39PM -0800, Simon Hardy-Francis <[EMAIL PROTECTED]> wrote:
> I was grepping the libev source code for FD_SETSIZE (which apparently
> is set by default to 64 for Win32). Have you ever tried setting
> FD_SETSIZE to a larger value?

Yes, it works just about everywhere I tested, although the fd_set is only
used on windows by default. It does work on windows, too, but overall
performance on windows makes it fairly pointless to use many sockets
there; the most you can hope for on windows is that your programs keep
working, just with reduced performance.
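
For illustration, a minimal sketch: FD_SETSIZE is only a preprocessor
default on windows, so the usual way to raise it is to define it before
winsock2.h is included:

   /* must come before the first inclusion of winsock2.h */
   #define FD_SETSIZE 16384

   #include <winsock2.h>

   int main (void)
   {
     fd_set rfds;

     FD_ZERO (&rfds);
     /* add sockets with FD_SET (s, &rfds), then call select () */
     return 0;
   }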

> I found this mail here suggesting  that
> a value of 16384 works fine:
> http://www.mail-archive.com/[EMAIL PROTECTED]/msg14069.html

Yes with "newer" windows versions, winsocket works around the kernel limit of
64 objects to wait for per thread (which is hardcoded into windows), so more
actually works.

The drawback is that fd_set operations on windows are O(n) for everything
(as opposed to O(1) as everywhere else), so increasing the fd set size is
not automatically a big win.
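
The reason is the fd_set layout winsock uses: instead of a bitmask it is
a counted array of SOCKET handles (roughly as below), so FD_SET, FD_CLR
and FD_ISSET all have to scan it linearly:

   /* simplified layout from winsock2.h: a counted array of
      SOCKET handles, not a bitmask as on unix */
   typedef struct fd_set
   {
     u_int  fd_count;             /* number of sockets set */
     SOCKET fd_array[FD_SETSIZE]; /* the socket handles    */
   } fd_set;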

> Recently I was testing epoll on Ubuntu and managed to get 100k
> connected sockets without problem.

If you are careful you can get that with select, too, in almost the same
time even :)

> then top reported a memory usage which led me to believe that an
> average socket connection used 9.5KB.

That of course depends largely on the socket buffer sizes (and how much
of them each connection actually uses). Linux is quite good at scaling
these dynamically, but if you really have a lot of connections, shrinking
the socket buffers below the default minimum might turn out to be a big
win (for example, an nntp server rarely needs more than a 0.5kb receive
window+buffer).
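
For illustration, shrinking the buffers is just a setsockopt per socket,
done right after it is created (the values below are examples only; the
kernel may round or clamp them):

   #include <sys/socket.h>

   /* ask for small per-socket buffers; example values only */
   static void
   shrink_buffers (int fd)
   {
     int rcv = 512;  /* ~0.5kb receive buffer */
     int snd = 4096; /* small send buffer     */

     setsockopt (fd, SOL_SOCKET, SO_RCVBUF, &rcv, sizeof rcv);
     setsockopt (fd, SOL_SOCKET, SO_SNDBUF, &snd, sizeof snd);
   }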

> Have you tried libev with large amounts of sockets?

Well, quite obviously the benchmark was run with 100k sockets. I do not
usually have that many sockets in my own daemons, which rarely have more
than 10k connections.

Since everything is dynamically sized (except select on windows), and
all fd operations inside libev are amortised O(1), seeing _libev_
behaviour with more fds is relatively uninteresting. Kernel behaviour is
more interesting (but there are no surprises there), and the most
interesting thing is not actually performance, but correctness.
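
For context, a minimal libev watcher setup looks roughly like this (using
the current C API names, which differ slightly from the 2007 ones);
starting and stopping a per-fd watcher are the O(1) operations meant
above:

   #include <ev.h>

   /* called whenever the watched fd becomes readable */
   static void
   read_cb (struct ev_loop *loop, ev_io *w, int revents)
   {
     /* read from w->fd here; ev_io_stop (loop, w) is also O(1) */
   }

   int
   main (void)
   {
     struct ev_loop *loop = ev_default_loop (0);
     ev_io watcher;

     ev_io_init (&watcher, read_cb, /* fd = */ 0, EV_READ);
     ev_io_start (loop, &watcher); /* registering an fd: amortised O(1) */

     ev_run (loop, 0);
     return 0;
   }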

As a simple example, freebsd (and to a lesser extent openbsd and darwin)
has some very loud proponents of kqueue over other such mechanisms (such
as the teensy linux epoll), and kqueue is indeed marginally better by
design (though it suffers from the same design problems epoll does). But
when I ported rxvt-unicode to libev (mostly to iron out portability
problems in libev, as rxvt-unicode is deployed on a wide range of
platforms), we found out that the situation is:

openbsd: broken, according to reports I received
freebsd: broken (gives endless readiness notifications for e.g. ptys, or
         none at all)
darwin:  completely broken (doesn't work correctly even for sockets in
         most versions)
netbsd:  works!

This means that all the horrible slowness of poll is not an important
issue when there is no working replacement for it. For a generic event
handling library, kqueue is not an option on those systems, because it is
not a generic event handling interface (it's documented as one, but it
doesn't work in practice).

Windows has similar issues: a readiness notification system does not
really exist there, mostly because windows doesn't have a generic I/O
model, so there is similar breakage. Unfortunately, on windows no
workaround is possible (short of providing your own read/write and
basically the full unix API).

(libev has the ability to embed a kqueue loop into a poll-based loop,
btw., for those platforms where you know sockets work with kqueue, but
this has not yet been exposed to perl.)
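
At the C level this is the ev_embed mechanism; a rough sketch, assuming a
current libev, where a kqueue loop is created separately and embedded
into the default (poll/select-backed) loop:

   #include <ev.h>

   int
   main (void)
   {
     struct ev_loop *loop    = ev_default_loop (0); /* poll/select/... */
     struct ev_loop *kq_loop = 0;
     ev_embed embed;

     /* only try kqueue if it is supported and embeddable here */
     if (ev_embeddable_backends () & ev_supported_backends () & EVBACKEND_KQUEUE)
       kq_loop = ev_loop_new (EVBACKEND_KQUEUE);

     if (kq_loop)
       {
         /* sockets known to work with kqueue would be registered on
            kq_loop; everything else stays on the outer loop */
         ev_embed_init (&embed, 0, kq_loop);
         ev_embed_start (loop, &embed);
       }

     ev_run (loop, 0);
     return 0;
   }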

> Do you happen to know what the maximum number of sockets is for Win32?

There isn't "the maximum" there are many such, and they depend on windows
version (for example server or not) and configuration. There is also the
handle limit, many windows version simply cannot hand out more than 16k
handles per process etc. etc. I'd check the msdn documentation and your
own tests.

-- 
                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_              http://www.deliantra.net
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /      [EMAIL PROTECTED]
      -=====/_/_//_/\_,_/ /_/\_\
