Kev wrote:
>
> > > I don't understand what it is you're saying here. The ircu server uses
> > > non-blocking sockets, and has since long before EfNet and Undernet branched,
> > > so it already handles EWOULDBLOCK or EAGAIN intelligently, as far as I know.
> >
> > Right. poll() and Solaris /dev/poll
Aaron Sethman wrote:
>
> On Sun, 3 Feb 2002, Dan Kegel wrote:
>
> > Kev wrote:
> > >
> > > > The /dev/epoll patch is good, but the interface is different enough
> > > > from /dev/poll that ircd would need a new engine_epoll.c anyway.
> > > (It would look like a cross between engine_devpoll.c and engine_rtsig.c,
On Sun, 3 Feb 2002, Dan Kegel wrote:
> I'd like to know how it disagrees.
> I believe rtsig requires you to tweak your I/O code in three ways:
> 1. you need to pick a realtime signal number to use for an event queue
Did that.
> 2. you need to wrap your read()/write() calls on the socket with code
> that notices EWOULDBLOCK
> > I don't understand what it is you're saying here. The ircu server uses
> > non-blocking sockets, and has since long before EfNet and Undernet branched,
> > so it already handles EWOULDBLOCK or EAGAIN intelligently, as far as I know.
>
> Right. poll() and Solaris /dev/poll are programmer-friendly
Aaron Sethman wrote:
>
> > 2. you need to wrap your read()/write() calls on the socket with code
> > that notices EWOULDBLOCK
> This is perhaps the part where it disagrees with our code. I will
> investigate this part. The way we normally do things is have callbacks
> per fd, that get called when o
On Sun, 3 Feb 2002, Dan Kegel wrote:
> Kev wrote:
> >
> > > The /dev/epoll patch is good, but the interface is different enough
> > > from /dev/poll that ircd would need a new engine_epoll.c anyway.
> > > (It would look like a cross between engine_devpoll.c and engine_rtsig.c,
> > > > as it would need to be notified by os_linux.c of any EWOULDBLOCK return values.
> Alas, I already did this. As can be seen in the SERVER out string in the
> logs I mailed, the connections are set to around 200 odd. The standard for
> gnuworld is AAz which is around 50 somewhere.
>
> Also, I should point out that even with the 50 set, it still falls over
> with more than 4 or 5 clients.
> The /dev/epoll patch is good, but the interface is different enough
> from /dev/poll that ircd would need a new engine_epoll.c anyway.
> (It would look like a cross between engine_devpoll.c and engine_rtsig.c,
> as it would need to be notified by os_linux.c of any EWOULDBLOCK return values.
> Bo
Kev wrote:
>
> > The /dev/epoll patch is good, but the interface is different enough
> > from /dev/poll that ircd would need a new engine_epoll.c anyway.
> > (It would look like a cross between engine_devpoll.c and engine_rtsig.c,
> > > as it would need to be notified by os_linux.c of any EWOULDBLOCK return values.
Alas, I already did this. As can be seen in the SERVER out string in the
logs I mailed, the connections are set to around 200 odd. The standard for
gnuworld is AAz which is around 50 somewhere.
Also, I should point out that even with the 50 set, it still falls over
with more than 4 or 5 clients.
> Hmm. Have a look at
> http://www.mail-archive.com/coder-com@undernet.org/msg00060.html
> It looks like the mainline Undernet ircd was rewritten around May 2001
> to support high efficiency techniques like /dev/poll and kqueue.
> The source you pointed to is way behind Undernet's current sources
> OK.. have been tearing my hair out for a while now trying to figure out
> why this happens...
I triggered this while playing with X the other day. The basic problem is
that the client capacity information sent to ircu must be a number of the
form (2^n)-1. I'm not sure where that capacity value
Arjen Wolfs wrote:
> The ircu version that supports kqueue and /dev/poll is currently being
> beta-tested on a few servers on the Undernet. The graph at
> http://www.break.net/ircu10-to-11.png shows the load average (multiplied by
> 100) on a server with 3000-4000 clients using poll(), and /dev/poll
>
>
>So I dunno if I'm going to go ahead and do that myself, but at least I've
>scoped out the situation. Before I did any work, I'd measure CPU
>usage under a simulated load of 2000 clients, just to verify that
>poll() was indeed a bottleneck (ok, can't imagine it not being a
> bottleneck, but it's nice to have a baseline to compare the improved
> version against).
Howdy. I noticed that
http://coder-com.undernet.org/cgi-bin/cvsweb.cgi/~checkout~/ircu2.10/TODO?only_with_tag=HEAD
mentions
"* Prepare network code to handle even more connections:
http://www.kegel.com/c10k.html"
Is there a stress test program commonly used to measure
how many connections i
Vincent Sweeney wrote:
> > > [I want to use Linux for my irc server, but performance sucks.]
> > > 1) Someone is going to have to recode the ircd source we use and
> > > possibly a modified kernel in the *hope* that performance improves.
> > > 2) Convert the box to FreeBSD which seems to h
OK.. have been tearing my hair out for a while now trying to figure out
why this happens...
Basically, I am writing a NickServ type GNUworld module, to protect
people's registered nicks. A nice easy way to prevent anyone from changing
to that nick after the client has been killed, is to 'jupe' that nick
Dan Kegel wrote:
>
> Before I did any work, I'd measure CPU
> usage under a simulated load of 2000 clients, just to verify that
> poll() was indeed a bottleneck (ok, can't imagine it not being a
> bottleneck, but it's nice to have a baseline to compare the improved
> version against).
I half-did