Mike Jagdis wrote:
> This patch firstly extends the wait queue mechanism
> to allow an arbitrary action to be performed. Then I rewrote
> the select/poll implementation to use event queueing to avoid
> rescanning descriptors that had not changed - and restructured
> the loops to be rather more
Here's something I did last year and then put on ice, partly
through lack of time and partly because I thought I'd pick
it up for 2.5.
All this talk of event queues misses one thing: we already
have an event queue mechanism. They're called wait queues.
The only problem is that the only
John Gardiner Myers <[EMAIL PROTECTED]> wrote:
> Your proposed interface suffers from most of the same problems as the
> other Unix event interfaces I've seen. Key among the problems are
> inherent race conditions when the interface is used by multithreaded
> applications.
>
> The "stickiness" of
Linus Torvalds wrote:
> So sticky arrays of events are good, while queues are bad. Let's take that
> as one of the fundamentals.
Please let's not. There is nothing fundamentally different between an
event queue of size N and an interest set of size N.
Your proposed interface suffers from most
> It doesn't practically matter how efficient the X server is when
> you aren't busy, after all.
A simple polling scheme (i.e. not using poll() or select(), just looping
through all fd's trying nonblocking reads) is perfectly efficient when the
server is 100% busy, and perfectly inefficient when
In article <[EMAIL PROTECTED]> you write:
>Linus Torvalds wrote:
>> I'd much rather have an event interface that is documented to be edge-
>> triggered and is really _lightweight_, than have another interface that
>> starts out with some piggy features.
>
>Agreed (except for that 'edge-triggered'
Linus Torvalds wrote:
> However, we also need to remember what got us to this discussion in the
> first place. One of the reasons why poll() is such a piggy interface is
> exactly because it tries to be "nice" to the programmer.
poll() is a piggy interface because it is O(n) in polled file
Jim Gettys wrote:
> So I want an interface in which I can get as many events as possible
> at once, and one in which the events themselves can have appropriate
> aggregation behavior. It isn't quite clear to me if the proposed interface
> would have this property.
I believe get_event, /dev/poll,
Note that there is another aspect to the efficiency / performance of the
select/poll style of interfaces not immediately obvious, but which occurs
as a result of how some (streaming/batching) protocols work.
An X server does not call select all that often (probably one of the two items most
"Eric W. Biederman" wrote:
>
> Dan Kegel <[EMAIL PROTECTED]> writes:
> > It's harder to write correct programs that use edge-triggered events.
>
> Huh? The race between when an event is reported, and when you take action
> on it effectively means all events are edge triggered.
Nope. With any of these
Dan Kegel <[EMAIL PROTECTED]> writes:
> It's harder to write correct programs that use edge-triggered events.
Huh? The race between when an event is reported, and when you take action
on it effectively means all events are edge triggered.
So making the interface clearly edge triggered seems
Helge Hafting wrote:
> > With poll(), it was *not a bug* for the user code to drop events; with
> > your proposed interface, it *is a bug* for the user code to drop events.
> > I'm just emphasizing this because Simon Kirby ([EMAIL PROTECTED]) posted
> incorrectly that your interface "has the same
On Tue, Oct 24, 2000 at 04:12:38PM -0700, Dan Kegel wrote:
> With poll(), it was *not a bug* for the user code to drop events; with
> your proposed interface, it *is a bug* for the user code to drop events.
> I'm just emphasizing this because Simon Kirby ([EMAIL PROTECTED]) posted
> incorrectly
[EMAIL PROTECTED] (Linus Torvalds) wrote on 23.10.00 in
<[EMAIL PROTECTED]>:
> actually inform about the events. The way to do this simply is to limit it
> in very clear ways, the most notable one being simply that there is only
> one event queue per process (or rather, per "struct files_struct" -
> On Tue, 24 Oct 2000, Evan Jeffrey wrote:
> >
> > > Multiple event queues are bad, because it completely breaks the notion of
> > > event-driven programming. How do you want to listen to them all? You can't.
> > > You can only listen to one event queue at a time - unless you create some
Linus Torvalds wrote:
>
> The point they disagree is when the event gets removed from the event
> queue. For edge triggered, this one is trivial: when a get_events() thing
> happens and moves it into user land. This is basically a one-liner, and it
> is local to get_events() and needs absolutely
Linus Torvalds <[EMAIL PROTECTED]> writes:
> bind_event(sock, POLLIN, NULL, accept_fn);
[...]
> (In fact, you might as well move the event array completely inside
> "get_event()", because nobody would be supposed to look at the raw array
> any more. So the "get_event()" interface would
Followup to: <[EMAIL PROTECTED]>
By author: Linus Torvalds <[EMAIL PROTECTED]>
In newsgroup: linux.dev.kernel
>
> Oh, I agree.
>
> And I think something like CLONE_EVENTS would be fine - and decide
> yourself what kind of threads you want (do you want indistinguishable
> "anonymous" threads
Linus Torvalds wrote:
> > But user code currently written for poll() has the luxury of dropping
> > events because poll() will happily report on the current readiness of
> > the socket every time. /dev/poll is level-triggered because it's trying
> > to make conversion of poll()-based code easy.
On Tue, 24 Oct 2000, Evan Jeffrey wrote:
>
> > Multiple event queues are bad, because it completely breaks the notion of
> > event-driven programming. How do you want to listen to them all? You can't.
> > You can only listen to one event queue at a time - unless you create some
>
> You can
> Multiple event queues are bad, because it completely breaks the notion of
> event-driven programming. How do you want to listen to them all? You can't.
> You can only listen to one event queue at a time - unless you create some
You can listen to one event queue per thread. Maybe in the case
> Shouldn't there also be a way to add non-filedescriptor based events
> into this, such as "child exited" or "signal caught" or shm things?
Waiting on pthreads condition variables, POSIX message queues, and
semaphores (as well as fd's) at the same time would *rock*...
Unifying all these
In article <[EMAIL PROTECTED]>,
Linus Torvalds <[EMAIL PROTECTED]> wrote:
> struct event {
> unsigned long id; /* file descriptor ID the event is on */
> unsigned long event;/* bitmask of active events */
> };
> int bind_event(int fd, struct event
Linus Torvalds wrote:
> > * it doesn't add extra syscalls
>
> Sure it does.
>
> What do you think ioctl's are?
As I explained a few lines down from where you stopped quoting (and probably
stopped reading) the ioctl() use is just an artifact of Solaris's icky
implementation. It could and should
On Tue, 24 Oct 2000, Abramo Bagnara wrote:
> Linus Torvalds wrote:
> >
> >
> > struct event {
> > int fd;
> > unsigned long mask;
> > void *opaque;
> > void (*event_fn)(int fd, unsigned long mask, void *opaque);
>
> My experience says
On Tue, 24 Oct 2000, Dan Kegel wrote:
>
> But user code currently written for poll() has the luxury of dropping
> events because poll() will happily report on the current readiness of
> the socket every time. /dev/poll is level-triggered because it's trying
> to make conversion of
Linus Torvalds wrote:
> > * Do you get an event whenever an fd is ready for something, or
> > only when its readiness changes? (Presumably whenever an fd is ready for
>something?)
>
> Only when its readiness changes - probably with the addition that it would
> simplify things that a new event always
There is only one thing I don't understand about this... why can't we
re-implement the poll() implementation of Linux instead of introducing
another system call?
If I understood Linus correctly, what he is saying is that the bind_event
system call is needed to give the kernel a hint that the
Linus Torvalds wrote:
>
>
> struct event {
> int fd;
> unsigned long mask;
> void *opaque;
> void (*event_fn)(int fd, unsigned long mask, void *opaque);
My experience says that:
unsigned long rmask;
On Tue, Oct 24, 2000 at 10:03:04AM -0700, Linus Torvalds wrote:
> Basically, with get_events(), there is a maximum of one event per "bind".
> And the memory for that is statically allocated at bind_event() time.
>...
> But you'd be doing so in a controlled manner: the memory use wouldn't go
>
On Tue, 24 Oct 2000, Simon Kirby wrote:
>
> However, isn't there already something like this, albeit maybe without
> the ability to return multiple events at a time? When discussing
> select/poll on IRC a while ago with sct, sct said:
>
> <sct> Simon: You just put your sockets into
On Mon, Oct 23, 2000 at 10:39:36PM -0700, Linus Torvalds wrote:
> Actually, forget the mmap, it's not needed.
>
> Here's a suggested "good" interface that would certainly be easy to
> implement, and very easy to use, with none of the scalability issues that
> many interfaces have.
>...
>
On Tue, 24 Oct 2000, Mitchell Blank Jr wrote:
> I think everyone should take a timeout and look at Solaris 8's /dev/poll
> interface. This discussion is reinventing the wheel, the lever, and the
> inclined plane.
>
> http://docs.sun.com/ab2/coll.40.6/REFMAN7/@Ab2PageView/55123
>
> I
On Tue, 24 Oct 2000, Dan Kegel wrote:
> Linus Torvalds wrote:
> > Basically, the main loop would boil down to
> > for (;;) {
> > static struct event ev_list[MAXEV];
> > get_event(ev_list, MAXEV, tmout);
> > .. timeout handling here ..
> >
[ Moving on to practical matters ]
On Tue, 24 Oct 2000, Dan Kegel wrote:
>
> Might be good to pick more unique names than 'struct event' and 'get_event'.
> People probably use those already.
I agree. I would never implement them under those names, but it's easier
to talk about "event" than
Linus Torvalds wrote:
> Here's a suggested "good" interface that would certainly be easy to
> implement, and very easy to use, with none of the scalability issues that
> many interfaces have.
I think everyone should take a timeout and look at Solaris 8's /dev/poll
interface. This discussion is
Thus spake Linus Torvalds ([EMAIL PROTECTED]):
> I disagree.
> Let's just face it, poll() is a bad interface scalability-wise.
Is that a reason to implement it badly?
Felix
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
On Mon, Oct 23, 2000 at 09:06:11PM -0700, Linus Torvalds wrote:
>
>
> On Tue, 24 Oct 2000, Andi Kleen wrote:
> >
> > I don't see the problem. You have the poll table allocated in the kernel,
> > the drivers directly change it and the user mmaps it (I was not proposing
> > to let poll make a
Linus Torvalds wrote:
> Basically, the main loop would boil down to
> for (;;) {
> static struct event ev_list[MAXEV];
> get_event(ev_list, MAXEV, tmout);
> .. timeout handling here ..
> }
>
> because get_event() would end up doing all
At 10:39 PM 23/10/2000 -0700, Linus Torvalds wrote:
>First, let's see what is so nice about "select()" and "poll()". They do
>have one _huge_ advantage, which is why you want to fall back on poll()
>once the RT signal interface stops working. What is that?
RT methods are bad if they consume too
On Mon, 23 Oct 2000, Dan Kegel wrote:
>
> kqueue lets you associate an arbitrary integer with each event
> specification; the integer is returned along with the event.
> This is very handy for, say, passing the 'this' pointer of the
> object that should handle the event. Yes, you can simulate
On Mon, 23 Oct 2000, Dan Kegel wrote:
>
>
>http://www.FreeBSD.org/cgi/man.cgi?query=kqueue&apropos=0&sektion=0&manpath=FreeBSD+5.0-current&format=html
> describes the FreeBSD kqueue interface for events:
I've actually read the BSD kevent stuff, and I think it's classic
over-design. It's not easy to see what it's all
Followup to: <[EMAIL PROTECTED]>
By author: Dave Zarzycki <[EMAIL PROTECTED]>
In newsgroup: linux.dev.kernel
>
> Maybe I'm missing something, but why do you separate out fd from the event
> structure. Why not just "int bind_event(struct event *event)"
>
> The only thing I might have done differently
Dan Kegel wrote:
> [kqueue is] Pretty similar to yours, with the following additions:
>
> Your proposal seems to only have one stream of available events per
> process. kqueue() returns a handle to an event queue, and kevent()
> takes that handle as a first parameter.
>
> [kqueue] uses a
On Mon, 23 Oct 2000, Linus Torvalds wrote:
> where you say "I want an array of pending events, and I have an array you
> can fill with up to 'maxnr' events - and if you have no events for me,
> please sleep until you get one, or until 'tmout'".
>
> The above looks like a _really_ simple
On Mon, 23 Oct 2000, Linus Torvalds wrote:
> Here's a suggested "good" interface that would certainly be easy to
> implement, and very easy to use, with none of the scalability issues that
> many interfaces have. ...
> It boils down to one very simple rule: dense arrays of sticky status
>
On Mon, 23 Oct 2000, Jordan Mendelson wrote:
> What you describe is exactly what the /dev/poll interface patch from the
> Linux scalability project does.
>
> It creates a special device which you can open up and write
> add/remove/modify entries you wish to be notified of using the standard
>
On Mon, 23 Oct 2000, Linus Torvalds wrote:
>
> > What is your favourite interface then ?
>
> I suspect a good interface that can easily be done efficiently would
> basically be something where the user _does_ do the equivalent of a
> read-only mmap() of poll entries - and explicit and
Linus Torvalds wrote:
>
> On Tue, 24 Oct 2000, Andi Kleen wrote:
> >
> > I don't see the problem. You have the poll table allocated in the kernel,
> > the drivers directly change it and the user mmaps it (I was not proposing
> > to let poll make a kiobuf out of the passed array)
> Th eproblem
On Tue, 24 Oct 2000, Andi Kleen wrote:
>
> I don't see the problem. You have the poll table allocated in the kernel,
> the drivers directly change it and the user mmaps it (I was not proposing
> to let poll make a kiobuf out of the passed array)
That's _not_ how poll() works at all.
We
Nick Piggin ([EMAIL PROTECTED]) wrote:
> > I'm trying to write a server that handles 1 clients. On 2.4.x,
> > the RT signal queue stuff looks like the way to achieve that.
>
> I would suggest you try multiple polling threads. Not only will you get
> better SMP scalability, if you have say
David Schwartz wrote:
> > I'm trying to write a server that handles 1 clients. On 2.4.x,
> > the RT signal queue stuff looks like the way to achieve that.
> > Unfortunately, when the RT signal queue overflows, the consensus seems
> > to be that you fall back to a big poll(). And even
Dan Kegel wrote:
>
> Jordan Mendelson ([EMAIL PROTECTED]) wrote:
> > An implementation of /dev/poll for Linux already exists and has shown to
> > be more scalable than using RT signals under my tests. A patch for 2.2.x
> > and 2.4.x should be available at the Linux Scalability Project @
> >