On Tue, 29 Mar 2005 07:23:35 -0800, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Out of curiosity, what are these 'several user space applications?' The
> only one I know of is this extension to bsd accounting to include
> capturing parent and child pid at fork. Probably you've mentioned some
>
> So it still can be used for accounting :)
No ... so these results don't show that it shouldn't be used for
accounting.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <[EMAIL PROTECTED]> 1.650.933.1373,
The parent information (the (ppid,pid) pair) is useful for process group
aggregation, while a do_exit() hook is needed to save per-task
accounting data before the task data is disposed.
Thanks,
- jay
dean gaudet wrote:
On Tue, 29 Mar 2005, Jay Lan wrote:
The fork_connector is not designed to solve
Sorry for long delay - I was quite far from my test machines.
Here are results:
fork connector with turned off disk writes and direct connector's
methods calls.
pcix$ ./fork_test 10
Average per process fork+exit time is 505 usecs [diff=50567251, max=10].
pcix$ ./fork_test 10
Average
On Wed, 2005-03-30 at 20:25 +1000, Herbert Xu wrote:
> Paul Jackson <[EMAIL PROTECTED]> wrote:
> >
> > So I suppose if fork_connector were not used to collect parent pid,
> > child pid information for accounting, then someone would have to make
> > the case that there were enough other uses, of sufficient
Paul Jackson <[EMAIL PROTECTED]> wrote:
>
> So I suppose if fork_connector were not used to collect parent pid,
> child pid information for accounting, then someone would have to make
> the case that there were enough other uses, of sufficient value, to add
> fork_connector. We have to be a bit careful, in
Guillaume wrote:
> I'm sorry but I really don't understand why you're speaking about
> accounting when I present results about fork connector. I agree that
> ELSA is using the fork connector but the fork connector has nothing to
> do with accounting.
True - sorry. I kinda hijacked your thread.
On Tue, 2005-03-29 at 22:06 -0800, dean gaudet wrote:
> On Tue, 29 Mar 2005, Jay Lan wrote:
>
> > The fork_connector is not designed to solve accounting data collection
> > problem.
> >
> > The accounting data collection must be done via a hook from do_exit().
>
> by the time do_exit() occurs
Guillaume wrote:
> When I wrote "several user space applications" it was just to say that
> this fork connector is not designed only for ELSA and fork information
> is available to every listener.
So I suppose if fork_connector were not used to collect parent pid,
child pid information for accounting, then someone
Dean wrote:
> by the time do_exit() occurs the parent may have disappeared
I don't think Jay was disagreeing with this. I think he agrees
that there are two things to be collected:
1) the classic bsd accounting data, in do_exit
2) the fork time parent pid, child pid by some mechanism at
fork time (perhaps just not the
On Tue, 29 Mar 2005, Jay Lan wrote:
> The fork_connector is not designed to solve accounting data collection
> problem.
>
> The accounting data collection must be done via a hook from do_exit().
by the time do_exit() occurs the parent may have disappeared... you do
need to record something at
On Tue, 2005-03-29 at 07:35 -0800, Paul Jackson wrote:
> Guillaume wrote:
> > I ran some tests using the CBUS instead of the cn_netlink_send() routine
> > and the overhead is nearly 0%:
>
> Overhead of what? Does this include merging the data and getting it to
> disk?
I test the overhead of
On Tue, 2005-03-29 at 07:23 -0800, Paul Jackson wrote:
> Guillaume wrote:
> > The goal of the fork connector is to inform a user space application
> > that a fork occurs in the kernel. This information (cpu ID, parent PID
> > and child PID) can be used by several user space applications. It's
[ Hmmm .. the following pertains more to accounting than to fork_connector,
as have my other remarks earlier today. I notice just now I am on a thread
whose Subject is "fork_connector". Oh well. Sorry. - pj ]
Jay wrote:
> You probably can look at it this way: the accounting data
Jay wrote:
> The fork_connector is not designed to solve accounting data collection
> problem.
I don't think I ever said it was designed for that purpose.
Indeed, I will confess to not yet knowing the 'real' purpose of its
design.
> It was never the fork_connector's
> intention to piggy back
Paul,
The fork_connector is not designed to solve accounting data collection
problem.
The accounting data collection must be done via a hook from do_exit().
The acct_process() hook invokes do_acct_process() to write BSD
accounting data to disk. CSA needs a similar hook off do_exit() to
collect
Paul Jackson wrote:
Guillaume wrote:
The goal of the fork connector is to inform a user space application
that a fork occurs in the kernel. This information (cpu ID, parent PID
and child PID) can be used by several user space applications. It's not
only for accounting. Accounting and
Evgeniy writes:
> Here forking connector module "exits" and can handle next fork() on the
> same CPU.
Fine ... but it's not about what the fork_connector does. It's about
getting the accounting data to disk, if I understand correctly.
> That is why it is very fast in "fast-path".
I don't care
Guillaume wrote:
> I ran some tests using the CBUS instead of the cn_netlink_send() routine
> and the overhead is nearly 0%:
Overhead of what? Does this include merging the data and getting it to
disk?
Am I even asking the right question here - is it true that this data,
when collected for
Guillaume wrote:
> The goal of the fork connector is to inform a user space application
> that a fork occurs in the kernel. This information (cpu ID, parent PID
> and child PID) can be used by several user space applications. It's not
> only for accounting. Accounting and fork_connector are two
Guillaume wrote:
> Yes, dean's suggestion helps. The overhead is now around 4%
More improvement than I expected (and I see a CBUS result further
down in my inbox).
Does this include a minimal consumer task of the data that writes
it to disk?
> I think that it can be moved in
On Mon, 2005-03-28 at 13:42 -0800, Paul Jackson wrote:
> Guillaume wrote:
> > The lmbench shows that the overhead (the construction and the sending
> > of the message) in the fork() routine is around 7%.
>
> Thanks for including the numbers. The 7% seems a bit costly, for a bit
> more
On Tue, 2005-03-29 at 00:49 -0800, Paul Jackson wrote:
> Evgeniy wrote:
> > There is no overhead at all using CBUS.
>
> This is unlikely. Very unlikely.
>
> Please understand that I am not trying to critique CBUS or connector in
> isolation, but rather trying to determine what mechanism is best
On Tue, 2005-03-29 at 00:49 -0800, Paul Jackson wrote:
> This
> amortizes the cost of almost all the handling, and of all the disk i/o,
> over many data collection events. Correct me if I'm wrong, but
> fork_connector doesn't do this merging of events into a consolidated
> data buffer, so is at
Evgeniy wrote:
> There is no overhead at all using CBUS.
This is unlikely. Very unlikely.
Please understand that I am not trying to critique CBUS or connector in
isolation, but rather trying to determine what mechanism is best suited
for getting this accounting data written to disk, which is
On Mon, 2005-03-28 at 23:02 -0800, Greg KH wrote:
> On Tue, Mar 29, 2005 at 11:04:16AM +0400, Evgeniy Polyakov wrote:
> > On Mon, 2005-03-28 at 13:42 -0800, Paul Jackson wrote:
> > > I don't see it in my copies of *-mm or recent Linus bk trees. Am I
> > > missing something?
> >
> > It was
On Tue, Mar 29, 2005 at 11:04:16AM +0400, Evgeniy Polyakov wrote:
> On Mon, 2005-03-28 at 13:42 -0800, Paul Jackson wrote:
> > I don't see it in my copies of *-mm or recent Linus bk trees. Am I
> > missing something?
>
> It was dropped from -mm tree, since bk tree where it lives
> was in
Guillaume wrote:
> The lmbench shows that the overhead (the construction and the sending
> of the message) in the fork() routine is around 7%.
Thanks for including the numbers. The 7% seems a bit costly, for a bit
more accounting information. Perhaps dean's suggestion, to not use
ascii, will
On Fri, 25 Mar 2005, Guillaume Thouvenin wrote:
...
> The lmbench shows that the overhead (the construction and the sending
> of the message) in the fork() routine is around 7%.
...
> + /*
> + * size of data is the number of characters
> + * printed plus
This patch adds a fork connector in the do_fork() routine. It sends a
netlink datagram when enabled. The message can be read by a user space
application. In this way, the user space application is alerted when a
fork occurs.
It uses the userspace <-> kernelspace connector that works on top of
On Tue, 2005-03-22 at 21:51, Evgeniy Polyakov wrote:
> On Wed, 2005-03-23 at 08:01 +0300, Evgeniy Polyakov wrote:
> > On Tue, 2005-03-22 at 15:51 -0800, Jay Lan wrote:
> >
>
> > > I see this issue less a case of bad guys vs. good guys. I see it
> > > as various components doing system related
On Tue, 2005-03-22 at 10:15 -0800, Jay Lan wrote:
> Guillaume Thouvenin wrote:
> > On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
> >
> >> If a bunch of applications are listening for fork events,
> >> your patch allows any application to turn off the
> >> fork event notification?
On Tue, 2005-03-22 at 15:51 -0800, Jay Lan wrote:
> >>I think a better way is:
> >>
> >> Providing a different connector channel called the administrator
> >> channel which can be used only by a super-user, and gives you
> >> the ability to switch on or off any connector channel including
On Tue, 2005-03-22 at 12:42 -0800, Ram wrote:
> On Tue, 2005-03-22 at 12:25, Evgeniy Polyakov wrote:
> > On Tue, 22 Mar 2005 11:18:07 -0800
> > Ram <[EMAIL PROTECTED]> wrote:
> >
> > > > I still do not see why it is needed.
> > > > Super-user can run ip command and turn network interface off
> >
Evgeniy Polyakov wrote:
On Tue, 22 Mar 2005 10:26:19 -0800
Ram <[EMAIL PROTECTED]> wrote:
On Mon, 2005-03-21 at 23:07, Guillaume Thouvenin wrote:
On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
If a bunch of applications are listening for fork events,
your patch allows any application to
Ram wrote:
On Tue, 2005-03-22 at 11:22, Evgeniy Polyakov wrote:
On Tue, 22 Mar 2005 10:26:19 -0800
Ram <[EMAIL PROTECTED]> wrote:
On Mon, 2005-03-21 at 23:07, Guillaume Thouvenin wrote:
On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
If a bunch of applications are listening for fork events,
On Tue, 2005-03-22 at 12:25, Evgeniy Polyakov wrote:
> On Tue, 22 Mar 2005 11:18:07 -0800
> Ram <[EMAIL PROTECTED]> wrote:
>
> > > I still do not see why it is needed.
> > > Super-user can run ip command and turn network interface off
> > > not waiting while apache or named exits or unbind.
> > >
On Tue, 22 Mar 2005 11:18:07 -0800
Ram <[EMAIL PROTECTED]> wrote:
> > I still do not see why it is needed.
> > Super-user can run ip command and turn network interface off
> > not waiting while apache or named exits or unbind.
> >
> > In theory I can create some kind of userspace registration
On Tue, 2005-03-22 at 11:22, Evgeniy Polyakov wrote:
> On Tue, 22 Mar 2005 10:26:19 -0800
> Ram <[EMAIL PROTECTED]> wrote:
>
> > On Mon, 2005-03-21 at 23:07, Guillaume Thouvenin wrote:
> > > On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
> > > > If a bunch of applications are listening for
On Tue, 22 Mar 2005 10:26:19 -0800
Ram <[EMAIL PROTECTED]> wrote:
> On Mon, 2005-03-21 at 23:07, Guillaume Thouvenin wrote:
> > On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
> > > If a bunch of applications are listening for fork events,
> > > your patch allows any application to turn
On Mon, 2005-03-21 at 20:36, Evgeniy Polyakov wrote:
> On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
> > On Mon, 2005-03-21 at 04:48, Guillaume Thouvenin wrote:
> > > ChangeLog:
> > >
> > > - Remove the global cn_fork_lock and replace it by a per CPU
> > > counter.
> > > - The processor
On Mon, 2005-03-21 at 23:07, Guillaume Thouvenin wrote:
> On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
> > If a bunch of applications are listening for fork events,
> > your patch allows any application to turn off the
> > fork event notification? Is this the right behavior?
>
Guillaume Thouvenin wrote:
On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
If a bunch of applications are listening for fork events,
your patch allows any application to turn off the
fork event notification? Is this the right behavior?
Yes it is. The main management is done by
On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
> If a bunch of applications are listening for fork events,
> your patch allows any application to turn off the
> fork event notification? Is this the right behavior?
Yes it is. The main management is done by application so, if
On Mon, 2005-03-21 at 12:52 -0800, Ram wrote:
> On Mon, 2005-03-21 at 04:48, Guillaume Thouvenin wrote:
> > ChangeLog:
> >
> > - Remove the global cn_fork_lock and replace it by a per CPU
> > counter.
> > - The processor ID has been added in the data part of the message.
> > Thus
On Mon, 2005-03-21 at 04:48, Guillaume Thouvenin wrote:
> ChangeLog:
>
> - Remove the global cn_fork_lock and replace it by a per CPU
> counter.
> - The processor ID has been added in the data part of the message.
> > Thus the data sent in a message are: "CPU_ID PARENT_PID CHILD_PID"
>
ChangeLog:
- Remove the global cn_fork_lock and replace it by a per CPU
counter.
- The processor ID has been added in the data part of the message.
Thus the data sent in a message are: "CPU_ID PARENT_PID CHILD_PID"
Those modifications were done to be more scalable because, as
On Thu, 2005-03-17 at 14:05 -0800, Jesse Barnes wrote:
> On Thursday, March 17, 2005 1:38 pm, Evgeniy Polyakov wrote:
> > The most significant part there - is requirement to store
> > u32 seq in each CPU's cache and thus flush cacheline +
> > invalidate/get from mem on each other cpus
> > each