CC'ing linux-rt-users because I think my explanation below may be interesting for the
RT folks.
Mark Hounschell wrote:
Max Krasnyanskiy wrote:
> With CPU isolation
> it's very easy to achieve single digit usec worst case and around 200
> nsec average response times on off-the-shelf
> multi-processor/core systems (vanilla kernel plus these patches) even
> under extreme system load.
Hi Max, could you elaborate on what
It seems that git-send-email for some reason did not send an introductory
email, so I'm sending it manually. Sorry if you get it twice.
---
The following patch series extends CPU isolation support. Yes, most people want
to virtualize CPUs these days and I want to isolate them :).
The
Paul Jackson wrote:
> Max K wrote:
>>> And for another thing, we already declare externs in cpumask.h for
>>> the other, more widely used, cpu_*_map variables: cpu_possible_map,
>>> cpu_online_map, and cpu_present_map.
>> Well, to address #2 and #3 the isolated map will need to be exported as well.
>> Those other maps do
Peter Zijlstra wrote:
On Mon, 2008-01-28 at 14:00 -0500, Steven Rostedt wrote:
On Mon, 28 Jan 2008, Max Krasnyanskiy wrote:
> [PATCH] [CPUISOL] Support for workqueue isolation
The thing about workqueues is that they should only be woken on a CPU if
something on that CPU accessed them. IOW, the workqueue on a CPU handles
work that was called by something on that
Paul Jackson wrote:
Max wrote:
> Looks like I failed to explain what I'm trying to achieve. So let me try
> again.
Well done. I read through that, expecting to disagree or at least
to not understand at some point, and got all the way through nodding
my head in agreement. Good.
Whether the earlier confusions were
Hi Daniel,
Sorry for not replying right away.
Daniel Walker wrote:
> On Mon, 2008-01-28 at 16:12 -0800, Max Krasnyanskiy wrote:
>
>> Not accurate enough and way too much overhead for what I need. I know at
>> this point it probably sounds like I'm talking BS :). I wish I'd released
>> the engine and examples by now. Anyway let me just say that SW MAC has crazy
Paul Jackson wrote:
> Max wrote:
>> Paul, I actually mentioned at the beginning of my email that I did read that
>> thread started by Peter. I did learn quite a bit from it :)
>
> Ah - sorry - I missed that part. However, I'm still getting the feeling
> that there were some key points in that thread that we have not managed
Paul Jackson wrote:
> Max wrote:
>> Here is the list of issues with the sched_load_balance flag from a CPU
>> isolation perspective:
>
> A separate thread happened to start up on lkml.org, shortly after
> yours, that went into this in considerable detail.
>
> For example, the interaction of cpusets, sched_load_balance,
Hi Mark,
[EMAIL PROTECTED] wrote:
> The following patch series extends CPU isolation support. Yes, most people
> want to virtualize CPUs these days and I want to isolate them :).
> The primary idea here is to be able to use some CPU cores as dedicated
> engines for running user-space code with minimal kernel
Paul Jackson wrote:
Max wrote:
> So far it seems that extending cpu_isolated_map
> is a more natural way of propagating this notion to the rest of the kernel,
> since it's very similar to the cpu_online_map concept and it's easy to
> integrate with the code that already uses it.
If it were just realtime support, then
Daniel Walker wrote:
On Mon, 2008-01-28 at 10:32 -0800, Max Krasnyanskiy wrote:
> Just these patches. The RT patches cannot achieve what I needed; even
> RTAI/Xenomai can't do that.
> For example, I have separate tasks with hard deadlines that must be enforced
> in the 50usec kind of range and basically no idle time
Max wrote:
> Also "CPU sets" seem to mostly deal with the scheduler domains.
True - though the cpusets (no space ;) sched_load_balance flag can
be used to see that some CPUs are not in any scheduler domain,
which is equivalent to not having the scheduler run on them.
--
I won't rest
Steven Rostedt wrote:
On Mon, Jan 28, 2008 at 08:59:10AM -0600, Paul Jackson wrote:
> Thanks for the CC, Peter.
Thanks from me too.
> Max wrote:
>> We've had scheduler support for CPU isolation ever since the O(1) scheduler
>> went in.
>> I'd like to extend it further to avoid kernel activity on those CPUs as much as
Paul Jackson wrote:
Thanks for the CC, Peter.
Ingo - see question at end of message.
Max wrote:
> We've had scheduler support for CPU isolation ever since the O(1) scheduler
> went in.
> I'd like to extend it further to avoid kernel activity on those CPUs as much
> as possible.
I recently added the per-cpuset flag
Hi Peter,
Peter Zijlstra wrote:
> [ You really ought to CC people :-) ]
I was not sure who, though :)
Do we have a mailing list for scheduler development, btw?
Or is it just the folks that you included in CC?
Some of the latest scheduler patches break things that I'm doing and I'd like
to make them
[ You really ought to CC people :-) ]
On Sun, 2008-01-27 at 20:09 -0800, [EMAIL PROTECTED] wrote:
> The following patch series extends CPU isolation support. Yes, most people
> want to virtualize CPUs these days and I want to isolate them :).
> The primary idea here is to be able to use some CPU
The following patch series extends CPU isolation support. Yes, most people want
to virtualize CPUs these days and I want to isolate them :).
The primary idea here is to be able to use some CPU cores as dedicated engines
for running user-space code with minimal kernel overhead/intervention, think