Gregory, we seem to more or less agree with each other, but still...
On 08/02, Oleg Nesterov wrote:
>
> On 08/01, Gregory Haskins wrote:
> >
> > On Thu, 2007-08-02 at 02:22 +0400, Oleg Nesterov wrote:
> >
> > > No,
> >
> > You sure are a confident one ;)
>
> Yeah, this is a rare case when I am very sure I am right ;)
On Mon, 2007-08-06 at 23:33 +0400, Oleg Nesterov wrote:
> OK. I have to take my words back. I completely misunderstood why you
> are doing this and which problems you are trying to solve, my bad.
No problem man. You found some legitimate problems too so your input is
very much appreciated.
>
On Mon, 2007-08-06 at 20:50 +0400, Oleg Nesterov wrote:
> Yes.
>
> > Do you agree that if the context was the same there is a bug? Or did I
> > miss something else?
>
> Yes sure. We can't expect to "flush" a work_struct with flush_workqueue()
> unless we know it doesn't re-schedule itself.
Agreed.
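The point about self-rescheduling works can be sketched in a few lines of user-space C (an illustrative analogue only, not the kernel code; the array-backed queue and the names queue_work/flush_queue here are invented for the example). A flush that drains only the entries present when it starts can never "finish" an item that re-queues itself:

```c
#include <assert.h>

#define QMAX 64

struct work {
	void (*func)(struct work *w, struct work **queue, int *tail);
};

/* Append one item to a simple array-backed queue. */
static void queue_work(struct work *w, struct work **queue, int *tail)
{
	queue[(*tail)++] = w;
}

/* Drain only the entries present when the flush began; anything a
 * handler re-queues lands behind the snapshot and survives the flush. */
static int flush_queue(struct work **queue, int *head, int *tail)
{
	int snapshot = *tail;

	while (*head < snapshot) {
		struct work *w = queue[(*head)++];
		w->func(w, queue, tail);
	}
	return *tail - *head;	/* items still pending after the "flush" */
}

/* A handler that re-arms itself, like a polling work item. */
static void rearming_func(struct work *w, struct work **queue, int *tail)
{
	queue_work(w, queue, tail);
}
```

After flushing a queue holding one self-rearming item, exactly one item is still pending, which is the behaviour being agreed on above.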
On 08/06, Gregory Haskins wrote:
>
> On Mon, 2007-08-06 at 19:36 +0400, Oleg Nesterov wrote:
>
> > > E.g. whatever work was being flushed was allowed to escape
> > > out from behind the barrier. If you don't care about the flush working,
> > > why do it at all?
> >
> > The caller of flush_workqueue() doesn't
On 08/06, Peter Zijlstra wrote:
>
> On Mon, 2007-08-06 at 18:45 +0400, Oleg Nesterov wrote:
> > On 08/06, Peter Zijlstra wrote:
>
> > > > I suspect most of the barrier/flush semantics could be replaced with
> > > > completions from specific work items.
> >
> > Hm. But this is exactly how it works?
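Peter's suggestion — replacing flush/barrier semantics with a completion signalled by one specific work item — can be sketched as a user-space analogue (illustrative only; the kernel's real primitives are struct completion with complete()/wait_for_completion(), and the cwork/run_work names here are invented):

```c
#include <stddef.h>

/* A tiny stand-in for the kernel's struct completion. */
struct completion { int done; };

struct cwork {
	void (*func)(struct cwork *w);
	struct completion *done;	/* signalled when this item finishes */
};

static void complete(struct completion *c)
{
	c->done = 1;			/* kernel: wakes wait_for_completion() */
}

/* Run one queued item, then signal whoever waits on exactly this item. */
static void run_work(struct cwork *w)
{
	w->func(w);
	if (w->done)
		complete(w->done);
}

static void noop_func(struct cwork *w) { (void)w; }
```

The submitter then waits only for its own item instead of flushing the whole queue, so it is never delayed behind unrelated (possibly lower-priority) work.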
On Mon, 2007-08-06 at 19:36 +0400, Oleg Nesterov wrote:
> > Well, the "trylock+requeue" avoids the obvious recursive deadlock, but
> > it introduces a more subtle error: the reschedule effectively bypasses
> > the flush.
>
> this is OK, flush_workqueue() should only care about work_struct's that
On 08/06, Gregory Haskins wrote:
>
> On Mon, 2007-08-06 at 18:45 +0400, Oleg Nesterov wrote:
> > On 08/06, Peter Zijlstra wrote:
>
> > >
> > > still this does not change the fundamental issue of a high prio piece of
> > > work waiting on a lower prio task.
> >^^^
> > waiting. This is a "key" word, and this was my (perhaps wrong) point.
On 08/06, Gregory Haskins wrote:
>
> On Mon, 2007-08-06 at 18:26 +0400, Oleg Nesterov wrote:
>
> > Immediately, A inserts the work on CPU 1.
>
> Well, if you didn't care about which CPU, that's true. But suppose we
> want to direct this specifically at CWQ for cpu 0.
please see below...
On Mon, 2007-08-06 at 18:45 +0400, Oleg Nesterov wrote:
> On 08/06, Peter Zijlstra wrote:
> >
> > still this does not change the fundamental issue of a high prio piece of
> > work waiting on a lower prio task.
>^^^
> waiting. This is a "key" word, and this was my (perhaps wrong)
On Mon, 2007-08-06 at 18:26 +0400, Oleg Nesterov wrote:
>
> This is true of course, and I didn't claim this.
My apologies. I misunderstood you.
> > When will the job complete?
>
> Immediately, A inserts the work on CPU 1.
Well, if you didn't care about which CPU, that's true. But suppose
On Mon, 2007-08-06 at 18:45 +0400, Oleg Nesterov wrote:
> On 08/06, Peter Zijlstra wrote:
> > still this does not change the fundamental issue of a high prio piece of
> > work waiting on a lower prio task.
>^^^
> waiting. This is a "key" word, and this was my (perhaps wrong) point.
On 08/06, Peter Zijlstra wrote:
>
> On Mon, 2007-08-06 at 15:29 +0200, Peter Zijlstra wrote:
> > On Mon, 2007-08-06 at 17:18 +0400, Oleg Nesterov wrote:
> >
> > > Yes, I still disagree with the whole idea because I hope we can make
> > > something simpler to solve the problem, but I must admit I don't
On 08/06, Gregory Haskins wrote:
>
> On Thu, 2007-08-02 at 23:50 +0400, Oleg Nesterov wrote:
>
> > I strongly believe you guys take a _completely_ wrong approach.
> > queue_work() should _not_ take the priority of the caller into
> > account, this is bogus.
>
> I think you have argued very
On Mon, 2007-08-06 at 15:29 +0200, Peter Zijlstra wrote:
> On Mon, 2007-08-06 at 17:18 +0400, Oleg Nesterov wrote:
>
> > Yes, I still disagree with the whole idea because I hope we can make
> > something simpler to solve the problem, but I must admit I don't
> > quite understand what the
On Mon, 2007-08-06 at 17:18 +0400, Oleg Nesterov wrote:
> Yes, I still disagree with the whole idea because I hope we can make
> something simpler to solve the problem, but I must admit I don't
> quite understand what the problem is.
>
> So, please consider the noise from my side as my
Gregory, Ingo,
On 08/06, Ingo Molnar wrote:
>
> * Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > On 08/01, Gregory Haskins wrote:
> > >
> > > On Thu, 2007-08-02 at 02:22 +0400, Oleg Nesterov wrote:
> > >
> > > > No,
> > >
> > > You sure are a confident one ;)
> >
> > Yeah, this is a rare
* Oleg Nesterov <[EMAIL PROTECTED]> wrote:
> On 08/01, Gregory Haskins wrote:
> >
> > On Thu, 2007-08-02 at 02:22 +0400, Oleg Nesterov wrote:
> >
> > > No,
> >
> > You sure are a confident one ;)
>
> Yeah, this is a rare case when I am very sure I am right ;)
>
> I strongly believe you guys
On Thu, 2007-08-02 at 23:50 +0400, Oleg Nesterov wrote:
> I strongly believe you guys take a _completely_ wrong approach.
> queue_work() should _not_ take the priority of the caller into
> account, this is bogus.
I think you have argued very effectively that there are situations in
which the
On 08/01, Gregory Haskins wrote:
>
> On Thu, 2007-08-02 at 02:22 +0400, Oleg Nesterov wrote:
>
> > No,
>
> You sure are a confident one ;)
Yeah, this is a rare case when I am very sure I am right ;)
I strongly believe you guys take a _completely_ wrong approach.
queue_work() should _not_ take
On Thu, 2007-08-02 at 02:22 +0400, Oleg Nesterov wrote:
>
> No.
>
> > However, IIUC the point of flush_workqueue() is a barrier only relative
> > to your own submissions, correct? E.g. to make sure *your* requests
> > are finished, not necessarily the entire queue.
>
> No,
You sure are a confident one ;)
On 08/01, Gregory Haskins wrote:
>
> On Thu, 2007-08-02 at 01:34 +0400, Oleg Nesterov wrote:
> > On 08/01, Gregory Haskins wrote:
> > >
> > > On Thu, 2007-08-02 at 00:50 +0400, Oleg Nesterov wrote:
> > > > On 08/01, Daniel Walker wrote:
> > > > >
> > > > > It's translating priorities through the
On Thu, 2007-08-02 at 01:34 +0400, Oleg Nesterov wrote:
> On 08/01, Gregory Haskins wrote:
> >
> > On Thu, 2007-08-02 at 00:50 +0400, Oleg Nesterov wrote:
> > > On 08/01, Daniel Walker wrote:
> > > >
> > > > It's translating priorities through the work queues, which doesn't seem
> > > > to happen
On Wed, 1 Aug 2007, Daniel Walker wrote:
On Wed, 2007-08-01 at 07:59 -0400, Gregory Haskins wrote:
On Tue, 2007-07-31 at 20:52 -0700, Daniel Walker wrote:
Here's a simpler version .. uses the plist data structure instead of the
100 queues, which makes for a cleaner patch ..
Hi Daniel,
On 08/01, Gregory Haskins wrote:
>
> On Thu, 2007-08-02 at 00:50 +0400, Oleg Nesterov wrote:
> > On 08/01, Daniel Walker wrote:
> > >
> > > It's translating priorities through the work queues, which doesn't seem
> > > to happen with the current implementation. A high priority, say
> > > SCHED_FIFO priority 99,
On Thu, 2007-08-02 at 00:50 +0400, Oleg Nesterov wrote:
> On 08/01, Daniel Walker wrote:
> >
> > On Thu, 2007-08-02 at 00:18 +0400, Oleg Nesterov wrote:
> > > On 08/01, Daniel Walker wrote:
> > > >
> > > > On Wed, 2007-08-01 at 22:12 +0400, Oleg Nesterov wrote:
> > > >
> > > > > And I personally
On 08/01, Daniel Walker wrote:
>
> On Thu, 2007-08-02 at 00:18 +0400, Oleg Nesterov wrote:
> > On 08/01, Daniel Walker wrote:
> > >
> > > On Wed, 2007-08-01 at 22:12 +0400, Oleg Nesterov wrote:
> > >
> > > > And I personally think it is not very useful, even if it was correct.
> > > > You can
On Thu, 2007-08-02 at 00:32 +0400, Oleg Nesterov wrote:
> On 08/02, Oleg Nesterov wrote:
> >
> > And I don't understand why rt_mutex_setprio() is called just before
> > calling work->func(). This means that a high-priority work could
> > be delayed by the low-priority ->current_work.
>
> Aha, I missed the rt_mutex_setprio() in insert_work().
On Thu, 2007-08-02 at 00:18 +0400, Oleg Nesterov wrote:
> On 08/01, Daniel Walker wrote:
> >
> > On Wed, 2007-08-01 at 22:12 +0400, Oleg Nesterov wrote:
> >
> > > And I personally think it is not very useful, even if it was correct.
> > > You can create your own workqueue and change the priority
On 08/02, Oleg Nesterov wrote:
>
> And I don't understand why rt_mutex_setprio() is called just before
> calling work->func(). This means that a high-priority work could
> be delayed by the low-priority ->current_work.
Aha, I missed the rt_mutex_setprio() in insert_work().
This is not good either.
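The boosting rule implied by Oleg's objection — a high-priority work must not sit behind a low-priority ->current_work — can be stated as: the worker thread should run at the best priority among its current work and everything still queued. A minimal sketch of that computation (assumptions: the kernel convention that a smaller number means higher priority, and the function name effective_prio is invented for this example):

```c
#include <assert.h>

/* Compute the priority the worker thread ought to run at: the highest
 * priority among the work being executed and everything still queued.
 * Convention (as in the kernel): a smaller number is a higher priority. */
static int effective_prio(int current_work_prio, const int *queued, int n)
{
	int best = current_work_prio;

	for (int i = 0; i < n; i++)
		if (queued[i] < best)
			best = queued[i];
	return best;
}
```

Recomputing this at insert time *and* at dequeue time, rather than only just before work->func() runs, is what keeps a queued high-priority work from being stalled by the item currently executing.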
On 08/01, Daniel Walker wrote:
>
> On Wed, 2007-08-01 at 22:26 +0400, Oleg Nesterov wrote:
>
> > No, the "tail" option has nothing to do with prioritize, we can't remove
> > it. Please look at the code.
>
> So you insert a work struct that executes last which wakes the flushing
> thread?
No, tail == 1
On 08/01, Daniel Walker wrote:
>
> On Wed, 2007-08-01 at 22:12 +0400, Oleg Nesterov wrote:
>
> > And I personally think it is not very useful, even if it was correct.
> > You can create your own workqueue and change the priority of cwq->thread.
>
> This change is more dynamic than than just
On Wed, 2007-08-01 at 22:26 +0400, Oleg Nesterov wrote:
> No, the "tail" option has nothing to do with prioritize, we can't remove
> it. Please look at the code.
So you insert a work struct that executes last which wakes the flushing
thread?
> Also, flush_workqueue() must not be delayed by the new
On Wed, 2007-08-01 at 22:12 +0400, Oleg Nesterov wrote:
> And I personally think it is not very useful, even if it was correct.
> You can create your own workqueue and change the priority of cwq->thread.
This change is more dynamic than just setting a single priority ..
There was some other
On 08/01, Daniel Walker wrote:
>
> On Wed, 2007-08-01 at 19:01 +0200, Peter Zijlstra wrote:
> > > static void insert_work(struct cpu_workqueue_struct *cwq,
> > >                         struct work_struct *work, int tail)
> > > {
> > > +	int prio = current->normal_prio;
> > > +
> > > 	set_wq_data(work, cwq);
On 08/01, Peter Zijlstra wrote:
>
> On Tue, 2007-07-31 at 20:52 -0700, Daniel Walker wrote:
>
> > static void insert_work(struct cpu_workqueue_struct *cwq,
> >                         struct work_struct *work, int tail)
> > {
> > +	int prio = current->normal_prio;
> > +
> > 	set_wq_data(work, cwq);
On Wed, 2007-08-01 at 08:55 -0700, Daniel Walker wrote:
> On Wed, 2007-08-01 at 11:19 -0400, Gregory Haskins wrote:
> > On Wed, 2007-08-01 at 08:10 -0700, Daniel Walker wrote:
> >
> > >
> > > rt_mutex_setprio() is just a function. It was also designed specifically
> > > for PI, so it seems
On Wed, 2007-08-01 at 19:01 +0200, Peter Zijlstra wrote:
> (you guys forgot to CC Ingo, Oleg and me)
>
> On Tue, 2007-07-31 at 20:52 -0700, Daniel Walker wrote:
>
> > Here's a simpler version .. uses the plist data structure instead of the
> > 100 queues, which makes for a cleaner patch ..
> >
(you guys forgot to CC Ingo, Oleg and me)
On Tue, 2007-07-31 at 20:52 -0700, Daniel Walker wrote:
> Here's a simpler version .. uses the plist data structure instead of the
> 100 queues, which makes for a cleaner patch ..
>
> Signed-off-by: Daniel Walker <[EMAIL PROTECTED]>
looks good, assuming you
On Wed, 2007-08-01 at 11:19 -0400, Gregory Haskins wrote:
> On Wed, 2007-08-01 at 08:10 -0700, Daniel Walker wrote:
>
> >
> > rt_mutex_setprio() is just a function. It was also designed specifically
> > for PI, so it seems fairly sane to use it in other PI type
> > situations ..
> >
>
> Yes.
On Wed, 2007-08-01 at 08:10 -0700, Daniel Walker wrote:
>
> rt_mutex_setprio() is just a function. It was also designed specifically
> for PI, so it seems fairly sane to use it in other PI type
> situations ..
>
Yes. It is designed for PI and I wasn't suggesting you shouldn't use
the logic
On Wed, 2007-08-01 at 07:59 -0400, Gregory Haskins wrote:
> On Tue, 2007-07-31 at 20:52 -0700, Daniel Walker wrote:
>
> >
> > Here's a simpler version .. uses the plist data structure instead of the
> > 100 queues, which makes for a cleaner patch ..
>
> Hi Daniel,
>
> I like your idea on the
On Tue, 2007-07-31 at 20:52 -0700, Daniel Walker wrote:
>
> Here's a simpler version .. uses the plist data structure instead of the
> 100 queues, which makes for a cleaner patch ..
Hi Daniel,
I like your idea on the plist simplification a lot. I will definitely
roll that into my series.
I
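The plist idea Daniel is proposing — one list kept sorted by priority instead of 100 per-priority queues — can be sketched as a singly linked sorted insert (a user-space analogue; the kernel's include/linux/plist.h structure is more elaborate, and the pnode/plist_add names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct pnode {
	int prio;			/* smaller number = higher priority */
	struct pnode *next;
};

/* Sorted insert: walk past all nodes of higher-or-equal priority so
 * equal-priority items keep FIFO order, then splice the node in. */
static void plist_add(struct pnode **head, struct pnode *n)
{
	while (*head && (*head)->prio <= n->prio)
		head = &(*head)->next;
	n->next = *head;
	*head = n;
}
```

Dequeue is then simply "take the head": the highest-priority pending work is always first, which is what makes the patch cleaner than maintaining one queue per priority level.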
On Tue, 2007-07-31 at 20:26 -0400, Gregory Haskins wrote:
> The following workqueue related patch is a port of the original VFCIPI PI
> patch I submitted earlier. There is still more work to be done to add the
> "schedule_on_cpu()" type behavior, and even more if we want to use this as
> part of
The following workqueue related patch is a port of the original VFCIPI PI
patch I submitted earlier. There is still more work to be done to add the
"schedule_on_cpu()" type behavior, and even more if we want to use this as
part of KVM. But for now, this patch can stand alone so I thought I would