On Mon, Jul 23, 2018 at 10:08:59AM +0200, David Woodhouse wrote:
> On Thu, 2018-07-19 at 10:09 -0700, Paul E. McKenney wrote:
> >
> > Of course, the real reason for the lack of fault on your part will not
> > be because I believe I found the bug elsewhere, but instead because I
> > will be dropping your patch (and mine as well) on Frederic's advice. ;-)
On Thu, 2018-07-19 at 10:09 -0700, Paul E. McKenney wrote:
>
> Of course, the real reason for the lack of fault on your part will not
> be because I believe I found the bug elsewhere, but instead because I
> will be dropping your patch (and mine as well) on Frederic's advice. ;-)
You're keeping the
On Wed, Jul 18, 2018 at 09:37:12AM -0700, Paul E. McKenney wrote:
> On Wed, Jul 18, 2018 at 06:01:51PM +0200, David Woodhouse wrote:
> > On Wed, 2018-07-18 at 08:36 -0700, Paul E. McKenney wrote:
> > > And I finally did get some near misses from an earlier commit, so we
> > > should consider your patch to be officially off the hook.
On Thu, 2018-07-19 at 15:14 +0200, Frederic Weisbecker wrote:
> > I'm not sure about the context tracking condition in the code snippet
> > cited above, though. I think that's what caused my problem in the first
> > place — I have CONTEXT_TRACKING_FORCE && !NO_HZ_FULL. So in 4.15, that
> > means
On Thu, Jul 19, 2018 at 08:16:47AM +0200, David Woodhouse wrote:
>
>
> On Wed, 2018-07-18 at 20:11 -0700, Paul E. McKenney wrote:
> >
> > > That is interesting. As I replied to Paul, we are already calling
> > > rcu_user_enter/exit() on guest_enter/exit_irqsoff(). So I'm wondering why
> > > you're seeing such an optimization by repeating those calls.
On Wed, Jul 18, 2018 at 08:11:52PM -0700, Paul E. McKenney wrote:
> On Thu, Jul 19, 2018 at 02:32:06AM +0200, Frederic Weisbecker wrote:
> > On Wed, Jul 11, 2018 at 06:03:42PM +0100, David Woodhouse wrote:
> > > On Wed, 2018-07-11 at 09:49 -0700, Paul E. McKenney wrote:
> > > > And here is an updated v4.15 patch with Marius's Reported-by and David's
> > > > fix to my lost exclamation point.
On Thu, Jul 19, 2018 at 09:20:33AM +0200, David Woodhouse wrote:
> On Thu, 2018-07-19 at 08:45 +0200, Christian Borntraeger wrote:
> >
> > > My thought would be something like this:
> > >
> > > if (context_tracking_cpu_is_enabled())
> > > 	rcu_kvm_enter();
> > > else
> > > 	rcu_virt_note_context_switch(smp_processor_id());
> >
On Thu, Jul 19, 2018 at 12:23:34PM +0200, Christian Borntraeger wrote:
>
>
> On 07/19/2018 09:20 AM, David Woodhouse wrote:
> > On Thu, 2018-07-19 at 08:45 +0200, Christian Borntraeger wrote:
> >>
> >>> My thought would be something like this:
> >>>
> >>> if (context_tracking_cpu_is_enabled())
> >>> 	rcu_kvm_enter();
> >>> else
> >>> 	rcu_virt_note_context_switch(smp_processor_id());
On 07/19/2018 09:20 AM, David Woodhouse wrote:
> On Thu, 2018-07-19 at 08:45 +0200, Christian Borntraeger wrote:
>>
>>> My thought would be something like this:
>>>
>>> if (context_tracking_cpu_is_enabled())
>>> 	rcu_kvm_enter();
>>> else
>>> 	rcu_virt_note_context_switch(smp_processor_id());
On Thu, 2018-07-19 at 08:45 +0200, Christian Borntraeger wrote:
>
> > My thought would be something like this:
> >
> > if (context_tracking_cpu_is_enabled())
> > 	rcu_kvm_enter();
> > else
> > 	rcu_virt_note_context_switch(smp_processor_id());
>
> In the pas
On Wed, 2018-07-18 at 20:11 -0700, Paul E. McKenney wrote:
>
> > That is interesting. As I replied to Paul, we are already calling
> > rcu_user_enter/exit() on guest_enter/exit_irqsoff(). So I'm wondering why
> > you're seeing such an optimization by repeating those calls.
> >
> > Perhaps the r
On 07/18/2018 10:17 PM, Paul E. McKenney wrote:
> On Wed, Jul 18, 2018 at 09:41:05PM +0200, David Woodhouse wrote:
>>
>>
>> On Wed, 2018-07-18 at 09:37 -0700, Paul E. McKenney wrote:
>>> On Wed, Jul 18, 2018 at 06:01:51PM +0200, David Woodhouse wrote:
On Wed, 2018-07-18 at 08:36 -0700, Paul E. McKenney wrote:
On Thu, Jul 19, 2018 at 02:32:06AM +0200, Frederic Weisbecker wrote:
> On Wed, Jul 11, 2018 at 06:03:42PM +0100, David Woodhouse wrote:
> > On Wed, 2018-07-11 at 09:49 -0700, Paul E. McKenney wrote:
> > > And here is an updated v4.15 patch with Marius's Reported-by and David's
> > > fix to my lost exclamation point.
On Wed, Jul 11, 2018 at 06:03:42PM +0100, David Woodhouse wrote:
> On Wed, 2018-07-11 at 09:49 -0700, Paul E. McKenney wrote:
> > And here is an updated v4.15 patch with Marius's Reported-by and David's
> > fix to my lost exclamation point.
>
> Thanks. Are you sending the original version of that to Linus? It'd be
> useful to have the commit ID so that we can watch for it
On Wed, Jul 18, 2018 at 01:17:00PM -0700, Paul E. McKenney wrote:
> On Wed, Jul 18, 2018 at 09:41:05PM +0200, David Woodhouse wrote:
> >
> >
> > On Wed, 2018-07-18 at 09:37 -0700, Paul E. McKenney wrote:
> > > On Wed, Jul 18, 2018 at 06:01:51PM +0200, David Woodhouse wrote:
> > > >
> > > > On Wed, 2018-07-18 at 08:36 -0700, Paul E. McKenney wrote:
On Wed, Jul 18, 2018 at 09:41:05PM +0200, David Woodhouse wrote:
>
>
> On Wed, 2018-07-18 at 09:37 -0700, Paul E. McKenney wrote:
> > On Wed, Jul 18, 2018 at 06:01:51PM +0200, David Woodhouse wrote:
> > >
> > > On Wed, 2018-07-18 at 08:36 -0700, Paul E. McKenney wrote:
> > > >
> > > > And I finally did get some near misses from an earlier commit, so we
> > > > should consider your patch to be officially off the hook.
On Wed, 2018-07-18 at 09:37 -0700, Paul E. McKenney wrote:
> On Wed, Jul 18, 2018 at 06:01:51PM +0200, David Woodhouse wrote:
> >
> > On Wed, 2018-07-18 at 08:36 -0700, Paul E. McKenney wrote:
> > >
> > > And I finally did get some near misses from an earlier commit, so we
> > > should consider your patch to be officially off the hook.
On Wed, Jul 18, 2018 at 06:01:51PM +0200, David Woodhouse wrote:
> On Wed, 2018-07-18 at 08:36 -0700, Paul E. McKenney wrote:
> > And I finally did get some near misses from an earlier commit, so we
> > should consider your patch to be officially off the hook.
>
> Yay, I like it when it's not my fault.
On Wed, 2018-07-18 at 08:36 -0700, Paul E. McKenney wrote:
> And I finally did get some near misses from an earlier commit, so we
> should consider your patch to be officially off the hook.
Yay, I like it when it's not my fault. I'll redo it with the ifdef
CONFIG_NO_HZ_FULL.
What should it do for
On Tue, Jul 17, 2018 at 05:56:53AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 17, 2018 at 10:19:08AM +0200, David Woodhouse wrote:
> > On Mon, 2018-07-16 at 08:40 -0700, Paul E. McKenney wrote:
> > > Most of the weekend was devoted to testing today's upcoming pull request,
> > > but I did get a bit more testing done on this.
On Tue, Jul 17, 2018 at 10:19:08AM +0200, David Woodhouse wrote:
> On Mon, 2018-07-16 at 08:40 -0700, Paul E. McKenney wrote:
> > Most of the weekend was devoted to testing today's upcoming pull request,
> > but I did get a bit more testing done on this.
> >
> > I was able to make this happen more
On Mon, 2018-07-16 at 08:40 -0700, Paul E. McKenney wrote:
> Most of the weekend was devoted to testing today's upcoming pull request,
> but I did get a bit more testing done on this.
>
> I was able to make this happen more often by tweaking rcutorture a
> bit, but I still do not yet have statisti
On Thu, Jul 12, 2018 at 09:17:04AM -0700, Paul E. McKenney wrote:
> On Thu, Jul 12, 2018 at 05:53:51AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 12, 2018 at 01:00:42PM +0100, David Woodhouse wrote:
> > >
> > >
> > > On Wed, 2018-07-11 at 14:08 -0700, Paul E. McKenney wrote:
> > > >
> > > >
On Thu, Jul 12, 2018 at 05:53:51AM -0700, Paul E. McKenney wrote:
> On Thu, Jul 12, 2018 at 01:00:42PM +0100, David Woodhouse wrote:
> >
> >
> > On Wed, 2018-07-11 at 14:08 -0700, Paul E. McKenney wrote:
> > >
> > > > Also... why in $DEITY's name was the existing
> > > > rcu_virt_note_context_switch() not actually sufficient?
On Thu, Jul 12, 2018 at 01:00:42PM +0100, David Woodhouse wrote:
>
>
> On Wed, 2018-07-11 at 14:08 -0700, Paul E. McKenney wrote:
> >
> > > Also... why in $DEITY's name was the existing
> > > rcu_virt_note_context_switch() not actually sufficient? If we had that
> > > there, why did we need additional explicit calls to rcu_all_qs() in
> > > the KVM loop, or the more compl
On Wed, 2018-07-11 at 14:08 -0700, Paul E. McKenney wrote:
>
> > Also... why in $DEITY's name was the existing
> > rcu_virt_note_context_switch() not actually sufficient? If we had that
> > there, why did we need additional explicit calls to rcu_all_qs() in
> > the KVM loop, or the more compl
On Wed, Jul 11, 2018 at 09:19:44PM +0100, David Woodhouse wrote:
> On Wed, 2018-07-11 at 13:17 -0700, Paul E. McKenney wrote:
> > As I understand it, they would like to have their guest run uninterrupted
> > for extended times. Because rcu_virt_note_context_switch() is a
> > point-in-time quiescent state, it cannot tell RCU about the extended
> > quiescent state.
On Wed, 2018-07-11 at 13:17 -0700, Paul E. McKenney wrote:
> As I understand it, they would like to have their guest run uninterrupted
> for extended times. Because rcu_virt_note_context_switch() is a
> point-in-time quiescent state, it cannot tell RCU about the extended
> quiescent state.
>
> Sh
On Wed, Jul 11, 2018 at 08:31:55PM +0200, Christian Borntraeger wrote:
> So why is the rcu_virt_note_context_switch(smp_processor_id());
> in guest_enter_irqoff not good enough?
>
> This was actually supposed to tell rcu that being in the guest
> is an extended quiescing period (like userspace).
So why is the rcu_virt_note_context_switch(smp_processor_id());
in guest_enter_irqoff not good enough?
This was actually supposed to tell rcu that being in the guest
is an extended quiescing period (like userspace).
What has changed?
On 07/11/2018 07:03 PM, David Woodhouse wrote:
> On Wed, 2018-07-11 at 09:49 -0700, Paul E. McKenney wrote:
On Wed, Jul 11, 2018 at 06:03:42PM +0100, David Woodhouse wrote:
> On Wed, 2018-07-11 at 09:49 -0700, Paul E. McKenney wrote:
> > And here is an updated v4.15 patch with Marius's Reported-by and David's
> > fix to my lost exclamation point.
>
> Thanks. Are you sending the original version of that to Linus? It'd be
> useful to have the commit ID so that we can watch for it
On Wed, 2018-07-11 at 09:49 -0700, Paul E. McKenney wrote:
> And here is an updated v4.15 patch with Marius's Reported-by and David's
> fix to my lost exclamation point.
Thanks. Are you sending the original version of that to Linus? It'd be
useful to have the commit ID so that we can watch for it
On Wed, Jul 11, 2018 at 07:43:03AM -0700, Paul E. McKenney wrote:
> On Wed, Jul 11, 2018 at 03:23:45PM +0100, David Woodhouse wrote:
> >
> >
> > On Mon, 2018-07-09 at 15:08 -0700, Paul E. McKenney wrote:
> > > index f9c0ca2ccf0c..3350ece366ab 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
On Wed, Jul 11, 2018 at 03:23:45PM +0100, David Woodhouse wrote:
>
>
> On Mon, 2018-07-09 at 15:08 -0700, Paul E. McKenney wrote:
> > index f9c0ca2ccf0c..3350ece366ab 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -2839,6 +2839,15 @@ void rcu_check_callbacks(int user)
> >
On Mon, 2018-07-09 at 15:08 -0700, Paul E. McKenney wrote:
> index f9c0ca2ccf0c..3350ece366ab 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2839,6 +2839,15 @@ void rcu_check_callbacks(int user)
> 		rcu_bh_qs();
> 	}
> 	rcu_preempt_check_callbacks();
On Wed, Jul 11, 2018 at 01:58:22PM +0100, David Woodhouse wrote:
> On Wed, 2018-07-11 at 05:51 -0700, Paul E. McKenney wrote:
> >
> > Interesting. (I am assuming that the guest is printing these messages,
> > not the host, but please let me know if my assumption is incorrect.)
>
> No, this is all in the host. When the VMM (qemu, etc.) opens more files
> and has to expand its fdtable
On Wed, 2018-07-11 at 05:51 -0700, Paul E. McKenney wrote:
>
> Interesting. (I am assuming that the guest is printing these messages,
> not the host, but please let me know if my assumption is incorrect.)
No, this is all in the host. When the VMM (qemu, etc.) opens more files
and has to expand its fdtable
On Wed, Jul 11, 2018 at 11:57:43AM +0100, David Woodhouse wrote:
> On Mon, 2018-07-09 at 15:08 -0700, Paul E. McKenney wrote:
>
> >
> > And the earlier patch was against my -rcu tree, which won't be all that
> > helpful for v4.15. Please see below for a lightly tested backport to v4.15.
> >
> >
On Mon, 2018-07-09 at 15:08 -0700, Paul E. McKenney wrote:
>
> And the earlier patch was against my -rcu tree, which won't be all that
> helpful for v4.15. Please see below for a lightly tested backport to v4.15.
>
> It should apply to all the releases of interest. If other backports
> are nee
On Tue, Jul 10, 2018 at 11:24:26AM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 01:42:48PM -0700, Paul E. McKenney wrote:
> > On Mon, Jul 09, 2018 at 09:35:38PM +0100, David Woodhouse wrote:
> > >
> > >
> > > On Mon, 2018-07-09 at 13:34 -0700, Paul E. McKenney wrote:
> > > >
> > > > So here are the possible code paths when .rcu_urgent_qs is set to true:
On Mon, Jul 09, 2018 at 01:42:48PM -0700, Paul E. McKenney wrote:
> On Mon, Jul 09, 2018 at 09:35:38PM +0100, David Woodhouse wrote:
> >
> >
> > On Mon, 2018-07-09 at 13:34 -0700, Paul E. McKenney wrote:
> > >
> > > So here are the possible code paths when .rcu_urgent_qs is set to true:
> > >
>
On Mon, Jul 09, 2018 at 02:05:32PM -0700, Paul E. McKenney wrote:
> On Mon, Jul 09, 2018 at 09:45:45PM +0100, David Woodhouse wrote:
> > On Mon, 2018-07-09 at 13:42 -0700, Paul E. McKenney wrote:
> > > On Mon, Jul 09, 2018 at 09:35:38PM +0100, David Woodhouse wrote:
> > > >
> > > >
> > > > On Mon, 2018-07-09 at 13:34 -0700, Paul E. McKenney wrote:
On Mon, Jul 09, 2018 at 09:45:45PM +0100, David Woodhouse wrote:
> On Mon, 2018-07-09 at 13:42 -0700, Paul E. McKenney wrote:
> > On Mon, Jul 09, 2018 at 09:35:38PM +0100, David Woodhouse wrote:
> > >
> > >
> > > On Mon, 2018-07-09 at 13:34 -0700, Paul E. McKenney wrote:
> > > >
> > > > So here are the possible code paths when .rcu_urgent_qs is set to true:
On Mon, 2018-07-09 at 13:42 -0700, Paul E. McKenney wrote:
> On Mon, Jul 09, 2018 at 09:35:38PM +0100, David Woodhouse wrote:
> >
> >
> > On Mon, 2018-07-09 at 13:34 -0700, Paul E. McKenney wrote:
> > >
> > > So here are the possible code paths when .rcu_urgent_qs is set to true:
> > >
> > > 1.
On Mon, Jul 09, 2018 at 09:35:38PM +0100, David Woodhouse wrote:
>
>
> On Mon, 2018-07-09 at 13:34 -0700, Paul E. McKenney wrote:
> >
> > So here are the possible code paths when .rcu_urgent_qs is set to true:
> >
> > 1. A context switch will record the quiescent state and clear
> > 	.rcu_urgent_qs.
On Mon, 2018-07-09 at 13:34 -0700, Paul E. McKenney wrote:
>
> So here are the possible code paths when .rcu_urgent_qs is set to true:
>
> 1. A context switch will record the quiescent state and clear
> .rcu_urgent_qs. (The failure to do the clearing in current -rcu
> for
On Mon, Jul 09, 2018 at 07:50:54PM +0100, David Woodhouse wrote:
>
>
> On Mon, 2018-07-09 at 09:34 -0700, Paul E. McKenney wrote:
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 51919985f6cf..33b0a1ec0536 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -2496,6 +2496,10 @@ void rcu_check_callbacks(int user)
On Mon, 2018-07-09 at 09:34 -0700, Paul E. McKenney wrote:
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 51919985f6cf..33b0a1ec0536 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2496,6 +2496,10 @@ void rcu_check_callbacks(int user)
> {
> 	trace_rcu_utilization(TPS("Start scheduler-tick"));
On Mon, Jul 09, 2018 at 09:34:32AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 09, 2018 at 05:26:32PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 09, 2018 at 07:29:32AM -0700, Paul E. McKenney wrote:
> > > OK, so here are our options:
> > >
> > > 1. Add the RCU conditional to need_resched(), as David suggests.
On Mon, Jul 09, 2018 at 05:26:32PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 07:29:32AM -0700, Paul E. McKenney wrote:
> > OK, so here are our options:
> >
> > 1. Add the RCU conditional to need_resched(), as David suggests.
> > Peter has concerns about overhead.
> >
> > 2. Create a new need_resched_rcu_qs() that is to be used when
> > 	deciding
On Mon, Jul 09, 2018 at 07:29:32AM -0700, Paul E. McKenney wrote:
> OK, so here are our options:
>
> 1.	Add the RCU conditional to need_resched(), as David suggests.
> 	Peter has concerns about overhead.
>
> 2.	Create a new need_resched_rcu_qs() that is to be used when
> 	deciding
On Mon, Jul 09, 2018 at 04:43:38PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 07:29:32AM -0700, Paul E. McKenney wrote:
>
> > OK, so here are our options:
> >
> > 1. Add the RCU conditional to need_resched(), as David suggests.
> > Peter has concerns about overhead.
>
> Not only
On Mon, Jul 09, 2018 at 07:29:32AM -0700, Paul E. McKenney wrote:
> OK, so here are our options:
>
> 1.	Add the RCU conditional to need_resched(), as David suggests.
> 	Peter has concerns about overhead.
Not only overhead, it's plain broken, because:
1) we keep preemption state in other
On Mon, Jul 09, 2018 at 01:47:14PM +0100, David Woodhouse wrote:
> On Mon, 2018-07-09 at 05:34 -0700, Paul E. McKenney wrote:
> > The reason that David's latencies went from 100ms to one second is
> > because I made this code less aggressive about invoking resched_cpu().
>
> Ten seconds. We saw synchronize_sched() take ten seconds in 4.15.
On Mon, Jul 09, 2018 at 03:02:27PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 02:55:16PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 09, 2018 at 05:34:57AM -0700, Paul E. McKenney wrote:
> > > But KVM defeats this by checking need_resched() before invoking
> > > cond_resched().
> >
>
On Mon, Jul 09, 2018 at 02:55:16PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 05:34:57AM -0700, Paul E. McKenney wrote:
> > But KVM defeats this by checking need_resched() before invoking
> > cond_resched().
>
> That's not wrong or even uncommon I think.
In fact, I think we recently p
On Mon, 2018-07-09 at 14:55 +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 05:34:57AM -0700, Paul E. McKenney wrote:
> > But KVM defeats this by checking need_resched() before invoking
> > cond_resched().
>
> That's not wrong or even uncommon I think.
Right. Which is precisely why I baul
On Mon, Jul 09, 2018 at 05:34:57AM -0700, Paul E. McKenney wrote:
> But KVM defeats this by checking need_resched() before invoking
> cond_resched().
That's not wrong or even uncommon I think.
On Mon, 2018-07-09 at 05:34 -0700, Paul E. McKenney wrote:
> The reason that David's latencies went from 100ms to one second is
> because I made this code less aggressive about invoking resched_cpu().
Ten seconds. We saw synchronize_sched() take ten seconds in 4.15. We
wouldn't have been happy wit
On Mon, Jul 09, 2018 at 01:06:57PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
>
> > > But either proposal is exactly the same in this respect. The whole
> > > rcu_urgent_qs thing won't be set any earlier either.
> >
> > Er Marius, our latencies in expand_fdtable() definitely went from
> > ~10s to well below one second
On Mon, Jul 09, 2018 at 12:12:15PM +0100, David Woodhouse wrote:
> On Mon, 2018-07-09 at 13:06 +0200, Peter Zijlstra wrote:
> > On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
> > > > But either proposal is exactly the same in this respect. The whole
> > > > rcu_urgent_qs thing wo
On Mon, 2018-07-09 at 13:06 +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
>
> >
> > >
> > > But either proposal is exactly the same in this respect. The whole
> > > rcu_urgent_qs thing won't be set any earlier either.
> > Er Marius, our latencies in expand_fdtable() definitely went from
> > ~10s to well below one second
On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
> > But either proposal is exactly the same in this respect. The whole
> > rcu_urgent_qs thing won't be set any earlier either.
>
> Er Marius, our latencies in expand_fdtable() definitely went from
> ~10s to well below one second
On Mon, 2018-07-09 at 12:44 +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 10:18:55AM +0100, David Woodhouse wrote:
> >
> > >
> > > Which seems like an entirely reasonable amount of time to kick a task.
> > > Not scheduling for a second is like an eternity.
> >
> > If that is our only "fix" for KVM, then wouldn't that mean that things
> > like expand_fdtable() would
On Mon, Jul 09, 2018 at 10:18:55AM +0100, David Woodhouse wrote:
> > Which seems like an entirely reasonable amount of time to kick a task.
> > Not scheduling for a second is like an eternity.
>
> If that is our only "fix" for KVM, then wouldn't that mean that things
> like expand_fdtable() would
On Mon, 2018-07-09 at 10:53 +0200, Peter Zijlstra wrote:
> On Fri, Jul 06, 2018 at 10:11:50AM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 06, 2018 at 06:29:05PM +0200, Peter Zijlstra wrote:
> > > On Fri, Jul 06, 2018 at 03:53:30PM +0100, David Woodhouse wrote:
> > > >
> > > > diff --git a/include/linux/sched.h b/include/linux/sched.h
On Fri, Jul 06, 2018 at 06:14:44PM +0100, David Woodhouse wrote:
> On Fri, 2018-07-06 at 10:11 -0700, Paul E. McKenney wrote:
> > > The preempt state is already a bit complicated and shadowed in the
> > > preempt_count (on some architectures); adding additional bits to it like
> > > this is just asking for trouble.
On Fri, Jul 06, 2018 at 10:11:50AM -0700, Paul E. McKenney wrote:
> On Fri, Jul 06, 2018 at 06:29:05PM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 06, 2018 at 03:53:30PM +0100, David Woodhouse wrote:
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index e4d4e60..89f5814 100644
On Fri, Jul 06, 2018 at 06:14:44PM +0100, David Woodhouse wrote:
> On Fri, 2018-07-06 at 10:11 -0700, Paul E. McKenney wrote:
> > The preempt state is already a bit complicated and shadowed in the
> > preempt_count (on some architectures); adding additional bits to it like
> > this is just asking for trouble.
On Fri, 2018-07-06 at 10:11 -0700, Paul E. McKenney wrote:
> > The preempt state is already a bit complicated and shadowed in the
> > preempt_count (on some architectures); adding additional bits to it like
> > this is just asking for trouble.
>
> How about a separate need_resched_rcu() that include
On Fri, Jul 06, 2018 at 06:29:05PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 06, 2018 at 03:53:30PM +0100, David Woodhouse wrote:
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index e4d4e60..89f5814 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -1616,7 +1616,8 @@ static inline int spin_needbreak(spinlock_t *lock)
On Fri, Jul 06, 2018 at 03:53:30PM +0100, David Woodhouse wrote:
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index e4d4e60..89f5814 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1616,7 +1616,8 @@ static inline int spin_needbreak(spinlock_t *lock)
>
>
In 4.15 without CONFIG_PREEMPT we observed expand_fdtable() taking
about 10 seconds for synchronize_sched() to complete, when most of the
other threads were running KVM guests.
In vcpu_run() there's a loop with the fairly common construct:
	if (need_resched()) {
		… local unlocks …
		cond_resched();