On Thu, Aug 20, 2020 at 07:20:46PM +0200, Marco Elver wrote:
> From 4ec9dd472c978e1eba622fb22bc04e4357f10421 Mon Sep 17 00:00:00 2001
> From: Marco Elver
> Date: Thu, 20 Aug 2020 19:06:09 +0200
> Subject: [PATCH] sched: Turn inline into __always_inline due to noinstr use
>
> is_idle_task() may
On Thu, Aug 20, 2020 at 09:43:15AM -0700, Andy Lutomirski wrote:
> I’ve lost track of how many bugs QEMU and KVM have in this space.
> Let’s keep it as a warning, but a bug. But let’s get rid of the
> totally bogus TIF_SINGLESTEP manipulation.
OK, I've shuffled the series around to fix that
On Thu, Aug 20, 2020 at 11:17:29AM -0500, Josh Poimboeuf wrote:
> On Thu, Aug 20, 2020 at 05:21:11PM +0200, pet...@infradead.org wrote:
> > qemu-gdb stub should eat the event before it lands in the guest
>
> Are we sure about that? I triggered the warning just now, stepping
> through the debug
On Thu, Aug 20, 2020 at 04:28:28PM +0100, Daniel Thompson wrote:
> On Thu, Aug 20, 2020 at 12:38:36PM +0200, Peter Zijlstra wrote:
> >
> > Signed-off-by: Peter Zijlstra (Intel)
> > ---
> > arch/x86/kernel/traps.c | 24
> > 1 file changed, 12 insertions(+), 12 deletions(-)
On Thu, Aug 20, 2020 at 10:16:59AM -0500, Josh Poimboeuf wrote:
> On Thu, Aug 20, 2020 at 05:08:41PM +0200, pet...@infradead.org wrote:
> > On Thu, Aug 20, 2020 at 10:45:12AM -0400, Brian Gerst wrote:
> > > On Thu, Aug 20, 2020 at 6:53 AM Peter Zijlstra
> > > wrote:
> > > >
> > > >
> > > >
On Thu, Aug 20, 2020 at 10:45:12AM -0400, Brian Gerst wrote:
> On Thu, Aug 20, 2020 at 6:53 AM Peter Zijlstra wrote:
> >
> >
> > Signed-off-by: Peter Zijlstra (Intel)
> > ---
> > arch/x86/kernel/traps.c | 24
> > 1 file changed, 12 insertions(+), 12 deletions(-)
> >
>
On Thu, Aug 20, 2020 at 10:36:43AM -0400, Steven Rostedt wrote:
>
> I tested this series on top of tip/master and triggered the below
> warning when running the irqsoff tracer boot up test (config attached).
>
> -- Steve
>
> Testing tracer irqsoff:
>
> =
>
> That being said, not compensating the vruntime for a sched_idle task
> makes sense to me. Even if that will only help other tasks in the
> same cfs_rq
Yeah, but is it worth the extra pointer chasing and branches?
Then again, I suppose we started all that with the idle_h_nr_running
On Thu, Aug 20, 2020 at 08:19:27AM +0200, Christoph Hellwig wrote:
> On Tue, Aug 18, 2020 at 12:51:10PM +0200, Peter Zijlstra wrote:
> > if (blk_mq_complete_need_ipi(rq)) {
> > - INIT_CSD(&rq->csd, __blk_mq_complete_request_remote, rq);
> > -
On Wed, Aug 19, 2020 at 03:04:56PM -0700, Linus Torvalds wrote:
> On Wed, Aug 19, 2020 at 12:41 PM wrote:
> >
> > I'm not sure I get the "expensive irq_work queues" argument, I fully
> > agree with you that adding the atomic op is fairly crap.
>
> There's an atomic op on the actual running side
On Thu, Aug 20, 2020 at 02:51:06PM +0200, Vincent Guittot wrote:
> On Thu, 20 Aug 2020 at 14:00, Jiang Biao wrote:
> >
> > From: Jiang Biao
> >
> > Vruntime compensation has been done in place_entity() to
> > boot the waking procedure for fair tasks. There is no need to
>
> s/boot/boost/ ?
>
>
On Thu, Aug 20, 2020 at 01:43:48PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-08-20 13:40:36 [+0200], pet...@infradead.org wrote:
> > Anyway, all 3 users should have the same wait context, so where is the
> > actual problem?
>
> I have one in RT which is a per-CPU spinlock within
On Mon, Jun 29, 2020 at 10:15:29PM +0200, Sebastian Andrzej Siewior wrote:
> The novalidate class is ignored in the lockchain validation but is
> considered in the wait context validation.
> If a mutex and a spinlock_t are ignored by using
> lockdep_set_novalidate_class() then both locks will share
On Sat, Jul 04, 2020 at 05:49:10PM -, tip-bot2 for Andy Lutomirski wrote:
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index f392a8b..e83b3f1 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -49,6 +49,23 @@
> static void check_user_regs(struct
On Wed, Aug 19, 2020 at 05:15:43PM -0700, Andy Lutomirski wrote:
> +static __always_inline void debug_enter(unsigned long *dr6, unsigned long
> *dr7)
> +{
> + *dr6 = debug_read_clear_dr6();
> }
>
> static __always_inline void debug_exit(unsigned long dr7)
> {
> -
On Wed, Aug 19, 2020 at 10:46:36PM -0500, Josh Poimboeuf wrote:
> On Wed, Aug 19, 2020 at 05:14:18PM -0700, Andy Lutomirski wrote:
> > I'm pretty sure you have the buggy sequence of events right, but for
> > the wrong reason. There's nothing wrong with scheduling when
> > delivering SIGTRAP, but
On Wed, Aug 19, 2020 at 11:50:55AM -0700, Linus Torvalds wrote:
> On Wed, Aug 19, 2020 at 12:22 AM wrote:
> >
> > That is, the external serialization comes from the non-atomic
> > test-and-set they both have. This works nicely when there is external
> > state that already serializes things, but
On Wed, Aug 19, 2020 at 10:53:58AM -0700, Kyle Huey wrote:
> rr, a userspace record and replay debugger[0], has a test suite that
> attempts to exercise strange corners of the Linux API. One such
> test[1] began failing after 2bbc68f8373c0631ebf137f376fbea00e8086be7.
> I have not tried to
On Wed, Aug 19, 2020 at 05:32:50PM +0200, pet...@infradead.org wrote:
> On Wed, Aug 19, 2020 at 08:39:13PM +1000, Alexey Kardashevskiy wrote:
>
> > > or current upstream?
> >
> > The upstream 18445bf405cb (13 hours old) also shows the problem. Yours
> > 1/2 still fixes it.
>
> Afaict that just
On Wed, Aug 19, 2020 at 08:39:13PM +1000, Alexey Kardashevskiy wrote:
> > or current upstream?
>
> The upstream 18445bf405cb (13 hours old) also shows the problem. Yours
> 1/2 still fixes it.
Afaict that just reduces the window.
Isn't the problem that:
arch/powerpc/kernel/exceptions-64e.S
On Wed, Aug 19, 2020 at 02:34:16PM +0100, Alexandru Elisei wrote:
> From: Julien Thierry
>
> When handling events, armv8pmu_handle_irq() calls perf_event_overflow(),
> and subsequently calls irq_work_run() to handle any work queued by
> perf_event_overflow(). As perf_event_overflow() raises
On Wed, Aug 19, 2020 at 03:33:20PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-08-19 15:15:07 [+0200], pet...@infradead.org wrote:
> If you want to optimize further, we could move PF_IO_WORKER to a lower
> bit. x86 can test for both via
> (gcc-10)
> | testl $536870944, 44(%rbp)
On Wed, Aug 19, 2020 at 02:37:58PM +0200, Sebastian Andrzej Siewior wrote:
> I don't see a significant reason why this lock should become a
> raw_spinlock_t therefore I suggest to move it after the
> tsk_is_pi_blocked() check.
> Any feedback on this vs raw_spinlock_t?
>
> Signed-off-by:
On Sun, Aug 16, 2020 at 05:02:00PM -0700, Randy Dunlap wrote:
> --- lnx-59-rc1.orig/include/linux/seqlock.h
> +++ lnx-59-rc1/include/linux/seqlock.h
> @@ -173,7 +173,6 @@ seqcount_##lockname##_init(seqcount_##lo
> seqcount_init(>seqcount);\
>
On Mon, Aug 17, 2020 at 05:04:41PM +0800, Libing Zhou wrote:
> In nmi_check_duration(), the 'whole_msecs' value should
> be taken from 'duration' to reflect the actual time duration,
> not from 'action->max_duration'.
Fixes: 248ed51048c4 ("x86/nmi: Remove irq_work from the long duration NMI handler")
>
On Tue, Aug 18, 2020 at 06:44:47PM +0100, Matthew Wilcox wrote:
> On Tue, Aug 18, 2020 at 07:34:00PM +0200, Christian Brauner wrote:
> > The only remaining function callable outside of kernel/fork.c is
> > _do_fork(). It doesn't really follow the naming of kernel-internal
> > syscall helpers as
On Tue, Aug 18, 2020 at 01:02:26PM -0700, Nick Desaulniers wrote:
> On Tue, Aug 18, 2020 at 12:57 PM Alex Dewar wrote:
> >
> > On Tue, Aug 18, 2020 at 11:13:10AM -0700, Nick Desaulniers wrote:
> > > On Tue, Aug 18, 2020 at 10:04 AM Alex Dewar
> > > wrote:
> > > >
> > > > Depending on config
On Tue, Aug 18, 2020 at 06:25:42PM +0200, Christoph Hellwig wrote:
> On Tue, Aug 18, 2020 at 12:51:10PM +0200, Peter Zijlstra wrote:
> > Convert the performance sensitive users of
> > smp_call_single_function_async() over to the new
> > irq_work_queue_remote_static().
> >
> > The new API is
On Tue, Aug 18, 2020 at 05:22:33PM +1000, Nicholas Piggin wrote:
> Excerpts from pet...@infradead.org's message of August 12, 2020 8:35 pm:
> > On Wed, Aug 12, 2020 at 06:18:28PM +1000, Nicholas Piggin wrote:
> >> Excerpts from pet...@infradead.org's message of August 7, 2020 9:11 pm:
> >> >
> >>
On Tue, Aug 18, 2020 at 11:07:37AM +0200, Vincent Guittot wrote:
> On Tue, 11 Aug 2020 at 13:32, Jiang Biao wrote:
> >
> > From: Jiang Biao
> >
> > The code in reweight_entity() can be simplified.
> >
> > For a sched entity on the rq, the entity accounting can be replaced by
> > cfs_rq
On Tue, Aug 18, 2020 at 12:30:59PM +0200, Michal Hocko wrote:
> The proposal also aims at much richer interface to define the
> oom behavior.
Oh yeah, I'm not defending any of that prctl() nonsense.
Just saying that from a math / control theory point of view, the current
thing is an abhorrent
On Mon, Aug 17, 2020 at 06:00:05AM -0700, Paul E. McKenney wrote:
> On Mon, Aug 17, 2020 at 11:16:33AM +0200, pet...@infradead.org wrote:
> > On Mon, Aug 17, 2020 at 11:03:25AM +0200, pet...@infradead.org wrote:
> > > On Thu, Jul 23, 2020 at 09:14:11AM -0700, Paul E. McKenney wrote:
> > > > > ---
On Tue, Aug 18, 2020 at 11:17:56AM +0100, Chris Down wrote:
> I'd ask that you understand a bit more about the tradeoffs and intentions of
> the patch before rushing in to declare its failure, considering it works
> just fine :-)
>
> Clamping the maximal time allows the application to take some
On Tue, Aug 18, 2020 at 12:05:16PM +0200, Michal Hocko wrote:
> > But then how can it run-away like Waiman suggested?
>
> As Chris mentioned in other reply. This functionality is quite new.
>
> > /me goes look... and finds MEMCG_MAX_HIGH_DELAY_JIFFIES.
>
> We can certainly tune a different
On Tue, Aug 18, 2020 at 10:27:37AM +0100, Chris Down wrote:
> pet...@infradead.org writes:
> > On Mon, Aug 17, 2020 at 10:08:23AM -0400, Waiman Long wrote:
> > > Memory controller can be used to control and limit the amount of
> > > physical memory used by a task. When a limit is set in
On Tue, Aug 18, 2020 at 11:26:17AM +0200, Michal Hocko wrote:
> On Tue 18-08-20 11:14:53, Peter Zijlstra wrote:
> > On Mon, Aug 17, 2020 at 10:08:23AM -0400, Waiman Long wrote:
> > > Memory controller can be used to control and limit the amount of
> > > physical memory used by a task. When a limit
On Mon, Aug 17, 2020 at 10:08:23AM -0400, Waiman Long wrote:
> Memory controller can be used to control and limit the amount of
> physical memory used by a task. When a limit is set in "memory.high" in
> a v2 non-root memory cgroup, the memory controller will try to reclaim
> memory if the limit
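For reference, the knob being discussed is the cgroup v2 `memory.high` interface file: above it the kernel reclaims and throttles the cgroup rather than OOM-killing (that is `memory.max`). A minimal usage sketch, assuming cgroup2 is mounted at /sys/fs/cgroup and run as root:

```shell
# create a child cgroup and set a throttling (not hard) limit
mkdir /sys/fs/cgroup/demo
echo 100M > /sys/fs/cgroup/demo/memory.high
# move the current shell into it; its allocations now count here
echo $$ > /sys/fs/cgroup/demo/cgroup.procs
```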
On Mon, Aug 17, 2020 at 11:03:25AM +0200, pet...@infradead.org wrote:
> On Thu, Jul 23, 2020 at 09:14:11AM -0700, Paul E. McKenney wrote:
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -1287,8 +1287,6 @@ static int rcu_implicit_dynticks_qs(stru
> > > if
On Thu, Jul 23, 2020 at 09:14:11AM -0700, Paul E. McKenney wrote:
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -1287,8 +1287,6 @@ static int rcu_implicit_dynticks_qs(stru
> > if (IS_ENABLED(CONFIG_IRQ_WORK) &&
> > !rdp->rcu_iw_pending &&
On Mon, Aug 17, 2020 at 11:46:40AM +0800, Jiaxun Yang wrote:
> Here we reworked the whole procedure. Now the synchronise event on CPU0
> is triggered by smp call function, and we won't touch the count on CPU0
> at all.
Are you telling me that in 2020 you're building chips that need
horrible crap
On Fri, Aug 14, 2020 at 11:18:57PM -0400, Joel Fernandes (Google) wrote:
https://lkml.kernel.org/r/20200722153017.024407...@infradead.org
On Fri, Aug 14, 2020 at 11:18:58PM -0400, Joel Fernandes (Google) wrote:
> Currently only RCU hooks for idle entry/exit are called. In later
> patches, kernel-entry protection functionality will be added.
>
> Signed-off-by: Joel Fernandes (Google)
NAK, rcu_idle_enter() is broken where it is
On Fri, Aug 14, 2020 at 07:14:25AM -0700, Paul E. McKenney wrote:
> Doing this to kfree_rcu() is the first step. We will also be doing this
> to call_rcu(), which has some long-standing invocations from various
> raw contexts, including hardirq handler.
Most hardirq handler are not raw on RT
On Fri, Aug 14, 2020 at 12:54:35PM +0530, Sumit Garg wrote:
> On Thu, 13 Aug 2020 at 05:30, Doug Anderson wrote:
> > On Tue, Jul 21, 2020 at 5:10 AM Sumit Garg wrote:
> > Wishful thinking, but (as far as I can tell) irq_work_queue() only
> > queues work on the CPU running the NMI. I don't have
On Fri, Aug 14, 2020 at 10:30:37AM +0200, Peter Zijlstra wrote:
> > > 1. Prohibit invoking allocators from raw atomic context, such
> > > as when holding a raw spinlock.
> >
> > Clearly the simplest solution but not Pauls favourite and
> > unfortunately he has a good reason.
>
>
On Thu, Aug 13, 2020 at 11:52:57AM -0700, Paul E. McKenney wrote:
> On Thu, Aug 13, 2020 at 08:26:18PM +0200, pet...@infradead.org wrote:
> > I thought the rule was:
> >
> > - No allocators (alloc/free) inside raw_spinlock_t, full-stop.
> >
> > Why are we trying to craft an exception?
>
> So
On Thu, Aug 13, 2020 at 08:15:27AM -0500, Uriel Guajardo wrote:
> On Thu, Aug 13, 2020 at 5:36 AM wrote:
> >
> > On Wed, Aug 12, 2020 at 07:33:32PM +, Uriel Guajardo wrote:
> > > KUnit will fail tests upon observing a lockdep failure. Because lockdep
> > > turns itself off after its first
On Thu, Aug 13, 2020 at 09:29:04AM -0700, Paul E. McKenney wrote:
> OK. So the current situation requires a choice between these
> alternatives, each of which has shortcomings that have been mentioned
> earlier in this thread:
>
> 1. Prohibit invoking allocators from raw atomic context,
s of worms. If we start this here then
> Joe programmer and his dog will use these lockdep annotation to evade
> warnings and when exposed to RT it will fall apart in pieces. Just that
> at that point Joe programmer moved on to something else and the usual
> suspects can mop up the pieces. We've seen that
On Thu, Aug 13, 2020 at 08:31:15AM +0100, Christoph Hellwig wrote:
> On Thu, Aug 13, 2020 at 10:44:38AM +0800, Jacob Wen wrote:
> > wake_up_bit() uses waitqueue_active() that needs the explicit smp_mb().
>
> Sounds like the barrier should go into wake_up_bit then..
Oh, thanks for reminding me..
On Thu, Aug 13, 2020 at 12:50:26PM +0200, Sebastian Andrzej Siewior wrote:
> The pte lock is never acquired in-IRQ context so it does not require the
> interrupts to be disabled.
>
> RT complains here because the spinlock_t must not be acquired with
> disabled interrupts.
>
> use_temporary_mm()
On Wed, Aug 12, 2020 at 07:33:32PM +, Uriel Guajardo wrote:
> KUnit will fail tests upon observing a lockdep failure. Because lockdep
> turns itself off after its first failure, only fail the first test and
> warn users to not expect any future failures from lockdep.
>
> Similar to
On Wed, Aug 12, 2020 at 01:32:58PM +0200, Paolo Bonzini wrote:
> On 12/08/20 13:11, pet...@infradead.org wrote:
> > Right, but we want to tighten the permission checks and not excluding_hv
> > is just sloppy.
>
> I would just document that it's ignored as it doesn't make sense. ARM64
> does that
On Wed, Aug 12, 2020 at 12:25:43PM +0200, Paolo Bonzini wrote:
> On 12/08/20 07:07, Like Xu wrote:
> > To emulate PMC counter for guest, KVM would create an
> > event on the host with 'exclude_guest=0, exclude_hv=0'
> > which simply makes no sense and is utterly broken.
> >
> > To keep perf
On Wed, Aug 12, 2020 at 10:56:56AM +0200, Ard Biesheuvel wrote:
> The module .lds has BYTE(0) in the section contents to prevent the
> linker from pruning them entirely. The (NOLOAD) is there to ensure
> that this byte does not end up in the .ko, which is more a matter of
> principle than anything
On Wed, Aug 12, 2020 at 10:36:14AM +0200, Christian König wrote:
> Am 12.08.20 um 10:10 schrieb pet...@infradead.org:
> > On Tue, Aug 11, 2020 at 01:18:52PM +0200, Christian König wrote:
> > > From: Guchun Chen
> > >
> > > Otherwise, braces are needed when using it.
> > >
> > > Signed-off-by:
On Wed, Aug 12, 2020 at 06:18:28PM +1000, Nicholas Piggin wrote:
> Excerpts from pet...@infradead.org's message of August 7, 2020 9:11 pm:
> >
> > What's wrong with something like this?
> >
> > AFAICT there's no reason to actually try and add IRQ tracing here, it's
> > just a handful of
On Wed, Aug 12, 2020 at 10:18:32AM +0200, pet...@infradead.org wrote:
> > trace_hardirqs_restore+0x59/0x80 kernel/trace/trace_preemptirq.c:106
> > rcu_irq_enter_irqson+0x43/0x70 kernel/rcu/tree.c:1074
> > trace_irq_enable_rcuidle+0x87/0x120
> > include/trace/events/preemptirq.h:40
On Wed, Aug 12, 2020 at 10:06:50AM +0200, Marco Elver wrote:
> On Tue, Aug 11, 2020 at 10:17PM +0200, pet...@infradead.org wrote:
> > On Tue, Aug 11, 2020 at 11:46:51AM +0200, pet...@infradead.org wrote:
> >
> > > So let me once again see if I can't find a better solution for this all.
> > >
On Tue, Aug 11, 2020 at 01:18:52PM +0200, Christian König wrote:
> From: Guchun Chen
>
> Otherwise, braces are needed when using it.
>
> Signed-off-by: Guchun Chen
> Reviewed-by: Christian König
Thanks!
On Tue, Aug 11, 2020 at 11:46:51AM +0200, pet...@infradead.org wrote:
> So let me once again see if I can't find a better solution for this all.
> Clearly it needs one :/
So the below boots without triggering the debug code from Marco -- it
should allow nesting local_irq_save/restore under
On Tue, Aug 11, 2020 at 12:03:51PM -0500, Uriel Guajardo wrote:
> On Mon, Aug 10, 2020 at 4:43 PM Peter Zijlstra wrote:
> >
> > On Mon, Aug 10, 2020 at 09:32:57PM +, Uriel Guajardo wrote:
> > > +static inline void kunit_check_locking_bugs(struct kunit *test,
> > > +
On Tue, Aug 11, 2020 at 06:01:35PM +0200, Jessica Yu wrote:
> > > On Tue, Aug 11, 2020 at 04:34:27PM +0200, Mauro Carvalho Chehab wrote:
> > > > [33] .plt PROGBITS 0340 00035c80
> > > >0001 WAX 0 0 1
> > > >
On Tue, Aug 11, 2020 at 04:34:27PM +0200, Mauro Carvalho Chehab wrote:
> [33] .plt PROGBITS 0340 00035c80
>0001 WAX 0 0 1
> [34] .init.plt NOBITS 0341 00035c81
>
On Tue, Aug 11, 2020 at 12:50:27PM +0200, Jiri Olsa wrote:
> if it works for all events, which I'm not sure of
That's what we have cap_user_rdpmc for.
On Tue, Aug 11, 2020 at 11:13:13AM +0100, Will Deacon wrote:
> Hi,
>
> Using magic-sysrq via a keyboard interrupt over the serial console results in
> the following lockdep splat with the PL011 UART driver on v5.8. I can
> reproduce
> the issue under QEMU with arm64 defconfig + PROVE_LOCKING.
>
On Tue, Aug 11, 2020 at 11:20:54AM +0200, pet...@infradead.org wrote:
> On Tue, Aug 11, 2020 at 10:38:50AM +0200, Jürgen Groß wrote:
> > In case you don't want to do it I can send the patch for the Xen
> > variants.
>
> I might've opened a whole new can of worms here. I'm not sure we
> can/want
On Tue, Aug 11, 2020 at 10:38:50AM +0200, Jürgen Groß wrote:
> In case you don't want to do it I can send the patch for the Xen
> variants.
I might've opened a whole new can of worms here. I'm not sure we
can/want to fix the entire fallout this release :/
Let me ponder this a little, because the
On Mon, Aug 10, 2020 at 10:07:44AM +0200, Marco Elver wrote:
> On Fri, 7 Aug 2020 at 19:06, Paul E. McKenney wrote:
> > On Fri, Aug 07, 2020 at 11:00:31AM +0200, Marco Elver wrote:
> > > Since KCSAN instrumentation is everywhere, we need to treat the hooks
> > > NMI-like for interrupt tracing. In
On Mon, Aug 10, 2020 at 12:18:25PM +0100, Valentin Schneider wrote:
>
> On 10/08/20 09:30, Lukasz Luba wrote:
> > In find_energy_efficient_cpu() 'cpu_cap' could be less that 'util'.
> > It might be because of RT, DL (so higher sched class than CFS), irq or
> > thermal pressure signal, which
On Fri, Aug 07, 2020 at 09:23:38PM +0200, Peter Zijlstra wrote:
> Much of the complexity in irqenter_{enter,exit}() is due to #PF being
> the sole exception that can schedule from kernel context.
>
> One additional wrinkle with #PF is that it is non-maskable, it can
> happen _anywhere_. Due to
Subject: lockdep,trace: Expose tracepoints
From: Peter Zijlstra
Date: Fri Aug 7 20:53:16 CEST 2020
The lockdep tracepoints are under the lockdep recursion counter, this
has a bunch of nasty side effects:
- TRACE_IRQFLAGS doesn't work across the entire tracepoint
- RCU-lockdep doesn't see the
On Mon, Aug 10, 2020 at 11:55:35AM +0200, Marco Elver wrote:
> Unfortunately I get LOCKDEP_DEBUG warnings, when testing with one of
> syzbot's configs. This appears at some point during boot (no other
> test):
>
> DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())
> WARNING: CPU: 0 PID:
On Fri, Aug 07, 2020 at 04:21:48PM -0400, Steven Rostedt wrote:
> On Fri, 07 Aug 2020 21:23:38 +0200
> Peter Zijlstra wrote:
>
> > Much of the complexity in irqenter_{enter,exit}() is due to #PF being
> > the sole exception that can schedule from kernel context.
> >
> > One additional wrinkle
On Sat, Aug 08, 2020 at 03:43:50PM -0600, Jens Axboe wrote:
> Any pre-existing caller of this function uses 'true' to signal to use
> notifications or not, but we now also have signaled notifications.
> Update existing callers that specify 'true' for notify to use the
> updated TWA_RESUME instead.
On Sun, Aug 09, 2020 at 06:29:51PM -0700, Michael Kelley wrote:
> Make hv_setup_sched_clock inline so the reference to pv_ops works
> correctly with objtool updates to detect noinstr violations.
> See https://lore.kernel.org/patchwork/patch/1283635/
>
> Signed-off-by: Michael Kelley
Thanks!
On Mon, Aug 10, 2020 at 12:23:48PM +0300, Anatoly Pugachev wrote:
> On Tue, Aug 4, 2020 at 4:34 PM wrote:
> >
> > On Tue, Aug 04, 2020 at 04:17:16PM +0300, Anatoly Pugachev wrote:
> > > Hello!
> > >
> > > Linus git master sources:
> > >
> > > $ git desc
> > > v5.8-2483-gc0842fbc1b18
> > >
> >
> >
On Mon, Aug 10, 2020 at 10:59:54AM +0200, Greg KH wrote:
> On Sun, Aug 09, 2020 at 08:42:51PM +0200, Ahmed S. Darwish wrote:
> > @Peter, I think let's revert this one for now?
>
> Please do, it's blowing up my local builds as well :(
There's a bunch of patches queued here:
On Fri, Aug 07, 2020 at 02:07:59PM -0400, Mathieu Desnoyers wrote:
> One thing I find weird about Peter's patch is that it adds a
> MEMBERRIER_CMD_PRIVATE_EXPEDITED_RSEQ without a corresponding
> MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ. Considering that
> the SYNC_CORE variant already has
On Fri, Aug 07, 2020 at 03:16:32PM +0200, luca abeni wrote:
> If I understand well, the patchset does not apply deadline servers to
> FIFO and RR tasks, right? How does this patchset interact with RT
> throttling?
ideally it will replace it ;-)
But of course, there's the whole cgroup trainwreck
On Fri, Aug 07, 2020 at 03:43:53PM +0200, Juri Lelli wrote:
> Right, but I fear we won't be able to keep current behavior for wakeups:
> RT with highest prio always gets scheduled right away?
If you consider RT throttling, that's already not the case. We can
consider this fair server to be just
On Fri, Aug 07, 2020 at 03:49:41PM +0200, luca abeni wrote:
> Hi Peter,
>
> pet...@infradead.org wrote:
> > One thing I considerd was scheduling this as a least-laxity entity --
> > such that it runs late, not early
>
> Are you thinking about scheduling both RT and non-RT tasks through
>
On Thu, Aug 06, 2020 at 10:05:43AM -0700, Peter Oskolkov wrote:
> +#ifdef CONFIG_RSEQ
> +static void membarrier_rseq_ipi(void *arg)
> +{
> + if (current->mm != arg) /* Not our process. */
> + return;
> + if (!current->rseq) /* RSEQ not set up for the current task/thread. */
>
What's wrong with something like this?
AFAICT there's no reason to actually try and add IRQ tracing here, it's
just a handful of instructions at most.
---
diff --git a/arch/powerpc/include/asm/hw_irq.h
b/arch/powerpc/include/asm/hw_irq.h
index 3a0db7b0b46e..6be22c1838e2 100644
---
On Fri, Aug 07, 2020 at 11:56:04AM +0200, Juri Lelli wrote:
> Starting deadline server for lower priority classes right away when
> first task is enqueued might break guarantees, as tasks belonging to
> intermediate priority classes could be uselessly preempted. E.g., a well
> behaving (non hog)
On Fri, Aug 07, 2020 at 12:02:59PM +0200, Jürgen Groß wrote:
> On 07.08.20 11:39, pet...@infradead.org wrote:
> > On Fri, Aug 07, 2020 at 10:38:23AM +0200, Juergen Gross wrote:
> >
> > > -# else
> > > - const unsigned char cpu_iret[1];
> > > -# endif
> > > };
> > > static const struct
On Fri, Aug 07, 2020 at 10:38:23AM +0200, Juergen Gross wrote:
> -# else
> - const unsigned char cpu_iret[1];
> -# endif
> };
>
> static const struct patch_xxl patch_data_xxl = {
> @@ -42,7 +38,6 @@ static const struct patch_xxl patch_data_xxl = {
> .irq_save_fl= {
On Fri, Aug 07, 2020 at 02:24:30PM +0800, Jin, Yao wrote:
> Hi Peter,
>
> On 8/6/2020 7:00 PM, pet...@infradead.org wrote:
> > On Thu, Aug 06, 2020 at 11:18:27AM +0200, pet...@infradead.org wrote:
> >
> > > Suppose we have nested virt:
> > >
> > > L0-hv
> > > |
> > > G0/L1-hv
> > > |
On Fri, Aug 07, 2020 at 08:22:36AM +0800, Guo Ren wrote:
> Hi Peter,
>
> On Thu, Aug 6, 2020 at 3:53 AM wrote:
> >
> > Hi,
> >
> > While doing an audit of smp_mb__after_spinlock, I found that csky
> > defines it, why?
> >
> > CSKY only has smp_mb(), it doesn't override __atomic_acquire_fence or
One long-standing annoyance I have with using vim-tags is that our tags
file is not properly sorted. That is, the sorting Exuberant Ctags does
is only on the tag itself.
The problem with that is that, for example, the tag 'mutex' appears a
mere 505 times, 492 of those are structure members.
On Thu, Aug 06, 2020 at 10:25:12PM +1000, Michael Ellerman wrote:
> pet...@infradead.org writes:
> > On Thu, Aug 06, 2020 at 03:32:25PM +1000, Michael Ellerman wrote:
> >
> >> That brings with it a bunch of problems, such as existing software that
> >> has been developed/configured for Power8 and
On Thu, Aug 06, 2020 at 11:18:27AM +0200, pet...@infradead.org wrote:
> Suppose we have nested virt:
>
> L0-hv
> |
> G0/L1-hv
> |
> G1
>
> And we're running in G0, then:
>
> - 'exclude_hv' would exclude L0 events
> - 'exclude_host' would ... exclude L1-hv
On Wed, Aug 05, 2020 at 05:08:58PM -0700, Peter Oskolkov wrote:
Thanks for the Cc!
> + * @MEMBARRIER_CMD_PRIVATE_RESTART_RSEQ_ON_CPU:
> + * If a thread belonging to the current process
> + * is currently in an RSEQ critical section on the
> + *
On Thu, Aug 06, 2020 at 11:41:06AM +0200, Thomas Gleixner wrote:
> pet...@infradead.org writes:
> > On Wed, Aug 05, 2020 at 02:56:49PM +0100, Valentin Schneider wrote:
> >
> >> I've been tempted to say the test case is a bit bogus, but am not familiar
> >> enough with the RT throttling details to
On Thu, Aug 06, 2020 at 01:13:46PM +0100, Will Deacon wrote:
> I'm not sure I really see the benefit of the rename, to be honest with you,
> especially if smp_mb__after_spinlock() doesn't disappear at the same time.
The reason I proposed a rename is because:
mutex_lock();
On Thu, Aug 06, 2020 at 09:47:23AM +0200, Marco Elver wrote:
> Testing my hypothesis that raw then nested non-raw
> local_irq_save/restore() breaks IRQ state tracking -- see the reproducer
> below. This is at least 1 case I can think of that we're bound to hit.
Aaargh!
> diff --git a/init/main.c
On Thu, Aug 06, 2020 at 11:18:27AM +0200, pet...@infradead.org wrote:
> On Thu, Aug 06, 2020 at 10:26:29AM +0800, Jin, Yao wrote:
>
> > > +static struct pt_regs *sanitize_sample_regs(struct perf_event *event,
> > > struct pt_regs *regs)
> > > +{
> > > + struct pt_regs *sample_regs = regs;
> > >
On Thu, Aug 06, 2020 at 10:26:29AM +0800, Jin, Yao wrote:
> > +static struct pt_regs *sanitize_sample_regs(struct perf_event *event,
> > struct pt_regs *regs)
> > +{
> > + struct pt_regs *sample_regs = regs;
> > +
> > + /* user only */
> > + if (!event->attr.exclude_kernel ||
On Thu, Aug 06, 2020 at 03:32:25PM +1000, Michael Ellerman wrote:
> That brings with it a bunch of problems, such as existing software that
> has been developed/configured for Power8 and expects to see SMT8.
>
> We also allow LPARs to be live migrated from Power8 to Power9 (and back), so
>
On Thu, Aug 06, 2020 at 09:15:08AM +0300, Adrian Hunter wrote:
> On 8/07/20 6:16 pm, Alexander Shishkin wrote:
> > Hi guys,
> >
> > I've been looking at reducing the number of open file descriptors per perf
> > session. If we retain one descriptor per event, in a large group they add
> > up. At