On Fri, Sep 04, 2020 at 09:41:42AM +0200, pet...@infradead.org wrote:
> On Thu, Aug 27, 2020 at 01:40:42PM +0200, Ahmed S. Darwish wrote:
>
> > __always_inline void cyc2ns_read_begin(struct cyc2ns_data *data)
> > {
> > + seqcount_latch_t *seqcount;
> > int seq, idx;
> >
> >
On Thu, Aug 27, 2020 at 01:40:42PM +0200, Ahmed S. Darwish wrote:
> __always_inline void cyc2ns_read_begin(struct cyc2ns_data *data)
> {
> + seqcount_latch_t *seqcount;
> int seq, idx;
>
> preempt_disable_notrace();
>
> + seqcount = &this_cpu_ptr(&cyc2ns)->seq;
> do {
> -
FWIW, can you please start a new thread with every posting? I lost this
one because it got stuck onto some ancient thread.
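For context, the latch scheme the quoted cyc2ns patch converts to can be sketched in plain userspace C: writers keep two copies of the data and bump a sequence count around each update, while readers use the low bit of the sequence to pick the copy not currently being written. This is a minimal sketch with C11 atomics, not the kernel's seqcount_latch_t (which adds barrier and instrumentation subtleties elided here):

```c
#include <assert.h>
#include <stdatomic.h>

/* Two copies of the data plus a sequence counter; a simplified stand-in
 * for the kernel's seqcount_latch_t machinery. */
struct latch_data {
	atomic_uint seq;
	unsigned long copy[2];
};

static void latch_write(struct latch_data *d, unsigned long val)
{
	unsigned int seq = atomic_load_explicit(&d->seq, memory_order_relaxed);

	/* First increment: readers move off the copy we are about to touch. */
	atomic_store_explicit(&d->seq, ++seq, memory_order_release);
	d->copy[(seq & 1) ^ 1] = val;

	/* Second increment: readers move onto the freshly written copy. */
	atomic_store_explicit(&d->seq, ++seq, memory_order_release);
	d->copy[(seq & 1) ^ 1] = val;
}

static unsigned long latch_read(struct latch_data *d)
{
	unsigned int seq;
	unsigned long val;

	do {
		seq = atomic_load_explicit(&d->seq, memory_order_acquire);
		val = d->copy[seq & 1];
	} while (atomic_load_explicit(&d->seq, memory_order_acquire) != seq);

	return val;
}
```

The point of the latch over a plain seqcount is that readers never have to wait for a writer, which is why it suits NMI-safe time readers like cyc2ns.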
On Thu, Sep 03, 2020 at 08:19:38AM -0700, Guenter Roeck wrote:
> This doesn't compile for me - there is no "name" parameter in __DO_TRACE().
>
> Error log:
> In file included from ./include/linux/rculist.h:11,
> from ./include/linux/pid.h:5,
> from
On Thu, Sep 03, 2020 at 04:36:35PM +0200, Ulf Hansson wrote:
> On Thu, 3 Sep 2020 at 15:53, wrote:
> > static int cpu_pm_notify(enum cpu_pm_event event)
> > {
> > int ret;
> >
> > + lockdep_assert_irqs_disabled();
>
> Nitpick, maybe the lockdep should be moved to a separate
On Wed, Sep 02, 2020 at 05:58:55PM +0200, Ulf Hansson wrote:
> On Wed, 2 Sep 2020 at 14:14, wrote:
> >
> > On Wed, Sep 02, 2020 at 09:03:37AM +0200, Ulf Hansson wrote:
> > > Lots of cpuidle drivers are using CPU_PM notifiers (grep for
> > > cpu_pm_enter and you will see) from their idlestates
On Thu, Sep 03, 2020 at 04:00:47PM +0200, pet...@infradead.org wrote:
> I stuck a tracepoint in intel_idle and had a rummage around. The below
> seems to work for me now.
Note that this will insta-kill all trace_*_rcuidle() tracepoints that are
actually used in rcuidle.
A git-grep seems to
On Wed, Sep 02, 2020 at 06:57:36AM -0700, Guenter Roeck wrote:
> On 9/2/20 1:56 AM, pet...@infradead.org wrote:
> > On Tue, Sep 01, 2020 at 08:51:46PM -0700, Guenter Roeck wrote:
> >
> >> [ 27.056457] include/trace/events/lock.h:13 suspicious
> >> rcu_dereference_check() usage!
> >
> >> [
On Wed, Sep 02, 2020 at 10:35:34PM +, gengdongjiu wrote:
> > NAK, that tracepoint is already broken, we don't want to proliferate the
> > broken.
>
> Sorry, what do you mean by "that tracepoint is already broken"?
Just that, the tracepoint is crap. But we can't fix it because ABI. Did
I tell
On Thu, Sep 03, 2020 at 11:07:28AM +0900, Masahiro Yamada wrote:
> Contributors stop caring after their code is merged,
> but maintaining it is tiring.
This seems to hold in general :/
> Will re-implementing your sorting logic
> in bash look cleaner?
Possibly, I can try, we'll see.
> Or, in
On Wed, Sep 02, 2020 at 06:55:01PM +0200, Borislav Petkov wrote:
> On Wed, Sep 02, 2020 at 06:45:38PM +0200, pet...@infradead.org wrote:
> > We really should clear the CPUID bits when the kernel explicitly
> > disables things.
>
> Actually, you want to *disable* the functionality behind it by
On Wed, Sep 02, 2020 at 09:52:33AM -0700, Dave Hansen wrote:
> On 9/2/20 9:45 AM, pet...@infradead.org wrote:
> > On Thu, Aug 27, 2020 at 03:49:03PM +0800, Feng Tang wrote:
> >> End users frequently want to know what features their processor
> >> supports, independent of what the kernel supports.
On Wed, Sep 02, 2020 at 03:32:18PM +, Nadav Amit wrote:
> Thanks for the pointer. I did not see the discussion, and embarrassingly, I have
> also never figured out how to reply on lkml emails without registering to
> lkml.
The lore.kernel.org thing I pointed you to allows you to download an
mbox
On Thu, Aug 27, 2020 at 03:49:03PM +0800, Feng Tang wrote:
> End users frequently want to know what features their processor
> supports, independent of what the kernel supports.
>
> /proc/cpuinfo is great. It is omnipresent and since it is provided by
> the kernel it is always as up to date as
On Wed, Sep 02, 2020 at 06:24:27PM +0200, Jürgen Groß wrote:
> On 02.09.20 17:58, Brian Gerst wrote:
> > On Wed, Sep 2, 2020 at 9:38 AM Peter Zijlstra wrote:
> > >
> > > From: Peter Zijlstra
> > >
> > > The WARN added in commit 3c73b81a9164 ("x86/entry, selftests: Further
> > > improve user
On Thu, Sep 03, 2020 at 12:58:14AM +0900, Masahiro Yamada wrote:
> Sorry for the long delay.
>
> First, this patch breaks 'make TAGS'
> if 'etags' is a symlink to exuberant ctags.
>
>
> masahiro@oscar:~/ref/linux$ etags --version
> Exuberant Ctags 5.9~svn20110310, Copyright (C) 1996-2009
During the LPC RCU BoF Paul asked how come the "USED" <- "IN-NMI"
detector doesn't trip over rcu_read_lock()'s lockdep annotation.
Looking into this I found a very embarrassing typo in
verify_lock_unused():
- if (!(class->usage_mask & LOCK_USED))
+ if (!(class->usage_mask &
On Wed, Sep 02, 2020 at 02:21:27PM +0100, Leo Yan wrote:
> The system register CNTVCT_EL0 can be used to retrieve the counter from
> user space. Add rdtsc() for Arm64.
> +u64 rdtsc(void)
> +{
> + u64 val;
Would it make sense to put in a comment noting that this counter is/could be
'short'?
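A "short" counter here means one narrower than 64 bits, so consumers have to compute deltas modulo the counter width or a wrap between two reads yields a huge bogus delta. A hedged sketch (COUNTER_BITS is an assumed example width, not taken from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed example width of the hardware counter; the real CNTVCT width
 * is implementation defined. */
#define COUNTER_BITS 56
#define COUNTER_MASK ((UINT64_C(1) << COUNTER_BITS) - 1)

static uint64_t counter_delta(uint64_t now, uint64_t prev)
{
	/* Subtraction modulo 2^COUNTER_BITS handles a single wrap correctly. */
	return (now - prev) & COUNTER_MASK;
}
```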
On Wed, Sep 02, 2020 at 10:19:26PM +0900, Masami Hiramatsu wrote:
> On Wed, 2 Sep 2020 11:36:13 +0200
> pet...@infradead.org wrote:
>
> > On Wed, Sep 02, 2020 at 05:17:55PM +0900, Masami Hiramatsu wrote:
> >
> > > > Ok, but then lockdep will yell at you if you have that enabled and run
> > > >
On Tue, Sep 01, 2020 at 09:18:57AM -0700, Nadav Amit wrote:
> Unless I misunderstand the logic, __force_order should also be used by
> rdpkru() and wrpkru() which do not have dependency on __force_order. I
> also did not understand why native_write_cr0() has R/W dependency on
> __force_order, and
On Wed, Sep 02, 2020 at 08:37:15PM +0800, Boqun Feng wrote:
> On Wed, Sep 02, 2020 at 12:14:12PM +0200, pet...@infradead.org wrote:
> > > To be accurate, atomic_set() doesn't return any value, so it cannot be
> > > ordered against DR and DW ;-)
> >
> > Surely DW is valid for any store.
> >
>
>
On Wed, Sep 02, 2020 at 09:03:37AM +0200, Ulf Hansson wrote:
> Lots of cpuidle drivers are using CPU_PM notifiers (grep for
> cpu_pm_enter and you will see) from their idlestates ->enter()
> callbacks. And for those we are already calling
> rcu_irq_enter_irqson|off() in cpu_pm_notify() when firing
On Wed, Sep 02, 2020 at 07:16:37PM +0900, Masami Hiramatsu wrote:
> Is the data format in the section the same as the others?
All 3 sections (mcount, jump_label and static_call) have different
layout.
On Wed, Sep 02, 2020 at 11:54:48AM +0800, Boqun Feng wrote:
> On Mon, Aug 31, 2020 at 11:20:34AM -0700, paul...@kernel.org wrote:
> > From: "Paul E. McKenney"
> >
> > This commit adds a key entry enumerating the various types of relaxed
> > operations.
> >
> > Signed-off-by: Paul E. McKenney
>
On Sun, Aug 30, 2020 at 07:31:39AM -0500, Eric W. Biederman wrote:
> pet...@infradead.org writes:
> > Could we check privs twice instead?
> >
> > Something like the completely untested below..
>
> That might work.
>
> I am thinking that for cases where we want to do significant work it
> might
On Wed, Sep 02, 2020 at 10:35:08AM +0900, Masami Hiramatsu wrote:
> On Tue, 18 Aug 2020 15:57:43 +0200
> Peter Zijlstra wrote:
>
> > Similar to how we disallow kprobes on any other dynamic text
> > (ftrace/jump_label) also disallow kprobes on inline static_call()s.
>
> Looks good to me.
>
>
On Wed, Sep 02, 2020 at 08:00:24AM +0200, Juri Lelli wrote:
> On 31/08/20 13:07, Lucas Stach wrote:
> > When a boosted task gets throttled, what normally happens is that it's
> > immediately enqueued again with ENQUEUE_REPLENISH, which replenishes the
> > runtime and clears the dl_throttled flag.
On Wed, Sep 02, 2020 at 05:17:55PM +0900, Masami Hiramatsu wrote:
> > Ok, but then lockdep will yell at you if you have that enabled and run
> > the unoptimized things.
>
> Oh, does it warn for all spinlock things in kprobes if it is unoptimized?
> Hmm, it has to be noted in the documentation.
On Wed, Sep 02, 2020 at 11:09:35AM +0200, pet...@infradead.org wrote:
> On Tue, Sep 01, 2020 at 09:21:37PM -0700, Guenter Roeck wrote:
> > [0.00] WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:4875
> > check_flags.part.39+0x280/0x2a0
> > [0.00]
On Tue, Sep 01, 2020 at 09:21:37PM -0700, Guenter Roeck wrote:
> [0.00] WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:4875
> check_flags.part.39+0x280/0x2a0
> [0.00] DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())
> [0.00] [<004cff18>]
On Tue, Sep 01, 2020 at 08:51:46PM -0700, Guenter Roeck wrote:
> [ 27.056457] include/trace/events/lock.h:13 suspicious
> rcu_dereference_check() usage!
> [ 27.057006] Hardware name: Generic OMAP3-GP (Flattened Device Tree)
> [ 27.057098] [] (unwind_backtrace) from []
>
On Tue, Sep 01, 2020 at 08:51:46PM -0700, Guenter Roeck wrote:
> On Fri, Aug 21, 2020 at 10:47:49AM +0200, Peter Zijlstra wrote:
> > The lockdep tracepoints are under the lockdep recursion counter, this
> > has a bunch of nasty side effects:
> >
> > - TRACE_IRQFLAGS doesn't work across the
On Thu, Aug 27, 2020 at 06:30:01PM -0400, Joel Fernandes wrote:
> On Thu, Aug 27, 2020 at 09:47:48AM +0200, pet...@infradead.org wrote:
> > All trace_*_rcuidle() and RCU_NONIDLE() usage is a bug IMO.
> >
> > Ideally RCU-trace goes away too.
>
> I was thinking that unless the rcu_idle_enter/exit
On Wed, Sep 02, 2020 at 09:37:39AM +0900, Masami Hiramatsu wrote:
> On Tue, 1 Sep 2020 21:08:08 +0200
> Peter Zijlstra wrote:
>
> > On Sat, Aug 29, 2020 at 09:59:49PM +0900, Masami Hiramatsu wrote:
> > > Masami Hiramatsu (16):
> > > kprobes: Add generic kretprobe trampoline handler
> > >
On Tue, Sep 01, 2020 at 12:06:30PM -0300, Arnaldo Carvalho de Melo wrote:
> Also you mixed up tools/ with include/ things, the perf part of the
> kernel is maintained by Ingo, PeterZ.
Right, it helps if the right people are on Cc.
> Peter, the patch is the one below, I'll collect th
On Tue, Sep 01, 2020 at 09:44:17AM -0600, Lina Iyer wrote:
> > > > > > I could add RCU_NONIDLE for the calls to
> > > > > > pm_runtime_put_sync_suspend()
> > > > > > and pm_runtime_get_sync() in psci_enter_domain_idle_state(). Perhaps
> > > > > > that's the easiest approach, at least to start
On Tue, Sep 01, 2020 at 02:35:52PM +0200, Ulf Hansson wrote:
> On Tue, 1 Sep 2020 at 12:42, wrote:
> > That said; I pushed the rcu_idle_enter() about as deep as it goes into
> > generic code in commit 1098582a0f6c ("sched,idle,rcu: Push rcu_idle
> > deeper into the idle path")
>
> Aha, that
On Tue, Sep 01, 2020 at 08:50:57AM +0200, Ulf Hansson wrote:
> On Tue, 1 Sep 2020 at 08:46, Ulf Hansson wrote:
> > On Mon, 31 Aug 2020 at 21:44, Paul E. McKenney wrote:
> > > > [5.308588] =
> > > > [5.308593] WARNING: suspicious RCU usage
> > > > [
On Fri, Aug 28, 2020 at 02:13:43PM +, albert.li...@gmail.com wrote:
> @@ -82,6 +83,8 @@ __copy_from_user_inatomic(void *to, const void __user
> *from, unsigned long n)
> static __always_inline __must_check unsigned long
> __copy_from_user(void *to, const void __user *from, unsigned long n)
On Sun, Aug 30, 2020 at 07:31:39AM -0500, Eric W. Biederman wrote:
> I am thinking that for cases where we want to do significant work it
> might be better to ask the process to pause at someplace safe (probably
> get_signal) and then do all of the work when we know nothing is changing
> in the
On Sun, Aug 30, 2020 at 11:54:19AM -0700, Linus Torvalds wrote:
> On Sun, Aug 30, 2020 at 11:04 AM Thomas Gleixner wrote:
> >
> > - Make is_idle_task() __always_inline to prevent the compiler from putting
> >it out of line into the wrong section because it's used inside noinstr
> >
On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote:
> On 8/28/20 4:51 PM, Peter Zijlstra wrote:
> > So where do things go side-ways?
> During hotplug stress test, we have noticed that while a sibling is in
> pick_next_task, another sibling can go offline or come online. What
> we
On Sat, Aug 29, 2020 at 11:01:55AM +0900, Masami Hiramatsu wrote:
> On Fri, 28 Aug 2020 21:29:55 +0900
> Masami Hiramatsu wrote:
>
> > From: Peter Zijlstra
>
> In the next version I will drop this since I will merge the kretprobe_holder
> things into the patch removing the kretprobe hash.
>
>
On Sat, Aug 29, 2020 at 03:37:26AM +0900, Masami Hiramatsu wrote:
> cd /sys/kernel/debug/tracing/
>
> echo r:schedule schedule >> kprobe_events
> echo 1 > events/kprobes/enable
>
> sleep 333
Thanks! That does indeed trigger it reliably. Let me go have dinner and
then I'll try and figure out
On Fri, Aug 28, 2020 at 04:46:52PM +0200, Oleg Nesterov wrote:
> On 08/27, Peter Zijlstra wrote:
> >
> > 1 file changed, 129 insertions(+)
>
> 129 lines! And I spent more than 2 hours trying to understand these
> 129 lines ;) looks correct...
Yes, even though it already has a bunch of comments,
On Sat, Aug 29, 2020 at 12:10:10AM +0900, Masami Hiramatsu wrote:
> On Fri, 28 Aug 2020 14:52:36 +0200
> pet...@infradead.org wrote:
> > > synchronize_rcu();
> >
> > This one might help, this means we can do rcu_read_lock() around
> > get_kretprobe() and it's usage. Can we call rp->handler()
On Fri, Aug 28, 2020 at 02:11:18PM +, eddy...@trendmicro.com wrote:
> > From: Masami Hiramatsu
> >
> > OK, the schedule function will be the key. I guess the scenario is..
> >
> > 1) kretprobe replaces the return address with kretprobe_trampoline on
> > task1's kernel stack
> > 2) the task1 forks
On Fri, Aug 28, 2020 at 10:51:13PM +0900, Masami Hiramatsu wrote:
> OK, the schedule function will be the key. I guess the scenario is..
>
> 1) kretprobe replaces the return address with kretprobe_trampoline on task1's
> kernel stack
> 2) the task1 forks task2 before returning to the
On Fri, Aug 28, 2020 at 01:11:15PM +, eddy...@trendmicro.com wrote:
> > -Original Message-
> > From: Peter Zijlstra
> > Sent: Friday, August 28, 2020 12:13 AM
> > To: linux-kernel@vger.kernel.org; mhira...@kernel.org
> > Cc: Eddy Wu (RD-TW) ; x...@kernel.org;
> > da...@davemloft.net;
On Fri, Aug 28, 2020 at 02:31:31PM +0100, Mark Rutland wrote:
> Hi,
>
> On Fri, Aug 28, 2020 at 09:27:22PM +0900, Masami Hiramatsu wrote:
> > Use the generic kretprobe trampoline handler, and use the
> > kernel_stack_pointer(regs) for framepointer verification.
> >
> > Signed-off-by: Masami
If you do this, can you merge this into the previous patch and then
delete the sched try_to_invoke..() patch?
Few comments below.
On Fri, Aug 28, 2020 at 09:30:17PM +0900, Masami Hiramatsu wrote:
> +static nokprobe_inline struct kretprobe *get_kretprobe(struct
> kretprobe_instance *ri)
> +{
On Fri, Aug 28, 2020 at 07:01:17AM -0500, Eric W. Biederman wrote:
> This feels like an issue where perf can just do too much under
> exec_update_mutex. In particular calling kern_path from
> create_local_trace_uprobe. Calling into the vfs at the very least
> makes it impossible to know exactly
On Fri, Aug 28, 2020 at 05:02:46PM +0530, Vamshi K Sthambamkadi wrote:
> On i386, the order of parameters passed in registers is eax, edx, and ecx
> (as per regparm(3) calling conventions).
>
> Change the mapping in regs_get_kernel_argument(), so that arg1=ax
> arg2=dx, and arg3=cx.
>
> Running the
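A userspace sketch of the corrected mapping, using an offset table the way the kernel's helper does; the struct and function names here are simplified stand-ins for pt_regs and regs_get_kernel_argument(), not the kernel code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for pt_regs, keeping only the registers that
 * carry arguments under i386 regparm(3). */
struct fake_pt_regs {
	unsigned long ax, dx, cx;
};

#define NR_REG_ARGUMENTS 3

static unsigned long regs_get_argument(struct fake_pt_regs *regs, unsigned int n)
{
	/* arg1 -> eax, arg2 -> edx, arg3 -> ecx, per regparm(3). */
	static const size_t off[NR_REG_ARGUMENTS] = {
		offsetof(struct fake_pt_regs, ax),
		offsetof(struct fake_pt_regs, dx),
		offsetof(struct fake_pt_regs, cx),
	};

	if (n >= NR_REG_ARGUMENTS)
		return 0;
	return *(unsigned long *)((char *)regs + off[n]);
}
```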
On Fri, Aug 28, 2020 at 08:00:19PM +1000, Nicholas Piggin wrote:
> Closing this race only requires interrupts to be disabled while ->mm
> and ->active_mm are being switched, but the TLB problem requires also
> holding interrupts off over activate_mm. Unfortunately not all archs
> can do that yet,
On Fri, Aug 28, 2020 at 11:41:29AM +0200, Jan Kara wrote:
> On Fri 28-08-20 11:07:29, pet...@infradead.org wrote:
> > On Fri, Aug 28, 2020 at 02:07:12PM +0800, Xianting Tian wrote:
> > > As the normal aio wait path(read_events() ->
> > > wait_event_interruptible_hrtimeout()) doesn't account iowait
On Fri, Aug 28, 2020 at 06:13:41PM +0900, Masami Hiramatsu wrote:
> On Fri, 28 Aug 2020 10:48:51 +0200
> pet...@infradead.org wrote:
>
> > On Thu, Aug 27, 2020 at 06:12:44PM +0200, Peter Zijlstra wrote:
> > > struct kretprobe_instance {
> > > union {
> > > + /*
> > > + * Dodgy
On Fri, Aug 28, 2020 at 02:07:12PM +0800, Xianting Tian wrote:
> As the normal aio wait path(read_events() ->
> wait_event_interruptible_hrtimeout()) doesn't account iowait time, so use
> this patch to make it to account iowait time, which can truly reflect
> the system io situation when using a
On Fri, Aug 28, 2020 at 03:07:09AM +0200, Ahmed S. Darwish wrote:
> +#define __SEQ_RT IS_ENABLED(CONFIG_PREEMPT_RT)
> +
> +SEQCOUNT_LOCKTYPE(raw_spinlock, raw_spinlock_t, false,s->lock,
> raw_spin, raw_spin_lock(s->lock))
> +SEQCOUNT_LOCKTYPE(spinlock, spinlock_t,
On Fri, Aug 28, 2020 at 03:07:09AM +0200, Ahmed S. Darwish wrote:
> +/*
> + * Automatically disable preemption for seqcount_LOCKTYPE_t writers, if the
> + * associated lock does not implicitly disable preemption.
> + *
> + * Don't do it for PREEMPT_RT. Check __SEQ_LOCK().
> + */
> +#define
On Thu, Aug 27, 2020 at 06:12:44PM +0200, Peter Zijlstra wrote:
> struct kretprobe_instance {
> union {
> + /*
> + * Dodgy as heck, this relies on not clobbering freelist::refs.
> + * llist: only clobbers freelist::next.
> + * rcu: clobbers
On Fri, Aug 28, 2020 at 03:00:59AM +0900, Masami Hiramatsu wrote:
> On Thu, 27 Aug 2020 18:12:40 +0200
> Peter Zijlstra wrote:
>
> > +static void invalidate_rp_inst(struct task_struct *t, struct kretprobe *rp)
> > +{
> > + struct invl_rp_ipi iri = {
> > + .task = t,
> > +
On Fri, Aug 28, 2020 at 03:07:08AM +0200, Ahmed S. Darwish wrote:
> #define __read_seqcount_begin(s) \
> +({ \
> + unsigned seq; \
> +
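The reader-side contract that __read_seqcount_begin() implements can be illustrated with a simplified userspace seqcount. This is a hedged sketch using C11 atomics; the names mirror the kernel API but this is not the kernel implementation (no lockdep, no preemption handling):

```c
#include <assert.h>
#include <stdatomic.h>

struct seqcount {
	atomic_uint sequence;	/* odd while a writer is in progress */
};

static unsigned int read_seqcount_begin(struct seqcount *s)
{
	unsigned int seq;

	/* Spin until no writer is active (sequence is even). */
	while ((seq = atomic_load_explicit(&s->sequence,
					   memory_order_acquire)) & 1)
		;
	return seq;
}

static int read_seqcount_retry(struct seqcount *s, unsigned int seq)
{
	/* Nonzero if a writer ran during the read section. */
	return atomic_load_explicit(&s->sequence, memory_order_acquire) != seq;
}

static void write_seqcount_begin(struct seqcount *s)
{
	atomic_fetch_add_explicit(&s->sequence, 1, memory_order_release);
}

static void write_seqcount_end(struct seqcount *s)
{
	atomic_fetch_add_explicit(&s->sequence, 1, memory_order_release);
}
```

Readers loop: begin, copy the data, retry if the sequence changed; writers just bump the counter around their update.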
On Fri, Aug 28, 2020 at 03:07:07AM +0200, Ahmed S. Darwish wrote:
> Differentiate the first group by using "__seqcount_t_" as prefix. This
> also conforms with the rest of seqlock.h naming conventions.
> #define __seqprop_case(s, locktype, prop)\
>
On Fri, Aug 28, 2020 at 03:07:06AM +0200, Ahmed S. Darwish wrote:
> At seqlock.h, sequence counters with associated locks are either called
> seqcount_LOCKNAME_t, seqcount_LOCKTYPE_t, or seqcount_locktype_t.
>
> Standardize on "seqcount_LOCKTYPE_t" for all instances in comments,
> kernel-doc, and
On Thu, Aug 27, 2020 at 12:49:20PM -0400, Cameron wrote:
> For what it's worth, the freelist.h code seems to be a faithful adaptation
> of my original blog post code. Didn't think it would end up in the Linux
> kernel one day :-)
Hehe, I ran into the traditional ABA problem for the lockless stack
On Thu, Aug 27, 2020 at 06:12:43PM +0200, Peter Zijlstra wrote:
> +struct freelist_node {
> + atomic_trefs;
> + struct freelist_node*next;
> +};
Bah, the next patch relies on this structure to be ordered the other
way around.
Clearly writing code and listening to
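The ordering constraint being alluded to: the follow-up patch overlays a freelist_node with an llist_node in a union inside kretprobe_instance, and llist only ever writes its single 'next' pointer, so freelist_node must put 'next' first for 'refs' to survive an llist_add(). A simplified stand-in (field names follow the quoted patch; the union is a hypothetical reduction of kretprobe_instance):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* 'next' first, so it overlays llist_node::next and 'refs' is untouched
 * by llist operations -- the reverse of the ordering in the quoted hunk. */
struct freelist_node {
	struct freelist_node *next;
	atomic_int refs;
};

struct llist_node {
	struct llist_node *next;
};

union ri_overlay {
	struct freelist_node freelist;
	struct llist_node llist;
};
```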
On Thu, Aug 27, 2020 at 08:37:49PM +0900, Masami Hiramatsu wrote:
> Free kretprobe_instance with rcu callback instead of directly
> freeing the object in the kretprobe handler context.
>
> This will make kretprobe run safer in NMI context.
>
> Signed-off-by: Masami Hiramatsu
> ---
>
On Thu, Aug 27, 2020 at 08:37:49PM +0900, Masami Hiramatsu wrote:
> +void recycle_rp_inst(struct kretprobe_instance *ri)
Also note that at this point there is no external caller of this
function anymore.
On Thu, Aug 27, 2020 at 11:50:07AM +0300, Andy Shevchenko wrote:
> On Thu, Aug 27, 2020 at 10:57 AM tip-bot2 for Valentin Schneider
> wrote:
> >
> > The following commit has been merged into the sched/core branch of tip:
>
> > Fixes: b6e862f38672 ("sched/topology: Define and assign sched_domain
On Tue, Aug 25, 2020 at 10:15:55PM +0900, Masami Hiramatsu wrote:
> Yeah, kretprobe already provided the per-instance data (as far as
> I know, only systemtap depends on it). We need to provide it for
> such users.
Well, systemtap is out of tree, we don't _need_ to provide anything for
them.
On Wed, Aug 26, 2020 at 09:24:19PM -0400, Joel Fernandes wrote:
> On Wed, Aug 26, 2020 at 09:18:26PM -0400, Joel Fernandes wrote:
> > On Fri, Aug 21, 2020 at 10:47:41AM +0200, Peter Zijlstra wrote:
> > > Lots of things take locks, due to a wee bug, rcu_lockdep didn't notice
> > > that the locking
On Thu, Aug 27, 2020 at 12:04:05AM +0900, Masami Hiramatsu wrote:
> > Argh, I replied to the wrong variant, I mean the one that uses
> > kernel_stack_pointer(regs).
>
> Would you mean using kernel_stack_pointer() for the frame_pointer?
> Some arch will be OK, but others can not get the
On Wed, Aug 26, 2020 at 04:08:52PM +0200, pet...@infradead.org wrote:
> On Wed, Aug 26, 2020 at 10:46:43PM +0900, Masami Hiramatsu wrote:
> > static __used __kprobes void *trampoline_handler(struct pt_regs *regs)
> > {
> > + return (void *)kretprobe_trampoline_handler(regs,
> > +
On Wed, Aug 26, 2020 at 10:46:43PM +0900, Masami Hiramatsu wrote:
> static __used __kprobes void *trampoline_handler(struct pt_regs *regs)
> {
> + return (void *)kretprobe_trampoline_handler(regs,
> + (unsigned long)_trampoline,
> +
On Wed, Aug 26, 2020 at 07:00:41PM +0900, Masami Hiramatsu wrote:
> Of course, this doesn't solve the llist_del_first() contention in the
> pre_kretprobe_handler(). So anyway we need a lock for per-probe llist
> (if I understand llist.h comment correctly.)
Bah, lemme think about that. Kprobes
On Thu, Aug 06, 2020 at 02:04:38PM +0200, pet...@infradead.org wrote:
>
> One long standing annoyance I have with using vim-tags is that our tags
> file is not properly sorted. That is, the sorting Exuberant Ctags does
> is only on the tag itself.
>
> The problem with that is that, for example,
Cc: Thomas Bogendoerfer
Cc: Paul Burton
Reported-by: kernel test robot
Signed-off-by: Peter Zijlstra (Intel)
---
arch/mips/include/asm/irqflags.h |5 +
1 file changed, 5 insertions(+)
--- a/arch/mips/include/asm/irqflags.h
+++ b/arch/mips/include/asm/irqflags.h
@@ -137,6 +137,11
On Tue, Aug 25, 2020 at 08:48:41AM -0700, Paul E. McKenney wrote:
> > Paul, I wanted to use this function, but found it has very weird
> > semantics.
> >
> > Why do you need it to (remotely) call @func when p is current? The user
> > in rcu_print_task_stall() explicitly bails in this case, and
On Wed, Aug 26, 2020 at 11:01:02AM +0200, pet...@infradead.org wrote:
> Known broken archs include: Sparc32-SMP, PARISC, ARC-v1-SMP.
> There might be a few more, but I've forgotten.
Note that none of those actually have NMIs and llist is mostly OK on
those architectures too.
The problem is when
On Wed, Aug 26, 2020 at 07:07:09AM +, eddy...@trendmicro.com wrote:
> llist operations require an atomic cmpxchg; for arches that don't have
> CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG, the in_nmi() check might still be needed.
> (HAVE_KRETPROBES && !CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG): arc, arm,
> csky, mips
>
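The operation at stake is llist's push, a single compare-exchange loop; that one cmpxchg is exactly what must be NMI-safe. A userspace sketch with C11 atomics (simplified; the kernel's llist_add() additionally reports whether the list was empty):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct lnode {
	struct lnode *next;
};

struct lhead {
	_Atomic(struct lnode *) first;
};

static void llist_push(struct lhead *h, struct lnode *n)
{
	struct lnode *first = atomic_load_explicit(&h->first,
						   memory_order_relaxed);

	/* Classic lock-free push: link to the current head, then try to
	 * swing the head pointer; retry if another pusher raced us. */
	do {
		n->next = first;
	} while (!atomic_compare_exchange_weak_explicit(&h->first, &first, n,
							memory_order_release,
							memory_order_relaxed));
}
```

Push-only use is safe from any context that has the cmpxchg; it is mixing in deletion that brings back ABA-style trouble.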
On Tue, Aug 25, 2020 at 10:59:54PM +0900, Masami Hiramatsu wrote:
> On Tue, 25 Aug 2020 15:30:05 +0200
> pet...@infradead.org wrote:
>
> > On Tue, Aug 25, 2020 at 10:15:55PM +0900, Masami Hiramatsu wrote:
> >
> > > > damn... one last problem is dangling instances.. so close.
> > > > We can
On Tue, Aug 25, 2020 at 03:30:05PM +0200, pet...@infradead.org wrote:
> On Tue, Aug 25, 2020 at 10:15:55PM +0900, Masami Hiramatsu wrote:
> > OK, this looks good to me too.
> > I'll make a series to rewrite kretprobe based on this patch, OK?
>
> Please, I'll send the fix along when I have it.
On Tue, Aug 25, 2020 at 10:15:55PM +0900, Masami Hiramatsu wrote:
> > damn... one last problem is dangling instances.. so close.
> > We can apparently unregister a kretprobe while there's still active
> > kretprobe_instance's out referencing it.
>
> Yeah, kretprobe already provided the
+Cc Paul, who was weirdly forgotten last time
And one additional question below, which made me remember this thing.
On Wed, Jul 29, 2020 at 02:58:11PM +0200, pet...@infradead.org wrote:
> > rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> > rcu:Tasks blocked on level-0 rcu_node
On Tue, Aug 25, 2020 at 03:15:38PM +0900, Masami Hiramatsu wrote:
> From 24390dffe6eb9a3e95f7d46a528a1dcfd716dc81 Mon Sep 17 00:00:00 2001
> From: Masami Hiramatsu
> Date: Tue, 25 Aug 2020 01:37:00 +0900
> Subject: [PATCH] kprobes/x86: Fixes NMI context check on x86
>
> Since commit
On Tue, Aug 25, 2020 at 03:15:03AM +0900, Masami Hiramatsu wrote:
> > I did the below, but i'm not at all sure that isn't horrible broken. I
> > can't really find many rp->lock sites and this might break things by
> > limiting contention.
>
> This is not enough.
I was afraid of that..
> For
On Mon, Aug 24, 2020 at 03:22:06PM +0100, Andrew Cooper wrote:
> On 24/08/2020 11:14, pet...@infradead.org wrote:
> > The WARN added in commit 3c73b81a9164 ("x86/entry, selftests: Further
> > improve user entry sanity checks") unconditionally triggers on my IVB
> > machine because it does not
On Mon, Aug 24, 2020 at 12:26:01PM +0100, Andrew Cooper wrote:
> > INT1 is a trap,
> > instruction breakpoint is a fault
> >
> > So if you have:
> >
> > INT1
> > 1: some-instr
> >
> > and set an X breakpoint on 1, we'll lose the INT1, right?
>
> You should get two. First with a dr6 of 0
On Sun, Aug 23, 2020 at 04:09:42PM -0700, Andy Lutomirski wrote:
> On Fri, Aug 21, 2020 at 3:21 AM Peter Zijlstra wrote:
> >
> > Get rid of the two variables, avoid computing si_code when not needed
> > and be consistent about which dr6 value is used.
> >
>
> > - if (tsk->thread.debugreg6
The WARN added in commit 3c73b81a9164 ("x86/entry, selftests: Further
improve user entry sanity checks") unconditionally triggers on my IVB
machine because it does not support SMAP.
For !SMAP hardware we patch out CLAC/STAC instructions and thus if
userspace sets AC, we'll still have it set
On Sat, Aug 22, 2020 at 09:04:09AM -0700, Michel Lespinasse wrote:
> Hi,
>
> I am wondering about how to describe the following situation to lockdep:
>
> - lock A would be something that's already implemented (a mutex or
> possibly a spinlock).
> - lock B is a range lock, which I would be
On Fri, Aug 21, 2020 at 05:03:34PM -0400, Steven Rostedt wrote:
> > Sigh. Is it too hard to make mutex_trylock() usable from interrupt
> > context?
>
>
> That's a question for Thomas and Peter Z.
You should really know that too, the TL;DR answer is it's fundamentally
buggered, can't work.
On Fri, Aug 21, 2020 at 11:09:51AM +0530, Aneesh Kumar K.V wrote:
> Peter Zijlstra writes:
>
> > For SMP systems using IPI based TLB invalidation, looking at
> > current->active_mm is entirely reasonable. This then presents the
> > following race condition:
> >
> >
> > CPU0
| 18 ++---
> include/linux/mmu_context.h |5 ++
> kernel/locking/lockdep.c | 18 +
> kernel/sched/idle.c | 25 +
> 18 files changed, 118 insertions(+), 122 deletions(-)
Whole set also available at:
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/wip
> - fixed notifier order (Josh, Daniel)
> - tested kgdb
Whole set also available at:
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git x86/debug
On Fri, Aug 21, 2020 at 11:39:20AM +0200, Peter Zijlstra wrote:
> Remove the historical junk and replace it with a WARN and a comment.
>
> The problem is that even though the kernel only uses TF single-step in
> kprobes and KGDB, both of which consume the event before this,
> QEMU/KVM has bugs in
On Fri, Aug 21, 2020 at 12:13:44PM +0100, Will Deacon wrote:
> On Wed, Jul 01, 2020 at 01:57:20PM +0800, qiang.zh...@windriver.com wrote:
> > From: Zqiang
> >
> > Remove WQ_FLAG_EXCLUSIVE from "wq_entry.flags", using function
> > __add_wait_queue_entry_tail_exclusive substitution.
> >
> >
On Thu, Aug 20, 2020 at 04:28:28PM +0100, Daniel Thompson wrote:
> Specifically I've entered the kdb in pretty much the simplest way
> possible: a direct call to kgdb_breakpoint() from a task context. I
> generate a backtrace to illustrate this, just to give you a better
> understanding of what
On Fri, Aug 21, 2020 at 08:30:43AM +0200, Marco Elver wrote:
> With KCSAN enabled, prandom_u32() may be called from any context,
> including idle CPUs.
>
> Therefore, switch to using trace_prandom_u32_rcuidle(), to avoid various
> issues due to recursion and lockdep warnings when KCSAN and
On Thu, Aug 20, 2020 at 07:50:50PM -0700, Sean Christopherson wrote:
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index 70dea93378162..fd915c46297c5 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -842,8 +842,13 @@