On 05/20, Andrii Nakryiko wrote:
>
> Fixes: 1b8f85defbc8 ("uprobes: prepare uprobe args buffer lazily")
> Reported-by: Breno Leitao
> Signed-off-by: Andrii Nakryiko
> ---
> kernel/trace/trace_uprobe.c | 14 +-
> 1 file changed, 9 insertions(+), 5 deletions(-)
ions to update values on shadow stack and using
> them in uprobe code to keep shadow stack in sync with uretprobe
> changes to user stack.
I don't think my ack has any value in this area but looks good to me.
Reviewed-by: Oleg Nesterov
> Fixes: 8b1c23543436 ("x86/shst
On 05/15, Edgecombe, Rick P wrote:
>
> On Wed, 2024-05-15 at 13:35 +0200, Oleg Nesterov wrote:
> >
> > > I'm ok with not using optimized uretprobe when shadow stack is detected
> > > as enabled and we go with current uretprobe in that case
> >
> > But how
On 05/15, Jiri Olsa wrote:
>
> On Wed, May 15, 2024 at 01:19:20PM +0200, Oleg Nesterov wrote:
> > Let me ask a couple of really stupid questions. What if the shadow stack
> > is "shorter" than the normal stack? I mean,
> >
> > enable_shstk()
> >
Let me repeat I know nothing about shadow stacks, only tried to
read Documentation/arch/x86/shstk.rst a few minutes ago ;)
On 05/13, Jiri Olsa wrote:
>
> 1) current uretprobe which are not working at the moment and we change
>the top value of shadow stack with shstk_push_frame
> 2) optimized
Sorry for the late reply, I was on PTO.
On 05/14, Deepak Gupta wrote:
>
> Question,
>
> Is it kernel who is maintaining all return probes, meaning original return
> addresses
> are saved in kernel data structures on per task basis.
Yes. task_struct->utask->return_instances
See
On 04/21, Jiri Olsa wrote:
>
> arch/x86/entry/syscalls/syscall_64.tbl | 1 +
> include/linux/syscalls.h | 2 ++
> include/uapi/asm-generic/unistd.h | 5 -
> kernel/sys_ni.c | 2 ++
> 4 files changed, 9 insertions(+), 1 deletion(-)
On 04/21, Jiri Olsa wrote:
>
> arch/x86/kernel/uprobes.c | 115 ++
> include/linux/uprobes.h | 3 +
> kernel/events/uprobes.c | 24 +---
> 3 files changed, 135 insertions(+), 7 deletions(-)
Reviewed-by: Oleg Nesterov
On 04/08, Jiri Olsa wrote:
>
> On Fri, Apr 05, 2024 at 01:02:30PM +0200, Oleg Nesterov wrote:
> >
> > And what should sys_uretprobe() do if it is not called from the trampoline?
> > I'd prefer force_sig(SIGILL) to punish the abuser ;) OK, OK, EINVAL.
>
> so the simil
On 04/06, Masami Hiramatsu wrote:
>
> On Fri, 5 Apr 2024 13:02:30 +0200
> Oleg Nesterov wrote:
>
> > With or without this patch userpace can also do
> >
> > foo() { <-- retprobe1
> > bar() {
&
On 04/05, Jiri Olsa wrote:
>
> On Fri, Apr 05, 2024 at 10:22:03AM +0900, Masami Hiramatsu wrote:
> >
> > I think this expects setjmp/longjmp as below
> >
> > foo() { <- retprobe1
> > setjmp()
> > bar() { <- retprobe2
> > longjmp()
> > }
> > } <- return to trampoline
> >
> >
On 04/05, Masami Hiramatsu wrote:
>
> Can we make this syscall and uprobe behavior clearer? As you said, if
> the application use sigreturn or longjump, it may skip returns and
> shadow stack entries are left in the kernel. In such cases, can uretprobe
> detect it properly, or just crash the
Again, I leave this to you and Jiri, but
On 04/03, Masami Hiramatsu wrote:
>
> On Wed, 3 Apr 2024 11:47:41 +0200
> > > set in the user function, what happen if the user function directly
> > > calls this syscall? (maybe it consumes shadow stack?)
> >
> > the process should receive SIGILL if
I leave this to you and Masami, but...
On 04/03, Jiri Olsa wrote:
>
> On Wed, Apr 03, 2024 at 10:07:08AM +0900, Masami Hiramatsu wrote:
> >
> > This is interesting approach. But I doubt we need to add additional
> > syscall just for this purpose. Can't we use another syscall or ioctl?
>
> so the
> kernel/trace/trace_uprobe.c | 103 +---
> 1 file changed, 59 insertions(+), 44 deletions(-)
Reviewed-by: Oleg Nesterov
Again, looks good to me, but I have a minor nit. Feel free to ignore.
On 03/12, Andrii Nakryiko wrote:
>
> static void __uprobe_trace_func(struct trace_uprobe *tu,
> unsigned long func, struct pt_regs *regs,
> - struct uprobe_cpu_buffer
LGTM, one nit below.
On 03/12, Andrii Nakryiko wrote:
>
> +static struct uprobe_cpu_buffer *prepare_uprobe_buffer(struct trace_uprobe
> *tu,
> +struct pt_regs *regs)
> +{
> + struct uprobe_cpu_buffer *ucb;
> + int dsize, esize;
> +
> +
I forgot everything about this code, plus it has changed a lot since
I looked at it many years ago, but ...
I think this change is fine but the changelog looks a bit confusing
(overcomplicated) to me.
On 03/12, Andrii Nakryiko wrote:
>
> This patch adds a speculative check before grabbing that
On 04/16, He Zhe wrote:
>
> --- a/arch/arm64/include/asm/syscall.h
> +++ b/arch/arm64/include/asm/syscall.h
> @@ -44,7 +44,12 @@ static inline long syscall_get_error(struct task_struct
> *task,
> static inline long syscall_get_return_value(struct task_struct *task,
>
On 04/15, He Zhe wrote:
>
>
> On 4/15/21 12:55 AM, Oleg Nesterov wrote:
> > I think in_compat_syscall() should be used instead.
> >
> > But this doesn't matter, I still can't understand the problem.
>
> Sorry for not enough clarification.
>
> This was found
On 04/13, Andrei Vagin wrote:
>
> +static void swap_mm(struct mm_struct *prev_mm, struct mm_struct *target_mm)
> +{
> + struct task_struct *tsk = current;
> + struct mm_struct *active_mm;
> +
> + task_lock(tsk);
> + /* Hold off tlb flush IPIs while switching mm's */
> +
On 04/14, David Laight wrote:
>
> From: Oleg Nesterov
> > Sent: 14 April 2021 16:08
> >
> > Add audit maintainers...
> >
> > On 04/14, He Zhe wrote:
> > >
> > > When 32-bit userspace application is running on 64-bit kernel, the 32-bit
>
Add audit maintainers...
On 04/14, He Zhe wrote:
>
> When 32-bit userspace application is running on 64-bit kernel, the 32-bit
> syscall return code would be changed from u32 to u64 in regs_return_value
> and then changed to s64. Hence the negative return code would be treated
> as a positive
Hi Igor,
sorry for delay...
On 04/12, Igor Zhbanov wrote:
>
> Hi Oleg,
>
> So what is the cause of this problem?
The cause is clear. And well known ;) And again, this has almost nothing to do
with the mutual debugging.
The tracee sleeps in ptrace_stop(). You send SIGKILL. This wakes the tracee
On 04/08, Jens Axboe wrote:
>
> On 4/8/21 3:46 AM, Peter Zijlstra wrote:
> >
> > do_each_pid_thread() { } while_each_pid_thread() is a double loop and
> > thus break doesn't work as expected. Also, it should be used under
> > tasklist_lock because otherwise we can race against change_pid() for
> >
On 04/07, Peter Zijlstra wrote:
>
> On Tue, Apr 06, 2021 at 07:43:53PM +0200, Oleg Nesterov wrote:
> > On 04/06, Oleg Nesterov wrote:
> > >
> > > perf_mmap_close() was added by 9bb5d40cd93c9 ("perf: Fix mmap()
> > > accounting hole"
k_work_match_t" ?
Either way,
Reviewed-by: Oleg Nesterov
On 04/06, Oleg Nesterov wrote:
>
> perf_mmap_close() was added by 9bb5d40cd93c9 ("perf: Fix mmap() accounting
> hole")
I meant perf_mmap_close() -> put_event()
> and this commit doesn't look right anyway
It seems there is another problem or I am totally confused. I do
Song, Peter, please take a look.
On 04/02, Hillf Danton wrote:
>
> On Thu, 1 Apr 2021 12:53:26 Oleg Nesterov wrote:
> >
> >Hmm, please correct me, but I don't think so. I think mmap_lock ->
> >dup_mmap_sem
> >is not possible.
> >
> Given perf_tra
On 04/01, Hillf Danton wrote:
>
> If I dont misread it, the lockdep chain will likely evolve from
>
> event_mutex -> uprobe.register_rwsem -> dup_mmap_sem -> mm.mmap_lock ->
> event_mutex
> to
> dup_mmap_sem -> mm.mmap_lock -> dup_mmap_sem
>
> after this patch as both uprobe_register()
On 03/28, Hillf Danton wrote:
>
> On Sat, 27 Mar 2021 18:53:08 Oleg Nesterov wrote:
> >Hi Hillf,
> >
> >it seems that you already understand the problem ;) I don't.
>
> It is simpler than you thought - I always blindly believe what syzbot
> reported is true
On 03/29, Marco Elver wrote:
>
> So, per off-list discussion, it appears that I should ask to clarify:
> PF_EXISTING or PF_EXITING?
Aaah, sorry Marco.
PF_EXITING, of course.
Oleg.
> CapPrm:
> CapEff:
> CapBnd: 01ff
> CapAmb:
> NoNewPrivs: 0
> Seccomp:0
> Seccomp_filters:0
> Speculation_Store_Bypass: vulnerable
> SpeculationIndirectBranch
On 03/29, Igor Zhbanov wrote:
>
> Mutual debugging of 2 processes can get stuck in an unkillable stopped state
can't reproduce and can't understand...
> Hi!
>
> When one process, let's say "A", is tracing the another process "B", and the
> process "B" is trying to attach to the process "A", then both
On 03/29, Peter Zijlstra wrote:
>
> On Thu, Mar 25, 2021 at 09:14:39AM +0100, Marco Elver wrote:
> > @@ -6395,6 +6395,13 @@ static void perf_sigtrap(struct perf_event *event)
> > {
> > struct kernel_siginfo info;
> >
> > + /*
> > +* This irq_work can race with an exiting task; bail out
Hi Hillf,
it seems that you already understand the problem ;) I don't.
Could you explain in details how double __register is possible ? and how
it connects to this lockdep report?
On 03/27, Hillf Danton wrote:
>
> On Fri, 26 Mar 2021 03:29:19
> > Hello,
> >
> > syzbot found the following issue
Jens, sorry, I got lost :/
On 03/25, Jens Axboe wrote:
>
> With IO threads accepting signals, including SIGSTOP,
where can I find this change? Looks like I wasn't cc'ed...
> unmask the
> SIGSTOP signal from the default blocked mask.
>
> Signed-off-by: Jens Axboe
> ---
> kernel/fork.c | 2 +-
On 03/25, Linus Torvalds wrote:
>
> The whole "signals are very special for IO threads" thing has caused
> so many problems, that maybe the solution is simply to _not_ make them
> special?
Or may be IO threads should not abuse CLONE_THREAD?
Why does create_io_thread() abuse CLONE_THREAD ?
One
On 03/25, Eric W. Biederman wrote:
>
> So looking quickly the flip side of the coin is gdb (and other
> debuggers) needs a way to know these threads are special, so it can know
> not to attach.
may be,
> I suspect getting -EPERM (or possibly a different error code) when
> attempting attach is
I didn't even try to read this series yet, will try tomorrow.
But sorry, I can't resist...
On 03/25, Jens Axboe wrote:
>
> On 3/25/21 1:33 PM, Eric W. Biederman wrote:
> > Jens Axboe writes:
> >
> >> Hi,
> >>
> >> Stefan reports that attaching to a task with io_uring will leave gdb
> >> very
On 03/25, Peter Zijlstra wrote:
>
> static void ptrace_unfreeze_traced(struct task_struct *task)
> {
> - if (task->state != __TASK_TRACED)
> + if (READ_ONCE(task->__state) != __TASK_TRACED)
> return;
this change is correct,
> @@ -201,11 +201,11 @@ static void
Hi,
On 03/23, qianli zhao wrote:
>
> Hi,Oleg
>
> > You certainly don't understand me :/
>
> > Please read my email you quoted below. I didn't mean the current logic.
> > I meant the logic after your patch which moves atomic_dec_and_test() and
> > panic() before exit_signals().
>
> Sorry, I think
, 46 insertions(+), 2 deletions(-)
both patches look good to me, feel free to add
Reviewed-by: Oleg Nesterov
On 03/22, Thomas Gleixner wrote:
>
> +static void sigqueue_cache_or_free(struct sigqueue *q, bool cache)
> +{
> + /*
> + * Cache one sigqueue per task. This pairs with the consumer side
> + * in __sigqueue_alloc() and needs READ/WRITE_ONCE() to prevent the
> + * compiler from
ive--" and other variable are set in a
> different order(such as signal->live and PF_EXITING),this can cause
> abnormalities in the logic associated with these two variables,that is
> my thinking.
> Of course, check all the "signal->live--" path is definitely
> necessa
On 03/22, Oleg Nesterov wrote:
>
> On 03/22, qianli zhao wrote:
> >
> > Moving the decrement position should only affect between new and old
> > code position of movement of the decrement of
> > signal->live.
>
> Why do you think so? It can affect _any_
On 03/22, qianli zhao wrote:
>
> Moving the decrement position should only affect between new and old
> code position of movement of the decrement of
> signal->live.
Why do you think so? It can affect _any_ code which runs under
"if (group_dead)". Again, I don't see anything wrong, but I didn't
Hi,
It seems that we don't understand each other.
If we move atomic_dec_and_test(signal->live) and do
if (group_dead && is_global_init)
panic(...);
before setting PF_EXITING like your patch does, then zap_pid_ns_processes()
simply won't be called.
Because:
On 03/21,
On 03/20, Eric W. Biederman wrote:
>
> But please tell me why is it not the right thing to have the io_uring
> helper threads stop? Why is it ok for that process to go on consuming
> cpu resources in a non-stoppable way.
Yes, I have the same question ;)
Oleg.
On 03/20, Jens Axboe wrote:
>
> --- a/kernel/signal.c
> +++ b/kernel/signal.c
> @@ -2349,6 +2349,10 @@ static bool do_signal_stop(int signr)
>
> t = current;
> while_each_thread(current, t) {
> + /* don't STOP PF_IO_WORKER threads */
> +
On 03/20, Jens Axboe wrote:
>
> Hi,
>
> Been trying to ensure that we do the right thing wrt signals and
> PF_IO_WORKER threads
OMG. Do I understand correctly? create_io_thread() can create a sub-
thread of userspace process which acts as a kernel thread?
Looks like this is the recent feature I
On 03/19, qianli zhao wrote:
>
> I will think about the risks of movement of the decrement of
> signal->live before exit_signal().
> If it is difficult to judge whether the movement of the decrement of
> signal->live is safe, how about only testing 'signal->live == 1' instead of using group_dead?
>
> Such as:
> diff --git
On 03/19, qianli zhao wrote:
>
> > But then I don't understand the SIGNAL_GROUP_EXIT check added by your
> > patch. Do we really need it if we want to avoid zap_pid_ns_processes()
> > when the global init exits?
>
> I think check SIGNAL_GROUP_EXIT is necessary,or panic() will happen
> after all
On 03/18, Eric W. Biederman wrote:
>
> Oleg Nesterov writes:
>
> > On 03/18, qianli zhao wrote:
> >>
> >> In addition, the patch also protects the init process state to
> >> successfully get usable init coredump.
> >
> > Could you spell please
On 03/18, qianli zhao wrote:
>
> Hi,Oleg
>
> Thank you for your reply.
>
> >> When init sub-threads running on different CPUs exit at the same time,
> >> zap_pid_ns_processes()->BUG() may happen.
>
> > and why do you think your patch can't prevent this?
>
> > Sorry, I must have missed
On 03/17, Qianli Zhao wrote:
>
> From: Qianli Zhao
>
> When init sub-threads running on different CPUs exit at the same time,
> zap_pid_ns_processes()->BUG() may happen.
and why do you think your patch can't prevent this?
Sorry, I must have missed something. But it seems to me that you are
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 5abbe51a526253b9f003e9a0a195638dc882d660
Gitweb:
https://git.kernel.org/tip/5abbe51a526253b9f003e9a0a195638dc882d660
Author:Oleg Nesterov
AuthorDate:Mon, 01 Feb 2021 18:46:41 +01:00
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 8c150ba2fb5995c84a7a43848250d444a3329a7d
Gitweb:
https://git.kernel.org/tip/8c150ba2fb5995c84a7a43848250d444a3329a7d
Author:Oleg Nesterov
AuthorDate:Mon, 01 Feb 2021 18:47:09 +01:00
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 66c1b6d74cd7035e85c426f0af4aede19e805c8a
Gitweb:
https://git.kernel.org/tip/66c1b6d74cd7035e85c426f0af4aede19e805c8a
Author:Oleg Nesterov
AuthorDate:Mon, 01 Feb 2021 18:46:49 +01:00
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: b2e9df850c58c2b36e915e7d3bed3f6107cccba6
Gitweb:
https://git.kernel.org/tip/b2e9df850c58c2b36e915e7d3bed3f6107cccba6
Author:Oleg Nesterov
AuthorDate:Mon, 01 Feb 2021 18:47:16 +01:00
On 02/04, Oleg Nesterov wrote:
>
> It seems that nobody objects,
>
> Andrew, Andy, Thomas, how do you think this series should be routed?
ping...
What can I do to finally get this merged?
Should I resend once again? Whom?
> On 02/01, Oleg Nesterov wrote:
> >
>
o print it in KASAN reports */
> + kasan_record_aux_stack(work);
> +
> do {
> head = READ_ONCE(task->task_works);
> if (unlikely(head == &work_exited))
Acked-by: Oleg Nesterov
On 03/14, Jim Newsome wrote:
>
> Since v5:
> * Switched back to explicitly looking up by tgid and then pid.
OK, as I said I won't argue, still looks good to me.
Reviewed-by: Oleg Nesterov
On 03/15, Cyrill Gorcunov wrote:
>
> On Mon, Mar 15, 2021 at 01:08:03PM +0100, Oleg Nesterov wrote:
>
> >
> > And why task_lock(current) ? What does it try to protect?
>
> As far as I remember this was related to reading from procfs
> at time the patch was writ
On 03/14, Alexey Dobriyan wrote:
>
> prctl(PR_SET_MM, PR_SET_MM_AUXV, addr, 1);
>
> will copy 1 byte from userspace to (quite big) on-stack array
> and then stash everything to mm->saved_auxv.
I too don't understand, memcpy(mm->saved_auxv, user_auxv, len) will
copy 1 byte...
And why
On 03/12, Eric W. Biederman wrote:
>
> I am going to kibitz just a little bit more.
>
> When I looked at this a second time it became apparent that using
> pid_task twice should actually be faster as it removes a dependent load
> caused by thread_group_leader, and replaces it by accessing two
On 03/12, Thomas Gleixner wrote:
>
> On Fri, Mar 12 2021 at 20:26, Thomas Gleixner wrote:
> > On Fri, Mar 12 2021 at 17:11, Oleg Nesterov wrote:
> >> On 03/11, Thomas Gleixner wrote:
> >>>
> >>> @@ -456,7 +460,12 @@ static void __sigq
On 03/11, Jim Newsome wrote:
>
> I suppose even if the corruption of the register-values-themselves is
> acceptable, some synchronization may be needed to avoid the possibility
> of corrupting the kernel's data structures?
Yes, the kernel can crash. Just look at the comment above
On 03/11, Jim Newsome wrote:
>
> +static bool is_effectively_child(struct wait_opts *wo, bool ptrace,
> + struct task_struct *target)
> +{
> + struct task_struct *parent =
> + !ptrace ? target->real_parent : target->parent;
> +
> + return current ==
On 03/12, Qianli Zhao wrote:
>
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -767,6 +767,17 @@ void __noreturn do_exit(long code)
> validate_creds_for_do_exit(tsk);
>
> /*
> + * If global init has exited,
> + * panic immediately to get a usable coredump.
> + */
> +
On 03/12, Sebastian Andrzej Siewior wrote:
>
> On 2021-03-11 14:20:39 [+0100], Thomas Gleixner wrote:
> > --- a/kernel/signal.c
> > +++ b/kernel/signal.c
> > @@ -433,7 +433,11 @@ static struct sigqueue *
> > rcu_read_unlock();
> >
> > if (override_rlimit || likely(sigpending <=
On 03/11, Thomas Gleixner wrote:
>
> @@ -456,7 +460,12 @@ static void __sigqueue_free(struct sigqu
> return;
> if (atomic_dec_and_test(&q->user->sigpending))
> free_uid(q->user);
> - kmem_cache_free(sigqueue_cachep, q);
> +
> + /* Cache one sigqueue per task
On 03/10, Jim Newsome wrote:
>
> @@ -238,7 +238,7 @@ static int ptrace_check_attach(struct task_struct *child,
> bool ignore_state)
>* be changed by us so it's not changing right after this.
>*/
> read_lock(&tasklist_lock);
> - if (child->ptrace && child->parent == current) {
> +
On 03/10, Jim Newsome wrote:
>
> On 3/10/21 16:40, Eric W. Biederman wrote:
>
> >> +static int do_wait_pid(struct wait_opts *wo)
> >> +{
> >> + struct task_struct *target = pid_task(wo->wo_pid, PIDTYPE_PID);
> > ^
> > This is
On 03/10, Eric W. Biederman wrote:
>
> Jim Newsome writes:
>
> > +static int do_wait_pid(struct wait_opts *wo)
> > +{
> > + struct task_struct *target = pid_task(wo->wo_pid, PIDTYPE_PID);
> ^
> This is subtle change in
I don't think I can review this patch, I don't understand the problem space
well enough. But just in case, I see nothing wrong in this simple patch.
Feel free to add
Acked-by: Oleg Nesterov
On 02/26, Piotr Figiel wrote:
>
> For userspace checkpoint and restore (C/R) a way of getting p
On 03/10, Eric W. Biederman wrote:
>
> /* If global init has exited,
> * panic immediately to get a usable coredump.
> */
> if (unlikely(is_global_init(tsk) &&
> (thread_group_empty(tsk) ||
> (tsk->signal->flags & SIGNAL_GROUP_EXIT)))) {
>
o skip these scans
> and instead do an O(1) lookup. This improves performance when waiting on
> a pid from a thread group with many children and/or tracees.
>
> Signed-off-by: James Newsome
Reviewed-by: Oleg Nesterov
On 03/09, Qianli Zhao wrote:
>
> From: Qianli Zhao
>
> Once any init thread finds SIGNAL_GROUP_EXIT, trigger panic immediately
> instead of waiting until the last thread of global init has exited, and do
> not allow other init threads to exit; protect the task/memory state of all
> sub-threads to get a reliable init
Jim,
Thanks, the patch looks good to me. Yet I think you need to send V3 even
if I personally do not care ;) Please consider ./scripts/checkpatch.pl,
it reports all the coding-style problems I was going to mention.
I too have a couple of cosmetic nits, but feel free to ignore, this is
Ah, and you forgot to CC lkml ;) let me resend my email.
Hi Jim,
Please do not use the attachments, just send the patch as plain text.
See Documentation/process/submitting-patches.rst
On 03/08, Jim Newsome wrote:
>
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -1462,8 +1462,61 @@ static
On 03/04, Thomas Gleixner wrote:
>
> On Wed, Mar 03 2021 at 16:37, Oleg Nesterov wrote:
> > On 03/03, Sebastian Andrzej Siewior wrote:
> >>
> >> +static struct sigqueue *sigqueue_from_cache(struct task_struct *t)
> >> +{
> >> + struct sigqu
On 03/03, Sebastian Andrzej Siewior wrote:
>
> +static struct sigqueue *sigqueue_from_cache(struct task_struct *t)
> +{
> + struct sigqueue *q = t->sigqueue_cache;
> +
> + if (q && cmpxchg(&t->sigqueue_cache, q, NULL) == q)
> + return q;
> + return NULL;
> +}
> +
> +static
On 03/02, Sergei Trofimovich wrote:
>
> > --- a/arch/ia64/include/asm/syscall.h
> > +++ b/arch/ia64/include/asm/syscall.h
> > @@ -32,7 +32,7 @@ static inline void syscall_rollback(struct task_struct
> > *task,
> > static inline long syscall_get_error(struct task_struct *task,
> >
On 03/02, Alexei Starovoitov wrote:
>
> Especially if such tightening will come with performance boost for
> uprobe on a nop and unprobe at the start (which is typically push or
> alu on %sp).
> That would be a great step forward.
Just in case, nop and push are emulated without additional
On 03/02, Masami Hiramatsu wrote:
>
> > Not sure I understand you correctly, I know almost nothing about low-level
> > x86 magic.
>
> x86 has normal interrupt and NMI. When an NMI occurs the CPU masks NMI
> (the mask itself is hidden status) and IRET releases the mask. The problem
> is that if an
forgot to add Srikar, sorry for resend...
On 03/01, Andy Lutomirski wrote:
>
> On Mon, Mar 1, 2021 at 8:51 AM Oleg Nesterov wrote:
> >
> > But I guess this has nothing to do with uprobes, they do not single-step
> > in kernel mode, right?
>
> They single-step u
On 03/01, Andy Lutomirski wrote:
>
> On Mon, Mar 1, 2021 at 8:51 AM Oleg Nesterov wrote:
> >
> > But I guess this has nothing to do with uprobes, they do not single-step
> > in kernel mode, right?
>
> They single-step user code, though, and the code that makes this
Hi Andy,
sorry for delay.
On 02/23, Andy Lutomirski wrote:
>
> A while back, I let myself be convinced that kprobes genuinely need to
> single-step the kernel on occasion, and I decided that this sucked but
> I could live with it. it would, however, be Really Really Nice (tm)
> if we could have
On 02/04, Ravi Bangoria wrote:
>
> +static int get_instr(struct mm_struct *mm, unsigned long addr, u32 *instr)
> +{
> + struct page *page;
> + struct vm_area_struct *vma;
> + void *kaddr;
> + unsigned int gup_flags = FOLL_FORCE | FOLL_SPLIT_PMD;
> +
> + if
It seems that nobody objects,
Andrew, Andy, Thomas, how do you think this series should be routed?
On 02/01, Oleg Nesterov wrote:
>
> Somehow I forgot about this problem. Let me resend the last version
> based on discussion with Linus. IIRC he was agree with this series.
>
> An
On 02/02, Linus Torvalds wrote:
>
> On Tue, Feb 2, 2021 at 7:56 AM Oleg Nesterov wrote:
> >
> > There is the "erestartsys-trap-debugger" test in ptrace-tests suite.
> > Do you mean you want another test in tools/testing/selftests/ptrace ?
>
> No, I gues
On 02/01, Linus Torvalds wrote:
>
> On Mon, Feb 1, 2021 at 10:18 AM Linus Torvalds
> wrote:
> >
> > Yeah, looks sane to me.
>
> Oh, and in a perfect world we'd have a test for this condition too, no?
There is the "erestartsys-trap-debugger" test in ptrace-tests suite.
Do you mean you want
On 02/01, Andy Lutomirski wrote:
>
> On Mon, Feb 1, 2021 at 9:47 AM Oleg Nesterov wrote:
> >
> > The comment in get_nr_restart_syscall() says:
> >
> > * The problem is that we can get here when ptrace pokes
> > * syscall-like values into
r in ->arch_data.
Signed-off-by: Oleg Nesterov
---
arch/x86/include/asm/thread_info.h | 12 ++--
arch/x86/kernel/signal.c | 2 +-
include/linux/restart_block.h | 1 +
3 files changed, 4 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/thread_info.h
b/a
d the
'status' field back but TS_COMPAT was forgotten.
Signed-off-by: Oleg Nesterov
---
arch/x86/include/asm/processor.h | 9 -
arch/x86/include/asm/thread_info.h | 9 +
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/inc
s-trap-debugger.c:421
Reported-by: Jan Kratochvil
Signed-off-by: Oleg Nesterov
---
arch/x86/include/asm/thread_info.h | 14 +-
arch/x86/kernel/signal.c | 24 +---
2 files changed, 14 insertions(+), 24 deletions(-)
diff --git a/arch/x86/include/asm/thre
Preparation. Add the new helper which sets restart_block->fn and calls
the dummy arch_set_restart_data() helper.
Signed-off-by: Oleg Nesterov
---
fs/select.c | 10 --
include/linux/thread_info.h | 13 +
kernel/futex.c | 3 +--
ker
Somehow I forgot about this problem. Let me resend the last version
based on discussion with Linus. IIRC he agreed with this series.
And let me remind why 3/4 temporary adds the "transient" TS_COMPAT_RESTART
flag killed by the next patch: to simplify the backporting. 1-3 can fix
the problem
On 01/26, Steven Rostedt wrote:
>
> On Tue, 26 Jan 2021 21:20:59 +0100
> Oleg Nesterov wrote:
>
> > > No, not wrong. Even offset != 0, if the symbol exists in the kernel,
> > > kprobe_on_func_entry() will check it.
> >
> > Yes, but unless I am totally con