On Thu, Apr 18, 2019 at 05:42:55PM +0200, Thomas Gleixner wrote:
> On Thu, 18 Apr 2019, Josh Poimboeuf wrote:
> > Another idea I had (but never got a chance to work on) was to extend the
> > x86 unwind interface to all arches. So instead of the callbacks, each
> > arch would implement something l
On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> +typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
> + bool reliable);
> +void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> + st
On Fri, Apr 19, 2019 at 10:32:30AM +0200, Thomas Gleixner wrote:
> On Fri, 19 Apr 2019, Peter Zijlstra wrote:
> > On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> >
> > > +typedef bool (*stack_trace_consume_fn)(void *co
On Mon, Apr 22, 2019 at 10:27:45AM -0300, Mauro Carvalho Chehab wrote:
> .../{atomic_bitops.txt => atomic_bitops.rst} | 2 +
What's happened to atomic_t.txt? Also NAK, I still occasionally touch
these files.
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
On Tue, Apr 23, 2019 at 08:55:19AM -0400, Mike Snitzer wrote:
> On Tue, Apr 23 2019 at 4:31am -0400,
> Peter Zijlstra wrote:
>
> > On Mon, Apr 22, 2019 at 10:27:45AM -0300, Mauro Carvalho Chehab wrote:
> >
> > > .../{atomic_bitops.txt => atomic_bitops.rst}
On Tue, Apr 23, 2019 at 10:30:53AM -0600, Jonathan Corbet wrote:
> On Tue, 23 Apr 2019 15:01:32 +0200
> Peter Zijlstra wrote:
>
> > But yes, I have 0 motivation to learn or abide by rst. It simply doesn't
> > give me anything in return. There is no upside, only wor
On Tue, Apr 23, 2019 at 11:38:16PM +0200, Borislav Petkov wrote:
> If that is all the changes it would need, then I guess that's ok. Btw,
> those rst-conversion patches don't really show what got changed. Dunno
> if git can even show that properly. I diffed the two files by hand to
> see what got c
On Tue, Apr 23, 2019 at 11:53:49AM -0600, Jonathan Corbet wrote:
> > Look at crap like this:
> >
> > "The memory allocations via :c:func:`kmalloc`, :c:func:`vmalloc`,
> > :c:func:`kmem_cache_alloc` and"
> >
> > That should've been written like:
> >
> > "The memory allocations via kmalloc(), vmal
On Thu, Apr 18, 2019 at 10:41:37AM +0200, Thomas Gleixner wrote:
> There is only one caller of check_prev_add() which hands in a zeroed struct
> stack trace and a function pointer to save_stack(). Inside check_prev_add()
> the stack_trace struct is checked for being empty, which is always
> true. B
On Thu, Apr 18, 2019 at 10:41:38AM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces and storing the information is a small lockdep
> specific data structure.
>
Acked-by: Peter Zijlstra (Intel)
On Thu, Apr 25, 2019 at 11:45:11AM +0200, Thomas Gleixner wrote:
> There is only one caller which hands in save_trace as function pointer.
>
> Signed-off-by: Thomas Gleixner
Acked-by: Peter Zijlstra (Intel)
(sorry for cross-posting to moderated lists btw, I've since
acquired a patch to get_maintainers.pl that will exclude them
in the future)
On Tue, Jun 25, 2019 at 08:51:01AM +0100, David Howells wrote:
> Peter Zijlstra wrote:
>
> > I tried using wake_up_var() today and acc
Hi all,
I tried using wake_up_var() today and accidentally noticed that it
didn't imply an smp_mb() and specifically requires it through
wake_up_bit() / waitqueue_active().
Now, wake_up_bit() doesn't imply the barrier because it is assumed to be
used with the atomic bitops API which either implie
On Tue, Jun 25, 2019 at 02:12:22PM +0200, Andreas Gruenbacher wrote:
> > Only if we do as David suggested and make clean_and_wake_up_bit()
> > provide the RELEASE barrier.
>
> (It's clear_and_wake_up_bit, not clean_and_wake_up_bit.)
Yes, typing hard.
> > That is, currently clear_and_wake_up_bit
On Tue, Jun 25, 2019 at 11:19:35AM +0200, Andreas Gruenbacher wrote:
> > diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
> > index cf4c767005b1..29ea5da7 100644
> > --- a/fs/gfs2/glops.c
> > +++ b/fs/gfs2/glops.c
> > @@ -227,6 +227,7 @@ static void gfs2_clear_glop_pending(struct gfs2_inode
> >
On Mon, Sep 02, 2019 at 11:51:55PM -0700, Christoph Hellwig wrote:
> On Tue, Sep 03, 2019 at 12:05:58AM +0300, Alexey Dobriyan wrote:
> > 32-bit accesses are shorter than 64-bit accesses on x86_64.
> > Nothing uses 64-bitness of ->state.
> >
> > Space savings are ~2KB on F30 kernel config.
>
> I
On Tue, Feb 02, 2021 at 07:09:44PM -0800, Ivan Babrou wrote:
> On Thu, Jan 28, 2021 at 7:35 PM Ivan Babrou wrote:
> > ==
> > [ 128.368523][C0] BUG: KASAN: stack-out-of-bounds in
> > unwind_next_frame (arch/x86/kernel/unwind_orc.
On Wed, Feb 03, 2021 at 09:46:55AM -0800, Ivan Babrou wrote:
> > Can you pretty please not line-wrap console output? It's unreadable.
>
> GMail doesn't make it easy, I'll send a link to a pastebin next time.
> Let me know if you'd like me to regenerate the decoded stack.
Not my problem that you c
Remove yet another few p->state accesses.
Signed-off-by: Peter Zijlstra (Intel)
---
block/blk-mq.c|2 +-
include/linux/sched.h |2 ++
kernel/freezer.c |2 +-
kernel/sched/core.c |6 +++---
4 files changed, 7 insertions(+), 5 deletions(-)
--- a/block/blk-m
On Wed, Jun 02, 2021 at 10:06:58AM -0400, Mathieu Desnoyers wrote:
> - On Jun 2, 2021, at 9:12 AM, Peter Zijlstra pet...@infradead.org wrote:
> > @@ -134,14 +134,14 @@ struct task_group;
> > do {\
> >
On Wed, Jun 02, 2021 at 10:15:16AM -0400, Mathieu Desnoyers wrote:
> - On Jun 2, 2021, at 9:12 AM, Peter Zijlstra pet...@infradead.org wrote:
> [...]
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -8568,13 +8568,12 @@ static void perf
On Wed, Jun 02, 2021 at 10:01:29AM -0400, Mathieu Desnoyers wrote:
> - On Jun 2, 2021, at 9:12 AM, Peter Zijlstra pet...@infradead.org wrote:
>
> > Remove yet another few p->state accesses.
>
> [...]
>
> >
> > --- a/include/linux/sched.h
> > +++ b/
Change the type and name of task_struct::state. Drop the volatile and
shrink it to an 'unsigned int'. Rename it in order to find all uses
such that we can use READ_ONCE/WRITE_ONCE as appropriate.
Signed-off-by: Peter Zijlstra (Intel)
---
block/blk-mq.c |2 -
drive
On Wed, Jun 02, 2021 at 09:59:07AM -0400, Mathieu Desnoyers wrote:
> - On Jun 2, 2021, at 9:12 AM, Peter Zijlstra pet...@infradead.org wrote:
>
> > When run from the sched-out path (preempt_notifier or perf_event),
> > p->state is irrelevant to determine preemption.
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/time/timer.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1879,7 +1879,7 @@ signed long __sched schedule_timeout(sig
printk(KERN_ERR "schedule_ti
			(t->state != RUNNING)
				wake_up_process(t); // not done
	if (COND)
		break;
	schedule(); // forever waiting
}
t->state = TASK_RUNNING;
Signed-off-by: Peter Zijlstra (Intel)
---
drivers/net/ethernet/qualcomm/qca_spi.c |
Replace a bunch of 'p->state == TASK_RUNNING' with a new helper:
task_is_running(p).
Signed-off-by: Peter Zijlstra (Intel)
---
arch/x86/kernel/process.c |4 ++--
block/blk-mq.c|2 +-
include/linux/sched.h |2 ++
kernel/locking/lockdep.c |2 +
On Wed, Jun 02, 2021 at 03:59:21PM +0100, Will Deacon wrote:
> On Wed, Jun 02, 2021 at 03:12:27PM +0200, Peter Zijlstra wrote:
> > Replace a bunch of 'p->state == TASK_RUNNING' with a new helper:
> > task_is_running(p).
> >
> > Signed-off-by: Peter Zijlst
When run from the sched-out path (preempt_notifier or perf_event),
p->state is irrelevant to determine preemption. You can get preempted
with !task_is_running() just fine.
The right indicator for preemption is if the task is still on the
runqueue in the sched-out path.
Signed-off-by: Pe
Hi!
The task_struct::state variable is a bit odd in a number of ways:
- it's declared 'volatile' (against current practice);
- it's 'unsigned long', which is a weird size;
- its type is inconsistent when used for function arguments.
These patches clean that up by making it consistently 'unsi
On Wed, Jun 02, 2021 at 12:54:58PM -0700, Davidlohr Bueso wrote:
> On Wed, 02 Jun 2021, Peter Zijlstra wrote:
>
> -ENOCHANGELONG
I completely failed to come up with something useful, and still do. Subject
says it all.
> But yeah, I thought we had gotten rid of all these.
I too was
On Mon, Jun 07, 2021 at 11:45:00AM +0100, Daniel Thompson wrote:
> On Wed, Jun 02, 2021 at 03:12:31PM +0200, Peter Zijlstra wrote:
> > Change the type and name of task_struct::state. Drop the volatile and
> > shrink it to an 'unsigned int'. Rename it in order to find all
Hi all,
While grepping for PREEMPT_VOLUNTARY I ran into dm_bufio_cond_resched()
and wondered WTH it was about.
Is there anything wrong with the below patch?
---
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 8625040bae92..125aedc3875f 100644
--- a/drivers/md/dm-bufio.c
+++ b/dr
On Mon, Sep 19, 2016 at 05:49:07AM -0400, Mikulas Patocka wrote:
>
>
> On Tue, 13 Sep 2016, Peter Zijlstra wrote:
>
> > Hi all,
> >
> > While grepping for PREEMPT_VOLUNTARY I ran into dm_bufio_cond_resched()
> > and wondered WTH it was about.
>
> co
Here goes.
---
Subject: dm: Remove dm_bufio_cond_resched()
From: Peter Zijlstra
Date: Tue, 13 Sep 2016 10:45:20 +0200
Remove pointless local wrappery. Use cond_resched() like everybody else.
Cc: Ingo Molnar
Cc: Mikulas Patocka
Cc: Mike Snitzer
Cc: Alasdair Kergon
Acked-by: Thomas Gleixner
On Thu, Sep 22, 2016 at 10:59:30PM +0200, Thomas Gleixner wrote:
> On Thu, 22 Sep 2016, Mikulas Patocka wrote:
> > On Mon, 19 Sep 2016, Peter Zijlstra wrote:
> >
> > > On Tue, Sep 13, 2016 at 09:39:59AM -0400, Mike Snitzer wrote:
> > > > So I'm not
On Fri, Sep 23, 2016 at 10:00:37AM +0200, Thomas Gleixner wrote:
> On Fri, 23 Sep 2016, Peter Zijlstra wrote:
> > It is, might_sleep() implies might_resched(). In fact, that's all
> > PREEMPT_VOLUNTARY is: make the might_sleep() debug test imply a resched
> > poin
On Fri, Sep 23, 2016 at 02:17:10PM +0200, Mike Galbraith wrote:
> On Fri, 2016-09-23 at 10:00 +0200, Thomas Gleixner wrote:
> > On Fri, 23 Sep 2016, Peter Zijlstra wrote:
>
> > > Is anybody still using PREEMPT_NONE? Most workloads also care about
> > > latency to
On Fri, Sep 23, 2016 at 08:42:51AM -0400, Mike Snitzer wrote:
> On Fri, Sep 23 2016 at 8:26am -0400,
> Peter Zijlstra wrote:
>
> > On Fri, Sep 23, 2016 at 02:17:10PM +0200, Mike Galbraith wrote:
> > > On Fri, 2016-09-23 at 10:00 +0200, Thomas Gleixner wrote:
> >
On Tue, May 22, 2018 at 02:52:54PM -0400, Mike Snitzer wrote:
> On Tue, May 22 2018 at 2:34am -0400,
> Christoph Hellwig wrote:
> > Please CC the author and maintainers of the swait code.
> >
> > My impression is that this is the wrong thing to do. The swait code
> > is supposed to be simple a
s.
>
> A new consumer of swait (in dm-writecache) reduces its locking overhead
> by using the spinlock in swait_queue_head to protect not only the wait
> queue, but also the list of events. Consequently, this swait consuming
> kernel module needs to use these unlocked functions.
On Mon, Jun 04, 2018 at 12:39:11PM -0700, Linus Torvalds wrote:
> On Mon, Jun 4, 2018 at 12:37 PM Peter Zijlstra wrote:
> >
> > Would it help if we did s/swake_up/swake_up_one/g ?
> >
> > Then there would not be an swake_up() to cause confusion.
>
> Yes, i
On Mon, Jun 04, 2018 at 12:29:21PM -0700, Linus Torvalds wrote:
> On Mon, Jun 4, 2018 at 12:09 PM Mike Snitzer wrote:
> >
> > Mikulas elected to use swait because of the very low latency nature of
> > layering ontop of persistent memory. Use of "simple waitqueues"
> > _seemed_ logical to me.
>
>
On Mon, Jun 04, 2018 at 03:16:31PM -0700, Linus Torvalds wrote:
> We've always had that issue, and yes, we should handle it fine. Code
> that doesn't handle it fine is broken, but I don't think we've ever
> had that situation.
We've had a whole bunch of broken. We fixed a pile of them a few
years
On Mon, Jul 24, 2023 at 05:43:10PM +0800, Qi Zheng wrote:
> +void shrinker_unregister(struct shrinker *shrinker)
> +{
> + struct dentry *debugfs_entry;
> + int debugfs_id;
> +
> + if (!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))
> + return;
> +
> + down_write(