Michal Hocko wrote:
> On Thu 09-11-17 10:34:46, peter enderborg wrote:
> > On 11/09/2017 09:52 AM, Michal Hocko wrote:
> > > I am not sure. I would rather see a tracepoint to mark the allocator
> > > entry. This would allow both 1) measuring the allocation latency (to
> > > compare it to the
On Thu 09-11-17 10:34:46, peter enderborg wrote:
> On 11/09/2017 09:52 AM, Michal Hocko wrote:
> > I am not sure. I would rather see a tracepoint to mark the allocator
> > entry. This would allow both 1) measuring the allocation latency (to
> > compare it to the trace_mm_page_alloc and 2) check
[Please try to trim the context you are replying to]
On Wed 08-11-17 11:30:23, peter enderborg wrote:
[...]
> What about the idea to keep the function, but instead of printing only do a
> trace event.
I am not sure. I would rather see a tracepoint to mark the allocator
entry. This would allow
Hi Tetsuo,
Can you see if this patch helps your situation?
OK, for the rest of you. Let's have the showdown ;-)
This patch implements what I discussed in Kernel Summit. I added
lockdep annotation (hopefully correctly), and it hasn't had any splats
(since I fixed some bugs in the first
On Thu, 2 Nov 2017 17:53:13 +0900
Sergey Senozhatsky wrote:
> On (10/31/17 15:32), Steven Rostedt wrote:
> [..]
> > (new globals)
> > static DEFINE_SPIN_LOCK(console_owner_lock);
> > static struct task_struct console_owner;
> > static bool waiter;
> >
> > console_unlock() {
> >
> > [ Assumes
On Thu, 2 Nov 2017 12:46:50 +0100
Petr Mladek wrote:
> On Wed 2017-11-01 11:36:47, Steven Rostedt wrote:
> > On Wed, 1 Nov 2017 14:38:45 +0100
> > Petr Mladek wrote:
> > > My current main worry with Steven's approach is a risk of deadlocks
> > > that Jan Kara saw when he played with similar
On Tue 31-10-17 15:32:25, Steven Rostedt wrote:
>
> Thank you for the perfect timing. You posted this the day after I
> proposed a new solution at Kernel Summit in Prague for the printk lock
> loop that you experienced here.
>
> I attached the pdf that I used for that discussion (ignore the last
On Wed 2017-11-01 11:36:47, Steven Rostedt wrote:
> On Wed, 1 Nov 2017 14:38:45 +0100
> Petr Mladek wrote:
> > My current main worry with Steven's approach is a risk of deadlocks
> > that Jan Kara saw when he played with similar solution.
>
> And if there exists such a deadlock, then the
On (11/02/17 17:53), Sergey Senozhatsky wrote:
> On (10/31/17 15:32), Steven Rostedt wrote:
> [..]
> > (new globals)
> > static DEFINE_SPIN_LOCK(console_owner_lock);
> > static struct task_struct console_owner;
> > static bool waiter;
> >
> > console_unlock() {
> >
> > [ Assumes this part can
On (10/31/17 15:32), Steven Rostedt wrote:
[..]
> (new globals)
> static DEFINE_SPIN_LOCK(console_owner_lock);
> static struct task_struct console_owner;
> static bool waiter;
>
> console_unlock() {
>
> [ Assumes this part can not preempt ]
>
> spin_lock(console_owner_lock);
>
On Wed, 1 Nov 2017 18:42:25 +0100
Vlastimil Babka wrote:
> On 11/01/2017 04:33 PM, Steven Rostedt wrote:
> > On Wed, 1 Nov 2017 09:30:05 +0100
> > Vlastimil Babka wrote:
> >
> >>
> >> But still, it seems to me that the scheme only works as long as there
> >> are printk()'s coming with some
On 11/01/2017 04:33 PM, Steven Rostedt wrote:
> On Wed, 1 Nov 2017 09:30:05 +0100
> Vlastimil Babka wrote:
>
>>
>> But still, it seems to me that the scheme only works as long as there
>> are printk()'s coming with some reasonable frequency. There's still a
>> corner case when a storm of
On Wed, 1 Nov 2017 14:38:45 +0100
Petr Mladek wrote:
> This was my fear as well. Steven argued that this was theoretical.
> And I do not have a real-life bullets against this argument at
> the moment.
And my argument is still if such a situation happens, the system is so
fscked up that it
On Wed, 1 Nov 2017 09:30:05 +0100
Vlastimil Babka wrote:
>
> But still, it seems to me that the scheme only works as long as there
> are printk()'s coming with some reasonable frequency. There's still a
> corner case when a storm of printk()'s can come that will fill the ring
> buffers, and
On Wed 2017-11-01 09:30:05, Vlastimil Babka wrote:
> On 10/31/2017 08:32 PM, Steven Rostedt wrote:
> >
> > Thank you for the perfect timing. You posted this the day after I
> > proposed a new solution at Kernel Summit in Prague for the printk lock
> > loop that you experienced here.
> >
> > I
On 10/31/2017 08:32 PM, Steven Rostedt wrote:
>
> Thank you for the perfect timing. You posted this the day after I
> proposed a new solution at Kernel Summit in Prague for the printk lock
> loop that you experienced here.
>
> I attached the pdf that I used for that discussion (ignore the last
>
Thank you for the perfect timing. You posted this the day after I
proposed a new solution at Kernel Summit in Prague for the printk lock
loop that you experienced here.
I attached the pdf that I used for that discussion (ignore the last
slide, it was left over and I never went there).
My
On Thu, Oct 26, 2017 at 08:28:59PM +0900, Tetsuo Handa wrote:
> [...] it is possible to trigger OOM lockup and/or soft lockups when
> many threads concurrently called warn_alloc() (in order to warn
> about memory allocation stalls) due to current implementation of
> printk(), and it is difficult
On Thu 26-10-17 20:28:59, Tetsuo Handa wrote:
> Commit 63f53dea0c9866e9 ("mm: warn about allocations which stall for too
> long") was a great step for reducing possibility of silent hang up problem
> caused by memory allocation stalls. But this commit reverts it, for it is
> possible to trigger
Commit 63f53dea0c9866e9 ("mm: warn about allocations which stall for too
long") was a great step for reducing possibility of silent hang up problem
caused by memory allocation stalls. But this commit reverts it, for it is
possible to trigger OOM lockup and/or soft lockups when many threads