On Tue, May 02, 2017 at 09:44:33AM +0200, Michal Hocko wrote:
> On Mon 01-05-17 21:12:35, Marc MERLIN wrote:
> > Howdy,
> >
> > Well, sadly, the problem is more or less back in 4.11.0. The system
> > doesn't really crash, but it goes into an infinite loop with
> > [34776.826800] BUG:
On 2017/05/02 13:12, Marc MERLIN wrote:
> Well, sadly, the problem is more or less back in 4.11.0. The system
> doesn't really crash, but it goes into an infinite loop with
> [34776.826800] BUG: workqueue lockup - pool cpus=6 node=0 flags=0x0 nice=0 stuck for 33s!
Wow, two of workqueues are
On Mon 01-05-17 21:12:35, Marc MERLIN wrote:
> Howdy,
>
> Well, sadly, the problem is more or less back in 4.11.0. The system
> doesn't really crash, but it goes into an infinite loop with
> [34776.826800] BUG: workqueue lockup - pool cpus=6 node=0 flags=0x0 nice=0 stuck for 33s!
> More
Howdy,
Well, sadly, the problem is more or less back in 4.11.0. The system
doesn't really crash, but it goes into an infinite loop with
[34776.826800] BUG: workqueue lockup - pool cpus=6 node=0 flags=0x0 nice=0 stuck for 33s!
More logs: https://pastebin.com/YqE4riw0
(I upgraded from 4.8 with
On 12/01/2016 11:37 AM, Linus Torvalds wrote:
> On Thu, Dec 1, 2016 at 10:30 AM, Jens Axboe wrote:
>>
>> It's two different kinds of throttling. The vm absolutely should
>> throttle at dirty time, to avoid having insane amounts of memory dirty.
>> On the block layer side, throttling is about
On Thu, Dec 1, 2016 at 10:30 AM, Jens Axboe wrote:
>
> It's two different kinds of throttling. The vm absolutely should
> throttle at dirty time, to avoid having insane amounts of memory dirty.
> On the block layer side, throttling is about avoid the device queues
> being too long.
On Thu, Dec 1, 2016 at 10:30 AM, Jens Axboe wrote:
>
> It's two different kinds of throttling. The vm absolutely should
> throttle at dirty time, to avoid having insane amounts of memory dirty.
> On the block layer side, throttling is about avoiding the device queues
> being too long. It's very
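A conceptual sketch of the distinction Jens draws (all names and limits
below are illustrative, not actual kernel code): the VM throttles the
producer of dirty pages against a memory threshold, while the block layer
throttles the submitter of requests against a queue-depth limit, and either
side can apply back-pressure independently of the other:

#include <stdbool.h>

/* Illustrative limits, not the kernel's real defaults. */
#define DIRTY_LIMIT_BYTES	(256UL << 20)	/* cap on unwritten dirty data */
#define MAX_QUEUE_DEPTH		128		/* cap on requests in flight */

static unsigned long dirty_bytes;	/* memory dirtied, not yet written back */
static unsigned int in_flight;		/* requests currently queued at the device */

/* VM side: a task dirtying memory waits here, long before the device
 * queue is involved, so dirty memory cannot grow without bound. */
static bool vm_should_throttle(void)
{
	return dirty_bytes >= DIRTY_LIMIT_BYTES;
}

/* Block side: the writeback submitter waits here, so the device queue
 * never gets so deep that individual requests start timing out. */
static bool blk_should_throttle(void)
{
	return in_flight >= MAX_QUEUE_DEPTH;
}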
On 12/01/2016 11:16 AM, Linus Torvalds wrote:
> On Thu, Dec 1, 2016 at 5:50 AM, Kent Overstreet
> wrote:
>>
>> That said, I'm not sure how I feel about Jens's exact approach... it seems
>> to me that this can really just live within the writeback code, I don't know why it
>> should involve
On Thu, Dec 1, 2016 at 5:50 AM, Kent Overstreet wrote:
>
> That said, I'm not sure how I feel about Jens's exact approach... it seems
> to me that this can really just live within the writeback code, I don't know why it
> should involve the block layer at all. plus, if I understand correctly
On Wed, Nov 30, 2016 at 03:30:11PM -0500, Tejun Heo wrote:
> Hello,
>
> On Wed, Nov 30, 2016 at 10:14:50AM -0800, Linus Torvalds wrote:
> > Tejun/Kent - any way to just limit the workqueue depth for bcache?
> > Because that really isn't helping, and things *will* time out and
> > cause those
Hello,
On Wed, Nov 30, 2016 at 10:14:50AM -0800, Linus Torvalds wrote:
> Tejun/Kent - any way to just limit the workqueue depth for bcache?
> Because that really isn't helping, and things *will* time out and
> cause those problems when you have hundreds of IO's queued on a disk
> that likely has a
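For reference, the workqueue API does let a creator cap concurrency:
alloc_workqueue() takes a max_active argument. A minimal sketch of what
"limiting the depth" could look like (the queue name, flags, and the cap of
1 are assumptions for illustration, not bcache's actual code):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int __init example_init(void)
{
	/* max_active (the third argument) bounds how many work items
	 * from this workqueue execute concurrently (per CPU for bound
	 * workqueues); 1 serializes them, 0 picks the default. */
	example_wq = alloc_workqueue("bcache_example", WQ_MEM_RECLAIM, 1);
	if (!example_wq)
		return -ENOMEM;
	return 0;
}

Note that max_active bounds concurrent execution, not how much work can sit
queued, so on its own it would not shrink a backlog of hundreds of pending
I/Os.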
On 11/30/2016 11:14 AM, Linus Torvalds wrote:
> On Wed, Nov 30, 2016 at 9:47 AM, Marc MERLIN wrote:
>>
>> I gave it a thought again, I think it is exactly the nasty situation you
>> described.
>> bcache takes I/O quickly while sending to SSD cache. SSD fills up, now
>> bcache can't handle IO as
On Wed, Nov 30, 2016 at 10:14:50AM -0800, Linus Torvalds wrote:
> Anyway, none of this seems new per se. I'm adding Kent and Jens to the
> cc (Tejun already was), in the hope that maybe they have some idea how
> to control the nasty worst-case behavior wrt workqueue lockup (it's
> not really a
On Wed, Nov 30, 2016 at 9:47 AM, Marc MERLIN wrote:
>
> I gave it a thought again, I think it is exactly the nasty situation you
> described.
> bcache takes I/O quickly while sending to SSD cache. SSD fills up, now
> bcache can't handle IO as quickly and has to hang until the SSD has been
>
On Tue, Nov 29, 2016 at 10:01:10AM -0800, Linus Torvalds wrote:
> On Tue, Nov 29, 2016 at 9:40 AM, Marc MERLIN wrote:
> >
> > In my case, it is a 5x 4TB HDD with
> > software raid 5 < bcache < dmcrypt < btrfs
>
> It doesn't sound like the nasty situations I have seen (particularly
> with large
On 2016/11/30 8:01, Marc MERLIN wrote:
> And, after 5H of copying, not a single hang, or USB disconnect, or anything.
> Obviously this seems to point to other problems in the code, and I have no
> idea which layer is a culprit here, but reducing the buffers absolutely
> helped a lot.
Maybe you
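The excerpt does not show which buffers Marc reduced; a plausible sketch of
shrinking the writeback window from userspace follows (the procfs knobs are
the standard ones, but the byte values are guesses, not his settings):

#include <stdio.h>

/* Hypothetical helper: write one value into a procfs sysctl file. */
static int write_sysctl(const char *path, unsigned long bytes)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%lu\n", bytes);
	return fclose(f);
}

int main(void)
{
	/* Lower the writeback thresholds so far less dirty data can
	 * pile up in front of a slow device stack (values are guesses). */
	write_sysctl("/proc/sys/vm/dirty_bytes", 256UL << 20);
	write_sysctl("/proc/sys/vm/dirty_background_bytes", 64UL << 20);
	return 0;
}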
On Tue, Nov 29, 2016 at 09:40:19AM -0800, Marc MERLIN wrote:
> Thanks for the reply and suggestions.
>
> On Tue, Nov 29, 2016 at 09:07:03AM -0800, Linus Torvalds wrote:
> > On Tue, Nov 29, 2016 at 8:34 AM, Marc MERLIN wrote:
> > > Now, to be fair, this is not a new problem, it's just varying
On Tue, Nov 29, 2016 at 9:40 AM, Marc MERLIN wrote:
>
> In my case, it is a 5x 4TB HDD with
> software raid 5 < bcache < dmcrypt < btrfs
It doesn't sound like the nasty situations I have seen (particularly
with large USB flash storage - often high momentary speed for
benchmarks, but slows down
Thanks for the reply and suggestions.
On Tue, Nov 29, 2016 at 09:07:03AM -0800, Linus Torvalds wrote:
> On Tue, Nov 29, 2016 at 8:34 AM, Marc MERLIN wrote:
> > Now, to be fair, this is not a new problem, it's just varying degrees of
> > bad and usually only happens when I do a lot of I/O with
On Tue, Nov 29, 2016 at 8:34 AM, Marc MERLIN wrote:
> Now, to be fair, this is not a new problem, it's just varying degrees of
> bad and usually only happens when I do a lot of I/O with btrfs.
One situation where I've seen something like this happen is
(a) lots and lots of dirty data queued up
On Tue, Nov 29, 2016 at 05:25:15PM +0100, Michal Hocko wrote:
> On Tue 22-11-16 17:38:01, Greg KH wrote:
> > On Tue, Nov 22, 2016 at 05:14:02PM +0100, Vlastimil Babka wrote:
> > > On 11/22/2016 05:06 PM, Marc MERLIN wrote:
> > > > On Mon, Nov 21, 2016 at 01:56:39PM -0800, Marc MERLIN wrote:
> > >
On Tue, Nov 29, 2016 at 05:07:51PM +0100, Michal Hocko wrote:
> On Tue 29-11-16 07:55:37, Marc MERLIN wrote:
> > On Mon, Nov 28, 2016 at 08:23:15AM +0100, Michal Hocko wrote:
> > > Marc, could you try this patch please? I think it should be pretty clear
> > > it should help you but running it
On Tue 22-11-16 17:38:01, Greg KH wrote:
> On Tue, Nov 22, 2016 at 05:14:02PM +0100, Vlastimil Babka wrote:
> > On 11/22/2016 05:06 PM, Marc MERLIN wrote:
> > > On Mon, Nov 21, 2016 at 01:56:39PM -0800, Marc MERLIN wrote:
> > >> On Mon, Nov 21, 2016 at 10:50:20PM +0100, Vlastimil Babka wrote:
> >
On Mon, Nov 28, 2016 at 08:23:15AM +0100, Michal Hocko wrote:
> Marc, could you try this patch please? I think it should be pretty clear
> it should help you but running it through your use case would be more
> than welcome before I ask Greg to take this to the 4.8 stable tree.
>
> Thanks!
>
>
On Tue 29-11-16 07:55:37, Marc MERLIN wrote:
> On Mon, Nov 28, 2016 at 08:23:15AM +0100, Michal Hocko wrote:
> > Marc, could you try this patch please? I think it should be pretty clear
> > it should help you but running it through your use case would be more
> > than welcome before I ask Greg to
On Mon, Nov 28, 2016 at 08:23:15AM +0100, Michal Hocko wrote:
> Marc, could you try this patch please? I think it should be pretty clear
> it should help you but running it through your use case would be more
> than welcome before I ask Greg to take this to the 4.8 stable tree.
I ran it overnight
On Mon, Nov 28, 2016 at 08:23:15AM +0100, Michal Hocko wrote:
> Marc, could you try this patch please? I think it should be pretty clear
> it should help you but running it through your use case would be more
> than welcome before I ask Greg to take this to the 4.8 stable tree.
This will take a
On 11/22/2016 10:46 PM, Simon Kirby wrote:
On Tue, Nov 22, 2016 at 05:14:02PM +0100, Vlastimil Babka wrote:
On 11/22/2016 05:06 PM, Marc MERLIN wrote:
On Mon, Nov 21, 2016 at 01:56:39PM -0800, Marc MERLIN wrote:
On Mon, Nov 21, 2016 at 10:50:20PM +0100, Vlastimil Babka wrote:
4.9rc5 however
Marc, could you try this patch please? I think it should be pretty clear
it should help you but running it through your use case would be more
than welcome before I ask Greg to take this to the 4.8 stable tree.
Thanks!
On Wed 23-11-16 07:34:10, Michal Hocko wrote:
[...]
> commit
On 11/23/2016 07:34 AM, Michal Hocko wrote:
On Tue 22-11-16 11:38:47, Linus Torvalds wrote:
On Tue, Nov 22, 2016 at 8:14 AM, Vlastimil Babka wrote:
Thanks a lot for the testing. So what do we do now about 4.8? (4.7 is
already EOL AFAICS).
- send the patch [1] as 4.8-only stable.
I think
On Wed 23-11-16 14:53:12, Hillf Danton wrote:
> On Wednesday, November 23, 2016 2:34 PM Michal Hocko wrote:
> > @@ -3161,6 +3161,16 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
> > 	if (!order || order > PAGE_ALLOC_COSTLY_ORDER)
> > 		return
On Wednesday, November 23, 2016 2:34 PM Michal Hocko wrote:
> @@ -3161,6 +3161,16 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
> 	if (!order || order > PAGE_ALLOC_COSTLY_ORDER)
> 		return false;
>
> +#ifdef CONFIG_COMPACTION
> +	/*
>
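The preview cuts the hunk off at the opening comment; from the surrounding
discussion, the likely shape of the 4.8-only workaround is sketched below (a
reconstruction from context, not the verbatim patch: the comment wording is
assumed). Since the check above already returned false for order == 0 and
for costly orders, what remains is to keep retrying rather than let
unreliable compaction feedback declare OOM:

+#ifdef CONFIG_COMPACTION
+	/*
+	 * Assumed reconstruction: compaction feedback is not reliable
+	 * enough here to justify going OOM, so keep retrying for the
+	 * remaining (non-costly, non-zero) orders instead.
+	 */
+	return true;
+#endif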
On Tue 22-11-16 11:38:47, Linus Torvalds wrote:
> On Tue, Nov 22, 2016 at 8:14 AM, Vlastimil Babka wrote:
> >
> > Thanks a lot for the testing. So what do we do now about 4.8? (4.7 is
> > already EOL AFAICS).
> >
> > - send the patch [1] as 4.8-only stable.
>
> I think that's the right thing to
On Tue, Nov 22, 2016 at 05:14:02PM +0100, Vlastimil Babka wrote:
> On 11/22/2016 05:06 PM, Marc MERLIN wrote:
> > On Mon, Nov 21, 2016 at 01:56:39PM -0800, Marc MERLIN wrote:
> >> On Mon, Nov 21, 2016 at 10:50:20PM +0100, Vlastimil Babka wrote:
> 4.9rc5 however seems to be doing better, and
On Tue, Nov 22, 2016 at 8:14 AM, Vlastimil Babka wrote:
>
> Thanks a lot for the testing. So what do we do now about 4.8? (4.7 is
> already EOL AFAICS).
>
> - send the patch [1] as 4.8-only stable.
I think that's the right thing to do. It's pretty small, and the
argument that it changes the oom
On Tue, Nov 22, 2016 at 05:25:44PM +0100, Michal Hocko wrote:
> currently AFAIR. I hate that Marc is not falling into that category but
> is it really a problem for you to run with 4.9? If we have more users
Don't do anything just on my account. I had a problem, it's been fixed
in 2 different ways:
On Tue 22-11-16 17:14:02, Vlastimil Babka wrote:
> On 11/22/2016 05:06 PM, Marc MERLIN wrote:
> > On Mon, Nov 21, 2016 at 01:56:39PM -0800, Marc MERLIN wrote:
> >> On Mon, Nov 21, 2016 at 10:50:20PM +0100, Vlastimil Babka wrote:
> 4.9rc5 however seems to be doing better, and is still running
On 11/22/2016 05:06 PM, Marc MERLIN wrote:
> On Mon, Nov 21, 2016 at 01:56:39PM -0800, Marc MERLIN wrote:
>> On Mon, Nov 21, 2016 at 10:50:20PM +0100, Vlastimil Babka wrote:
4.9rc5 however seems to be doing better, and is still running after 18
hours. However, I got a few page allocation
On Mon, Nov 21, 2016 at 01:56:39PM -0800, Marc MERLIN wrote:
> On Mon, Nov 21, 2016 at 10:50:20PM +0100, Vlastimil Babka wrote:
> > > 4.9rc5 however seems to be doing better, and is still running after 18
> > > hours. However, I got a few page allocation failures as per below, but the
> > > system
On Mon, Nov 21, 2016 at 10:50:20PM +0100, Vlastimil Babka wrote:
> > 4.9rc5 however seems to be doing better, and is still running after 18
> > hours. However, I got a few page allocation failures as per below, but the
> > system seems to recover.
> > Vlastimil, do you want me to continue the copy
On 11/21/2016 04:43 PM, Marc MERLIN wrote:
> Howdy,
>
> As a followup to https://plus.google.com/u/0/+MarcMERLIN/posts/A3FrLVo3kc6
>
> http://pastebin.com/yJybSHNq and http://pastebin.com/B6xEH4Dw
> show a system with plenty of RAM (24GB) falling over and killing innocent
> user space apps, a