On Tue, 4 Sep 2018, Michal Hocko wrote:
> Regarding other workloads. AFAIR the problem was due to the
> wait_iff_congested in the direct reclaim. And I've been arguing that
> special casing __GFP_NORETRY is not a proper way to handle that case.
> We have PF_LESS_THROTTLE to handle cases
On Wed 01-08-18 10:48:00, jing xia wrote:
> We reproduced this issue again and found out the root cause.
> dm_bufio_prefetch() with dm_bufio_lock enters the direct reclaim and
> takes a long time to do the soft_limit_reclaim, because of the huge
> memory excess of the memcg.
> Then, all the tasks that do shrink_slab() wait for
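For readers following along, a minimal sketch of the dependency being
described. This is an assumed simplification, loosely modeled on
drivers/md/dm-bufio.c; struct client, prefetch_path and shrink_scan are
illustrative stand-ins, and dm_bufio_lock()/dm_bufio_unlock() in the real
driver are thin wrappers over the mutex shown here:

#include <linux/mutex.h>
#include <linux/shrinker.h>
#include <linux/slab.h>

/* Reduced stand-in for struct dm_bufio_client. */
struct client {
	struct mutex lock;
	struct shrinker shrinker;
};

/* Task A: the dm_bufio_prefetch() path, allocating under the client mutex. */
static void prefetch_path(struct client *c)
{
	void *p;

	mutex_lock(&c->lock);		/* dm_bufio_lock() */
	/*
	 * This allocation may enter direct reclaim and stall for a long
	 * time in memcg soft limit reclaim -- with the mutex still held.
	 */
	p = kmalloc(4096, GFP_NOIO | __GFP_NORETRY | __GFP_NOMEMALLOC |
		    __GFP_NOWARN);
	kfree(p);
	mutex_unlock(&c->lock);		/* dm_bufio_unlock() */
}

/*
 * kswapd and direct reclaimers: shrink_slab() -> dm-bufio shrinker,
 * which blocks on the same mutex until task A's reclaim finishes.
 */
static unsigned long shrink_scan(struct shrinker *s, struct shrink_control *sc)
{
	struct client *c = container_of(s, struct client, shrinker);

	mutex_lock(&c->lock);
	/* ... evict clean buffers ... */
	mutex_unlock(&c->lock);
	return 0;
}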
On Mon, 25 Jun 2018, Michal Hocko wrote:
> > And the throttling in dm-bufio prevents kswapd from making forward
> > progress, causing this situation...
>
> Which is what we have PF_LESS_THROTTLE for. Geez, do we have to go in
> circles like that? Are you even listening?
>
> [...]
>
> > And
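For context, a minimal sketch (not an actual posted patch) of the
PF_LESS_THROTTLE approach being argued for here, following the pattern
nfsd and the loop driver already use; less_throttled_alloc is a
hypothetical helper:

#include <linux/sched.h>
#include <linux/slab.h>

static void *less_throttled_alloc(size_t size, gfp_t gfp_mask)
{
	unsigned int pflags = current->flags;
	void *p;

	/* Tell the VM to throttle this task less during reclaim. */
	current->flags |= PF_LESS_THROTTLE;
	p = kmalloc(size, gfp_mask);
	current_restore_flags(pflags, PF_LESS_THROTTLE);

	return p;
}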
On Fri 22-06-18 08:44:52, Mikulas Patocka wrote:
> On Fri, 22 Jun 2018, Michal Hocko wrote:
[...]
> > Why? How are you going to audit all the callers to make sure the behavior
> > makes sense, and moreover how are you going to ensure that future usage
> > will still make sense? The more subtle side effects
On Fri, 22 Jun 2018, Michal Hocko wrote:
> On Fri 22-06-18 11:01:51, Michal Hocko wrote:
> > On Thu 21-06-18 21:17:24, Mikulas Patocka wrote:
> [...]
> > > What about this patch? If __GFP_NORETRY and __GFP_FS is not set (i.e. the
> > > request comes from a block device driver or a
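A hedged reconstruction of the condition that description implies (the
patch body itself is not shown in these excerpts; should_throttle is a
hypothetical helper, not the actual hunk):

#include <linux/gfp.h>

/*
 * Skip wait_iff_congested()-style throttling for callers that set
 * __GFP_NORETRY without __GFP_FS, i.e. allocations coming from block
 * device drivers that must not be stalled in direct reclaim.
 */
static bool should_throttle(gfp_t gfp_mask)
{
	if ((gfp_mask & __GFP_NORETRY) && !(gfp_mask & __GFP_FS))
		return false;
	return true;
}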
On Fri, 22 Jun 2018, Michal Hocko wrote:
> On Thu 21-06-18 21:17:24, Mikulas Patocka wrote:
> [...]
> > > But seriously, isn't the best way around the throttling issue to use
> > > PF_LESS_THROTTLE?
> >
> > Yes - it could be done by setting PF_LESS_THROTTLE. But I think it would
> > be better to change it just in one place than to add
On Tue, 19 Jun 2018, Michal Hocko wrote:
> On Mon 18-06-18 18:11:26, Mikulas Patocka wrote:
> [...]
> > I grepped the kernel for __GFP_NORETRY and triaged them. I found 16 cases
> > without a fallback - those are bugs that make various functions randomly
> > return -ENOMEM.
>
> Well, maybe
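For illustration, a small sketch (assumed, not taken from the thread) of
the fallback pattern that triage is checking for; alloc_elem is a
hypothetical caller:

#include <linux/slab.h>

static void *alloc_elem(struct kmem_cache *cache)
{
	void *p;

	/* Opportunistic attempt: fail fast instead of looping in reclaim. */
	p = kmem_cache_alloc(cache, GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
	if (p)
		return p;

	/*
	 * The fallback the 16 flagged callers lack: without it, the
	 * function randomly returns -ENOMEM under memory pressure.
	 */
	return kmem_cache_alloc(cache, GFP_NOIO);
}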
On Fri, 15 Jun 2018, Michal Hocko wrote:
> On Fri 15-06-18 07:35:07, Mikulas Patocka wrote:
> >
> > Because mempool uses it. Mempool uses allocations with "GFP_NOIO |
> > __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN". And so dm-bufio uses
> > these flags too. dm-bufio is just a big
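For reference, a usage sketch of the mempool behaviour being cited
(struct my_buf and the pool size are illustrative): mempool_alloc()
ORs __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN into the caller's
mask for its first allocation attempt, so a GFP_NOIO caller ends up with
exactly the flag combination quoted above before the allocator falls
back to the preallocated pool elements.

#include <linux/mempool.h>

struct my_buf { char data[512]; };

static mempool_t *buf_pool;

static int buf_pool_init(void)
{
	/* Keep 4 elements in reserve so allocation can always make progress. */
	buf_pool = mempool_create_kmalloc_pool(4, sizeof(struct my_buf));
	return buf_pool ? 0 : -ENOMEM;
}

static struct my_buf *buf_get(void)
{
	/* First try fails fast (__GFP_NORETRY), then waits on the pool. */
	return mempool_alloc(buf_pool, GFP_NOIO);
}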
Thanks for your comment.
On Wed, Jun 13, 2018 at 10:02 PM, Mikulas Patocka wrote:
> On Tue, 12 Jun 2018, Mike Snitzer wrote:
> [...]
Thanks for your comment, I appreciate it.
On Wed, Jun 13, 2018 at 5:20 AM, Mike Snitzer wrote:
> [...]
On Thu 14-06-18 15:18:58, jing xia wrote:
[...]
> PID: 22920 TASK: ffc0120f1a00 CPU: 1 COMMAND: "kworker/u8:2"
> #0 [ffc0282af3d0] __switch_to at ff8008085e48
> #1 [ffc0282af3f0] __schedule at ff8008850cc8
> #2 [ffc0282af450] schedule at ff8008850f4c
> #3
On Tue, Jun 12 2018 at 4:03am -0400,
Jing Xia wrote:
> Performance test in android reports that the phone sometimes hangs
> and shows a black screen for several minutes. The sysdump shows:
> 1. kswapd and other tasks that enter the direct-reclaim path are waiting
> on the dm_bufio_lock;