On Fri 24-07-20 21:56:29, Muchun Song wrote:
> On Fri, Jul 24, 2020 at 7:34 PM Michal Hocko wrote:
[...]
> > I believe you can simplify this and use a similar pattern as the page
> > allocator. Something like
> >
> > for_each_n
On Fri 24-07-20 09:35:26, jingrui wrote:
>
> On Friday, July 24, 2020 3:55 PM, Michal Hocko wrote:
>
> > What is the reason to run under !root cgroup in those sessions if you do
> > not care about accounting anyway?
>
> systemd does not support running those sessions
ent_mems_allowed;
> + }
I believe you can simplify this and use a similar pattern as the page
allocator. Something like
for_each_node_mask(node, mpol_allowed) {
if (node_isset(node, cpuset_current_mems_allowed))
nr += array[node];
}
There shouldn't be any need to allocate a potentially large nodemask on
the stack.
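The suggested loop can be illustrated with a small userspace sketch (a hypothetical analogue, not the kernel code: the bitmask type, MAX_NODES and the per-node array are stand-ins for nodemask_t and the real per-node counters):

```c
#include <assert.h>

/* Userspace sketch of the pattern suggested above: instead of
 * allocating a temporary nodemask holding the intersection of the
 * mempolicy mask and the cpuset mask, iterate one mask and test
 * membership in the other per node. */
#define MAX_NODES 8

static unsigned long sum_allowed(unsigned long mpol_allowed,
				 unsigned long mems_allowed,
				 const unsigned long array[MAX_NODES])
{
	unsigned long nr = 0;
	int node;

	for (node = 0; node < MAX_NODES; node++) {
		if (!(mpol_allowed & (1UL << node)))	/* for_each_node_mask() */
			continue;
		if (mems_allowed & (1UL << node))	/* node_isset() */
			nr += array[node];
	}
	return nr;
}
```

The point is the same as in the suggestion: membership is tested node by node, so no temporary nodemask has to live on the stack.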
--
Michal Hocko
SUSE Labs
der !root cgroup in those sessions if you do
not care about accounting anyway? tmpfs is a persistent charge until the
file is removed. So if those outlive the session then you either want
them to be charged to somebody or you do not care about accounting at
all, no? Or could you explain your usecase so
mempolicy *mpol;
> + nodemask_t *nodemask;
> +
> + mpol = get_task_policy(current);
> + if (mpol->mode == MPOL_BIND)
> + nodemask = &mpol->v.nodes;
> + else
> + nodemask = NULL;
> +
> + return nodemask;
> +}
We already have policy_nodemask which tries to do this. Is there any
reason to not reuse it?
--
Michal Hocko
SUSE Labs
map+identity mapping+iomem resource. I think reserving such a
> region during boot as suggested is the easiest approach, but I am
> *absolutely* not an expert on all these XEN-specific things :)
I am late to the discussion but FTR I completely agree.
--
Michal Hocko
SUSE Labs
> So if somebody else took the page lock, I think we should already have
> stopped walking the list.
Right! I didn't bother to look at the wakeup callback so have missed
this. For completeness this behavior is there since 3510ca20ece01 which
we have in our 4.12 based kernel as well.
--
Michal Hocko
SUSE Labs
ral state of the machine I suspect this is not the
case.
Thanks again for your great help!
--
Michal Hocko
SUSE Labs
On Tue 21-07-20 08:33:33, Linus Torvalds wrote:
> On Mon, Jul 20, 2020 at 11:33 PM Michal Hocko wrote:
> >
> > The lockup is in page_unlock in do_read_fault and I suspect that this is
> > yet another effect of a very long waitqueue chain which has been
> > addresses b
on this patch, hence RFC, but I am
simply not seeing a much better, yet not convoluted, solution.
--
Michal Hocko
SUSE Labs
On Tue 21-07-20 09:23:44, Qian Cai wrote:
> On Tue, Jul 21, 2020 at 02:17:52PM +0200, Michal Hocko wrote:
> > On Tue 21-07-20 07:44:07, Qian Cai wrote:
> > >
> > >
> > > > On Jul 21, 2020, at 7:25 AM, Michal Hocko wrote:
> > > >
> > >
On Tue 21-07-20 07:44:07, Qian Cai wrote:
>
>
> > On Jul 21, 2020, at 7:25 AM, Michal Hocko wrote:
> >
> > Are these really important? I believe I can dig that out from the bug
> > report but I didn't really consider that important enough.
>
> Please dig
On Tue 21-07-20 07:10:14, Qian Cai wrote:
>
>
> > On Jul 21, 2020, at 2:33 AM, Michal Hocko wrote:
> >
> > on a large ppc machine. The very likely cause is a suboptimal
> > configuration when systemd-udevd spawns way too many workers to bring the
> > system u
From: Michal Hocko
We have seen a bug report with huge number of soft lockups during the
system boot on !PREEMPT kernel
NMI watchdog: BUG: soft lockup - CPU#1291 stuck for 22s! [systemd-udevd:43283]
[...]
NIP [c094e66c] _raw_spin_lock_irqsave+0xac/0x100
LR [c094e654
On Mon 20-07-20 16:02:43, Alan Stern wrote:
> On Mon, Jul 20, 2020 at 08:16:05PM +0200, Michal Hocko wrote:
> > On Mon 20-07-20 13:48:12, Alan Stern wrote:
> > > On Mon, Jul 20, 2020 at 07:45:30PM +0200, Michal Hocko wrote:
> > > > On Mon 20-07-20 13:38:07, Alan Ster
On Mon 20-07-20 13:48:12, Alan Stern wrote:
> On Mon, Jul 20, 2020 at 07:45:30PM +0200, Michal Hocko wrote:
> > On Mon 20-07-20 13:38:07, Alan Stern wrote:
> > > On Mon, Jul 20, 2020 at 06:33:55PM +0200, Michal Hocko wrote:
> > > > On Mon 20-07
On Mon 20-07-20 13:38:07, Alan Stern wrote:
> On Mon, Jul 20, 2020 at 06:33:55PM +0200, Michal Hocko wrote:
> > On Mon 20-07-20 11:12:55, Alan Stern wrote:
> > [...]
> > > sudo echo 'module usbcore =p' >/debug/dynamic_debug/control
> > >
> > > Then w
spend
[ 95.400714] usb usb2: bus auto-suspend, wakeup 1
[ 95.400721] usb usb2: bus suspend fail, err -16
[ 95.400722] hub 2-0:1.0: hub_resume
--
Michal Hocko
SUSE Labs
usbmon which contains quite some files for me
0s 0u 1s 1t 1u 2s 2t 2u
most of them provide data when cating them.
> section of the dmesg log with dynamic debugging enabled for the usbcore
> module, as well.
Could you give me more details steps please?
--
Michal Hocko
SUSE Labs
T: B
>] rpm_suspend+0x2af/0x440
[<0>] __pm_runtime_suspend+0x48/0x62
[<0>] usb_runtime_idle+0x26/0x2d
[<0>] __rpm_callback+0x70/0xd4
[<0>] rpm_idle+0x179/0x1df
[<0>] pm_runtime_work+0x6b/0x81
[<0>] process_one_work+0x1bd/0x2c6
[<0>] worker_thread+0x19c/0x240
[<0>] kthread+0x11b/0x123
[<0>] ret_from_fork+0x22/0x30
Is this something known or something I can give more information about?
From a very quick look into the code it sounds as if the system wanted
to suspend a USB device/controller but that keeps failing again and
again.
--
Michal Hocko
SUSE Labs
m and avoid generating false warnings, let's just
> relax the condition and warn only if the value is less than minus
> the maximum theoretically possible drift value, which is 125 *
> number of online CPUs. It will still allow to catch systematic leaks,
> but will not generate bogus war
On Fri 17-07-20 18:28:16, Joonsoo Kim wrote:
> On Fri, Jul 17, 2020 at 5:26 PM Michal Hocko wrote:
> >
> > On Fri 17-07-20 16:46:38, Joonsoo Kim wrote:
> > > On Wed, Jul 15, 2020 at 5:24 PM Michal Hocko wrote:
> > > >
> > > > On Wed 15-07-20 14:05:27
On Fri 17-07-20 16:46:38, Joonsoo Kim wrote:
> On Wed, Jul 15, 2020 at 5:24 PM Michal Hocko wrote:
> >
> > On Wed 15-07-20 14:05:27, Joonsoo Kim wrote:
> > > From: Joonsoo Kim
> > >
> > > We have well defined scope API to exclude CMA region.
> > >
start __mmput() from shrinker context.
>
> [1]
> https://syzkaller.appspot.com/bug?id=bc9e7303f537c41b2b0cc2dfcea3fc42964c2d45
>
> Reported-by: syzbot
> Reported-by: syzbot
> Signed-off-by: Tetsuo Handa
Reviewed-by: Michal Hocko
Thanks!
> ---
> drivers/android/bin
On Thu 16-07-20 22:41:14, Tetsuo Handa wrote:
> On 2020/07/16 17:35, Michal Hocko wrote:
[...]
> > But in order for this to happen the shrinker would have to do the last
> > put on the mm. But mm cannot go away from under uprobe_mmap so those two
> > paths cannot race with eac
u_status binder_alloc_free_page(struct list_head
> *item,
> trace_binder_unmap_user_end(alloc, index);
> }
> mmap_read_unlock(mm);
> - mmput(mm);
> + mmput_async(mm);
>
> trace_binder_unmap_kernel_start(alloc, index);
>
> --
> 2.18.4
>
--
Michal Hocko
SUSE Labs
On Wed 15-07-20 14:05:29, Joonsoo Kim wrote:
> From: Joonsoo Kim
>
> There is a well-defined migration target allocation callback. Use it.
>
> Acked-by: Vlastimil Babka
> Signed-off-by: Joonsoo Kim
Acked-by: Michal Hocko
>
cannot be utilized.
>
> This patch tries to fix this situation by making the deque function on
> hugetlb CMA aware. In the deque function, CMA memory is skipped if
> PF_MEMALLOC_NOCMA flag is found.
Now that this is in sync with the global case I do not have any
objections.
> Acked-by: Mike
MOVABLE)
> - alloc_flags |= ALLOC_CMA;
> -#endif
> return alloc_flags;
> }
>
> @@ -4808,9 +4814,6 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> if (should_fail_alloc_page(gfp_mask, order))
> return false;
>
> - if (IS_ENABLED(CONFIG_CMA) && ac->migratetype == MIGRATE_MOVABLE)
> - *alloc_flags |= ALLOC_CMA;
> -
> return true;
> }
>
> --
> 2.7.4
--
Michal Hocko
SUSE Labs
s orthogonal.
Btw __GFP_NOWARN change is not documented.
> Suggested-by: Michal Hocko
> Signed-off-by: Joonsoo Kim
> ---
> include/linux/hugetlb.h | 2 ++
> mm/gup.c                | 17 ++++++++---------
> 2 files changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/i
r
backing inode. But asking lockdep experts would be better than relying
on my vague recollection
--
Michal Hocko
SUSE Labs
On Tue 14-07-20 08:32:09, Shakeel Butt wrote:
> On Tue, Jul 14, 2020 at 1:41 AM Michal Hocko wrote:
> >
> > On Fri 10-07-20 12:19:37, Shakeel Butt wrote:
> > > On Fri, Jul 10, 2020 at 11:42 AM Roman Gushchin wrote:
> > > >
> > > > On Fri, Jul
On Tue 14-07-20 22:08:59, Hillf Danton wrote:
>
> On Tue, 14 Jul 2020 10:26:29 +0200 Michal Hocko wrote:
> > On Tue 14-07-20 13:32:05, Hillf Danton wrote:
> > >
> > > On Mon, 13 Jul 2020 20:41:11 -0700 Eric Biggers wrote:
> > > > On Tue, Jul 14, 20
On Fri 10-07-20 12:19:37, Shakeel Butt wrote:
> On Fri, Jul 10, 2020 at 11:42 AM Roman Gushchin wrote:
> >
> > On Fri, Jul 10, 2020 at 07:12:22AM -0700, Shakeel Butt wrote:
> > > On Fri, Jul 10, 2020 at 5:29 AM Michal Hocko wrote:
> > > >
> > > >
an appropriate fix. First of all is this a real
deadlock or a lockdep false positive? Is it possible that ashmem just
needs to properly annotate its shmem inodes? Or is it possible that
the internal backing shmem file is visible to the userspace so the write
path would be possible?
If this is a real problem then the proper fix would be to set the internal
shmem mapping's gfp_mask to drop __GFP_FS.
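As a rough sketch of what such a fix would do (the flag values below are illustrative stand-ins for the real gfp bits; in the kernel the mask would be updated via mapping_set_gfp_mask() on the shmem inode's mapping):

```c
#include <assert.h>

/* Sketch of dropping __GFP_FS from a mapping's allocation mask so
 * that page cache allocations for it cannot recurse into filesystem
 * reclaim.  gfp_t and the flag values are stand-ins, not the real
 * kernel constants. */
typedef unsigned int gfp_t;

#define __GFP_IO	0x40u
#define __GFP_FS	0x80u
#define GFP_KERNEL	(__GFP_FS | __GFP_IO | 0x01u)

static gfp_t drop_gfp_fs(gfp_t mask)
{
	/* allocations under this mask may no longer enter fs reclaim */
	return mask & ~__GFP_FS;
}
```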
--
Michal Hocko
SUSE Labs
n seen during
> large mmaps initialization. There is no indication that this is a
> problem for migration as well but theoretically the same might happen
> when migrating large mappings to a different node. Make the migration
> callback consistent with regular THP allocations.
>
> Signed
On Fri 10-07-20 14:58:54, Michal Hocko wrote:
[...]
> I will have a closer look. Is the full dmesg available somewhere?
Ups, I have missed this:
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 2dd5a90f2f81..7f01835862f4 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -306,7 +306,7 @@ sta
has higher priority to prevent from being killed by
> > system oom.
> >
> > To fix this issue, we should make the calculation of oom point more
> > accurate. We can achieve it by convert the chosen_point from 'unsigned
> > long' to 'long'.
> >
> > Signed-off-by: Yafang Shao
>
> Reverting this commit fixed the crash below while recovering from kernel OOM,
I suspect that the previous version of the patch has been tested (in
Linux next). Does this version exhibit the same problem?
I will have a closer look. Is the full dmesg available somewhere?
--
Michal Hocko
SUSE Labs
plus side a break out from the reclaim loop would just enforce the limit
so if the operation takes too long then the reclaim burden will move
over to consumers eventually. So I do not see any real danger.
> Signed-off-by: Roman Gushchin
> Reported-by: Domas Mituzas
> Cc: Johannes
On Thu 09-07-20 16:15:07, Joonsoo Kim wrote:
> On Wed, Jul 8, 2020 at 4:00 AM Michal Hocko wrote:
> >
> > On Tue 07-07-20 16:49:51, Vlastimil Babka wrote:
> > > On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> > > > From: Joonsoo Kim
> > > >
> > >
On Thu 09-07-20 17:01:06, Yafang Shao wrote:
> On Thu, Jul 9, 2020 at 4:18 PM Michal Hocko wrote:
> >
> > On Thu 09-07-20 15:41:11, Yafang Shao wrote:
> > > On Thu, Jul 9, 2020 at 2:26 PM Michal Hocko wrote:
> > > >
> > > > From: Michal Ho
On Thu 09-07-20 15:41:11, Yafang Shao wrote:
> On Thu, Jul 9, 2020 at 2:26 PM Michal Hocko wrote:
> >
> > From: Michal Hocko
> >
> > The exported value includes oom_score_adj so the range is not [0, 1000]
> > as described in the previous section but rather
On Wed 08-07-20 09:41:06, Michal Hocko wrote:
> On Wed 08-07-20 16:16:02, Joonsoo Kim wrote:
> > On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote:
> > > On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> > > > From: Joonsoo Kim
> > > >
>
From: Michal Hocko
There are at least two notes in the oom section. The 3% discount for
root processes is gone since d46078b28889 ("mm, oom: remove 3% bonus for
CAP_SYS_ADMIN processes").
Likewise children of the selected oom victim are not sacrificed since
bbbe48029720 ("
From: Michal Hocko
The exported value includes oom_score_adj so the range is not [0, 1000]
as described in the previous section but rather [0, 2000]. Mention that
fact explicitly.
Signed-off-by: Michal Hocko
---
Documentation/filesystems/proc.rst | 3 +++
1 file changed, 3 insertions(+)
diff
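The [0, 2000] range described in the patch above can be sanity-checked with a small sketch; the clamping of negative sums to zero is my assumption based on the stated range, not code taken from this patch:

```c
#include <assert.h>

/* Sketch of the exported oom_score range: raw badness points lie in
 * [0, 1000], oom_score_adj in [-1000, 1000] is added on top, and
 * (by assumption here) negative results are clamped to 0, giving an
 * exported range of [0, 2000]. */
static long exported_oom_score(long points, long oom_score_adj)
{
	long score = points + oom_score_adj;

	return score < 0 ? 0 : score;
}
```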
On Wed 08-07-20 16:27:16, Aneesh Kumar K.V wrote:
> Vlastimil Babka writes:
>
> > On 7/8/20 9:41 AM, Michal Hocko wrote:
> >> On Wed 08-07-20 16:16:02, Joonsoo Kim wrote:
> >>> On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote:
> >>&g
On Wed 08-07-20 16:19:17, Joonsoo Kim wrote:
> On Tue, Jul 07, 2020 at 01:40:19PM +0200, Michal Hocko wrote:
[...]
> Subject: [PATCH] mm/migrate: clear __GFP_RECLAIM for THP allocation for
> migration
>
> In migration target allocation functions, THP allocations uses diffe
< 0)
goto out;
@@ -1802,11 +1801,13 @@ static long __gup_longterm_locked(struct task_struct *tsk,
for (i = 0; i < rc; i++)
put_page(pages[i]);
rc = -EOPNOTSUPP;
+ memalloc_nocma_restore(flags);
goto out;
}
rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
vmas_tmp, gup_flags);
+ memalloc_nocma_restore(flags);
}
out:
--
Michal Hocko
SUSE Labs
On Tue 07-07-20 13:31:19, Michal Hocko wrote:
> Btw. you are keeping his acks even
> after considerable changes to patches which I am not really sure he is
> ok with.
I am sorry but I have missed the last email from Mike in v3.
--
Michal Hocko
SUSE Labs
fffe0
Hmm, this looks like -EPIPE (-32) which is unexpected to say the least.
Does the test pass without this patch applied? Also there has been v4
posted just yesterday. Does it suffer from the same problem?
--
Michal Hocko
SUSE Labs
BLE and so we shouldn't
really end up here for !movable pages in the first place (not sure about
soft offlining at this moment). But yeah it would be simply better to
override gfp mask for hugetlb which we have been doing anyway.
--
Michal Hocko
SUSE Labs
On Tue 07-07-20 17:03:50, Vlastimil Babka wrote:
> On 7/7/20 1:48 PM, Michal Hocko wrote:
> > On Tue 07-07-20 16:44:48, Joonsoo Kim wrote:
> >> From: Joonsoo Kim
> >>
> >> There is a well-defined standard migration target callback. Use it
> >> di
On Tue 07-07-20 09:04:36, Qian Cai wrote:
> On Tue, Jul 07, 2020 at 02:06:19PM +0200, Michal Hocko wrote:
> > On Tue 07-07-20 07:43:48, Qian Cai wrote:
> > >
> > >
> > > > On Jul 7, 2020, at 6:28 AM, Michal Hocko wrote:
> > > >
> > &g
On Fri 03-07-20 07:23:14, Shakeel Butt wrote:
> On Thu, Jul 2, 2020 at 11:35 PM Michal Hocko wrote:
> >
> > On Thu 02-07-20 08:22:22, Shakeel Butt wrote:
> > [...]
> > > Interface options:
> > > --
> > >
> >
On Tue 07-07-20 07:43:48, Qian Cai wrote:
>
>
> > On Jul 7, 2020, at 6:28 AM, Michal Hocko wrote:
> >
> > Would you have any examples? Because I find this highly unlikely.
> > OVERCOMMIT_NEVER only works when virtual memory is not largerly
> > ov
ory_add_physaddr_to_nid);
Does it make sense to export a noop function? Wouldn't it make more sense
to simply make it static inline somewhere in a header? I haven't checked
whether there is an easy way to do that sanely but this just hit my eyes.
--
Michal Hocko
SUSE Labs
flining only operates on a single zone. Have a look at
test_pages_in_a_zone().
>
> Signed-off-by: Joonsoo Kim
Acked-by: Michal Hocko
> ---
> mm/memory_hotplug.c | 46 ++++++++++++++++++++++------------------------
> 1 file changed, 22 insertions(+), 24 deletions(-)
>
>
loc_migration_target, NULL,
> + (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE);
> if (!ret) {
> bool release = !huge;
>
> --
> 2.7.4
>
--
Michal Hocko
SUSE Labs
2,6 +1576,9 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
> if (new_page && PageTransHuge(new_page))
> prep_transhuge_page(new_page);
>
> + if (mtc->skip_cma)
> + memalloc_nocma_restore(flags);
> +
> return new_page;
> }
>
> --
> 2.7.4
--
Michal Hocko
SUSE Labs
all in previous function is changed to open-code
> "is_highmem_idx()" since it provides more readability.
>
> Acked-by: Vlastimil Babka
> Signed-off-by: Joonsoo Kim
Acked-by: Michal Hocko
Thanks!
> ---
> include/linux/migrate.h | 9 +
> mm/
_KSWAPD here. So the only
difference is that the migration won't wake up kswapd now.
All that being said the changelog should be probably more explicit about
the fact that this is solely done for consistency and be honest that the
runtime effect is not really clear. This would help people reading it in
future.
--
Michal Hocko
SUSE Labs
e();
This is pointless for a scope that is already defined up in the call
chain and fundamentally this is breaking the expected use of the scope
API. The primary reason for that API to exist is to define the scope and
have it sticky for _all_ allocation contexts. So if you have to use it
deep in the allocator then you are doing something wrong.
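A plain-C analogue of the intended usage (the helpers mimic the kernel's memalloc_nocma_save()/restore() pair; current_flags and the flag value are stand-ins for the real task flags): the scope is defined once at the outer entry point, and code deep in the allocator only observes it.

```c
#include <assert.h>

/* Analogue of the scoped allocation API: the caller marks a scope at
 * the outer entry point and every nested allocation inherits it.
 * Saving the previous flag state makes the scopes nestable. */
#define PF_MEMALLOC_NOCMA 0x1u

static unsigned int current_flags;

static unsigned int memalloc_nocma_save(void)
{
	unsigned int old = current_flags & PF_MEMALLOC_NOCMA;

	current_flags |= PF_MEMALLOC_NOCMA;
	return old;
}

static void memalloc_nocma_restore(unsigned int old)
{
	current_flags = (current_flags & ~PF_MEMALLOC_NOCMA) | old;
}

/* Deep in the "allocator": only observe the scope, never define it. */
static int allocation_skips_cma(void)
{
	return current_flags & PF_MEMALLOC_NOCMA;
}
```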
--
Michal Hocko
SUSE Labs
This makes much more sense than the
previous gfp_mask as a modifier approach.
I hope there won't be any weird include dependency problems but 0day
will tell us soon about that.
For the patch, feel free to add
Acked-by: Michal Hocko
> ---
> include/linux/hugetlb.h | 2
is not something we should
really lose sleep over. It would be nice to find a way to flush existing
batches but I would rather see a real workload that would suffer from
this imprecision.
On the other hand perf. boost with larger batches with the default overcommit
setting sounds like a nice improvement to have.
--
Michal Hocko
SUSE Labs
10/0xfe0 [dax_pmem]
>
> Fixes: f1037ec0cc8a ("mm/memory_hotplug: fix remove_memory() lockdep splat")
> Cc: sta...@vger.kernel.org # v5.6+
> Signed-off-by: Jia He
Ups, I have missed that when reviewing that commit. Thanks for catching
this up!
Acked-by: Michal Hocko
> ---
On Fri 03-07-20 13:32:21, David Hildenbrand wrote:
> On 03.07.20 12:59, Michal Hocko wrote:
> > On Fri 03-07-20 11:24:17, Michal Hocko wrote:
> >> [Cc Andi]
> >>
> >> On Fri 03-07-20 11:10:01, Michal Suchanek wrote:
> >>> On Wed, Jul 01, 2020 at 02:
On Fri 03-07-20 11:24:17, Michal Hocko wrote:
> [Cc Andi]
>
> On Fri 03-07-20 11:10:01, Michal Suchanek wrote:
> > On Wed, Jul 01, 2020 at 02:21:10PM +0200, Michal Hocko wrote:
> > > On Wed 01-07-20 13:30:57, David Hildenbrand wrote:
> [.
[Cc Andi]
On Fri 03-07-20 11:10:01, Michal Suchanek wrote:
> On Wed, Jul 01, 2020 at 02:21:10PM +0200, Michal Hocko wrote:
> > On Wed 01-07-20 13:30:57, David Hildenbrand wrote:
[...]
> > > Yep, looks like it.
> > >
> > > [0.009726] SRAT: PXM 1 ->
[Cc Andrew - the patch is
http://lkml.kernel.org/r/1593641660-13254-2-git-send-email-bhsha...@redhat.com]
On Thu 02-07-20 08:00:27, Michal Hocko wrote:
> On Thu 02-07-20 03:44:19, Bhupesh Sharma wrote:
> > Prabhakar reported an OOPS inside mem_cgroup_get_nr_swap_pages()
> > funct
emory.high as an interface to trigger pro-active
memory reclaim is not sufficient. Also memory.low limit to protect
latency sensitive workloads?
--
Michal Hocko
SUSE Labs
On Thu 02-07-20 09:37:38, Roman Gushchin wrote:
> On Thu, Jul 02, 2020 at 06:22:02PM +0200, Michal Hocko wrote:
> > On Wed 01-07-20 11:45:52, Roman Gushchin wrote:
> > [...]
> > > From c97afecd32c0db5e024be9ba72f43d22974f5bcd Mon Sep 17 00:00:00 2001
> > > From
On Thu 02-07-20 18:35:31, Vlastimil Babka wrote:
> On 7/2/20 6:22 PM, Michal Hocko wrote:
> > On Wed 01-07-20 11:45:52, Roman Gushchin wrote:
> > [...]
> >> From c97afecd32c0db5e024be9ba72f43d22974f5bcd Mon Sep 17 00:00:00 2001
> >> From: Roman Gushchin
>
(memcg->kmem_state == KMEM_ALLOCATED)
> - static_branch_dec(&memcg_kmem_enabled_key);
> }
> #else
> static int memcg_online_kmem(struct mem_cgroup *memcg)
> --
> 2.26.2
--
Michal Hocko
SUSE Labs
On Thu 02-07-20 12:14:08, Srikar Dronamraju wrote:
> * Michal Hocko [2020-07-01 14:21:10]:
>
> > > >>>>>>
> > > >>>>>> 2. Also existence of dummy node also leads to inconsistent
> > > >>>>>> in
-bhsha...@redhat.com?
--
Michal Hocko
SUSE Labs
e: aa1403e3 91106000 97f82a27 1411 (f940c663)
> [0.507770] ---[ end trace 9795948475817de4 ]---
> [0.512429] Kernel panic - not syncing: Fatal exception
> [0.517705] Rebooting in 10 seconds..
>
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc
; used by the kernel and can be used arbitrarily?
> >
>
> I thought Michal Hocko already gave a clear picture on why mapping is a bad
> idea. https://lore.kernel.org/lkml/20200316085425.gb11...@dhcp22.suse.cz/t/#u
> Are you suggesting that we add that as part of the changel
On Wed 01-07-20 13:30:57, David Hildenbrand wrote:
> On 01.07.20 13:06, David Hildenbrand wrote:
> > On 01.07.20 13:01, Srikar Dronamraju wrote:
> >> * David Hildenbrand [2020-07-01 12:15:54]:
> >>
> >>> On 01.07.20 12:04, Srikar Dronamraju wrote:
>
n the machine size or even use something better than
pvec (e.g. lru_deactivate_file could scale much more and I am not sure
pcp aspect is really improving anything - why don't we simply invalidate
all gathered pages at once at the end of invalidate_mapping_pages?).
--
Michal Hocko
SUSE Labs
RC we have discussed testing in the previous
version and David has provided a way to emulate these configurations
on x86. Did you manage to use those instruction for additional testing
on other than ppc architectures?
> Cc: linuxppc-...@lists.ozlabs.org
> Cc: linux...@kvack.org
> Cc
000 R09:
> 00000000
> [ 808.584281] R10: R11: 0246 R12:
>
> [ 808.591406] R13: R14: R15:
> 0009
> [ 808.598532] BUG: Bad page state in process systemd-journal pfn:418192
> [ 808.605075] page:ea0010606480 refcount:0 mapcount:0
> mapping: index:0x1
> [ 808.613367] flags: 0x200()
> [ 808.617115] raw: 0200 dead0100 dead0122
>
> [ 808.624851] raw: 0001 0010
> 88841cc82601
> [ 808.632580] page dumped because: page still charged to cgroup
> [ 808.638318] page->mem_cgroup:88841cc82601
> [ 808.642668] Modules linked in: x86_pkg_temp_thermal
> [ 808.647543] CPU: 1 PID: 332 Comm: systemd-journal Tainted: GB
> 5.8.0-rc3-next-20200630 #1
> [ 808.657013] Hardware name: Supermicro SYS-5019S-ML/X11SSH-F, BIOS
> 2.0b 07/27/2017
>
>
> Full test log link,
> https://lkft.validation.linaro.org/scheduler/job/1535880#L11102
>
> --
> Linaro LKFT
> https://lkft.linaro.org
--
Michal Hocko
SUSE Labs
memcg reference
counting which is showing up on the stack. This might be a side effect
of something else of course but bisection would tell us more.
Thanks
--
Michal Hocko
SUSE Labs
On Wed 01-07-20 05:12:03, Matthew Wilcox wrote:
> On Tue, Jun 30, 2020 at 08:34:36AM +0200, Michal Hocko wrote:
> > On Mon 29-06-20 22:28:30, Matthew Wilcox wrote:
> > [...]
> > > The documentation is hard to add a new case to, so I rewrote it. What
> > > do
On Tue 30-06-20 15:30:04, Joonsoo Kim wrote:
> On Mon, Jun 29, 2020 at 4:55 PM Michal Hocko wrote:
[...]
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 57ece74e3aae..c1595b1d36f3 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1092,
ation. Maybe we will learn later that there is
just too much unhelpful noise in the kernel log and will reconsider but
I wouldn't just start with that. Also we might learn that there will be
other modifiers for atomic (or should I say non-sleeping) scopes to be
defined. E.g. access to memory reserves but let's just wait for real
usecases.
Thanks a lot Matthew!
--
Michal Hocko
SUSE Labs
d of many instances where "this is a fs
code so it has to use NOFS gfp mask".
Some of that has happened and that is really great. On the other hand
many people still like to use that api as a workaround for an immediate
problem because no-recursion scopes are much harder to recognize unles
On Mon 29-06-20 15:41:37, Joonsoo Kim wrote:
> On Fri, Jun 26, 2020 at 4:33 PM Michal Hocko wrote:
> >
> > On Fri 26-06-20 14:02:49, Joonsoo Kim wrote:
> > > On Thu, Jun 25, 2020 at 9:05 PM Michal Hocko wrote:
> > > >
> > > > On Tue 23-06-20 15:13:45, Joo
are we still not on the same page wrt
to the actual problem?
--
Michal Hocko
SUSE Labs
ons:
> memalloc_noio_save memalloc_noio_restore
> Documentation/core-api/gfp_mask-from-fs-io.rst:allows nesting so it is safe
> to call ``memalloc_noio_save`` or
The patch is adding memalloc_nowait* and I suspect Mike had that in
mind, which would be a fair request. Btw. we are missing memalloc_nocma*
documentation either - I was just reminded of its existence today...
--
Michal Hocko
SUSE Labs
our of the page allocator.
>
> It would be pity to keep this description buried in the log so let's expose
> it in the Documentation/ as well.
>
> Cc: Michal Hocko
> Signed-off-by: Mike Rapoport
Thanks for making that into the documentation.
Acked-by: Michal Hocko
> ---
> Hi
On Mon 22-06-20 15:17:39, Daniel Jordan wrote:
> Hello Michal,
>
> (I've been away and may be slow to respond for a little while)
>
> On Fri, Jun 19, 2020 at 02:07:04PM +0200, Michal Hocko wrote:
> > On Tue 09-06-20 18:54:51, Daniel Jordan wrote:
> > [...]
> >
On Fri 26-06-20 14:02:49, Joonsoo Kim wrote:
> On Thu, Jun 25, 2020 at 9:05 PM Michal Hocko wrote:
> >
> > On Tue 23-06-20 15:13:45, Joonsoo Kim wrote:
[...]
> > > -struct page *new_page_nodemask(struct page *page,
> > > - int pr
On Fri 26-06-20 13:49:15, Joonsoo Kim wrote:
> On Thu, Jun 25, 2020 at 8:54 PM Michal Hocko wrote:
> >
> > On Tue 23-06-20 15:13:44, Joonsoo Kim wrote:
> > > From: Joonsoo Kim
> > >
> > > new_non_cma_page() in gup.c which try to allocate migration target pa
ed back. But
PF_MEMALLOC_NOFS needs to stay for the scoped NOFS semantic.
Hope this clarifies it a bit.
--
Michal Hocko
SUSE Labs
On Thu 25-06-20 12:00:47, Chris Wilson wrote:
> Quoting Michal Hocko (2020-06-25 08:57:25)
> > On Wed 24-06-20 20:14:17, Chris Wilson wrote:
> > > A general rule of thumb is that shrinkers should be fast and effective.
> > > They are called from direct reclaim at the mos
On Thu 25-06-20 12:31:20, Matthew Wilcox wrote:
> We're short on PF_* flags, so make memalloc_nofs its own bit where we
> have plenty of space.
>
> Signed-off-by: Matthew Wilcox (Oracle)
forgot to add
Acked-by: Michal Hocko
> ---
> fs/iomap/buffered-io.c | 2 +-
>
On Thu 25-06-20 14:10:55, Matthew Wilcox wrote:
> On Thu, Jun 25, 2020 at 02:40:17PM +0200, Michal Hocko wrote:
> > On Thu 25-06-20 12:31:22, Matthew Wilcox wrote:
> > > Similar to memalloc_noio() and memalloc_nofs(), memalloc_nowait()
> > > guarantees we will not sl
On Thu 25-06-20 13:34:18, Matthew Wilcox wrote:
> On Thu, Jun 25, 2020 at 02:22:39PM +0200, Michal Hocko wrote:
> > On Thu 25-06-20 12:31:17, Matthew Wilcox wrote:
> > > We're short on PF_* flags, so make memalloc_noio its own bit where we
> > > have plenty of space.
&g
t_restore(nowait_flag);
This looks confusing though. I am not familiar with alloc_buffer and
there is quite some tweaking around __GFP_NORETRY in alloc_buffer_data
which I do not follow but GFP_KERNEL just struck my eyes. So why cannot
we have
alloc_buffer(GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
--
Michal Hocko
SUSE Labs
the task_struct is about to be destroyed anyway.
>
> Signed-off-by: Matthew Wilcox (Oracle)
Certainly better than an opencoded PF_$FOO manipulation
Acked-by: Michal Hocko
I would just ask for a clarification because this relies on having
good MM knowledge to follow
> +/*
> +