On Wed 18-11-20 11:22:21, Suren Baghdasaryan wrote:
> On Wed, Nov 18, 2020 at 11:10 AM Michal Hocko wrote:
> >
> > On Fri 13-11-20 18:16:32, Andrew Morton wrote:
> > [...]
> > > It's all sounding a bit painful (but not *too* painful). But to
> > > r
an tying it to SIGKILL, agree?
I am not sure, TBH. Is there any reasonable usecase where an uncoordinated memory tear down is OK and the target process is able to see the unmapped memory?
--
Michal Hocko
SUSE Labs
lang W=1 warns:
>
> mm/memcontrol.c:3421:20:
> warning: unused function 'memcg_has_children' [-Wunused-function]
>
> Simply remove this obsolete unused function.
>
> Signed-off-by: Lukas Bulwahn
git grep agrees
Acked-by: Michal Hocko
> ---
> appl
On Thu 12-11-20 20:28:44, Feng Tang wrote:
> Hi Michal,
>
> On Wed, Nov 04, 2020 at 09:15:46AM +0100, Michal Hocko wrote:
> > > > > Hi Michal,
> > > > >
> > > > > We used the default configure of cgroups, not sure what configuration
> >
> + huge_gfp = limit_gfp_mask(huge_gfp, gfp);
> page = shmem_alloc_and_acct_page(huge_gfp, inode, index, true);
> if (IS_ERR(page)) {
> alloc_nohuge:
> --
> 2.25.4
--
Michal Hocko
SUSE Labs
> ---
> include/linux/gfp.h | 2 ++
> mm/huge_memory.c | 6 +++---
> mm/shmem.c | 8 +---
> 3 files changed, 10 insertions(+), 6 deletions(-)
--
Michal Hocko
SUSE Labs
On Wed 11-11-20 11:05:21, David Hildenbrand wrote:
> On 11.11.20 10:58, Vlastimil Babka wrote:
> > On 11/11/20 10:06 AM, David Hildenbrand wrote:
> > > On 11.11.20 09:47, Michal Hocko wrote:
> > > > On Tue 10-11-20 20:32:40, David Hildenbrand wrote:
> > >
where but I would expect
init_on_alloc to be handled there.
--
Michal Hocko
SUSE Labs
On Mon 09-11-20 07:39:33, Minchan Kim wrote:
> On Mon, Nov 09, 2020 at 08:37:06AM +0100, Michal Hocko wrote:
> > On Fri 06-11-20 12:32:38, Minchan Kim wrote:
> > > It's hard to have some tests to be supposed to work under heavy
> > > memory pressure(e.g., injec
to reclaim memory and it has to retry really hard. Having to handle worker context explicitly/differently is error prone, and as your example of the final iput in NFS shows, the allocator is not the only path affected, so having a general solution is better.
That being said, I would really love to see cond_resched work transparently.
Thanks!
--
Michal Hocko
SUSE Labs
the oom killer is really disruptive, but a global on/off switch sounds like too coarse an interface. Really, what kind of production environment would ever go with the oom killer disabled completely?
--
Michal Hocko
SUSE Labs
t check. My RFC patch just gave a easiest one-for-all hack to
> let them bypass it.
>
> Do we need to tackle them case by case?
No, I do not think we can change those __GFP_HARDWALL without breaking the isolation.
--
Michal Hocko
SUSE Labs
On Fri 06-11-20 15:06:56, Feng Tang wrote:
> On Thu, Nov 05, 2020 at 05:16:12PM +0100, Michal Hocko wrote:
> > On Thu 05-11-20 21:43:05, Feng Tang wrote:
> > > On Thu, Nov 05, 2020 at 02:12:45PM +0100, Michal Hocko wrote:
> > > > On Thu 05-1
On Fri 06-11-20 12:32:44, Huang, Ying wrote:
> Michal Hocko writes:
>
> > On Thu 05-11-20 09:40:28, Feng Tang wrote:
> >> On Wed, Nov 04, 2020 at 09:53:43AM +0100, Michal Hocko wrote:
> >>
> >> > > > As I've said in reply to your
On Thu 05-11-20 09:21:13, Suren Baghdasaryan wrote:
> On Thu, Nov 5, 2020 at 9:16 AM Michal Hocko wrote:
> >
> > On Thu 05-11-20 08:50:58, Suren Baghdasaryan wrote:
> > > On Thu, Nov 5, 2020 at 4:20 AM Michal Hocko wrote:
> > > >
> > > > On Wed 04
On Thu 05-11-20 08:50:58, Suren Baghdasaryan wrote:
> On Thu, Nov 5, 2020 at 4:20 AM Michal Hocko wrote:
> >
> > On Wed 04-11-20 12:40:51, Minchan Kim wrote:
> > > On Wed, Nov 04, 2020 at 07:58:44AM +0100, Michal Hocko wrote:
> > > > On Tue 03-11-20 13:32:28,
On Thu 05-11-20 21:43:05, Feng Tang wrote:
> On Thu, Nov 05, 2020 at 02:12:45PM +0100, Michal Hocko wrote:
> > On Thu 05-11-20 21:07:10, Feng Tang wrote:
> > [...]
> > > My debug traces shows it is, and its gfp_mask is 'GFP_KERNEL'
> >
> > Can you p
On Thu 05-11-20 14:14:25, Vlastimil Babka wrote:
> On 11/5/20 1:58 PM, Michal Hocko wrote:
> > On Thu 05-11-20 13:53:24, Vlastimil Babka wrote:
> > > On 11/5/20 1:08 PM, Michal Hocko wrote:
> > > > On Thu 05-11-20 09:40:28, Feng Tang wrote:
> > > > >
re dump_stack without any further
context is not really helpful.
--
Michal Hocko
SUSE Labs
On Thu 05-11-20 13:53:24, Vlastimil Babka wrote:
> On 11/5/20 1:08 PM, Michal Hocko wrote:
> > On Thu 05-11-20 09:40:28, Feng Tang wrote:
> > > > > Could you be more specific? This sounds like a bug. Allocations
> > > > shouldn't spill over to a node
On Wed 04-11-20 12:40:51, Minchan Kim wrote:
> On Wed, Nov 04, 2020 at 07:58:44AM +0100, Michal Hocko wrote:
> > On Tue 03-11-20 13:32:28, Minchan Kim wrote:
> > > On Tue, Nov 03, 2020 at 10:35:50AM +0100, Michal Hocko wrote:
> > > > On Mon 02-11-20 12:2
On Thu 05-11-20 09:40:28, Feng Tang wrote:
> On Wed, Nov 04, 2020 at 09:53:43AM +0100, Michal Hocko wrote:
>
> > > > As I've said in reply to your second patch. I think we can make the oom
> > > > killer behavior more sensible in this misconfigured cases but I
from there. I suspect the only reason for having pgdat here is that many callers already know it and we optimize for the memcg disabled case. Hard to tell whether this actually matters because most of those paths are not really hot, but that would require a deeper investigation. Hint hint...
Anyway, this looks like a nice simplification already. There were some attempts to do a similar thing recently but they were adding nodeid as an additional argument and I really disliked those.
Acked-by: Michal Hocko
Thanks!
--
Michal Hocko
SUSE Labs
On Wed 04-11-20 16:40:21, Feng Tang wrote:
> On Wed, Nov 04, 2020 at 08:58:19AM +0100, Michal Hocko wrote:
> > On Wed 04-11-20 15:38:26, Feng Tang wrote:
> > [...]
> > > > Could you be more specific about the usecase here? Why do you need a
> > > > binding to
On Tue 03-11-20 13:27:24, Roman Gushchin wrote:
> Update cgroup v1 docs after the deprecation of the non-hierarchical
> mode of the memory controller.
>
> Signed-off-by: Roman Gushchin
Acked-by: Michal Hocko
> ---
> .../admin-guide/cgroup-v1/memcg_test.rst | 8 ++--
> Signed-off-by: Roman Gushchin
I do not see any problems with the patch or any left overs behind
(except for the documentation which you handle in the follow up
patches).
Acked-by: Michal Hocko
Thanks, and let's see whether some last minute usecase shows up.
> ---
> include/li
On Wed 04-11-20 09:20:04, Xing Zhengjun wrote:
>
>
> On 11/2/2020 6:02 PM, Michal Hocko wrote:
> > On Mon 02-11-20 17:53:14, Rong Chen wrote:
> > >
> > >
> > > On 11/2/20 5:27 PM, Michal Hocko wrote:
> > > > On Mon 02-11-20
As I've said in reply to your second patch, I think we can make the oom killer behavior more sensible in these misconfigured cases but I do not think we want to break the cpuset isolation for such a configuration.
--
Michal Hocko
SUSE Labs
> + for_each_node_mask(nid, cpuset_current_mems_allowed) {
> + 	unmovable += NODE_DATA(nid)->node_present_pages -
> + 		NODE_DATA(nid)->node_zones[ZONE_MOVABLE].present_pages;
> + }
> +
> + if (!unmovable) {
> +
_NODES, oc->nodemask);
> + show_mem(SHOW_MEM_FILTER_NODES, &node_states[N_MEMORY]);
> if (is_dump_unreclaim_slabs())
> dump_unreclaimable_slab();
> }
> --
> 2.7.4
--
Michal Hocko
SUSE Labs
cpuset containment, right? I consider this quite unexpected for something that looks like a misconfiguration. I do agree that this is unexpected for anybody who is not really familiar with the concept of the movable zone, but we should probably call out all these details rather than tweak the existing semantic.
Could you be more specific about the usecase here? Why do you need a
binding to a pure movable node?
--
Michal Hocko
SUSE Labs
On Tue 03-11-20 13:32:28, Minchan Kim wrote:
> On Tue, Nov 03, 2020 at 10:35:50AM +0100, Michal Hocko wrote:
> > On Mon 02-11-20 12:29:24, Suren Baghdasaryan wrote:
> > [...]
> > > To follow up on this. Should I post an RFC implementing SIGKILL_SYNC
> > > which in
hich GFP_MOVABLE allocations aren't really movable.
Absolutely agreed. What is even worse, the proposed approach doesn't really add any new guarantee. Just look at how the new flag is used for any anonymous page, and that is subject to long term pinning as well.
So in the end a new and co
(except for LTP driven ones). So we might be really close to
simply drop this functionality completely. This would simplify the code
and prevent from future surprises.
Thanks!
--
Michal Hocko
SUSE Labs
discussion moving forward?
Yeah, having code, even preliminary, might help here. This definitely needs a go-ahead from process management people as that area is a land full of surprises...
--
Michal Hocko
SUSE Labs
, kernel_page_present() used
> along PG_reserved in hibernation code will always return "true"
> on powerpc, resulting in the pages getting touched. It's too
> generic - e.g., indicates boot allocations.
>
> Note 3: For now, we keep using memory_block
On Mon 02-11-20 17:53:14, Rong Chen wrote:
>
>
> On 11/2/20 5:27 PM, Michal Hocko wrote:
> > On Mon 02-11-20 17:15:43, kernel test robot wrote:
> > > Greeting,
> > >
> > > FYI, we noticed a -22.7% regression of will-it-scale.per_process_ops due
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
I really fail to see how this can be anything other than a data structure layout change. There is one counter less.
btw. are cgroups configured at all? What would be the configuration?
--
Michal Hocko
SUSE Labs
>
> Signed-off-by: Hui Su
Acked-by: Michal Hocko
Thanks!
> ---
> mm/oom_kill.c | 14 --
> 1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 8b84661a6410..04b19b7b5435 100644
> --- a/mm/oom_kill.c
On Fri 30-10-20 18:24:25, Muchun Song wrote:
> On Fri, Oct 30, 2020 at 5:14 PM Michal Hocko wrote:
> >
> > On Mon 26-10-20 22:50:55, Muchun Song wrote:
> > > If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
> > > substantial gain.
On Fri 30-10-20 10:35:43, Zi Yan wrote:
> On 30 Oct 2020, at 9:36, Michal Hocko wrote:
>
> > On Fri 30-10-20 08:20:50, Zi Yan wrote:
> >> On 30 Oct 2020, at 5:43, Michal Hocko wrote:
> >>
> >>> [Cc Vlastimil]
> >>>
> >&g
On Fri 30-10-20 08:20:50, Zi Yan wrote:
> On 30 Oct 2020, at 5:43, Michal Hocko wrote:
>
> > [Cc Vlastimil]
> >
> > On Thu 29-10-20 16:04:35, Zi Yan wrote:
> >> From: Zi Yan
> >>
> >> In isolate_migratepages_block, when cc->alloc_cont
> cc->nr_migratepages += thp_nr_pages(page);
> + nr_isolated += thp_nr_pages(page);
Does thp_nr_pages work for __PageMovable pages?
--
Michal Hocko
SUSE Labs
pmd based (especially for 2MB hugetlb). Also, how expensive is the
vmemmap page table reconstruction on the freeing path?
Thanks!
--
Michal Hocko
SUSE Labs
On Fri 30-10-20 15:27:51, Huang, Ying wrote:
> Michal Hocko writes:
>
> > On Wed 28-10-20 10:34:10, Huang Ying wrote:
> >> To follow code-of-conduct better.
> >
> > This is changing a user visible interface and any userspace which refers
> > to the existin
On Thu 29-10-20 09:01:37, Shakeel Butt wrote:
> On Thu, Oct 29, 2020 at 2:08 AM Michal Hocko wrote:
> >
> > On Wed 28-10-20 11:50:13, Muchun Song wrote:
> > [...]
> > > -struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct
> > > pglist_d
On Wed 28-10-20 15:43:46, Andrew Morton wrote:
> On Tue, 27 Oct 2020 11:25:19 +0100 Michal Hocko wrote:
>
> > I have noticed this fix and I do not see it in the mmotm tree.
> > Is there anything blocking this patch or it simply fall through cracks?
>
> It's merge
d)
I thought I had made it clear that this is not a good approach. Please do not repost a new version without that being addressed. If there are any questions then feel free to ask for details.
--
Michal Hocko
SUSE Labs
n Riel
> Cc: Johannes Weiner
> Cc: Dave Hansen
> Cc: Andi Kleen
> Cc: Michal Hocko
> Cc: David Rientjes
> Cc: Rafael Aquini
> ---
> include/uapi/linux/mempolicy.h | 2 +-
> kernel/sched/debug.c | 2 +-
> mm/mempolicy.c | 6 +++---
>
The current logic is to dump if there are more slabs than LRU pages, which should be pretty obvious from the code. Why this rather than e.g. slab * k > lru? Well, no strong reason, AFAIK. We just want to catch too-much-slab-memory cases.
--
Michal Hocko
SUSE Labs
On Tue 27-10-20 23:11:56, Hui Su wrote:
> On Tue, Oct 27, 2020 at 03:58:14PM +0100, Michal Hocko wrote:
> > On Tue 27-10-20 22:45:29, Hui Su wrote:
> > > is_dump_unreclaim_slabs() just check whether nr_unreclaimable
> > > slabs amount is greater than user memo
ne void *slab_alloc_node(struct
> kmem_cache *s,
>
> object = c->freelist;
> page = c->page;
> - if (unlikely(!object || !node_match(page, node))) {
> + if (unlikely(!object || !page || !node_match(page, node))) {
> object = __slab_alloc(s, gfpflags, node, addr, c);
> } else {
> void *next_object = get_freepointer_safe(s, object);
> --
> 2.29.1
>
--
Michal Hocko
SUSE Labs
On Tue 27-10-20 22:15:16, Muchun Song wrote:
> On Tue, Oct 27, 2020 at 9:36 PM Michal Hocko wrote:
> >
> > On Tue 27-10-20 16:02:56, Muchun Song wrote:
> > > We can reuse the code of mem_cgroup_lruvec() to simplify the code
> > > of the mem_cgroup_page_lruvec(
> if (should_dump_unreclaim_slabs())
> dump_unreclaimable_slab();
> }
> if (sysctl_oom_dump_tasks)
> --
> 2.25.1
>
--
Michal Hocko
SUSE Labs
On Tue 27-10-20 15:39:46, Laurent Dufour wrote:
> Le 27/10/2020 à 15:24, Michal Hocko a écrit :
> > [Cc Vlastimil]
> >
> > On Tue 27-10-20 15:09:26, Laurent Dufour wrote:
> > > While doing memory hot-unplug operation on a PowerPC VM running 1024 CPUs
> > >
int nid)
This is just a wrong interface. Either take nid or pgdat. You do not want both because that just begs for wrong usage.
--
Michal Hocko
SUSE Labs
== page)
> @@ -2713,12 +2719,7 @@ int split_huge_page_to_list(struct page *page, struct
> list_head *list)
> }
>
> __split_huge_page(page, list, end, flags);
> - if (PageSwapCache(head)) {
> - swp_entry_t entry = { .val = page_private(head) };
> -
> - ret = split_swap_cluster(entry);
> - } else
> - ret = 0;
> + ret = 0;
> } else {
> if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
> pr_alert("total_mapcount: %u, page_count(): %u\n",
> --
> 2.28.0
--
Michal Hocko
SUSE Labs
On Mon 26-10-20 14:08:01, Andrew Morton wrote:
[...]
> From: Andrew Morton
> Subject:
> mm-memcontrol-correct-the-nr_anon_thps-counter-of-hierarchical-memcg-fix
>
> fix printk warning
>
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: zhongjiang-ali
>
r unreclaimable slabs amount is greater than
> + * all user memory(LRU pages).
> */
> static bool is_dump_unreclaim_slabs(void)
> {
> --
> 2.25.1
>
>
--
Michal Hocko
SUSE Labs
es sense to use the global one. Is anybody aware of
usecases where a mount specific configuration would make sense?
--
Michal Hocko
SUSE Labs
per cgroup. We already track file
> THP and shmem THP per node, so making them per-cgroup is only a matter
> of switching from node to lruvec counters. All callsites are in places
> where the pages are charged and locked, so page->memcg is stable.
>
> Signed-off-by: Johanne
On Thu 22-10-20 12:06:01, Rik van Riel wrote:
> On Thu, 2020-10-22 at 17:50 +0200, Michal Hocko wrote:
> > On Thu 22-10-20 09:25:21, Rik van Riel wrote:
> > > On Thu, 2020-10-22 at 10:15 +0200, Michal Hocko wrote:
> > > > On Wed 21-10-20 23:48:46, Rik van Riel wr
On Thu 22-10-20 09:25:21, Rik van Riel wrote:
> On Thu, 2020-10-22 at 10:15 +0200, Michal Hocko wrote:
> > On Wed 21-10-20 23:48:46, Rik van Riel wrote:
> > > The allocation flags of anonymous transparent huge pages can be
> > > controlled
> > >
&pvma);
> + page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(),
> +true);
> shmem_pseudo_vma_destroy(&pvma);
> if (page)
> prep_transhuge_page(page);
>
> --
> All rights reversed.
--
Michal Hocko
SUSE Labs
ce folks even though there are enough memory?
> => Either we can introduce ENOVMEM (Cannot create virtual memory mapping)
> => Or, update the documentation with approach to further debug this issue?
No, it is close to impossible to add a new error code for an interface that is used so heavily.
--
Michal Hocko
SUSE Labs
his in the code
would be preferred.
No objection to the change.
> Signed-off-by: Muchun Song
With an improved changelog
Acked-by: Michal Hocko
> ---
> include/linux/page-flags.h | 2 ++
> mm/memory.c | 4 ++--
> 2 files changed, 4 insertions(+), 2 deletions(-)
ode == NUMA_NO_NODE)
> - hctx->numa_node =
> local_memory_node(cpu_to_node(i));
> + hctx->numa_node = cpu_to_node(i);
> }
> }
> }
> --
> 2.17.1
--
Michal Hocko
SUSE Labs
resp. page_table_lock again..
> >
> > Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
> > MPOL_MF_STRICT is specified")
Cc: stable
is due as well. There are even security concerns and I wouldn't be
surprised if this gained a CVE.
> >
allocator should fall back to the proper node. As long as __GFP_THISNODE is not enforced, of course.
--
Michal Hocko
SUSE Labs
nd() return -EIO when
MPOL_MF_STRICT is specified")
"
> Signed-off-by: Shijie Luo
> Signed-off-by: Michal Hocko
> Signed-off-by: Miaohe Lin
No need to add my s-o-b.
> ---
> mm/mempolicy.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
rnel. It can reduce throughput, but this is a memory reclaim path and I do not expect this to contribute to any moderately hot paths. Direct reclaim doesn't really count as a hot path.
--
Michal Hocko
SUSE Labs
On Fri 16-10-20 15:15:32, Michal Hocko wrote:
> On Fri 16-10-20 15:11:17, Michal Hocko wrote:
> > On Fri 16-10-20 14:37:08, osalva...@suse.de wrote:
> > > On 2020-10-16 14:31, Michal Hocko wrote:
> > > > I do not like the fix though. The code is really confusing. W
On Fri 16-10-20 15:11:17, Michal Hocko wrote:
> On Fri 16-10-20 14:37:08, osalva...@suse.de wrote:
> > On 2020-10-16 14:31, Michal Hocko wrote:
> > > I do not like the fix though. The code is really confusing. Why should
> > > we check for flags in each iteration of the
On Fri 16-10-20 14:37:08, osalva...@suse.de wrote:
> On 2020-10-16 14:31, Michal Hocko wrote:
> > I do not like the fix though. The code is really confusing. Why should
> > we check for flags in each iteration of the loop when it cannot change?
> > Also why should we take the
some time. do_shrink_slab cannot make any forward progress and effectively busy loops. Unless the caller does cond_resched it might cause soft lockups.
Anyway, let me try to ask again. Why would this be a problem that deserves a fix?
>
> -Original Message-
> From: Michal
need migrate other LRU pages.
+*/
+ if (migrate_page_add(page, qp->pagelist, flags))
+ has_unmovable = true;
}
pte_unmap_unlock(pte - 1, ptl);
cond_resched();
if (has_unmovable)
return 1;
return addr
t;
> up_read(&shrinker_rwsem);
> -out:
> +
> cond_resched();
> +out:
> return freed;
> }
>
> --
> 2.17.1
>
--
Michal Hocko
SUSE Labs
On Wed 14-10-20 09:57:20, Suren Baghdasaryan wrote:
> On Wed, Oct 14, 2020 at 5:09 AM Michal Hocko wrote:
[...]
> > > > The need is similar to why oom-reaper was introduced - when a process
> > > > is being killed to free memory we want to make sure memory is freed
&g
be compiled out.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Johannes Weiner
Acked-by: Michal Hocko
> ---
> cover.txt | 4 ++--
> include/linux/memcontrol.h | 37 +
> include/linux/page-flags.h | 11 ++-
&g
memcg flags, defined in enum
> page_memcg_data_flags.
>
> Additional flags might be added later.
>
> Signed-off-by: Roman Gushchin
> Reviewed-by: Shakeel Butt
Acked-by: Michal Hocko
> ---
> include/linux/memcontrol.h | 32
&g
struct page's
> mem_cgroup/obj_cgroups is converted to unsigned long memcg_data.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Johannes Weiner
Acked-by: Michal Hocko
> ---
> fs/buffer.c | 2 +-
> fs/iomap/buffered-io.c | 2 +
e past. Essentially SIG_KILL_SYNC, which would not only send the signal but would also start a teardown of resources owned by the task - at least those we can remove safely. The interface would be much simpler and less tricky to use. You just make your userspace oom killer or potentially other users call SIG_KILL_SYNC, which will be more expensive, but you would at least know that as many resources have been freed as the kernel can afford at the moment.
--
Michal Hocko
SUSE Labs
_, ret,
> s->object_size, s->size, gfpflags, node);
> @@ -2935,7 +2938,10 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
> gfp_t gfpflags,
> int node, size_t size)
> {
> - void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
> + void *ret;
> +
> + node = local_memory_node(node);
> + ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
>
> trace_kmalloc_node(_RET_IP_, ret,
> size, s->size, gfpflags, node);
> --
> 2.17.1
>
--
Michal Hocko
SUSE Labs
with else
> equally.
>
> Signed-off-by: Miaohe Lin
Acked-by: Michal Hocko
I believe this is a result of very unreadable code. Resp. the comment makes it hard to follow. It would be slightly better to simply drop the comment, which doesn't really explain much IMHO.
> ---
>
et me know if I should repost.
--
Michal Hocko
SUSE Labs
From: Michal Hocko
Many people are still relying on pre-built distribution kernels and so distributions have to provide multiple kernel flavors to offer different preemption models. Most of them are providing PREEMPT_NONE for typical server deployments and PREEMPT_VOLUNTARY for desktop users
From: Michal Hocko
PREEMPT_VOLUNTARY is fully arch agnostic so there shouldn't be any
reason to restrict this preemption mode by ARCH_NO_PREEMPT.
Signed-off-by: Michal Hocko
---
kernel/Kconfig.preempt | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/Kconfig.preempt b/k
Hi,
let me repost the pile that has grown from the initial patch based on
the review feedback I have collected from Peter. I do realize that he
also suggested to go from the other direction and implement this for the
full preemption mode first. As I've said I believe this would require to
examine a
From: Michal Hocko
Follow up patch would like to add a static key into kernel.h and that
requires a declaration of the key in the same file. Including
jump_label.h into kernel.h is not possible due to headers dependencies.
Separate parts needed for declaration into its own header which doesn
From: Michal Hocko
Boot time preemption mode selection is currently hardcoded for
!CONFIG_PREEMPTION. Peter has suggested introducing a dedicated
option for the functionality because not every architecture implements
static branches (jump labels) effectively and therefore
an additional
From: Michal Hocko
Now that the preempt_mode command line parameter supports both preempt_none
and preempt_voluntary, we do not necessarily need a config option for
this preemption mode and we can reduce the overall config space a bit.
Suggested-by: Peter Zijlstra
Signed-off-by: Michal Hocko
On Fri 09-10-20 13:17:04, Michal Hocko wrote:
[...]
> +config PREEMPT_DYNAMIC
> + bool "Allow boot time preemption model selection"
depends on !ARCH_NO_PREEMPT
> + depends on PREEMPT_NONE || PREEMPT_VOLUNTARY
> + help
> + This option allows
On Fri 09-10-20 12:48:09, Michal Hocko wrote:
[...]
> I will add the CONFIG_PREEMPT_DYNAMIC in the next version. I just have
> to think whether flipping the direction is really safe and easier in the
> end. For our particular usecase we are more interested in
> NONE<->VOLUNTARY
On Fri 09-10-20 12:20:09, Peter Zijlstra wrote:
> On Fri, Oct 09, 2020 at 12:14:05PM +0200, Michal Hocko wrote:
> > On Fri 09-10-20 11:47:41, Peter Zijlstra wrote:
>
> > > That is, work backwards (from PREEMPT back to VOLUNTARY) instead of the
> > > other way around.
On Fri 09-10-20 12:14:31, Peter Zijlstra wrote:
> On Fri, Oct 09, 2020 at 12:10:44PM +0200, Michal Hocko wrote:
> > On Fri 09-10-20 11:42:45, Peter Zijlstra wrote:
> > > On Fri, Oct 09, 2020 at 11:12:18AM +0200, Michal Hocko wrote:
> > > > Is there any additional f
On Fri 09-10-20 11:47:41, Peter Zijlstra wrote:
> On Wed, Oct 07, 2020 at 02:35:53PM +0200, Michal Hocko wrote:
> > On Wed 07-10-20 14:21:44, Peter Zijlstra wrote:
> > > On Wed, Oct 07, 2020 at 02:04:01PM +0200, Michal Hocko wrote:
> > > > I wanted to make s
On Fri 09-10-20 11:42:45, Peter Zijlstra wrote:
> On Fri, Oct 09, 2020 at 11:12:18AM +0200, Michal Hocko wrote:
> > Is there any additional feedback? Should I split up the patch and repost
> > for inclusion?
>
> Maybe remove PREEMPT_NONE after that? Since that's the
On Wed 07-10-20 14:04:01, Michal Hocko wrote:
> From: Michal Hocko
>
> Many people are still relying on pre built distribution kernels and so
> distributions have to provide multiple kernel flavors to offer different
> preemption models. Most of them are providing PREEMPT_N
On Thu 08-10-20 14:56:13, Vlastimil Babka wrote:
> On 10/8/20 2:23 PM, Michal Hocko wrote:
> > On Thu 08-10-20 13:41:57, Vlastimil Babka wrote:
> > > We initialize boot-time pagesets with setup_pageset(), which sets high and
> > > batch values that effectively disable pc
e take pcp_batch_high_lock in zone_pcp_disable() and release it in
> zone_pcp_enable(). This also synchronizes multiple users of
> zone_pcp_disable()/enable().
>
> Currently the only user of this functionality is offline_pages().
Thanks for simplifying the implementation!
> Suggeste