Hi Michal,
        Thanks for your comments! As discussed, we will
rework the patch set in another direction to hide memoryless
nodes from normal slab users.
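
(For context, a rough sketch of the idea of hiding memoryless nodes inside
the slab entry points; this is only an illustration under assumed names
such as slab_node_fixup(), not the reworked patch itself:)

/*
 * Hypothetical illustration only: remap a memoryless node to the
 * nearest node with memory before the request reaches the slab
 * allocator, so callers can keep passing cpu_to_node()/numa_node_id()
 * unchanged.
 */
static inline int slab_node_fixup(int node)
{
	if (node == NUMA_NO_NODE)		/* "no preference" stays as is */
		return node;
	if (!node_state(node, N_MEMORY))	/* node has no local memory */
		return numa_mem_id();		/* nearest node with memory */
	return node;
}
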
Regards!
Gerry

On 2014/7/18 15:36, Michal Hocko wrote:
> On Fri 11-07-14 15:37:26, Jiang Liu wrote:
>> When CONFIG_HAVE_MEMORYLESS_NODES is enabled, cpu_to_node()/numa_node_id()
>> may return a node without memory, which can later cause a system
>> failure/panic when calling kmalloc_node() and friends with the returned
>> node id. So use cpu_to_mem()/numa_mem_id() instead to get the nearest
>> node with memory for the current cpu.
>>
>> If CONFIG_HAVE_MEMORYLESS_NODES is disabled, cpu_to_mem()/numa_mem_id()
>> is the same as cpu_to_node()/numa_node_id().
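
(A minimal sketch of the pattern the changelog describes, assuming a
hypothetical caller; this is not code from the patch:)

#include <linux/slab.h>
#include <linux/topology.h>

static void *alloc_near_current_cpu(size_t size)
{
	/*
	 * numa_node_id() may name a memoryless node when
	 * CONFIG_HAVE_MEMORYLESS_NODES=y; numa_mem_id() falls back to the
	 * nearest node that actually has memory, and equals numa_node_id()
	 * when the option is disabled.
	 */
	return kmalloc_node(size, GFP_KERNEL, numa_mem_id());
}
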
> 
> The change makes a difference only for really tiny memcgs. If all pages
> really are on the unevictable list, or are anon with no swap allowed, and
> that is the reason why no node is set in the scan_nodes mask, then
> reclaiming on a memoryless node or any arbitrarily close one doesn't make
> any difference. The current memcg might not have any memory on that node
> at all.
> 
> So the change doesn't make any practical difference and the changelog is
> misleading.
> 
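
(For readers following the argument, a paraphrased and simplified sketch of
the fallback path under discussion; the field and helper names follow
mm/memcontrol.c of that era, but this is not the exact code:)

/* Simplified paraphrase of mem_cgroup_select_victim_node(), illustration only. */
static int select_victim_node_sketch(struct mem_cgroup *memcg)
{
	int node = next_node(memcg->last_scanned_node, memcg->scan_nodes);

	if (node == MAX_NUMNODES)		/* wrapped past the last set node */
		node = first_node(memcg->scan_nodes);
	if (unlikely(node == MAX_NUMNODES))	/* scan_nodes is empty */
		node = numa_node_id();		/* the hunk below changes this line */

	memcg->last_scanned_node = node;
	return node;
}
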
>> Signed-off-by: Jiang Liu <jiang....@linux.intel.com>
>> ---
>>  mm/memcontrol.c |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index a2c7bcb0e6eb..d6c4b7255ca9 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -1933,7 +1933,7 @@ int mem_cgroup_select_victim_node(struct mem_cgroup *memcg)
>>       * we use curret node.
>>       */
>>      if (unlikely(node == MAX_NUMNODES))
>> -            node = numa_node_id();
>> +            node = numa_mem_id();
>>  
>>      memcg->last_scanned_node = node;
>>      return node;
>> -- 
>> 1.7.10.4
>>
> 