At 10/10/2012 07:27 AM, David Rientjes wrote:
> On Tue, 9 Oct 2012, Peter Zijlstra wrote:
> 
>> Well, the code they were patching is in the wakeup path. As I think Tang
>> said, we leave !runnable tasks on whatever cpu they ran on last, even if
>> that cpu is offlined; we try to fix up the state when we get a wakeup.
>>
>> On wakeup, it tries to find a cpu to run on and will try a cpu of the
>> same node first.
>>
>> Now if that node's entirely gone away, it appears the cpu_to_node() map
>> will not return a valid node number.
>>
>> I think that's a change in behaviour; it didn't use to do that afaik.
>> Certainly this code hasn't changed in a while.
>>
> 
> If cpu_to_node() always returns a valid node id even if all cpus on the 
> node are offline, then the cpumask_of_node() implementation, which the 
> sched code is using, should either return an empty cpumask (if 
> node_to_cpumask_map[nid] isn't freed) or cpu_online_mask.  The change in 
> behavior here occurred because 
> cpu_hotplug-unmap-cpu2node-when-the-cpu-is-hotremoved.patch in -mm doesn't 
> return a valid node id and forces it to return -1, so a kzalloc_node(..., 
> -1) falls back to allocating anywhere.
> 
> But if you only need cpu_to_node() when waking up to find a runnable cpu 
> for this NUMA information, then I think you can just change the 
> kzalloc_node() in alloc_{fair,rt}_sched_group() to do 
> kzalloc(..., cpu_online(cpu) ? cpu_to_node(cpu) : NUMA_NO_NODE).
> 
>  [ The changelog here is confusing because it's fixing a problem in 
>    linux-next without saying so. ]
> 
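
If I read that correctly, the suggested change would look roughly like this
in alloc_fair_sched_group() (untested sketch only; alloc_rt_sched_group()
would need the same treatment):

	/* sketch: fall back to any node when the cpu's node is gone */
	cfs_rq = kzalloc_node(sizeof(struct cfs_rq), GFP_KERNEL,
			      cpu_online(i) ? cpu_to_node(i) : NUMA_NO_NODE);
	if (!cfs_rq)
		goto err;

	se = kzalloc_node(sizeof(struct sched_entity), GFP_KERNEL,
			  cpu_online(i) ? cpu_to_node(i) : NUMA_NO_NODE);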

I don't agree with that approach. It only fixes the one caller that happens
to trip over the problem, and we can't be sure there are no similar problems
elsewhere. That is why I clear the cpu-to-node mapping instead.

What about the following solution:
1. Clear the cpu-to-node mapping when the node is offlined (a rough sketch
   of this is below).
2. Tang's patch is still necessary, because we leave !runnable tasks on
   whatever cpu they last ran on. If that cpu's node is NUMA_NO_NODE, the
   whole node has been offlined, and we must migrate the task to another
   node.
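
To make (1) concrete, the idea is roughly the following (illustrative sketch
only, not the actual -mm patch; the helper name is made up, and
set_cpu_numa_node() assumes CONFIG_USE_PERCPU_NUMA_NODE_ID; on x86,
numa_clear_node() plays the same role):

	#include <linux/cpumask.h>	/* for_each_possible_cpu() */
	#include <linux/topology.h>	/* cpu_to_node(), set_cpu_numa_node() */
	#include <linux/numa.h>		/* NUMA_NO_NODE */

	/*
	 * Clear the cpu-to-node mapping of every cpu that belonged to the
	 * now-offline node, so that cpu_to_node() returns NUMA_NO_NODE
	 * instead of a stale node id.
	 */
	static void unmap_cpus_on_node(int nid)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			if (cpu_to_node(cpu) == nid)
				set_cpu_numa_node(cpu, NUMA_NO_NODE);
	}

This would be called from the node offline path, once all cpus and memory
of the node have gone away.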

Thanks
Wen Congyang