Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent
Hi, RJ

On 2016-07-26 19:53, Rafael J. Wysocki wrote:
> On Tuesday, July 26, 2016 11:59:38 AM Dou Liyang wrote:
>> On 2016-07-26 07:20, Andrew Morton wrote:
>>> On Mon, 25 Jul 2016 16:35:42 +0800 Dou Liyang wrote:
>>>
>>>> [Problem]
>>>>
>>>> The cpuid <-> nodeid mapping is first established at boot time, and the
>>>> workqueue caches the mapping in wq_numa_possible_cpumask in
>>>> wq_numa_init(), also at boot time.
>>>>
>>>> When a node goes online/offline, its cpuid <-> nodeid mappings are
>>>> established/destroyed, which means the cpuid <-> nodeid mapping will
>>>> change if node hotplug happens. But the workqueue does not update
>>>> wq_numa_possible_cpumask.
>>>>
>>>> So here is the problem:
>>>>
>>>> Assume we have the following cpuid <-> nodeid mapping in the beginning:
>>>>
>>>> Node | CPU
>>>> node 0 | 0-14, 60-74
>>>> node 1 | 15-29, 75-89
>>>> node 2 | 30-44, 90-104
>>>> node 3 | 45-59, 105-119
>>>>
>>>> If we hot-remove node2 and node3, it becomes:
>>>>
>>>> Node | CPU
>>>> node 0 | 0-14, 60-74
>>>> node 1 | 15-29, 75-89
>>>>
>>>> and if we then hot-add node4 and node5, it becomes:
>>>>
>>>> Node | CPU
>>>> node 0 | 0-14, 60-74
>>>> node 1 | 15-29, 75-89
>>>> node 4 | 30-59
>>>> node 5 | 90-119
>>>>
>>>> But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and
>>>> likewise for the others.
>>>>
>>>> When a pool workqueue is initialized, if its cpumask belongs to a node,
>>>> its pool->node will be mapped to that node, and memory used by this
>>>> workqueue will also be allocated on that node.
>>>
>>> Plan B is to hunt down and fix up all the workqueue structures at
>>> hotplug-time. Has that option been evaluated?
>>
>> Yes, that option was evaluated in this patch:
>> http://www.gossamer-threads.com/lists/linux/kernel/2116748
>>
>>> Your fix is x86-only and this bug presumably affects other
>>> architectures, yes? I think a "Plan B" would fix all architectures?
>>
>> Yes, the bug presumably affects the few architectures that support both
>> CPU hotplug and NUMA.
>>
>> We sent the "Plan B" to the community and got a lot of advice and ideas.
>> Based on those suggestions, we carefully weighed the two plans and chose
>> the first.
>>
>>> Thirdly, what is the merge path for these patches? Is an x86
>>> or ACPI maintainer working with you on them?
>>
>> Yes, we got a lot of guidance and help from RJ, who is an ACPI maintainer.
>
> FWIW, the patches are fine by me from the ACPI perspective. If you want
> me to apply them, though, ACKs from the x86 and mm maintainers will be
> necessary.
>
> Thanks,
> Rafael

I will continue to investigate this bug and wait for the maintainers'
advice.

Thanks,
Dou
Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent
On Tuesday, July 26, 2016 11:59:38 AM Dou Liyang wrote:
> On 2016-07-26 07:20, Andrew Morton wrote:
> > On Mon, 25 Jul 2016 16:35:42 +0800 Dou Liyang wrote:
> >
> >> [Problem]
> >>
> >> The cpuid <-> nodeid mapping is first established at boot time, and the
> >> workqueue caches the mapping in wq_numa_possible_cpumask in
> >> wq_numa_init(), also at boot time.
> >>
> >> When a node goes online/offline, its cpuid <-> nodeid mappings are
> >> established/destroyed, which means the cpuid <-> nodeid mapping will
> >> change if node hotplug happens. But the workqueue does not update
> >> wq_numa_possible_cpumask.
> >>
> >> So here is the problem:
> >>
> >> Assume we have the following cpuid <-> nodeid mapping in the beginning:
> >>
> >> Node | CPU
> >> node 0 | 0-14, 60-74
> >> node 1 | 15-29, 75-89
> >> node 2 | 30-44, 90-104
> >> node 3 | 45-59, 105-119
> >>
> >> If we hot-remove node2 and node3, it becomes:
> >>
> >> Node | CPU
> >> node 0 | 0-14, 60-74
> >> node 1 | 15-29, 75-89
> >>
> >> and if we then hot-add node4 and node5, it becomes:
> >>
> >> Node | CPU
> >> node 0 | 0-14, 60-74
> >> node 1 | 15-29, 75-89
> >> node 4 | 30-59
> >> node 5 | 90-119
> >>
> >> But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and
> >> likewise for the others.
> >>
> >> When a pool workqueue is initialized, if its cpumask belongs to a node,
> >> its pool->node will be mapped to that node, and memory used by this
> >> workqueue will also be allocated on that node.
> >
> > Plan B is to hunt down and fix up all the workqueue structures at
> > hotplug-time. Has that option been evaluated?
>
> Yes, that option was evaluated in this patch:
> http://www.gossamer-threads.com/lists/linux/kernel/2116748
>
> > Your fix is x86-only and this bug presumably affects other
> > architectures, yes? I think a "Plan B" would fix all architectures?
>
> Yes, the bug presumably affects the few architectures that support both
> CPU hotplug and NUMA.
>
> We sent the "Plan B" to the community and got a lot of advice and ideas.
> Based on those suggestions, we carefully weighed the two plans and chose
> the first.
>
> > Thirdly, what is the merge path for these patches? Is an x86
> > or ACPI maintainer working with you on them?
>
> Yes, we got a lot of guidance and help from RJ, who is an ACPI maintainer.

FWIW, the patches are fine by me from the ACPI perspective. If you want me
to apply them, though, ACKs from the x86 and mm maintainers will be
necessary.

Thanks,
Rafael
Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent
On 2016-07-26 07:20, Andrew Morton wrote:
> On Mon, 25 Jul 2016 16:35:42 +0800 Dou Liyang wrote:
>
>> [Problem]
>>
>> The cpuid <-> nodeid mapping is first established at boot time, and the
>> workqueue caches the mapping in wq_numa_possible_cpumask in
>> wq_numa_init(), also at boot time.
>>
>> When a node goes online/offline, its cpuid <-> nodeid mappings are
>> established/destroyed, which means the cpuid <-> nodeid mapping will
>> change if node hotplug happens. But the workqueue does not update
>> wq_numa_possible_cpumask.
>>
>> So here is the problem:
>>
>> Assume we have the following cpuid <-> nodeid mapping in the beginning:
>>
>> Node | CPU
>> node 0 | 0-14, 60-74
>> node 1 | 15-29, 75-89
>> node 2 | 30-44, 90-104
>> node 3 | 45-59, 105-119
>>
>> If we hot-remove node2 and node3, it becomes:
>>
>> Node | CPU
>> node 0 | 0-14, 60-74
>> node 1 | 15-29, 75-89
>>
>> and if we then hot-add node4 and node5, it becomes:
>>
>> Node | CPU
>> node 0 | 0-14, 60-74
>> node 1 | 15-29, 75-89
>> node 4 | 30-59
>> node 5 | 90-119
>>
>> But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and
>> likewise for the others.
>>
>> When a pool workqueue is initialized, if its cpumask belongs to a node,
>> its pool->node will be mapped to that node, and memory used by this
>> workqueue will also be allocated on that node.
>
> Plan B is to hunt down and fix up all the workqueue structures at
> hotplug-time. Has that option been evaluated?

Yes, that option was evaluated in this patch:
http://www.gossamer-threads.com/lists/linux/kernel/2116748

> Your fix is x86-only and this bug presumably affects other
> architectures, yes? I think a "Plan B" would fix all architectures?

Yes, the bug presumably affects the few architectures that support both
CPU hotplug and NUMA.

We sent the "Plan B" to the community and got a lot of advice and ideas.
Based on those suggestions, we carefully weighed the two plans and chose
the first.

> Thirdly, what is the merge path for these patches? Is an x86
> or ACPI maintainer working with you on them?

Yes, we got a lot of guidance and help from RJ, who is an ACPI maintainer.

Thanks,
Dou
Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent
Hello,

On Mon, Jul 25, 2016 at 05:25:49PM -0700, Andrew Morton wrote:
> > Yeah, that was one of the early approaches. The issue isn't limited
> > to wq. Any memory allocation can have similar issues of underlying
> > node association changing and we don't have any synchronization
> > mechanism around it. It doesn't make any sense to make NUMA
> > association dynamic when the consumer surface is vastly larger and
> > there's nothing inherently dynamic about the association itself.
>
> And other architectures?

No idea, but it only matters for the NUMA + CPU hotplug combination, where
a whole node can go empty, which would at most be a few archs.

Thanks.

--
tejun
Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent
Hello, Andrew.

On Mon, Jul 25, 2016 at 04:20:22PM -0700, Andrew Morton wrote:
> > When a pool workqueue is initialized, if its cpumask belongs to a node,
> > its pool->node will be mapped to that node. And memory used by this
> > workqueue will also be allocated on that node.
>
> Plan B is to hunt down and fix up all the workqueue structures at
> hotplug-time. Has that option been evaluated?
>
> Your fix is x86-only and this bug presumably affects other
> architectures, yes? I think a "Plan B" would fix all architectures?

Yeah, that was one of the early approaches. The issue isn't limited
to wq. Any memory allocation can have similar issues of the underlying
node association changing, and we don't have any synchronization
mechanism around it. It doesn't make any sense to make the NUMA
association dynamic when the consumer surface is vastly larger and
there's nothing inherently dynamic about the association itself.

Thanks.

--
tejun
Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent
On Mon, 25 Jul 2016 20:11:51 -0400 Tejun Heo wrote:
> Hello, Andrew.
>
> On Mon, Jul 25, 2016 at 04:20:22PM -0700, Andrew Morton wrote:
> > > When a pool workqueue is initialized, if its cpumask belongs to a
> > > node, its pool->node will be mapped to that node. And memory used by
> > > this workqueue will also be allocated on that node.
> >
> > Plan B is to hunt down and fix up all the workqueue structures at
> > hotplug-time. Has that option been evaluated?
> >
> > Your fix is x86-only and this bug presumably affects other
> > architectures, yes? I think a "Plan B" would fix all architectures?
>
> Yeah, that was one of the early approaches. The issue isn't limited
> to wq. Any memory allocation can have similar issues of underlying
> node association changing and we don't have any synchronization
> mechanism around it. It doesn't make any sense to make NUMA
> association dynamic when the consumer surface is vastly larger and
> there's nothing inherently dynamic about the association itself.

And other architectures?
Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent
On Mon, 25 Jul 2016 16:35:42 +0800 Dou Liyang wrote:
> [Problem]
>
> The cpuid <-> nodeid mapping is first established at boot time, and the
> workqueue caches the mapping in wq_numa_possible_cpumask in
> wq_numa_init(), also at boot time.
>
> When a node goes online/offline, its cpuid <-> nodeid mappings are
> established/destroyed, which means the cpuid <-> nodeid mapping will
> change if node hotplug happens. But the workqueue does not update
> wq_numa_possible_cpumask.
>
> So here is the problem:
>
> Assume we have the following cpuid <-> nodeid mapping in the beginning:
>
> Node | CPU
> node 0 | 0-14, 60-74
> node 1 | 15-29, 75-89
> node 2 | 30-44, 90-104
> node 3 | 45-59, 105-119
>
> If we hot-remove node2 and node3, it becomes:
>
> Node | CPU
> node 0 | 0-14, 60-74
> node 1 | 15-29, 75-89
>
> and if we then hot-add node4 and node5, it becomes:
>
> Node | CPU
> node 0 | 0-14, 60-74
> node 1 | 15-29, 75-89
> node 4 | 30-59
> node 5 | 90-119
>
> But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and
> likewise for the others.
>
> When a pool workqueue is initialized, if its cpumask belongs to a node,
> its pool->node will be mapped to that node, and memory used by this
> workqueue will also be allocated on that node.

Plan B is to hunt down and fix up all the workqueue structures at
hotplug-time. Has that option been evaluated?

Your fix is x86-only and this bug presumably affects other
architectures, yes? I think a "Plan B" would fix all architectures?

Thirdly, what is the merge path for these patches? Is an x86 or ACPI
maintainer working with you on them?