Re: [PATCH] drm/ttm: Implement strict NUMA pool allocations

2024-03-22 Thread Bhardwaj, Rajneesh



On 3/22/2024 11:29 AM, Ruhl, Michael J wrote:

-Original Message-
From: dri-devel  On Behalf Of
Rajneesh Bhardwaj
Sent: Friday, March 22, 2024 3:08 AM
To: amd-...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Cc: felix.kuehl...@amd.com; alexander.deuc...@amd.com;
christian.koe...@amd.com; Rajneesh Bhardwaj
; Joe Greathouse

Subject: [PATCH] drm/ttm: Implement strict NUMA pool allocations

This change gives TTM the flexibility to honor NUMA-localized
allocations, which can result in significant performance improvements on
a multi-socket NUMA system. On GFXIP 9.4.3 based AMD APUs, we see
manifold benefits from this change, resulting not only in ~10%
performance improvement in certain benchmarks but also in more
consistent and less sporadic results, especially when NUMA balancing is
not explicitly disabled. In certain scenarios, workloads show run-to-run
variability; e.g. HPL would show a ~10x performance drop after running
back to back 4-5 times and would recover on a subsequent run. This is
seen with other memory-intensive workloads too. It was seen that when
caches were dropped, e.g. with sudo sysctl -w vm.drop_caches=1, the
variability reduced but the performance was still well below that of a
good run.

Use of the __GFP_THISNODE flag ensures that during memory allocation the
kernel prioritizes allocations from the local or closest NUMA node,
thereby reducing memory access latency. When memory is allocated with
the __GFP_THISNODE flag, allocations will predominantly be done on the
local node; consequently, the shrinkers may prioritize reclaiming memory
from caches associated with the local node to maintain memory locality
and minimize latency, thereby providing better shrinker targeting.

Reduced memory pressure on remote nodes can also indirectly influence
shrinker behavior by potentially reducing the frequency and intensity of
memory reclamation operations on remote nodes, and could provide
improved overall system performance.

While this change could be more beneficial in general, i.e. without the
use of a module parameter, in the absence of widespread testing limit it
to the AMD GFXIP 9.4.3 based ttm pool initializations only.


Cc: Joe Greathouse 
Signed-off-by: Rajneesh Bhardwaj 
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  8 
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  7 ++-
drivers/gpu/drm/ttm/tests/ttm_pool_test.c | 10 +-
drivers/gpu/drm/ttm/ttm_device.c  |  2 +-
drivers/gpu/drm/ttm/ttm_pool.c|  7 ++-
include/drm/ttm/ttm_pool.h|  4 +++-
7 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 9c62552bec34..96532cfc6230 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -253,6 +253,7 @@ extern int amdgpu_user_partt_mode;
extern int amdgpu_agp;

extern int amdgpu_wbrf;
+extern bool strict_numa_alloc;

#define AMDGPU_VM_MAX_NUM_CTX   4096
#define AMDGPU_SG_THRESHOLD (256*1024*1024)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 80b9642f2bc4..a183a6b4493d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -781,6 +781,14 @@ int queue_preemption_timeout_ms = 9000;
module_param(queue_preemption_timeout_ms, int, 0644);
MODULE_PARM_DESC(queue_preemption_timeout_ms, "queue preemption
timeout in ms (1 = Minimum, 9000 = default)");

+/**
+ * DOC: strict_numa_alloc(bool)
+ * Policy to force NUMA allocation requests from the proximity NUMA domain
only.
+ */
+bool strict_numa_alloc;
+module_param(strict_numa_alloc, bool, 0444);
+MODULE_PARM_DESC(strict_numa_alloc, "Force NUMA allocation requests
to be satisfied from the closest node only (false = default)");
+
/**
  * DOC: debug_evictions(bool)
  * Enable extra debug messages to help determine the cause of evictions
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index b0ed10f4de60..a9f78f85e28c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1768,6 +1768,7 @@ static int amdgpu_ttm_reserve_tmr(struct
amdgpu_device *adev)

static int amdgpu_ttm_pools_init(struct amdgpu_device *adev)
{
+   bool policy = true;
int i;

if (!adev->gmc.is_app_apu || !adev->gmc.num_mem_partitions)
@@ -1779,11 +1780,15 @@ static int amdgpu_ttm_pools_init(struct
amdgpu_device *adev)
if (!adev->mman.ttm_pools)
return -ENOMEM;

+   /* Policy not only depends on the module param but also on the ASIC
+* setting use_strict_numa_alloc as well.
+*/
for (i = 0; i < adev->gmc.num_mem_partitions; i++) {
ttm_pool_init(&adev->mman.ttm_pools[i], adev->dev,
   

Re: [PATCH] drm/ttm: Implement strict NUMA pool allocations

2024-03-22 Thread Bhardwaj, Rajneesh





On 3/22/2024 9:15 AM, Christian König wrote:

On 22.03.24 at 08:07, Rajneesh Bhardwaj wrote:

This change gives TTM the flexibility to honor NUMA-localized
allocations, which can result in significant performance improvements on
a multi-socket NUMA system. On GFXIP 9.4.3 based AMD APUs, we see
manifold benefits from this change, resulting not only in ~10%
performance improvement in certain benchmarks but also in more
consistent and less sporadic results, especially when NUMA balancing is
not explicitly disabled. In certain scenarios, workloads show run-to-run
variability; e.g. HPL would show a ~10x performance drop after running
back to back 4-5 times and would recover on a subsequent run. This is
seen with other memory-intensive workloads too. It was seen that when
caches were dropped, e.g. with sudo sysctl -w vm.drop_caches=1, the
variability reduced but the performance was still well below that of a
good run.

Use of the __GFP_THISNODE flag ensures that during memory allocation the
kernel prioritizes allocations from the local or closest NUMA node,
thereby reducing memory access latency.


That's exactly what it doesn't do.

__GFP_THISNODE just means it enforces allocation from the specified node.


Thanks for the feedback, Christian.

Sure, maybe I should have made it clear in the commit message that there
is no fallback to the zonelist while allocating the slab cache. And this
is exactly what we want; we don't want the pages to land on remote nodes.
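
For illustration only (a minimal sketch, not part of the patch;
alloc_local_pages() is just a made-up helper name), the difference comes
down to whether the page allocator is allowed to fall back to the
zonelist:

#include <linux/gfp.h>

/*
 * Request @order pages on @nid. With @strict set, __GFP_THISNODE forbids
 * any fallback to other nodes, so the call returns NULL instead of
 * handing back remote-node pages; without it, @nid is only a preference
 * and the allocator may fall back to the zonelist.
 */
static struct page *alloc_local_pages(int nid, unsigned int order, bool strict)
{
	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;

	if (strict)
		gfp |= __GFP_THISNODE;

	return alloc_pages_node(nid, gfp, order);
}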




In addition to that, there is a mandatory requirement that this flag
only be used when it is needed for correctness. And that is simply not
the case here.



Sorry, I didn't quite understand this. What is the mandatory
requirement? Could you please clarify? Based on the documentation I read
about this and after reading the kernel source code, I didn't find any
hard requirement discouraging its use.





So as long as nobody can explain why that should help, this is an
absolute no-go.



Could you please clarify the controversial part here? We have strong
backing data that proves the usefulness, and besides, this change
restricts us to a certain HW IP. Also, the possibility of hitting the
OOM killer actually seems lower when we use __GFP_THISNODE.


https://elixir.bootlin.com/linux/latest/source/mm/page_alloc.c#L3439


Another important thing I want to highlight here is that for the AMD
GFXIP 9.4.3 APU, which has a unified HBM stack (no RAM/VRAM
distinction), we already get the backing VRAM pages via
ttm_pool_alloc_page() using the pool's NUMA node (nid).
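
As a rough sketch of the intended policy (this is not the actual
ttm_pool.c hunk, which is truncated above; numa_pool_gfp() is a
hypothetical helper), a pool that carries a NUMA node id and a strict
flag could compose its GFP mask like this:

#include <linux/gfp.h>
#include <linux/numa.h>

/*
 * Only apply __GFP_THISNODE when a real node was given and the strict
 * policy is enabled, so allocations never spill onto remote nodes.
 */
static gfp_t numa_pool_gfp(int nid, bool strict)
{
	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;

	if (nid != NUMA_NO_NODE && strict)
		gfp |= __GFP_THISNODE;

	return gfp;
}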






Regards,
Christian.


When memory is allocated with the __GFP_THISNODE flag, allocations will
predominantly be done on the local node; consequently, the shrinkers may
prioritize reclaiming memory from caches associated with the local node
to maintain memory locality and minimize latency, thereby providing
better shrinker targeting.

Reduced memory pressure on remote nodes can also indirectly influence
shrinker behavior by potentially reducing the frequency and intensity of
memory reclamation operations on remote nodes, and could provide
improved overall system performance.

While this change could be more beneficial in general, i.e. without the
use of a module parameter, in the absence of widespread testing limit it
to the AMD GFXIP 9.4.3 based ttm pool initializations only.


Cc: Joe Greathouse 
Signed-off-by: Rajneesh Bhardwaj 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  8 
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  7 ++-
  drivers/gpu/drm/ttm/tests/ttm_pool_test.c | 10 +-
  drivers/gpu/drm/ttm/ttm_device.c  |  2 +-
  drivers/gpu/drm/ttm/ttm_pool.c    |  7 ++-
  include/drm/ttm/ttm_pool.h    |  4 +++-
  7 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

index 9c62552bec34..96532cfc6230 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -253,6 +253,7 @@ extern int amdgpu_user_partt_mode;
  extern int amdgpu_agp;
    extern int amdgpu_wbrf;
+extern bool strict_numa_alloc;
    #define AMDGPU_VM_MAX_NUM_CTX    4096
  #define AMDGPU_SG_THRESHOLD    (256*1024*1024)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c

index 80b9642f2bc4..a183a6b4493d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -781,6 +781,14 @@ int queue_preemption_timeout_ms = 9000;
  module_param(queue_preemption_timeout_ms, int, 0644);
  MODULE_PARM_DESC(queue_preemption_timeout_ms, "queue preemption 
timeout in ms (1 = Minimum, 9000 = default)");

  +/**
+ * DOC: strict_numa_alloc(bool)
+ * Policy to force NUMA allocation requests from the proximity NUMA 
domain only.

+ */
+bool strict_numa_alloc;
+module_param(strict_numa_alloc, bool, 0444);
+MODULE_PARM_DESC(strict_numa_alloc, "Force 

RE: [PATCH] drm/ttm: Implement strict NUMA pool allocations

2024-03-22 Thread Ruhl, Michael J


>-Original Message-
>From: dri-devel  On Behalf Of
>Rajneesh Bhardwaj
>Sent: Friday, March 22, 2024 3:08 AM
>To: amd-...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>Cc: felix.kuehl...@amd.com; alexander.deuc...@amd.com;
>christian.koe...@amd.com; Rajneesh Bhardwaj
>; Joe Greathouse
>
>Subject: [PATCH] drm/ttm: Implement strict NUMA pool allocations
>
>This change gives TTM the flexibility to honor NUMA-localized
>allocations, which can result in significant performance improvements
>on a multi-socket NUMA system. On GFXIP 9.4.3 based AMD APUs, we see
>manifold benefits from this change, resulting not only in ~10%
>performance improvement in certain benchmarks but also in more
>consistent and less sporadic results, especially when NUMA balancing is
>not explicitly disabled. In certain scenarios, workloads show
>run-to-run variability; e.g. HPL would show a ~10x performance drop
>after running back to back 4-5 times and would recover on a subsequent
>run. This is seen with other memory-intensive workloads too. It was
>seen that when caches were dropped, e.g. with sudo sysctl -w
>vm.drop_caches=1, the variability reduced but the performance was still
>well below that of a good run.
>
>Use of the __GFP_THISNODE flag ensures that during memory allocation
>the kernel prioritizes allocations from the local or closest NUMA node,
>thereby reducing memory access latency. When memory is allocated with
>the __GFP_THISNODE flag, allocations will predominantly be done on the
>local node; consequently, the shrinkers may prioritize reclaiming
>memory from caches associated with the local node to maintain memory
>locality and minimize latency, thereby providing better shrinker
>targeting.
>
>Reduced memory pressure on remote nodes can also indirectly influence
>shrinker behavior by potentially reducing the frequency and intensity
>of memory reclamation operations on remote nodes, and could provide
>improved overall system performance.
>
>While this change could be more beneficial in general, i.e. without the
>use of a module parameter, in the absence of widespread testing limit
>it to the AMD GFXIP 9.4.3 based ttm pool initializations only.
>
>
>Cc: Joe Greathouse 
>Signed-off-by: Rajneesh Bhardwaj 
>---
> drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  8 
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  7 ++-
> drivers/gpu/drm/ttm/tests/ttm_pool_test.c | 10 +-
> drivers/gpu/drm/ttm/ttm_device.c  |  2 +-
> drivers/gpu/drm/ttm/ttm_pool.c|  7 ++-
> include/drm/ttm/ttm_pool.h|  4 +++-
> 7 files changed, 30 insertions(+), 9 deletions(-)
>
>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>index 9c62552bec34..96532cfc6230 100644
>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>@@ -253,6 +253,7 @@ extern int amdgpu_user_partt_mode;
> extern int amdgpu_agp;
>
> extern int amdgpu_wbrf;
>+extern bool strict_numa_alloc;
>
> #define AMDGPU_VM_MAX_NUM_CTX 4096
> #define AMDGPU_SG_THRESHOLD   (256*1024*1024)
>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>index 80b9642f2bc4..a183a6b4493d 100644
>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>@@ -781,6 +781,14 @@ int queue_preemption_timeout_ms = 9000;
> module_param(queue_preemption_timeout_ms, int, 0644);
> MODULE_PARM_DESC(queue_preemption_timeout_ms, "queue preemption
>timeout in ms (1 = Minimum, 9000 = default)");
>
>+/**
>+ * DOC: strict_numa_alloc(bool)
>+ * Policy to force NUMA allocation requests from the proximity NUMA domain
>only.
>+ */
>+bool strict_numa_alloc;
>+module_param(strict_numa_alloc, bool, 0444);
>+MODULE_PARM_DESC(strict_numa_alloc, "Force NUMA allocation requests
>to be satisfied from the closest node only (false = default)");
>+
> /**
>  * DOC: debug_evictions(bool)
>  * Enable extra debug messages to help determine the cause of evictions
>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>index b0ed10f4de60..a9f78f85e28c 100644
>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>@@ -1768,6 +1768,7 @@ static int amdgpu_ttm_reserve_tmr(struct
>amdgpu_device *adev)
>
> static int amdgpu_ttm_pools_init(struct amdgpu_device *adev)
> {
>+  bool policy = true;
>   int i;
>
>   if (!adev->gmc.is_app_apu || !adev->gmc.num_mem_partitions)
>@@ -1779,11 +1780,15 @@ static int amdgpu_ttm_pools_init(struct
>

Re: [PATCH] drm/ttm: Implement strict NUMA pool allocations

2024-03-22 Thread Christian König

On 22.03.24 at 08:07, Rajneesh Bhardwaj wrote:

This change gives TTM the flexibility to honor NUMA-localized
allocations, which can result in significant performance improvements on
a multi-socket NUMA system. On GFXIP 9.4.3 based AMD APUs, we see
manifold benefits from this change, resulting not only in ~10%
performance improvement in certain benchmarks but also in more
consistent and less sporadic results, especially when NUMA balancing is
not explicitly disabled. In certain scenarios, workloads show run-to-run
variability; e.g. HPL would show a ~10x performance drop after running
back to back 4-5 times and would recover on a subsequent run. This is
seen with other memory-intensive workloads too. It was seen that when
caches were dropped, e.g. with sudo sysctl -w vm.drop_caches=1, the
variability reduced but the performance was still well below that of a
good run.

Use of the __GFP_THISNODE flag ensures that during memory allocation the
kernel prioritizes allocations from the local or closest NUMA node,
thereby reducing memory access latency.


That's exactly what it doesn't do.

__GFP_THISNODE just means it enforces allocation from the specified node.

In addition to that, there is a mandatory requirement that this flag
only be used when it is needed for correctness. And that is simply not
the case here.


So as long as nobody can explain why that should help, this is an
absolute no-go.


Regards,
Christian.


When memory is allocated with the __GFP_THISNODE flag, allocations will
predominantly be done on the local node; consequently, the shrinkers may
prioritize reclaiming memory from caches associated with the local node
to maintain memory locality and minimize latency, thereby providing
better shrinker targeting.

Reduced memory pressure on remote nodes can also indirectly influence
shrinker behavior by potentially reducing the frequency and intensity of
memory reclamation operations on remote nodes, and could provide
improved overall system performance.

While this change could be more beneficial in general, i.e. without the
use of a module parameter, in the absence of widespread testing limit it
to the AMD GFXIP 9.4.3 based ttm pool initializations only.


Cc: Joe Greathouse 
Signed-off-by: Rajneesh Bhardwaj 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  8 
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  7 ++-
  drivers/gpu/drm/ttm/tests/ttm_pool_test.c | 10 +-
  drivers/gpu/drm/ttm/ttm_device.c  |  2 +-
  drivers/gpu/drm/ttm/ttm_pool.c|  7 ++-
  include/drm/ttm/ttm_pool.h|  4 +++-
  7 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 9c62552bec34..96532cfc6230 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -253,6 +253,7 @@ extern int amdgpu_user_partt_mode;
  extern int amdgpu_agp;
  
  extern int amdgpu_wbrf;

+extern bool strict_numa_alloc;
  
  #define AMDGPU_VM_MAX_NUM_CTX			4096

  #define AMDGPU_SG_THRESHOLD   (256*1024*1024)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 80b9642f2bc4..a183a6b4493d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -781,6 +781,14 @@ int queue_preemption_timeout_ms = 9000;
  module_param(queue_preemption_timeout_ms, int, 0644);
  MODULE_PARM_DESC(queue_preemption_timeout_ms, "queue preemption timeout in ms (1 = 
Minimum, 9000 = default)");
  
+/**

+ * DOC: strict_numa_alloc(bool)
+ * Policy to force NUMA allocation requests from the proximity NUMA domain 
only.
+ */
+bool strict_numa_alloc;
+module_param(strict_numa_alloc, bool, 0444);
+MODULE_PARM_DESC(strict_numa_alloc, "Force NUMA allocation requests to be satisfied 
from the closest node only (false = default)");
+
  /**
   * DOC: debug_evictions(bool)
   * Enable extra debug messages to help determine the cause of evictions
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index b0ed10f4de60..a9f78f85e28c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1768,6 +1768,7 @@ static int amdgpu_ttm_reserve_tmr(struct amdgpu_device 
*adev)
  
  static int amdgpu_ttm_pools_init(struct amdgpu_device *adev)

  {
+   bool policy = true;
int i;
  
  	if (!adev->gmc.is_app_apu || !adev->gmc.num_mem_partitions)

@@ -1779,11 +1780,15 @@ static int amdgpu_ttm_pools_init(struct amdgpu_device 
*adev)
if (!adev->mman.ttm_pools)
return -ENOMEM;
  
+	/* Policy not only depends on the module param but also on the ASIC

+* setting use_strict_numa_alloc as well.
+*/
for (i = 0; i < adev->gmc.num_mem_partitions; i++) {
ttm_pool_init(&adev->mman.ttm_pools[i], adev->dev,
 

[PATCH] drm/ttm: Implement strict NUMA pool allocations

2024-03-22 Thread Rajneesh Bhardwaj
This change gives TTM the flexibility to honor NUMA-localized
allocations, which can result in significant performance improvements on
a multi-socket NUMA system. On GFXIP 9.4.3 based AMD APUs, we see
manifold benefits from this change, resulting not only in ~10%
performance improvement in certain benchmarks but also in more
consistent and less sporadic results, especially when NUMA balancing is
not explicitly disabled. In certain scenarios, workloads show run-to-run
variability; e.g. HPL would show a ~10x performance drop after running
back to back 4-5 times and would recover on a subsequent run. This is
seen with other memory-intensive workloads too. It was seen that when
caches were dropped, e.g. with sudo sysctl -w vm.drop_caches=1, the
variability reduced but the performance was still well below that of a
good run.

Use of the __GFP_THISNODE flag ensures that during memory allocation the
kernel prioritizes allocations from the local or closest NUMA node,
thereby reducing memory access latency. When memory is allocated with
the __GFP_THISNODE flag, allocations will predominantly be done on the
local node; consequently, the shrinkers may prioritize reclaiming memory
from caches associated with the local node to maintain memory locality
and minimize latency, thereby providing better shrinker targeting.

Reduced memory pressure on remote nodes can also indirectly influence
shrinker behavior by potentially reducing the frequency and intensity of
memory reclamation operations on remote nodes, and could provide
improved overall system performance.

While this change could be more beneficial in general, i.e. without the
use of a module parameter, in the absence of widespread testing limit it
to the AMD GFXIP 9.4.3 based ttm pool initializations only.


Cc: Joe Greathouse 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  8 
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  7 ++-
 drivers/gpu/drm/ttm/tests/ttm_pool_test.c | 10 +-
 drivers/gpu/drm/ttm/ttm_device.c  |  2 +-
 drivers/gpu/drm/ttm/ttm_pool.c|  7 ++-
 include/drm/ttm/ttm_pool.h|  4 +++-
 7 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 9c62552bec34..96532cfc6230 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -253,6 +253,7 @@ extern int amdgpu_user_partt_mode;
 extern int amdgpu_agp;
 
 extern int amdgpu_wbrf;
+extern bool strict_numa_alloc;
 
 #define AMDGPU_VM_MAX_NUM_CTX  4096
 #define AMDGPU_SG_THRESHOLD(256*1024*1024)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 80b9642f2bc4..a183a6b4493d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -781,6 +781,14 @@ int queue_preemption_timeout_ms = 9000;
 module_param(queue_preemption_timeout_ms, int, 0644);
 MODULE_PARM_DESC(queue_preemption_timeout_ms, "queue preemption timeout in ms 
(1 = Minimum, 9000 = default)");
 
+/**
+ * DOC: strict_numa_alloc(bool)
+ * Policy to force NUMA allocation requests from the proximity NUMA domain 
only.
+ */
+bool strict_numa_alloc;
+module_param(strict_numa_alloc, bool, 0444);
+MODULE_PARM_DESC(strict_numa_alloc, "Force NUMA allocation requests to be 
satisfied from the closest node only (false = default)");
+
 /**
  * DOC: debug_evictions(bool)
  * Enable extra debug messages to help determine the cause of evictions
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index b0ed10f4de60..a9f78f85e28c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1768,6 +1768,7 @@ static int amdgpu_ttm_reserve_tmr(struct amdgpu_device 
*adev)
 
 static int amdgpu_ttm_pools_init(struct amdgpu_device *adev)
 {
+   bool policy = true;
int i;
 
if (!adev->gmc.is_app_apu || !adev->gmc.num_mem_partitions)
@@ -1779,11 +1780,15 @@ static int amdgpu_ttm_pools_init(struct amdgpu_device 
*adev)
if (!adev->mman.ttm_pools)
return -ENOMEM;
 
+   /* Policy not only depends on the module param but also on the ASIC
+* setting use_strict_numa_alloc as well.
+*/
for (i = 0; i < adev->gmc.num_mem_partitions; i++) {
ttm_pool_init(&adev->mman.ttm_pools[i], adev->dev,
  adev->gmc.mem_partitions[i].numa.node,
- false, false);
+ false, false, policy && strict_numa_alloc);
}
+
return 0;
 }
 
diff --git a/drivers/gpu/drm/ttm/tests/ttm_pool_test.c 
b/drivers/gpu/drm/ttm/tests/ttm_pool_test.c
index 2d9cae8cd984..6ff47aac570a 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_pool_test.c
+++