[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-17 Thread LLVM Continuous Integration via cfe-commits

llvm-ci wrote:

LLVM Buildbot has detected a new failure on builder `flang-arm64-windows-msvc` 
running on `linaro-armv8-windows-msvc-01` while building `mlir` at step 6 
"test-build-unified-tree-check-mlir".

Full details are available at: 
https://lab.llvm.org/buildbot/#/builders/207/builds/11203


Here is the relevant piece of the build log for reference:

```
Step 6 (test-build-unified-tree-check-mlir) failure: test (failure)
 TEST 'MLIR :: Dialect/XeGPU/propagate-layout-subgroup.mlir' FAILED
Exit Code: 1

Command Output (stdout):
--
# RUN: at line 1
c:\users\tcwg\llvm-worker\flang-arm64-windows-msvc\build\bin\mlir-opt.exe -xevm-attach-target='chip=pvc' -xegpu-propagate-layout="layout-kind=subgroup" -split-input-file C:\Users\tcwg\llvm-worker\flang-arm64-windows-msvc\llvm-project\mlir\test\Dialect\XeGPU\propagate-layout-subgroup.mlir | c:\users\tcwg\llvm-worker\flang-arm64-windows-msvc\build\bin\filecheck.exe C:\Users\tcwg\llvm-worker\flang-arm64-windows-msvc\llvm-project\mlir\test\Dialect\XeGPU\propagate-layout-subgroup.mlir
# executed command: 'c:\users\tcwg\llvm-worker\flang-arm64-windows-msvc\build\bin\mlir-opt.exe' -xevm-attach-target=chip=pvc -xegpu-propagate-layout=layout-kind=subgroup -split-input-file 'C:\Users\tcwg\llvm-worker\flang-arm64-windows-msvc\llvm-project\mlir\test\Dialect\XeGPU\propagate-layout-subgroup.mlir'
# executed command: 'c:\users\tcwg\llvm-worker\flang-arm64-windows-msvc\build\bin\filecheck.exe' 'C:\Users\tcwg\llvm-worker\flang-arm64-windows-msvc\llvm-project\mlir\test\Dialect\XeGPU\propagate-layout-subgroup.mlir'
# .---command stderr
# | C:\Users\tcwg\llvm-worker\flang-arm64-windows-msvc\llvm-project\mlir\test\Dialect\XeGPU\propagate-layout-subgroup.mlir:10:17: error: CHECK-SAME: expected string not found in input
# |  // CHECK-SAME: {layout_result_0 = #xegpu.layout}
# | ^
# | :5:90: note: scanning from here
# |  %1 = xegpu.load_nd %0 <{layout = #xegpu.layout}> : !xegpu.tensor_desc<256x128xf32, #xegpu.layout> -> vector<256x128xf32>
# |  ^
# | :6:17: note: possible intended match here
# |  xegpu.store_nd %1, %0 <{layout = #xegpu.layout}> : vector<256x128xf32>, !xegpu.tensor_desc<256x128xf32, #xegpu.layout>
# | ^
# | C:\Users\tcwg\llvm-worker\flang-arm64-windows-msvc\llvm-project\mlir\test\Dialect\XeGPU\propagate-layout-subgroup.mlir:36:17: error: CHECK-SAME: expected string not found in input
# |  // CHECK-SAME: {layout_result_0 = #xegpu.layout} :
# | ^
# | :18:112: note: scanning from here
# |  %2 = xegpu.load_nd %0[0, 0] <{layout = #xegpu.layout}> : !xegpu.tensor_desc<256x128xf32, #xegpu.layout> -> vector<256x128xf32>
# |  ^
# | :19:35: note: possible intended match here
# |  %3 = vector.transpose %2, [1, 0] {layout_result_0 = #xegpu.layout} : vector<256x128xf32> to vector<128x256xf32>
# |   ^
# |
# | Input file:
# | Check file: C:\Users\tcwg\llvm-worker\flang-arm64-windows-msvc\llvm-project\mlir\test\Dialect\XeGPU\propagate-layout-subgroup.mlir
# |
# | -dump-input=help explains the following input dump.
# |
# | Input was:
# | <<
# |1: module {
# |2:  gpu.module @test [#xevm.target] {
# |3:  func.func @store_nd(%arg0: memref<256x128xf32>) {
# |4:  %0 = xegpu.create_nd_tdesc %arg0 : memref<256x128xf32> -> !xegpu.tensor_desc<256x128xf32, #xegpu.layout>
# |5:  %1 = xegpu.load_nd %0 <{layout = #xegpu.layout}> : !xegpu.tensor_desc<256x128xf32, #xegpu.layout> -> vector<256x128xf32>
# | same:10'0 X error: no match found
# |6:  xegpu.store_nd %1, %0 <{layout = #xegpu.layout}> : vector<256x128xf32>, !xegpu.tensor_desc<256x128xf32, #xegpu.layout>
# | same:10'0
# | same:10'1 ? possible intended match
# |7:  return
# | same:10'0
# |8:  }
# | same:10'0 ~~~
# |9:  }
...

```



https://github.com/llvm/llvm-project/pull/172125
___
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits

[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-17 Thread LLVM Continuous Integration via cfe-commits

llvm-ci wrote:

LLVM Buildbot has detected a new failure on builder `mlir-rocm-mi200` running 
on `mi200-buildbot` while building `mlir` at step 7 
"test-build-check-mlir-build-only-check-mlir".

Full details are available at: 
https://lab.llvm.org/buildbot/#/builders/177/builds/26062


Here is the relevant piece of the build log for reference:

```
Step 7 (test-build-check-mlir-build-only-check-mlir) failure: test (failure)
 TEST 'MLIR :: Dialect/XeGPU/propagate-layout-subgroup.mlir' FAILED
Exit Code: 1

Command Output (stdout):
--
# RUN: at line 1
/vol/worker/mi200-buildbot/mlir-rocm-mi200/build/bin/mlir-opt -xevm-attach-target='chip=pvc' -xegpu-propagate-layout="layout-kind=subgroup" -split-input-file /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Dialect/XeGPU/propagate-layout-subgroup.mlir | /vol/worker/mi200-buildbot/mlir-rocm-mi200/build/bin/FileCheck /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Dialect/XeGPU/propagate-layout-subgroup.mlir
# executed command: /vol/worker/mi200-buildbot/mlir-rocm-mi200/build/bin/mlir-opt -xevm-attach-target=chip=pvc -xegpu-propagate-layout=layout-kind=subgroup -split-input-file /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Dialect/XeGPU/propagate-layout-subgroup.mlir
# executed command: /vol/worker/mi200-buildbot/mlir-rocm-mi200/build/bin/FileCheck /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Dialect/XeGPU/propagate-layout-subgroup.mlir
# .---command stderr
# | /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Dialect/XeGPU/propagate-layout-subgroup.mlir:10:17: error: CHECK-SAME: expected string not found in input
# |  // CHECK-SAME: {layout_result_0 = #xegpu.layout}
# | ^
# | :5:90: note: scanning from here
# |  %1 = xegpu.load_nd %0 <{layout = #xegpu.layout}> : !xegpu.tensor_desc<256x128xf32, #xegpu.layout> -> vector<256x128xf32>
# |  ^
# | :6:17: note: possible intended match here
# |  xegpu.store_nd %1, %0 <{layout = #xegpu.layout}> : vector<256x128xf32>, !xegpu.tensor_desc<256x128xf32, #xegpu.layout>
# | ^
# | /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Dialect/XeGPU/propagate-layout-subgroup.mlir:36:17: error: CHECK-SAME: expected string not found in input
# |  // CHECK-SAME: {layout_result_0 = #xegpu.layout} :
# | ^
# | :18:112: note: scanning from here
# |  %2 = xegpu.load_nd %0[0, 0] <{layout = #xegpu.layout}> : !xegpu.tensor_desc<256x128xf32, #xegpu.layout> -> vector<256x128xf32>
# |  ^
# | :19:35: note: possible intended match here
# |  %3 = vector.transpose %2, [1, 0] {layout_result_0 = #xegpu.layout} : vector<256x128xf32> to vector<128x256xf32>
# |   ^
# |
# | Input file:
# | Check file: /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Dialect/XeGPU/propagate-layout-subgroup.mlir
# |
# | -dump-input=help explains the following input dump.
# |
# | Input was:
# | <<
# |1: module {
# |2:  gpu.module @test [#xevm.target] {
# |3:  func.func @store_nd(%arg0: memref<256x128xf32>) {
# |4:  %0 = xegpu.create_nd_tdesc %arg0 : memref<256x128xf32> -> !xegpu.tensor_desc<256x128xf32, #xegpu.layout>
# |5:  %1 = xegpu.load_nd %0 <{layout = #xegpu.layout}> : !xegpu.tensor_desc<256x128xf32, #xegpu.layout> -> vector<256x128xf32>
# | same:10'0 X error: no match found
# |6:  xegpu.store_nd %1, %0 <{layout = #xegpu.layout}> : vector<256x128xf32>, !xegpu.tensor_desc<256x128xf32, #xegpu.layout>
# | same:10'0
# | same:10'1 ? possible intended match
# |7:  return
# | same:10'0
# |8:  }
# | same:10'0 ~~~
# |9:  }
...

```



https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-17 Thread Jianhui Li via cfe-commits

https://github.com/Jianhui-Li closed 
https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Sang Ik Lee via cfe-commits

https://github.com/silee2 approved this pull request.


https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Charitha Saumya via cfe-commits

https://github.com/charithaintc approved this pull request.

LGTM. Please change LocalLayout -> TemporaryLayout in all variable and function naming.

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -95,14 +97,16 @@ gpu.func @gemm(%arg0: memref<1024x1024xbf16>, %arg1: 
memref<1024x1024xbf16>, %ar
   -> !xegpu.tensor_desc<16x16xbf16, #xegpu.layout>
 
 %7 = xegpu.load_nd %5[%0, %arg3]
-  {layout_result_0 = #xegpu.layout}
+  {layout = #xegpu.layout}
   : !xegpu.tensor_desc<8x16xbf16, #xegpu.layout> -> vector<8x16xbf16>
 %8 = xegpu.load_nd %6[%arg3, %1]
-  {layout_result_0 = #xegpu.layout}
+  {layout = #xegpu.layout}
   : !xegpu.tensor_desc<16x16xbf16, #xegpu.layout> -> vector<16x16xbf16>
 
 %9 = xegpu.dpas %7, %8, %arg4
-  {layout_result_0 = #xegpu.layout}
+  {layout_a = #xegpu.layout,
+   layout_b = #xegpu.layout,
+   layout_cd = #xegpu.layout}
   : vector<8x16xbf16>, vector<16x16xbf16>, vector<8x16xf32> -> 
vector<8x16xf32>

Jianhui-Li wrote:

Here we face a choice about whether to allow duplicate layout info (anchor and local 
layout) on anchor ops.
The downside of keeping the redundant local layout is that it makes the test cases more 
verbose and introduces a potential source of inconsistency between the two layouts 
(they should always be the same).


https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -2053,24 +2041,11 @@ void XeGPUSubgroupDistributePass::runOnOperation() {
   // 1) It is assumed that there are no layout conflicts.
   // 2) Any existing layout attributes attached to the operands are ignored.
   Operation *op = getOperation();
-  op->walk([&](Operation *op) {
-for (OpOperand &operand : op->getOpOperands()) {
-  // Layouts are needed for vector type only.
-  if (!isa(operand.get().getType()))
-continue;
-  if (isa(op))
-continue;
-
-  auto layout = xegpu::getDistributeLayoutAttr(operand.get());
-  if (!layout) {
-op->emitError("Could not find layout attribute for operand ")
-<< operand.getOperandNumber() << " of operation " << op->getName();
-signalPassFailure();
-return;
-  }
-  xegpu::setDistributeLayoutAttr(operand, layout);
-}
-  });
+  if (!xegpu::localPropagateLayoutsFromAnchor(op)) {

Jianhui-Li wrote:

Changed the name to "recoverLocalLayouts". 
Yes, every pass should call this method upfront. 


https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -285,6 +397,14 @@ void xegpu::removeLayoutAttrs(Operation *op) {
   removeLayoutAttr(opr);
 for (OpResult result : nestOp->getOpResults())
   removeLayoutAttr(result);
+if (op->hasAttrOfType("layout"))
+  op->removeAttr("layout");
+if (op->hasAttrOfType("layout_a"))
+  op->removeAttr("layout_a");
+if (op->hasAttrOfType("layout_b"))
+  op->removeAttr("layout_b");
+if (op->hasAttrOfType("layout_cd"))
+  op->removeAttr("layout_cd");

Jianhui-Li wrote:

Thanks for the idea. Adding an interface method requires adding extra code to 
XeGPUOps.td for each anchor op, so at this point I don't think it is worth it. 

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -740,7 +742,7 @@ struct WgToSgArithConstantOp : public 
OpConversionPattern {
   return failure();
 
 xegpu::DistributeLayoutAttr layout =
-xegpu::getDistributeLayoutAttr(op.getResult());
+xegpu::getTempLayout(dyn_cast(op.getResult()));

Jianhui-Li wrote:

No need once the refactoring is fully done. 

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -1478,7 +1487,7 @@ void XeGPUWgToSgDistributePass::runOnOperation() {
   });
 
   target.addDynamicallyLegalOp([=](xegpu::DpasOp op) -> bool {
-auto layout = xegpu::getDistributeLayoutAttr(op.getResult());
+auto layout = op.getLayoutCdAttr();

Jianhui-Li wrote:

see answer above.

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -218,55 +229,156 @@ maybePickPermanentLayout(xegpu::DistributeLayoutAttr 
layout,
   return candidate;
 }
 
-template 
-void xegpu::setDistributeLayoutAttr(const T &operandOrResult,
-const DistributeLayoutAttr layout,
-bool respectPermLayout) {
-  Operation *owner = operandOrResult.getOwner();
-  std::string name = xegpu::getLayoutName(operandOrResult);
+// TODO-LayoutRefactor: Remove this function after replacing use
+//  with setTempLayout or setAnchorLayout
+void xegpu::setDistributeLayoutAttr(
+const mlir::OpResult &result,
+const mlir::xegpu::DistributeLayoutAttr layout) {
+  Operation *owner = result.getOwner();
 
-  if (owner->hasAttrOfType(name))
+  if (auto anchorOp = dyn_cast(owner)) {
+if (anchorOp.getAnchorLayout() == layout)
+  return;
+anchorOp.setAnchorLayout(layout);
+return;
+  }
+
+  std::string name = xegpu::getTempLayoutName(result);
+  if (owner->hasAttrOfType(name)) {
+return;
+  }
+  if (layout) {

Jianhui-Li wrote:

Not sure; maybe it is just for convenience. The original code does so, and removing 
it causes an exception. 

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -73,7 +73,7 @@ gpu.func @no_scf_i8(%arg0: memref<64x64xi8>, %arg1: 
vector<8x32xi8>) -> vector<8
 // CHECK:   %{{.*}} = scf.for %[[K:.*]] = %{{.*}} to %{{.*}} step 
%{{.*}} iter_args(%{{.*}}) -> (vector<8x16xf32>) {
 // CHECK: %[[T7:.*]] = arith.shrui %[[K]], %[[C1]] : index
 // CHECK-NEXT:%[[T8:.*]] = xegpu.load_nd %[[T4]][%{{.*}}, %[[T7]]]
-// CHECK-SAME:  {layout_result_0 = #xegpu.layout} :
+// CHECK-SAME:  <{layout = #xegpu.layout}> :

Jianhui-Li wrote:

We keep layout_operand/layout_result for non-anchor ops, and call them local layouts.
An anchor op only has its anchor layout, so there is no redundant layout.
 

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -124,63 +124,74 @@ xegpu::DistributeLayoutAttr 
xegpu::getDistributeLayoutAttr(const Value value) {
 Operation *defOp = result.getDefiningOp();
 assert(defOp && "result must have a defining op");
 
-// For ConvertLayoutOp, the layout is stored in the targetLayoutAttr
-if (auto convertOp = dyn_cast(defOp))
-  return convertOp.getTargetLayoutAttr();
-
-// for LoadNdOp, the layout is stored in the tensor descriptor
-if (auto loadNd = dyn_cast(defOp))
-  return getDistributeLayoutAttr(loadNd.getTensorDesc());
-
-// for LoadMatrixOp, the layout is attached to the property of the op
-if (auto loadOp = dyn_cast(defOp))
-  return loadOp.getLayoutAttr();
-
-// for StoreMatrixOp, the layout is attached to the property of the op
-if (auto storeOp = dyn_cast(defOp))
-  return storeOp.getLayoutAttr();
-std::string layoutName = getLayoutName(result);
-if (defOp->hasAttr(layoutName))
-  return defOp->getAttrOfType(layoutName);
-
-// check for "permament" layout only after "temporary" layout name lookup
-// for backward compatibility
-if (auto loadGatherOp = dyn_cast(defOp))
-  return loadGatherOp.getLayoutAttr();
+if (auto anchorOp = dyn_cast(defOp)) {
+  auto layout = anchorOp.getAnchorLayout();
+  return layout;
+}
+
+std::string layoutName = getTempLayoutName(result);
+if (defOp->hasAttr(layoutName)) {
+  auto layout =
+  defOp->getAttrOfType(layoutName);
+  return layout;
+}
   }
 
   if (auto arg = dyn_cast(value)) {
 auto *parentOp = arg.getOwner()->getParentOp();
 if (auto loop = dyn_cast(parentOp)) {
   OpOperand *tiedInit = loop.getTiedLoopInit(arg);
-  if (tiedInit)
-return getDistributeLayoutAttr(tiedInit->get());
+  if (tiedInit) {
+auto layout = getDistributeLayoutAttr(tiedInit->get());
+return layout;
+  }
 }
   }
 
   return nullptr;
 }
-
 xegpu::DistributeLayoutAttr
 xegpu::getDistributeLayoutAttr(const OpOperand &opr) {
   Operation *op = opr.getOwner();
+  unsigned idx = const_cast(opr).getOperandNumber();
+
+  if (auto anchorOp = dyn_cast(op)) {
+if (auto dpasOp = dyn_cast(op)) {
+  if (idx == 0) {
+return dpasOp.getLayoutAAttr();
+  } else if (idx == 1) {
+return dpasOp.getLayoutBAttr();
+  } else if (idx == 2) {
+return dpasOp.getLayoutCdAttr();
+  }
+}
+if (auto convertOp = dyn_cast(op)) {
+  return convertOp.getInputLayoutAttr();
+}
+auto layout = anchorOp.getAnchorLayout();
+// For store operations (StoreScatterOp, StoreNdOp, StoreMatrixOp),
+// the layout is valid for the first two operands: value and memref/tdesc.
+// For other operations, the layout applies to the first operand only.
+if (isa(
+op)) {
+  if (idx < 2) {
+return layout;
+  }
+} else {
+  if (idx == 0) {
+return layout;
+  }
+}
+  }
 
-  if (auto loadOp = dyn_cast(op))
-return loadOp.getLayoutAttr();
-
-  if (auto storeOp = dyn_cast(op))
-return storeOp.getLayoutAttr();
-
-  std::string layoutName = xegpu::getLayoutName(opr);
-  if (op->hasAttr(layoutName))
-return op->getAttrOfType(layoutName);
-
-  // check for "permament" layout only after "temporary" layout name lookup
-  if (auto storeScatterOp = dyn_cast(op))
-if (auto layout = storeScatterOp.getLayoutAttr())
-  return layout;
+  std::string layoutName = xegpu::getTempLayoutName(opr);

Jianhui-Li wrote:

Remember that refactoring is a long process. 
Compared to the previous version, this code has a much better structure: first 
check the anchor layout, then the local layout, and then the def op. Future work is to 
remove the lookup through the def op and further clean up the special case for store 
operands.  
The previous version mixed these together and was position-sensitive because the test 
cases were built with certain assumptions. 

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -770,4 +770,30 @@ def XeGPU_MemLayoutAttr : XeGPUAttr<"MemLayout", 
"mem_layout"> {
 
 }
 
+def AnchorLayoutInterface : OpInterface<"AnchorLayoutInterface"> {

Jianhui-Li wrote:

This is for users who want to set the anchor layout without having to find out which 
specific anchor op it is. 

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -955,8 +957,8 @@ struct WgToSgStoreScatterOpWithOffset
 if (!valueType)
   return failure();
 
-xegpu::DistributeLayoutAttr layout =
-xegpu::getDistributeLayoutAttr(op.getOperand(0));
+xegpu::DistributeLayoutAttr layout = op.getLayoutAttr();

Jianhui-Li wrote:

see above response.

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -901,8 +903,8 @@ struct WgToSgLoadGatherOpWithOffset
   return failure();
 ArrayRef wgShape = resultType.getShape();
 
-xegpu::DistributeLayoutAttr layout =
-xegpu::getDistributeLayoutAttr(op.getResult());
+xegpu::DistributeLayoutAttr layout = op.getLayoutAttr();

Jianhui-Li wrote:

Here it is fine to use getLayoutAttr() directly, since these patterns work at a 
lower level where the operation type is known.
If we used getAnchorLayout(), we would need to dyn_cast the op to 
AnchorLayoutInterface and then call the interface, which is an unnecessary step 
in this pattern.

getAnchorLayout() is used when a pass processes the IR without matching specific 
operations, or in a generic callback where only a generic operation is available. 

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-16 Thread Jianhui Li via cfe-commits


@@ -20,18 +20,20 @@ gpu.module @xevm_module{
 %0 = xegpu.create_nd_tdesc %arg0 : memref<8x16xf16>
   -> !xegpu.tensor_desc<8x16xf16, #xegpu.layout>
 %1 = xegpu.load_nd %0[%c0, %c0]
-  {layout_result_0 = #xegpu.layout} :
+  {layout = #xegpu.layout} :

Jianhui-Li wrote:

The current MLIR accepts both forms as input, but when printing it does emit a 
distinct format for each. Will handle this in a separate PR; it touches a lot of test 
cases. 

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -1508,19 +1517,20 @@ void XeGPUWgToSgDistributePass::runOnOperation() {
vector::ConstantMaskOp, vector::CreateMaskOp>(
   [=](Operation *op) -> bool {
 // Check for either a SliceAttr or LayoutAttr on the result.
-auto layout = xegpu::getDistributeLayoutAttr(op->getResult(0));
+auto layout =
+xegpu::getTempLayout(dyn_cast(op->getResult(0)));
 return isLegal(layout);
   });
 
   target.addDynamicallyLegalOp(
   [=](xegpu::LoadGatherOp op) -> bool {
-auto layout = xegpu::getDistributeLayoutAttr(op.getResult());
+auto layout = op.getLayoutAttr();

tkarna wrote:

getAnchorLayout

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -901,8 +903,8 @@ struct WgToSgLoadGatherOpWithOffset
   return failure();
 ArrayRef wgShape = resultType.getShape();
 
-xegpu::DistributeLayoutAttr layout =
-xegpu::getDistributeLayoutAttr(op.getResult());
+xegpu::DistributeLayoutAttr layout = op.getLayoutAttr();

tkarna wrote:

getAnchorLayout? I.e., we define the anchor layout only in the op definition 
and then reuse the interface methods elsewhere.

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -124,63 +124,74 @@ xegpu::DistributeLayoutAttr 
xegpu::getDistributeLayoutAttr(const Value value) {
 Operation *defOp = result.getDefiningOp();
 assert(defOp && "result must have a defining op");
 
-// For ConvertLayoutOp, the layout is stored in the targetLayoutAttr
-if (auto convertOp = dyn_cast(defOp))
-  return convertOp.getTargetLayoutAttr();
-
-// for LoadNdOp, the layout is stored in the tensor descriptor
-if (auto loadNd = dyn_cast(defOp))
-  return getDistributeLayoutAttr(loadNd.getTensorDesc());
-
-// for LoadMatrixOp, the layout is attached to the property of the op
-if (auto loadOp = dyn_cast(defOp))
-  return loadOp.getLayoutAttr();
-
-// for StoreMatrixOp, the layout is attached to the property of the op
-if (auto storeOp = dyn_cast(defOp))
-  return storeOp.getLayoutAttr();
-std::string layoutName = getLayoutName(result);
-if (defOp->hasAttr(layoutName))
-  return defOp->getAttrOfType(layoutName);
-
-// check for "permament" layout only after "temporary" layout name lookup
-// for backward compatibility
-if (auto loadGatherOp = dyn_cast(defOp))
-  return loadGatherOp.getLayoutAttr();
+if (auto anchorOp = dyn_cast(defOp)) {
+  auto layout = anchorOp.getAnchorLayout();
+  return layout;
+}
+
+std::string layoutName = getTempLayoutName(result);
+if (defOp->hasAttr(layoutName)) {
+  auto layout =
+  defOp->getAttrOfType(layoutName);
+  return layout;
+}
   }
 
   if (auto arg = dyn_cast(value)) {
 auto *parentOp = arg.getOwner()->getParentOp();
 if (auto loop = dyn_cast(parentOp)) {
   OpOperand *tiedInit = loop.getTiedLoopInit(arg);
-  if (tiedInit)
-return getDistributeLayoutAttr(tiedInit->get());
+  if (tiedInit) {
+auto layout = getDistributeLayoutAttr(tiedInit->get());
+return layout;
+  }

tkarna wrote:

nit: identical to previous implementation, revert?

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -770,4 +770,30 @@ def XeGPU_MemLayoutAttr : XeGPUAttr<"MemLayout", 
"mem_layout"> {
 
 }
 
+def AnchorLayoutInterface : OpInterface<"AnchorLayoutInterface"> {
+  let cppNamespace = "::mlir::xegpu";
+
+  let description = [{
+An attribute interface for accessing anchor layout information.
+This interface provides a method to set and retrieve the anchor layout
+from attributes that implement it.

tkarna wrote:

from _operations_ that implement it.

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -1508,19 +1517,20 @@ void XeGPUWgToSgDistributePass::runOnOperation() {
vector::ConstantMaskOp, vector::CreateMaskOp>(
   [=](Operation *op) -> bool {
 // Check for either a SliceAttr or LayoutAttr on the result.
-auto layout = xegpu::getDistributeLayoutAttr(op->getResult(0));
+auto layout =
+xegpu::getTempLayout(dyn_cast(op->getResult(0)));
 return isLegal(layout);
   });
 
   target.addDynamicallyLegalOp(
   [=](xegpu::LoadGatherOp op) -> bool {
-auto layout = xegpu::getDistributeLayoutAttr(op.getResult());
+auto layout = op.getLayoutAttr();
 return isLegal(layout);
   });
 
   target.addDynamicallyLegalOp(
   [=](xegpu::StoreScatterOp op) -> bool {
-auto layout = xegpu::getDistributeLayoutAttr(op.getOperand(0));
+auto layout = op.getLayoutAttr();

tkarna wrote:

getAnchorLayout

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -20,18 +20,20 @@ gpu.module @xevm_module{
 %0 = xegpu.create_nd_tdesc %arg0 : memref<8x16xf16>
   -> !xegpu.tensor_desc<8x16xf16, #xegpu.layout>
 %1 = xegpu.load_nd %0[%c0, %c0]
-  {layout_result_0 = #xegpu.layout} :
+  {layout = #xegpu.layout} :

tkarna wrote:

The tests seem to be using a mixture of `{layout = ...}` and `<{layout = ...}>` 
format which I find a little confusing. The former format suggests just an 
arbitrary attribute applicable to any op, whereas here we mean the anchor 
layout that is only supported by certain ops.

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -1478,7 +1487,7 @@ void XeGPUWgToSgDistributePass::runOnOperation() {
   });
 
   target.addDynamicallyLegalOp([=](xegpu::DpasOp op) -> bool {
-auto layout = xegpu::getDistributeLayoutAttr(op.getResult());
+auto layout = op.getLayoutCdAttr();

tkarna wrote:

getAnchorLayout ?

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -955,8 +957,8 @@ struct WgToSgStoreScatterOpWithOffset
 if (!valueType)
   return failure();
 
-xegpu::DistributeLayoutAttr layout =
-xegpu::getDistributeLayoutAttr(op.getOperand(0));
+xegpu::DistributeLayoutAttr layout = op.getLayoutAttr();

tkarna wrote:

getAnchorLayout ?

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-15 Thread Tuomas Kärnä via cfe-commits


@@ -285,6 +397,14 @@ void xegpu::removeLayoutAttrs(Operation *op) {
   removeLayoutAttr(opr);
 for (OpResult result : nestOp->getOpResults())
   removeLayoutAttr(result);
+if (op->hasAttrOfType("layout"))
+  op->removeAttr("layout");
+if (op->hasAttrOfType("layout_a"))
+  op->removeAttr("layout_a");
+if (op->hasAttrOfType("layout_b"))
+  op->removeAttr("layout_b");
+if (op->hasAttrOfType("layout_cd"))
+  op->removeAttr("layout_cd");

tkarna wrote:

nit: I wonder if this could be part of the AnchorLayoutInterface, e.g. 
removeAnchorLayout(), so that we would not need to replicate op-specific attr 
names here.

https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-13 Thread Jianhui Li via cfe-commits

https://github.com/Jianhui-Li edited 
https://github.com/llvm/llvm-project/pull/172125


[clang-tools-extra] [mlir] [MLIR][XeGPU] Refactor Layout access interface (PR #172125)

2025-12-13 Thread Jianhui Li via cfe-commits

https://github.com/Jianhui-Li edited 
https://github.com/llvm/llvm-project/pull/172125