Issue 60614
Summary Missing vector type conversion in `ConvertIndexToLLVM`?
Labels mlir:core, mlir:llvm
Assignees
Reporter dcaballe
    The following IR reached the MLIR-to-LLVM translation pass in our pipeline with an `index` type that was never lowered during the conversion to the LLVM dialect (see `%1 = llvm.mlir.constant(dense<[0, 1, 2]> : vector<3xindex>) : vector<3xi32>`):

```
  llvm.func @_iota_dim1_dispatch_0_generic_2x3(%arg0: !llvm.ptr<struct<"iree_hal_executable_environment_v0_t", (ptr<i32>, ptr<func<i32 (ptr<func<i32 (ptr<i8>, ptr<i8>, ptr<i8>)>>, ptr<i8>, ptr<i8>, ptr<i8>)>>, ptr<ptr<func<i32 (ptr<i8>, ptr<i8>, ptr<i8>)>>>, ptr<ptr<i8>>, struct<"iree_hal_processor_v0_t", (array<8 x i64>)>)>> {llvm.align = 16 : i64, llvm.noalias}, %arg1: !llvm.ptr<struct<"iree_hal_executable_dispatch_state_v0_t", (i32, i32, i16, i16, i32, i32, i16, i8, i8, ptr<i32>, ptr<ptr<i8>>, ptr<i32>)>> {llvm.align = 16 : i64, llvm.noalias}, %arg2: !llvm.ptr<struct<"iree_hal_executable_workgroup_state_v0_t", (i32, i32, i16, i16, i32, ptr<ptr<i8>>, i32)>> {llvm.align = 16 : i64, llvm.noalias}) -> vector<3xf32> {
    %0 = llvm.mlir.constant(0 : i32) : i32
    %1 = llvm.mlir.constant(dense<[0, 1, 2]> : vector<3xindex>) : vector<3xi32>
    %2 = llvm.load %arg2 : !llvm.ptr<struct<"iree_hal_executable_workgroup_state_v0_t", (i32, i32, i16, i16, i32, ptr<ptr<i8>>, i32)>>
    %3 = llvm.extractvalue %2[0] : !llvm.struct<"iree_hal_executable_workgroup_state_v0_t", (i32, i32, i16, i16, i32, ptr<ptr<i8>>, i32)> 
    %4 = llvm.mlir.undef : vector<3xi32>
    %5 = llvm.insertelement %3, %4[%0 : i32] : vector<3xi32>
    %6 = llvm.shufflevector %5, %4 [0, 0, 0] : vector<3xi32> 
    %7 = llvm.add %6, %1  : vector<3xi32>
    %8 = llvm.sitofp %7 : vector<3xi32> to vector<3xf32>
    llvm.return %8 : vector<3xf32>
  }
```

It looks like `ConvertIndexToLLVM` is missing a type conversion for vectors of `index`: the `index` element type is not lowered when I run `mlir-opt --convert-index-to-llvm repro.mlir`. Is there another pass that should take care of this?
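
For reference, this is roughly the kind of conversion I would expect to exist (a minimal sketch only, not an actual patch; `addVectorOfIndexConversion` is just an illustrative name, and whether it belongs in `ConvertIndexToLLVM` or in the shared `LLVMTypeConverter` is exactly the open question):

```
#include <optional>

#include "mlir/Conversion/LLVMCommon/TypeConverter.h"
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

// Hypothetical sketch: register a conversion for vector-of-index types so
// their element type is rewritten to the converter's index-width integer,
// matching what already happens for scalar `index`.
static void addVectorOfIndexConversion(LLVMTypeConverter &typeConverter) {
  typeConverter.addConversion(
      [&typeConverter](VectorType type) -> std::optional<Type> {
        if (!type.getElementType().isIndex())
          return std::nullopt; // defer to the existing conversions
        auto elemTy = IntegerType::get(type.getContext(),
                                       typeConverter.getIndexTypeBitwidth());
        return VectorType::get(type.getShape(), elemTy);
      });
}
```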

Also, if we run `mlir-translate` on this example, we end up generating incorrect code: the `[0, 1, 2]` constant operand of the addition above is turned into `[0, 0, 1]`:

```
define <3 x float> @_iota_dim1_dispatch_0_generic_2x3(ptr noalias align 16 %0, ptr noalias align 16 %1, ptr noalias align 16 %2) {
  %4 = load %iree_hal_executable_workgroup_state_v0_t, ptr %2, align 8
  %5 = extractvalue %iree_hal_executable_workgroup_state_v0_t %4, 0
  %6 = insertelement <3 x i32> undef, i32 %5, i32 0
  %7 = shufflevector <3 x i32> %6, <3 x i32> undef, <3 x i32> zeroinitializer
  %8 = add <3 x i32> %7, <i32 0, i32 0, i32 1>
  %9 = sitofp <3 x i32> %8 to <3 x float>
  ret <3 x float> %9
}
```
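
My guess is that the dense attribute stores the `index` elements as 64-bit integers and the translation reinterprets that raw storage as 32-bit lanes instead of converting it, which on a little-endian host would produce exactly `<0, 0, 1>`. A minimal standalone sketch of that hypothesis (nothing here is taken from the actual translation code):

```
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  // Storage of dense<[0, 1, 2]> : vector<3xindex>, assuming 64-bit index
  // elements on a little-endian host.
  std::int64_t stored[3] = {0, 1, 2};
  // Reinterpret the first 12 bytes as three i32 lanes, as the translation
  // appears to do when the constant is emitted with type <3 x i32>.
  std::int32_t lanes[3];
  std::memcpy(lanes, stored, sizeof(lanes));
  std::printf("<%d, %d, %d>\n", (int)lanes[0], (int)lanes[1],
              (int)lanes[2]); // prints <0, 0, 1>
}
```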

I wonder if we should prevent any `index` type from being translated directly to LLVM IR, to avoid this kind of silent error.

@Mogball, @River707, @ftynse, @joker-eph 