[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-27 Thread Jonas Hahnfeld via Phabricator via cfe-commits
Hahnfeld accepted this revision.
Hahnfeld added a comment.

In https://reviews.llvm.org/D52434#1248032, @gtbercea wrote:

> That is the intention. I just took that part out of here so as not to 
> confuse things, since I wanted to have it as a separate patch. :)


Ok, perfect. I was probably just confused because the title still speaks of 
"schedules" (plural).


Repository:
  rC Clang

https://reviews.llvm.org/D52434





[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-27 Thread Gheorghe-Teodor Bercea via Phabricator via cfe-commits
gtbercea added a comment.

In https://reviews.llvm.org/D52434#1248012, @Hahnfeld wrote:

> Should we also change the default `schedule` to `static, 1`? I know that's 
> not really needed for `teams distribute parallel for` (because the new 
> default `dist_schedule` only leaves one iteration per thread), but this 
> doesn't happen for `target parallel for`. Additionally, it would make the 
> intent more explicit, and LLVM wouldn't need to look through the divisions 
> needed to implement `static` without a chunk. Just thinking aloud; not sure 
> if that's worth it.


That is the intention. I just took that part out of here so as not to confuse 
things, since I wanted to have it as a separate patch. :)


Repository:
  rC Clang

https://reviews.llvm.org/D52434





[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-27 Thread Jonas Hahnfeld via Phabricator via cfe-commits
Hahnfeld added a comment.

Should we also change the default `schedule` to `static, 1`? I know that's not 
really needed for `teams distribute parallel for` (because the new default 
`dist_schedule` only leaves one iteration per thread), but this doesn't happen 
for `target parallel for`. Additionally, it would make the intent more 
explicit, and LLVM wouldn't need to look through the divisions needed to 
implement `static` without a chunk. Just thinking aloud; not sure if that's 
worth it.
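
To illustrate the trade-off being discussed, here is a minimal sketch of the 
two access patterns, assuming a C/C++ OpenMP compiler with offloading; the 
function and variable names are hypothetical and not from the patch:

  // schedule(static, 1): iteration i runs on thread i % nthreads, so at any
  // instant adjacent threads of a warp touch adjacent elements of a[] --
  // coalesced global-memory accesses.
  void bump_chunk1(float *a, int n) {
  #pragma omp target parallel for schedule(static, 1) map(tofrom: a[0:n])
    for (int i = 0; i < n; ++i)
      a[i] += 1.0f;
  }

  // schedule(static) without a chunk: each thread owns one contiguous block
  // of roughly n/nthreads iterations, so the threads of a warp are far apart
  // in memory (no coalescing), and computing the block bounds needs the
  // divisions mentioned above.
  void bump_blocked(float *a, int n) {
  #pragma omp target parallel for schedule(static) map(tofrom: a[0:n])
    for (int i = 0; i < n; ++i)
      a[i] += 1.0f;
  }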


Repository:
  rC Clang

https://reviews.llvm.org/D52434





[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-27 Thread Alexey Bataev via Phabricator via cfe-commits
ABataev accepted this revision.
ABataev added a comment.
This revision is now accepted and ready to land.

LG


Repository:
  rC Clang

https://reviews.llvm.org/D52434





[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-27 Thread Gheorghe-Teodor Bercea via Phabricator via cfe-commits
gtbercea updated this revision to Diff 167326.
gtbercea added a comment.

Fix type of chunk size.
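
For readers following the CHECK lines in the diff below: the third argument of 
`__kmpc_for_static_init_4` is the libomp schedule-type constant, so the 
92 -> 91 updates encode the switch from the unchunked to the chunked static 
distribute schedule. A sketch of the relevant constants as they appear in the 
runtime's kmp.h (values quoted from memory, so treat the exact spelling as an 
assumption):

  enum sched_type {
    kmp_sch_static_chunked        = 33, /* schedule(static, chunk)      */
    kmp_sch_static                = 34, /* schedule(static), no chunk   */
    kmp_distribute_static_chunked = 91, /* dist_schedule(static, chunk) */
    kmp_distribute_static         = 92  /* dist_schedule(static)        */
  };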


Repository:
  rC Clang

https://reviews.llvm.org/D52434

Files:
  lib/CodeGen/CGOpenMPRuntime.h
  lib/CodeGen/CGOpenMPRuntimeNVPTX.cpp
  lib/CodeGen/CGOpenMPRuntimeNVPTX.h
  lib/CodeGen/CGStmtOpenMP.cpp
  test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
  test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp

Index: test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
===================================================================
--- test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
+++ test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
@@ -33,7 +33,7 @@
 l = i;
   }
 
-  #pragma omp target teams distribute parallel for simd map(tofrom: aa) num_teams(M) thread_limit(64)
+ #pragma omp target teams distribute parallel for simd map(tofrom: aa) num_teams(M) thread_limit(64)
   for(int i = 0; i < n; i++) {
 aa[i] += 1;
   }
@@ -82,7 +82,7 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91,
 // CHECK: {{call|invoke}} void [[OUTL2:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
@@ -96,7 +96,7 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91,
 // CHECK: {{call|invoke}} void [[OUTL3:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
@@ -112,7 +112,7 @@
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
 // CHECK: store {{.+}} 99, {{.+}}* [[COMB_UB:%.+]], align
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
 // CHECK: {{call|invoke}} void [[OUTL4:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
Index: test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
===================================================================
--- test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
+++ test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
@@ -35,7 +35,7 @@
 l = i;
   }
 
-  #pragma omp target teams distribute parallel for map(tofrom: aa) num_teams(M) thread_limit(64)
+#pragma omp target teams distribute parallel for map(tofrom: aa) num_teams(M) thread_limit(64)
   for(int i = 0; i < n; i++) {
 aa[i] += 1;
   }
@@ -87,7 +87,7 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91,
 // CHECK: {{call|invoke}} void [[OUTL2:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
@@ -101,7 +101,7 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91,
 // CHECK: {{call|invoke}} void [[OUTL3:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
@@ -117,7 +117,7 @@
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
 // CHECK: store {{.+}} 99, {{.+}}* [[COMB_UB:%.+]], align
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
 // CHECK: {{call|invoke}} void [[OUTL4:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
Index: lib/CodeGen/CGStmtOpenMP.cpp

[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-26 Thread Alexey Bataev via Phabricator via cfe-commits
ABataev added inline comments.



Comment at: lib/CodeGen/CGOpenMPRuntime.cpp:9199
+  OpenMPDistScheduleClauseKind *ScheduleKind, llvm::Value *) const {
+  return;
+}

Remove `return;`; it is not required.



Comment at: lib/CodeGen/CGOpenMPRuntimeNVPTX.h:345
+  void setDefaultDistScheduleAndChunk(CodeGenFunction &CGF,
+  OpenMPDistScheduleClauseKind *ScheduleKind,
+  llvm::Value *) const override;

Modify it to take a reference rather than a pointer.
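
Taken together, the two comments above ask for roughly the following shape; a 
sketch of the request, not the committed code, and the parameter names plus 
the out-parameter form of the chunk are assumptions:

  // Base class (CGOpenMPRuntime.h): a no-op default body, with no bare
  // `return;` at the end of the void function.
  virtual void setDefaultDistScheduleAndChunk(
      CodeGenFunction &CGF, OpenMPDistScheduleClauseKind &ScheduleKind,
      llvm::Value *&Chunk) const {}

  // NVPTX subclass (CGOpenMPRuntimeNVPTX.h): same signature, taking
  // references rather than pointers as requested.
  void setDefaultDistScheduleAndChunk(
      CodeGenFunction &CGF, OpenMPDistScheduleClauseKind &ScheduleKind,
      llvm::Value *&Chunk) const override;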


Repository:
  rC Clang

https://reviews.llvm.org/D52434





[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-26 Thread Gheorghe-Teodor Bercea via Phabricator via cfe-commits
gtbercea updated this revision to Diff 167172.
gtbercea edited the summary of this revision.
gtbercea added a comment.

Only change the default schedule for the distribute directive.


Repository:
  rC Clang

https://reviews.llvm.org/D52434

Files:
  lib/CodeGen/CGOpenMPRuntime.cpp
  lib/CodeGen/CGOpenMPRuntime.h
  lib/CodeGen/CGOpenMPRuntimeNVPTX.cpp
  lib/CodeGen/CGOpenMPRuntimeNVPTX.h
  lib/CodeGen/CGStmtOpenMP.cpp
  test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
  test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp

Index: test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
===================================================================
--- test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
+++ test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
@@ -33,7 +33,7 @@
 l = i;
   }
 
-  #pragma omp target teams distribute parallel for simd map(tofrom: aa) num_teams(M) thread_limit(64)
+ #pragma omp target teams distribute parallel for simd map(tofrom: aa) num_teams(M) thread_limit(64)
   for(int i = 0; i < n; i++) {
 aa[i] += 1;
   }
@@ -82,7 +82,7 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91,
 // CHECK: {{call|invoke}} void [[OUTL2:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
@@ -96,7 +96,7 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91,
 // CHECK: {{call|invoke}} void [[OUTL3:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
@@ -112,7 +112,7 @@
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
 // CHECK: store {{.+}} 99, {{.+}}* [[COMB_UB:%.+]], align
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
 // CHECK: {{call|invoke}} void [[OUTL4:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
Index: test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
===================================================================
--- test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
+++ test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
@@ -35,7 +35,7 @@
 l = i;
   }
 
-  #pragma omp target teams distribute parallel for map(tofrom: aa) num_teams(M) thread_limit(64)
+#pragma omp target teams distribute parallel for map(tofrom: aa) num_teams(M) thread_limit(64)
   for(int i = 0; i < n; i++) {
 aa[i] += 1;
   }
@@ -87,7 +87,7 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91,
 // CHECK: {{call|invoke}} void [[OUTL2:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
@@ -101,7 +101,7 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91,
 // CHECK: {{call|invoke}} void [[OUTL3:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
@@ -117,7 +117,7 @@
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
 // CHECK: store {{.+}} 99, {{.+}}* [[COMB_UB:%.+]], align
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 91, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
 // CHECK: {{call|invoke}} void [[OUTL4:@.+]](
 // CHECK: call void 

[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-25 Thread Alexey Bataev via Phabricator via cfe-commits
ABataev added inline comments.



Comment at: lib/CodeGen/CGStmtOpenMP.cpp:2304
+  const auto *C = S.getSingleClause<OMPScheduleClause>();
+  if (C) {
+// If schedule clause is present.

Restore the original code here.



Comment at: lib/CodeGen/CGStmtOpenMP.cpp:2317
+// When schedule clause is absent we choose sensible defaults.
+CGM.getOpenMPRuntime().chooseDefaultSchedule();
+Chunk = CGM.getOpenMPRuntime().getDefaultChunkValue(

Why don't you want to have just one function for the default schedule/chunk? 
Also, you should change the `DistSchedule` code, not the `Schedule` code.
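
In other words, a single hook that yields both the default `dist_schedule` 
kind and its chunk. A sketch of how the call site in the distribute codegen 
could then look; `setDefaultDistScheduleAndChunk` mirrors the signature 
discussed elsewhere in this review, and the surrounding locals are assumptions:

  if (const auto *C = S.getSingleClause<OMPDistScheduleClause>()) {
    // An explicit dist_schedule clause wins.
    ScheduleKind = C->getDistScheduleKind();
    if (const Expr *Ch = C->getChunkSize())
      Chunk = EmitScalarExpr(Ch);
  } else {
    // No clause: let the target's runtime codegen pick a sensible default.
    CGM.getOpenMPRuntime().setDefaultDistScheduleAndChunk(*this, ScheduleKind,
                                                          Chunk);
  }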



Comment at: lib/CodeGen/CGStmtOpenMP.cpp:3337
   }
+
   const unsigned IVSize = getContext().getTypeSize(IVExpr->getType());

Restore the original.


Repository:
  rC Clang

https://reviews.llvm.org/D52434





[PATCH] D52434: [OpenMP] Make default schedules for NVPTX target regions in SPMD mode achieve coalescing

2018-09-24 Thread Gheorghe-Teodor Bercea via Phabricator via cfe-commits
gtbercea created this revision.
gtbercea added reviewers: ABataev, caomhin.
Herald added subscribers: cfe-commits, guansong, jholewinski.

For the OpenMP NVPTX toolchain, choose default schedules which ensure 
coalescing on the GPU when in SPMD mode. This significantly increases the 
performance of offloaded target code.
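
For readers of the archive, the effect can be pictured at the source level; a 
sketch only, where 64 stands in for the actual threads-per-team value and the 
variable names are hypothetical (the `schedule(static, 1)` half was later 
split into a separate patch, per the discussion above):

  // What the user writes:
  #pragma omp target teams distribute parallel for map(tofrom: a[0:n])
  for (int i = 0; i < n; ++i)
    a[i] += 1;

  // Roughly what the NVPTX SPMD defaults aim for:
  #pragma omp target teams distribute parallel for \
      dist_schedule(static, 64) schedule(static, 1) map(tofrom: a[0:n])
  for (int i = 0; i < n; ++i)
    a[i] += 1;
  // Each team takes chunks of 64 consecutive iterations; within a chunk,
  // thread t runs iteration t, so consecutive threads access consecutive
  // elements of a[] -- coalesced global-memory accesses.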


Repository:
  rC Clang

https://reviews.llvm.org/D52434

Files:
  lib/CodeGen/CGOpenMPRuntime.cpp
  lib/CodeGen/CGOpenMPRuntime.h
  lib/CodeGen/CGOpenMPRuntimeNVPTX.cpp
  lib/CodeGen/CGOpenMPRuntimeNVPTX.h
  lib/CodeGen/CGStmtOpenMP.cpp
  test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp
  
test/OpenMP/nvptx_target_teams_distribute_parallel_for_generic_mode_codegen.cpp
  test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp

Index: test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
===================================================================
--- test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
+++ test/OpenMP/nvptx_target_teams_distribute_parallel_for_simd_codegen.cpp
@@ -34,7 +34,7 @@
 l = i;
   }
 
-  #pragma omp target teams distribute parallel for simd map(tofrom: aa) num_teams(M) thread_limit(64)
+ #pragma omp target teams distribute parallel for simd map(tofrom: aa) num_teams(M) thread_limit(64)
   for(int i = 0; i < n; i++) {
 aa[i] += 1;
   }
@@ -81,44 +81,44 @@
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_distribute_default_init_4({{.+}}, {{.+}}, {{.+}} 1,
 // CHECK: {{call|invoke}} void [[OUTL2:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
 // CHECK: ret void
 
 // CHECK: define internal void [[OUTL2]](
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 34,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 33,
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: ret void
 
 // CHECK-LABEL: define {{.*}}void {{@__omp_offloading_.+}}(
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92,
+// CHECK: call void @__kmpc_distribute_default_init_4({{.+}}, {{.+}}, {{.+}} 1,
 // CHECK: {{call|invoke}} void [[OUTL3:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
 // CHECK: ret void
 
 // CHECK: define internal void [[OUTL3]](
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 34,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 33,
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: ret void
 
 // CHECK: define {{.*}}void {{@__omp_offloading_.+}}({{.+}}, i{{32|64}} [[F_IN:%.+]])
 // CHECK: store {{.+}} [[F_IN]], {{.+}}* {{.+}},
 // CHECK-DAG: [[THREAD_LIMIT:%.+]] = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
 // CHECK: call void @__kmpc_spmd_kernel_init(i32 [[THREAD_LIMIT]], i16 0, i16 0)
 // CHECK: store {{.+}} 99, {{.+}}* [[COMB_UB:%.+]], align
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 92, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
+// CHECK: call void @__kmpc_distribute_default_init_4({{.+}}, {{.+}}, {{.+}} 1, {{.+}}, {{.+}}, {{.+}}* [[COMB_UB]],
 // CHECK: {{call|invoke}} void [[OUTL4:@.+]](
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: call void @__kmpc_spmd_kernel_deinit()
 // CHECK: ret void
 
 // CHECK: define internal void [[OUTL4]](
-// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 34,
+// CHECK: call void @__kmpc_for_static_init_4({{.+}}, {{.+}}, {{.+}} 33,
 // CHECK: call void @__kmpc_for_static_fini(
 // CHECK: ret void
 
Index: test/OpenMP/nvptx_target_teams_distribute_parallel_for_generic_mode_codegen.cpp
===================================================================
--- test/OpenMP/nvptx_target_teams_distribute_parallel_for_generic_mode_codegen.cpp
+++ test/OpenMP/nvptx_target_teams_distribute_parallel_for_generic_mode_codegen.cpp
@@ -24,7 +24,7 @@
 // CHECK: define weak void @__omp_offloading_{{.*}}_main_l16(i{{64|32}} %{{[^,].*}}, i32* dereferenceable{{[^,]*}}, i{{64|32}} %{{[^,)]*}})
 // CHECK: [[TID:%.+]] = call i32 @__kmpc_global_thread_num(%struct.ident_t* @
 // CHECK: call void @__kmpc_spmd_kernel_init(
-// CHECK: call void @__kmpc_for_static_init_4(
+// CHECK: call void @__kmpc_distribute_default_init_4(
 
 // CHECK: call void [[PARALLEL:@.+]](i32* %{{.*}}, i32* %{{.+}}, i{{64|32}} %{{.+}}, i{{64|32}} %{{.*}}, i{{64|32}} %{{.*}}, i32* %{{.*}})
 // CHECK: br label %
Index: test/OpenMP/nvptx_target_teams_distribute_parallel_for_codegen.cpp