masahi commented on a change in pull request #8466:
URL: https://github.com/apache/tvm/pull/8466#discussion_r672217797



##########
File path: src/tir/transforms/split_host_device.cc
##########
@@ -89,6 +92,17 @@ class VarUseDefAnalysis : public StmtExprMutator {
 
   Stmt VisitStmt_(const AllocateNode* op) final {
     this->HandleDef(op->buffer_var.get());
+    auto storage_scope = runtime::StorageScope::Create(GetPtrStorageScope(op->buffer_var));
+    if (storage_scope.rank == runtime::StorageRank::kDynShared) {
+      ICHECK_EQ(use_dyn_shmem_, false) << "Only one dynamic shared memory allocation is allowed.";

Review comment:
       I realized that we must support this usage, since it is required, among other things, for multiplying fp16 or int8 matrices and accumulating into fp32 or int32 (for example, with tensor cores).
   
   I'm thinking that we need to expose pointer casts in TIR, and somehow extend the `Load` and `Store` nodes to be able to read and write through arbitrary pointers. I want to be able to express something like this:
   ```
   extern __shared__ char buf[SIZE];
   half* A_ptr = (half*)&buf[0];
   half* B_ptr = (half*)&buf[128];
   float* C_ptr = (float*)&buf[256];
   ...
   C_ptr[0] = A_ptr[0] * B_ptr[0];
   ```
   I didn't get what you meant by "translate to an intrinsic" @tqchen. Can we hide the pointer cast and the loads/stores that follow it behind intrinsics? For allocation and merging, I'm thinking about working exclusively with byte arrays and making use of `storage_rewrite` as is.
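
   The byte-array partitioning idea can be sketched in plain host-side C++ (the offsets 0/128/256 are the hypothetical ones from the snippet above; a heap buffer stands in for CUDA's `extern __shared__`, and `uint16_t` stands in for `half`):
   ```cpp
   #include <cassert>
   #include <cstdint>
   #include <vector>

   int main() {
     // One untyped byte pool, standing in for the single dynamic
     // shared memory allocation (extern __shared__ char buf[SIZE]).
     std::vector<unsigned char> buf(512);

     // Typed views carved out at fixed byte offsets via pointer casts.
     // Note: a real implementation would need to respect alignment and
     // aliasing rules; this is only an illustration of the layout.
     uint16_t* A_ptr = reinterpret_cast<uint16_t*>(&buf[0]);
     uint16_t* B_ptr = reinterpret_cast<uint16_t*>(&buf[128]);
     float* C_ptr = reinterpret_cast<float*>(&buf[256]);

     A_ptr[0] = 3;
     B_ptr[0] = 4;
     // Mixed-precision accumulate: narrow inputs, wider output.
     C_ptr[0] = static_cast<float>(A_ptr[0]) * static_cast<float>(B_ptr[0]);
     assert(C_ptr[0] == 12.0f);
     return 0;
   }
   ```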

##########
File path: src/tir/transforms/split_host_device.cc
##########
@@ -89,6 +92,17 @@ class VarUseDefAnalysis : public StmtExprMutator {
 
   Stmt VisitStmt_(const AllocateNode* op) final {
     this->HandleDef(op->buffer_var.get());
+    auto storage_scope = runtime::StorageScope::Create(GetPtrStorageScope(op->buffer_var));
+    if (storage_scope.rank == runtime::StorageRank::kDynShared) {
+      ICHECK_EQ(use_dyn_shmem_, false) << "Only one dynamic shared memory allocation is allowed.";

Review comment:
       One possible solution is to rewrite the above program to 
   ```
   extern __shared__ char buf[SIZE];
   *((float*)&buf[256]) = ((half*)&buf[0])[0] * ((half*)&buf[128])[0];
   ```
   And introduce two intrinsics, `reinterpret_load` / `reinterpret_store`, which can be used to express the snippet above as:
   ```
   extern __shared__ char buf[SIZE];
   reinterpret_store(&buf[256], "float32", 0,
                     reinterpret_load(&buf[0], "half", 0) * reinterpret_load(&buf[128], "half", 0));
   ```
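
   A host-side C++ sketch of what such intrinsics could mean (hypothetical signatures, not TVM code; `memcpy` is used so the sketch avoids the aliasing problems of raw casts, and plain `float` stands in for `half`):
   ```cpp
   #include <cassert>
   #include <cstddef>
   #include <cstring>

   // Hypothetical analogue of reinterpret_load: read element `index` of
   // type T from a typed view starting at `byte_offset` into a byte pool.
   template <typename T>
   T reinterpret_load(const unsigned char* buf, size_t byte_offset, size_t index) {
     T value;
     std::memcpy(&value, buf + byte_offset + index * sizeof(T), sizeof(T));
     return value;
   }

   // Hypothetical analogue of reinterpret_store: the matching typed write.
   template <typename T>
   void reinterpret_store(unsigned char* buf, size_t byte_offset, size_t index, T value) {
     std::memcpy(buf + byte_offset + index * sizeof(T), &value, sizeof(T));
   }

   int main() {
     unsigned char buf[512] = {0};  // stand-in for the shared byte pool
     reinterpret_store<float>(buf, 0, 0, 3.0f);
     reinterpret_store<float>(buf, 128, 0, 4.0f);
     float c = reinterpret_load<float>(buf, 0, 0) * reinterpret_load<float>(buf, 128, 0);
     reinterpret_store<float>(buf, 256, 0, c);
     assert(reinterpret_load<float>(buf, 256, 0) == 12.0f);
     return 0;
   }
   ```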
   

##########
File path: src/runtime/file_utils.cc
##########
@@ -64,12 +66,14 @@ void FunctionInfo::Save(dmlc::Stream* writer) const {
   writer->Write(name);
   writer->Write(arg_types);
   writer->Write(thread_axis_tags);
+  writer->Write(use_dyn_shared_memory);
 }
 
 bool FunctionInfo::Load(dmlc::Stream* reader) {
   if (!reader->Read(&name)) return false;
   if (!reader->Read(&arg_types)) return false;
   if (!reader->Read(&thread_axis_tags)) return false;
+  if (!reader->Read(&use_dyn_shared_memory)) return false;

Review comment:
       If we do the rename `thread_axis_tags => launch_param_tags`, the JSON reader would break backward compatibility at `helper.DeclareField("launch_param_tags", &launch_param_tags);`.
   
   Or shall we do the rename but keep the old attribute name for the JSON reader/writer? e.g.
   `helper.DeclareField("thread_axis_tags", &launch_param_tags);`.
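
   The compatibility idea can be sketched generically (hypothetical types; a `std::map` stands in for the parsed JSON object, not the actual dmlc helper): the C++ member is renamed, but the serialized key stays the old one, so previously saved modules keep loading.
   ```cpp
   #include <cassert>
   #include <map>
   #include <string>
   #include <vector>

   // Hypothetical sketch: member renamed to launch_param_tags, but the
   // JSON key remains "thread_axis_tags" for backward compatibility.
   struct FunctionInfoSketch {
     std::vector<std::string> launch_param_tags;  // renamed member

     void Load(const std::map<std::string, std::vector<std::string>>& json) {
       // Equivalent in spirit to
       //   helper.DeclareField("thread_axis_tags", &launch_param_tags);
       auto it = json.find("thread_axis_tags");  // old key kept on purpose
       if (it != json.end()) launch_param_tags = it->second;
     }
   };

   int main() {
     // An "old" serialized record still uses the old key.
     std::map<std::string, std::vector<std::string>> old_json = {
         {"thread_axis_tags", {"threadIdx.x", "blockIdx.x"}}};
     FunctionInfoSketch info;
     info.Load(old_json);
     assert(info.launch_param_tags.size() == 2);
     assert(info.launch_param_tags[0] == "threadIdx.x");
     return 0;
   }
   ```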
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

