masahi commented on a change in pull request #8466:
URL: https://github.com/apache/tvm/pull/8466#discussion_r672217797



##########
File path: src/tir/transforms/split_host_device.cc
##########
@@ -89,6 +92,17 @@ class VarUseDefAnalysis : public StmtExprMutator {
 
   Stmt VisitStmt_(const AllocateNode* op) final {
     this->HandleDef(op->buffer_var.get());
+    auto storage_scope = runtime::StorageScope::Create(GetPtrStorageScope(op->buffer_var));
+    if (storage_scope.rank == runtime::StorageRank::kDynShared) {
+      ICHECK_EQ(use_dyn_shmem_, false) << "Only one dynamic shared memory allocation is allowed.";

Review comment:
       I realized that we must support this usage, since it is required, among other things, for multiplying fp16 or int8 matrices and accumulating into fp32 or int32 (e.g. for tensor core).
   
   I'm thinking that we need to expose pointer casts in TIR, and somehow extend the `Load` and `Store` nodes to be able to read/write through arbitrary pointers. I want to be able to express something like the following:
   ```
   extern __shared__ char buf[SIZE];
   half* A_ptr = (half*)&buf[0];
   half* B_ptr = (half*)&buf[128];
   float* C_ptr = (float*)&buf[256];
   ...
   C_ptr[0] = A_ptr[0] * B_ptr[0];
   ```
   I didn't get what you meant by "translate to an intrinsic" @tqchen. Can we hide the pointer casts and the loads/stores that follow them behind intrinsics? For allocation and merging, I'm thinking about working exclusively with byte arrays and making use of `storage_rewrite` as is.
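   To make the byte-array idea concrete, here is a minimal host-side C++ analogue of the scheme sketched above (not actual TIR or kernel code): a single raw byte buffer is carved into differently typed regions via pointer casts, the way one merged dynamic shared memory allocation would serve fp16 inputs and an fp32 accumulator. The offsets (0, 128, 256) are taken from the sketch above; `uint16_t` stands in for `half`, since standard C++ has no half type, and the buffer is over-aligned so every derived pointer is properly aligned.

   ```cpp
   #include <cstdint>
   #include <iostream>

   int main() {
     // One raw allocation, aligned to the widest element type (float) so that
     // all cast-derived pointers below are properly aligned.
     alignas(alignof(float)) unsigned char buf[256 + 4 * sizeof(float)];

     // Carve typed views out of the byte buffer at fixed offsets, mirroring
     // the (half*)/(float*) casts in the CUDA sketch.
     uint16_t* a_ptr = reinterpret_cast<uint16_t*>(&buf[0]);    // "half" A region
     uint16_t* b_ptr = reinterpret_cast<uint16_t*>(&buf[128]);  // "half" B region
     float* c_ptr = reinterpret_cast<float*>(&buf[256]);        // fp32 accumulator

     a_ptr[0] = 3;
     b_ptr[0] = 7;
     c_ptr[0] = static_cast<float>(a_ptr[0]) * static_cast<float>(b_ptr[0]);

     std::cout << c_ptr[0] << std::endl;  // prints 21
     return 0;
   }
   ```

   Note that the allocation itself only ever deals in bytes; the element types exist purely at the access sites, which is what would let `storage_rewrite` merge allocations without caring about dtypes.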



