csullivan commented on a change in pull request #10558:
URL: https://github.com/apache/tvm/pull/10558#discussion_r826099605



##########
File path: src/runtime/hexagon/hexagon/hexagon_device_api_v2.cc
##########
@@ -163,6 +177,63 @@ TVM_REGISTER_GLOBAL("device_api.hexagon.mem_copy").set_body([](TVMArgs args, TVM
   *rv = static_cast<int32_t>(0);
 });
 
+std::map<void*, HexagonBuffer*> vtcmallocs;
+
+TVM_REGISTER_GLOBAL("device_api.hexagon.AllocNd").set_body([](TVMArgs args, TVMRetValue* rv) {
+  int32_t device_type = args[0];
+  int32_t device_id = args[1];
+  int32_t dtype_code_hint = args[2];
+  int32_t dtype_bits_hint = args[3];
+  std::string scope = args[4];
+  CHECK(scope.find("vtcm") != std::string::npos);
+  int64_t ndim = args[5];
+  // Forcing contiguous allocation, for now
+  // TODO(Straw): Enable discontiguous allocation
+  CHECK_EQ(ndim, 1);
+  std::vector<int64_t> shape;
+  for (int i = 0; i < ndim; ++i) {
+    shape.push_back(args[6 + i]);
+  }

Review comment:
       My original thinking with kArrShape was that the `tvm_stack_make_shape` intrinsic made sense given that we aren't actually passing a tensor. You would then need to pass ndim, dtype, and scope as well.
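To make the calling convention concrete, here is a hypothetical, standalone sketch (no TVM dependency; all names invented for illustration) of what the comment describes: since no tensor is passed, the caller must supply scope, dtype, and ndim explicitly alongside the shape words that something like `tvm_stack_make_shape` would push onto a stack.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical request struct standing in for the unpacked packed-func args.
struct NdAllocRequest {
  std::string scope;
  int32_t dtype_bits;
  std::vector<int64_t> shape;
};

// Mirrors the unpacking in the PR hunk: scope must name VTCM and, for now,
// only contiguous (1-d) allocations are accepted.
NdAllocRequest UnpackNdAlloc(const std::string& scope, int32_t dtype_bits,
                             int64_t ndim, const int64_t* shape_words) {
  assert(scope.find("vtcm") != std::string::npos);
  assert(ndim == 1);  // contiguous only, per the TODO in the hunk
  NdAllocRequest req{scope, dtype_bits, {}};
  for (int64_t i = 0; i < ndim; ++i) {
    req.shape.push_back(shape_words[i]);
  }
  return req;
}
```

The point of the sketch is just that every piece of metadata a tensor would normally carry (scope, dtype, rank, extents) has to travel as an explicit argument when only raw shape words are passed.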




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
