csullivan commented on a change in pull request #10558:
URL: https://github.com/apache/tvm/pull/10558#discussion_r826090199
##########
File path: src/runtime/hexagon/hexagon/hexagon_device_api_v2.cc
##########
@@ -163,6 +177,63 @@
TVM_REGISTER_GLOBAL("device_api.hexagon.mem_copy").set_body([](TVMArgs args, TVMRetValue* rv) {
*rv = static_cast<int32_t>(0);
});
+std::map<void*, HexagonBuffer*> vtcmallocs;
+
+TVM_REGISTER_GLOBAL("device_api.hexagon.AllocNd").set_body([](TVMArgs args, TVMRetValue* rv) {
+ int32_t device_type = args[0];
+ int32_t device_id = args[1];
+ int32_t dtype_code_hint = args[2];
+ int32_t dtype_bits_hint = args[3];
+ std::string scope = args[4];
+ CHECK(scope.find("vtcm") != std::string::npos);
+ int64_t ndim = args[5];
+ // Forcing contiguous allocation, for now
+ // TODO(Straw): Enable discontiguous allocation
+ CHECK_EQ(ndim, 1);
+ std::vector<int64_t> shape;
+ for (int i = 0; i < ndim; ++i) {
+ shape.push_back(args[6 + i]);
+ }
Review comment:
Yes, your understanding is correct: kArrShape is the type field used to
allocate memory for the DLTensor::shape field on the stack. How about at
the [call
site](https://github.com/apache/tvm/pull/10558/files#diff-e708efd03116baca1c23f26e4ab2c9645bd2222929d077931b23bd0fa582ba22R48)
we add a call to the
[builtin](https://github.com/apache/tvm/blob/main/include/tvm/tir/builtin.h#L326-L345)
`tvm_stack_make_array` and then unpack a DLTensor handle in this packed
function? Then the only other argument needed should be the scope.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]