quic-sanirudh commented on code in PR #12204:
URL: https://github.com/apache/tvm/pull/12204#discussion_r945441218


##########
src/runtime/hexagon/ops/conv2d.h:
##########
@@ -0,0 +1,144 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include <HAP_farf.h>
+#include <tvm/runtime/c_runtime_api.h>
+#include <tvm/runtime/device_api.h>
+
+#include <cassert>
+
+#ifndef TVM_RUNTIME_HEXAGON_OPS_CONV2D_H_
+#define TVM_RUNTIME_HEXAGON_OPS_CONV2D_H_
+
+#ifdef DEBUG_CONV
+#define DEBUG_BLOCK(X) \
+  { X }
+#define debug(...) FARF(ALWAYS, ##__VA_ARGS__)
+#else
+#define DEBUG_BLOCK(X)
+#define debug(...)
+#endif
+
+#define HAP_CALL(hap_fn, ...)                 \
+  {                                           \
+    int rc = hap_fn(__VA_ARGS__);             \
+    if (rc != 0) {                            \
+      debug("%s failed: rc=%x", #hap_fn, rc); \
+    }                                         \
+  }
+
+namespace detail {
+static constexpr auto hexagon_device = DLDevice{static_cast<DLDeviceType>(kDLHexagon), 0};
+
+// Standalone DLTensor: the standalone-ness means that this object owns the shape
+// (as opposed to a DLTensor).
+template <size_t N>
+class SDLTensor : public DLTensor {
+ public:
+  SDLTensor(void* data_ptr, DLDataType data_type, void* data_space, const int64_t* data_dims)
+      : SDLTensor(data_ptr, data_type, data_space) {
+    for (size_t i = 0; i != N; ++i) dims[i] = data_dims[i];
+  }
+
+  SDLTensor(void* data_ptr, DLDataType data_type, void* data_space,
+            std::initializer_list<int64_t> data_dims)
+      : SDLTensor(data_ptr, data_type, data_space, data_dims.begin()) {}
+
+  void* GetDataSpace() const { return data_space; }
+
+ private:
+  SDLTensor(void* data_ptr, DLDataType data_type, void* data_space) : data_space(data_space) {

Review Comment:
   Again, as I commented above, @kparzysz-quic could probably provide the proper answer.
   
   Having said that, I think the difference is that `data_space` is meant to store the pointer returned from `AllocDataSpace`, which means we can later call `FreeDataSpace` on that same pointer. `data_ptr` would be the same as `data_space` in the case of activations.
   
   In the case of weights, `data_ptr` stores pointers to the first element of each **"chunk"** in the layout (I've explained the difference between "chunkified" and "blockized" below and also added them as comments in the code). In short, since chunks can be of different sizes (due to the restriction that there is no padding along height and width as part of the layout), `data_ptr` allows efficient access to each of the chunks in the data space.
   
   For the questions you've asked:
   1. `data_ptr` could be the same as `data_space`, or a completely separate pointer pointing to the start addresses of each chunk of weights.
   2. I think both have to be valid pointers. In the case of weights, `data_ptr` points to addresses on the stack, so it gets freed automatically, but `data_space` is always freed with a call to `FreeDataSpace`.
   3. I'm not sure; right now it is used only for the activations and weights of convolution, and there are always separate `SDLTensor` instances for them.


