areusch commented on a change in pull request #6917:
URL: https://github.com/apache/incubator-tvm/pull/6917#discussion_r528398528



##########
File path: include/tvm/runtime/crt/graph_runtime.h
##########
@@ -61,14 +61,20 @@ typedef struct TVMGraphRuntime TVMGraphRuntime;
  * \brief Allocate a new GraphRuntime with vmalloc and initialize it.
  *
  * \param sym_json JSON-encoded graph.
- * \param m TVM Module that exposes the functions to call.
+ * \param module_handle TVM Module that exposes the functions to call.
  * \param ctxs runtime execution context.
  */
-TVMGraphRuntime* TVMGraphRuntime_Create(const char* sym_json, const struct TVMModule* m,
+TVMGraphRuntime* TVMGraphRuntime_Create(const char* sym_json, TVMModuleHandle module_handle,
                                         const TVMContext* ctxs);
 
 int TVMGraphRuntime_GetInputIndex(TVMGraphRuntime* runtime, const char* name);
 
+/*!
+ * \brief get number of input tensors allocated.
+ * \return integer number of tensors available to use.
+ */
+int TVMGraphRuntime_GetNumInputs();

Review comment:
       this one is needed for the CRT test. we don't yet support executing the 
graph runtime within the C runtime over RPC; test_linked_params uses this to 
verify that the CRT graph runtime can look up the linked params. it's a bit 
tangential, yes, but it basically rounds out the graph runtime impl and it's 
not a very large addition; i'd prefer to keep it with this change to show the 
motivation.

##########
File path: include/tvm/runtime/crt/graph_runtime.h
##########
@@ -77,6 +83,12 @@ int TVMGraphRuntime_GetInputIndex(TVMGraphRuntime* runtime, const char* name);
  */
 void TVMGraphRuntime_SetInput(TVMGraphRuntime* runtime, const char* name, DLTensor* data_in);
 
+/*!
+ * \brief get number of output tensors allocated.
+ * \return integer number of output tensors allocated.
+ */
+int TVMGraphRuntime_GetNumOutputs();

Review comment:
       same story as before



