marcoabreu commented on a change in pull request #11325: Added TensorRT runtime integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r202208971
 
 

 ##########
 File path: src/executor/exec_pass.h
 ##########
 @@ -198,6 +198,27 @@ Graph InferStorageType(Graph&& graph,
                        StorageTypeVector&& storage_type_inputs = StorageTypeVector(),
                        const std::string& storage_type_attr_key = "");
 
+/*! \brief The default storage type inference function, which assigns all undefined
+ *         storage types to kDefaultStorage. If all of input and output storage types
+ *         are kDefaultStorage, DispatchMode::kFCompute is assigned to dispatch_mode.
+ *         Otherwise, DispatchMode::kFComputeFallback is assigned to dispatch_mode.
+ */
+bool DefaultStorageType(const nnvm::NodeAttrs& attrs,
+                        const int dev_mask,
+                        DispatchMode* dispatch_mode,
+                        std::vector<int> *iattr,
+                        std::vector<int> *oattr);
+
+/*!
+ * \brief Replace subgraphs by TRT (forward only)
+ */
+Graph ReplaceSubgraph(Graph&& g,
+                      const std::unordered_set<nnvm::Node*>& set_subgraph,
+                      std::unordered_map<std::string, NDArray>* const params_map);
+
+std::vector<std::unordered_set<nnvm::Node*>> GetTrtCompatibleSubsets(const Graph& g,
 
 Review comment:
   Can we please find a nicer and more maintainable approach than ifdefs? We're already cluttered because of mkldnn; let's not introduce another burden if possible.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
