comaniac commented on a change in pull request #8172:
URL: https://github.com/apache/tvm/pull/8172#discussion_r644219591
##########
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##########
@@ -174,25 +176,50 @@ class TensorRTRuntime : public JSONRuntimeBase {
int binding_index = engine->getBindingIndex(name.c_str());
ICHECK_NE(binding_index, -1);
if (data_entry_[eid]->device.device_type != kDLCUDA) {
-        device_buffers[binding_index].CopyTo(const_cast<DLTensor*>(data_entry_[eid]));
+ auto device_buffer = GetOrAllocateDeviceBuffer(eid, binding_index);
+ device_buffer.CopyTo(const_cast<DLTensor*>(data_entry_[eid]));
}
}
}
private:
+ /*! \brief Get batch size for engine from the runtime input shapes. */
+ int GetBatchSize() {
+    return data_entry_[input_var_eid_[0]]->ndim == 0 ? 1 : data_entry_[input_var_eid_[0]]->shape[0];
+  }
+
+  /*! \brief TensorRT engines are built for a maximum batch size. If an engine doesn't exist for a
+   * certain batch size already, see if we can reuse an engine built for a higher batch size. */
+  bool FindCompatibleEngine(int batch_size, int* compatible_engine_batch_size) {
+ // Check for exact match
+ if (trt_engine_cache_.count(std::make_pair(symbol_name_, batch_size))) {
+ *compatible_engine_batch_size = batch_size;
+ return true;
+ }
Review comment:
       Exactly, and this is a common issue that also comes up in TVM tuning. I'm
not sure whether TRT users would prefer to build the best-fit engine every time,
or would be willing to reuse an already-built one at some performance cost.
Making it configurable first might be better.
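To illustrate the policy being discussed, here is a minimal, hypothetical sketch (not the PR's actual implementation) of a compatible-engine lookup with a configurable `allow_reuse` flag: it first checks for an exact batch-size match, and only falls back to the smallest cached engine built for a larger batch size when reuse is enabled. The cache shape and names are assumptions for the example.

```cpp
#include <map>
#include <string>
#include <utility>

// Hypothetical cache key: (symbol name, max batch size the engine was built for).
using EngineKey = std::pair<std::string, int>;

// Look up an engine for `batch_size`. Exact match wins; otherwise, if
// `allow_reuse` is set, pick the smallest cached engine whose max batch
// size can still accommodate the request (trading some performance).
// The cache's mapped value stands in for the engine itself in this sketch.
inline bool FindCompatibleEngine(const std::map<EngineKey, int>& cache,
                                 const std::string& symbol, int batch_size,
                                 bool allow_reuse, int* compatible_batch_size) {
  if (cache.count({symbol, batch_size})) {
    *compatible_batch_size = batch_size;  // exact match
    return true;
  }
  if (!allow_reuse) return false;  // user opted for best-fit engines only
  int best = -1;
  for (const auto& kv : cache) {
    if (kv.first.first == symbol && kv.first.second > batch_size) {
      if (best == -1 || kv.first.second < best) best = kv.first.second;
    }
  }
  if (best == -1) return false;  // no engine large enough is cached
  *compatible_batch_size = best;
  return true;
}
```

With `allow_reuse = false` the caller would always build a best-fit engine; with it enabled, a request for batch 3 could be served by a cached batch-4 engine.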
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]