trevor-m commented on a change in pull request #8172:
URL: https://github.com/apache/tvm/pull/8172#discussion_r644156232
##########
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##########
@@ -174,25 +176,50 @@ class TensorRTRuntime : public JSONRuntimeBase {
int binding_index = engine->getBindingIndex(name.c_str());
ICHECK_NE(binding_index, -1);
if (data_entry_[eid]->device.device_type != kDLCUDA) {
-      device_buffers[binding_index].CopyTo(const_cast<DLTensor*>(data_entry_[eid]));
+ auto device_buffer = GetOrAllocateDeviceBuffer(eid, binding_index);
+ device_buffer.CopyTo(const_cast<DLTensor*>(data_entry_[eid]));
}
}
}
private:
+ /*! \brief Get batch size for engine from the runtime input shapes. */
+ int GetBatchSize() {
+    return data_entry_[input_var_eid_[0]]->ndim == 0 ? 1 : data_entry_[input_var_eid_[0]]->shape[0];
+ }
+
+  /*! \brief TensorRT engines are built for a maximum batch size. If an engine doesn't exist for a
+   * certain batch size already, see if we can reuse an engine built for a higher batch size. */
+  bool FindCompatibleEngine(int batch_size, int* compatible_engine_batch_size) {
+ // Check for exact match
+ if (trt_engine_cache_.count(std::make_pair(symbol_name_, batch_size))) {
+ *compatible_engine_batch_size = batch_size;
+ return true;
+ }
Review comment:
From TRT's documentation, it sounds like doing this could have a performance impact:
"Another consideration is that building the optimized network optimizes for
the given maximum batch size. The final result will be tuned for the maximum
batch size but will still work correctly for any smaller batch size. It is
possible to run multiple build operations to create multiple optimized engines
for different batch sizes, then choose which engine to use based on the actual
batch size at runtime. "
The implementation in this PR isn't much better, because it depends on the order in which batch sizes are encountered. For example, if we encounter batch sizes 1, 2, 4, then 3 engines are built. But if we encounter them in the order 4, 2, 1, only one engine is built, and it will be suboptimal for batch sizes 1 and 2, which is pretty much the same result as keeping a single engine built for the largest batch size.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]