mbrookhart edited a comment on pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#issuecomment-750926391


   Sorry for the delay in responding to this; I wanted to look at the frameworks more closely. We currently have 5 importers that leverage NMS:
   
   MXNet does multibox_transform_loc and then NMS on the outputs. multibox_transform_loc converts a 3D array of scores with shape (batch_size, class_num, num_anchors) into the most likely class and the score for that class per anchor, and also does some coordinate transforms on the boxes.
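   To make the score-reduction half of that concrete, here's a rough NumPy sketch (illustration only, not the actual TVM kernel; the variable names are mine):

```python
import numpy as np

# Sketch of the score reduction inside multibox_transform_loc:
# collapse the class axis to one (class, score) pair per anchor.
batch_size, class_num, num_anchors = 1, 3, 4
scores = np.random.rand(batch_size, class_num, num_anchors).astype("float32")

best_class = scores.argmax(axis=1)  # (batch_size, num_anchors)
best_score = scores.max(axis=1)     # (batch_size, num_anchors)
```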
   
   ONNX takes a 3D tensor of shape (batch_size, class_num, num_anchors), does some slicing/concatenating with the boxes, and then runs a per-class get_valid_counts->nms.
   
   PyTorch takes in a 1D tensor of scores and concatenates it with the boxes before performing get_valid_counts and nms. As @masahi shows in this PR, there is preprocessing outside of the op to embed all classes into that 1D tensor.
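   For reference, the packing that path ends up doing looks roughly like this in NumPy (a sketch of the layout, not the importer code; I'm assuming the 5-element row form is [score, x1, y1, x2, y2]):

```python
import numpy as np

num_anchors = 4
scores = np.random.rand(num_anchors).astype("float32")    # 1D scores
boxes = np.random.rand(num_anchors, 4).astype("float32")  # (x1, y1, x2, y2)

# Concatenate into (num_anchors, 5) rows of [score, x1, y1, x2, y2],
# the shape get_valid_counts/nms can consume.
data = np.concatenate([scores[:, None], boxes], axis=1)
```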
   
   TF takes a 1D tensor of scores and concatenates it with the boxes before performing get_valid_counts and nms. I'm not sure whether the rest of the TF graph handles the loop over batch size and classes.
   
   TFLite takes a 3D score tensor of shape (batch_size, num_anchors, class_id), reorders it to (batch_size, class_id, num_anchors), performs multibox_transform_loc->nms, and, strangely, does get_valid_counts after NMS.
   
   It looks like we're doing pre-processing in every framework importer to reduce the amount of score information and convert it to the packed 5- or 6-element form the nms API wants. None of the frameworks give us inputs in that packed form, so we jump through hoops in every importer to produce it. Then, in at least TFLite and ONNX, we perform further splitting/slicing/concatenating to recover the separate class ids.
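   In other words, every importer has to manufacture something like this 6-element packed layout before calling nms (NumPy sketch of the layout as I understand it, [class_id, score, x1, y1, x2, y2] per anchor):

```python
import numpy as np

batch_size, num_anchors = 1, 4
class_ids = np.random.randint(0, 3, (batch_size, num_anchors)).astype("float32")
scores = np.random.rand(batch_size, num_anchors).astype("float32")
boxes = np.random.rand(batch_size, num_anchors, 4).astype("float32")

# Pack per-anchor rows of [class_id, score, x1, y1, x2, y2]:
# the (batch_size, num_anchors, 6) form the nms API consumes.
packed = np.concatenate([class_ids[..., None], scores[..., None], boxes], axis=2)
```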
   
   I think I agree with @masahi: we seem to be jumping through a lot of hoops in the importers to support a TVM NMS API that's out of line with the frameworks, and that might be hurting our overall performance.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

