jwfromm commented on a change in pull request #6251:
URL: https://github.com/apache/incubator-tvm/pull/6251#discussion_r474016831
##########
File path: python/tvm/relay/frontend/onnx.py
##########
@@ -1875,6 +1876,20 @@ def _impl_v1(cls, inputs, attr, params):
return _vision.roi_align(x, rois, [output_height, output_width],
spatial_scale, sampling_ratio)
+class Clip(OnnxOpConverter):
+    """Operator converter for Clip."""
+
+    @classmethod
+    def _impl_v1(cls, inputs, attr, params):
+        return AttrCvt('clip',
+                       transforms={'min': 'a_min', 'max': 'a_max'})(inputs, attr, params)
+
+    @classmethod
+    def _impl_v11(cls, inputs, attr, params):
+        clip_bounds = [attr[bound] if bound in attr else
+                       infer_value_simulated(inputs[i + 1],
+                                             params).asnumpy().item(0)
+                       for i, bound in enumerate(['min', 'max'])]
Review comment:
Although `clip` in relay requires static values for `min` and `max`, it
looks like `relay.minimum` and `relay.maximum` can use expressions for both the
lhs and rhs arguments. I'd recommend using those two ops to construct clip to
avoid calls to `infer_value_simulated`.
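
The suggestion rests on the identity `clip(x, lo, hi) == minimum(maximum(x, lo), hi)`, which holds elementwise and so works even when the bounds arrive as expressions rather than static attributes. A minimal sketch of that identity, using NumPy as a stand-in for the corresponding elementwise relay ops (the relay converter plumbing itself is omitted):

```python
import numpy as np

def clip_via_min_max(x, lo, hi):
    """Clip x to [lo, hi] using only elementwise max and min,
    mirroring the proposed relay.maximum/relay.minimum composition."""
    return np.minimum(np.maximum(x, lo), hi)

x = np.array([-3.0, -0.5, 0.0, 2.0, 7.5])
# Matches np.clip(x, -1.0, 5.0) for every element.
print(clip_via_min_max(x, -1.0, 5.0))
```

Because `relay.minimum` and `relay.maximum` accept arbitrary expressions on both sides, the v11 converter could pass the `min`/`max` inputs through directly, with no need to force them to constants via `infer_value_simulated`.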
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]