This is an automated email from the ASF dual-hosted git repository.

tlopex pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
     new 4f5a17a4ae [BugFix][Relax] Fix ONNX Clip converter for opset 11-12 (#19375)
4f5a17a4ae is described below

commit 4f5a17a4ae4f04061bb7ed4327922bc831bb0d3a
Author: Soowon Jeong <[email protected]>
AuthorDate: Fri Apr 10 03:42:33 2026 +0900

    [BugFix][Relax] Fix ONNX Clip converter for opset 11-12 (#19375)
    
    ## Description
    
    ONNX changed the Clip operator from attribute-based min/max (opset 1-10)
    to input-based min/max (opset 11+). The Relax ONNX frontend only had
    `_impl_v1` (attributes) and `_impl_v13` (inputs), so opset 11-12 models
    were dispatched to `_impl_v1`, which ignores the input-based min/max and
    falls back to the `-inf`/`inf` defaults, making Clip a no-op.
    
    This caused **silent numerical divergence** in any opset 11-12 model
    using Clip/ReLU6 (e.g. MobileNetV2-12 from the ONNX Model Zoo).
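
    A minimal illustration of the failure mode in plain Python (not TVM
    code; `clip` here is a scalar stand-in for the operator):

    ```python
    import math

    def clip(x, lo, hi):
        # Scalar stand-in for the ONNX Clip operator.
        return max(lo, min(hi, x))

    x = 100.0
    assert clip(x, 0.0, 6.0) == 6.0                # correct ReLU6 behavior
    assert clip(x, -math.inf, math.inf) == 100.0   # buggy path: Clip is a no-op
    ```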
    
    ### Root cause
    
    `OnnxOpConverter.get_converter()` selects the largest `_impl_v*` version
    <= the model opset. With only `v1` and `v13`, opset 11-12 mapped to
    `v1`, which reads min/max from attributes — but opset 11+ passes them as
    inputs.
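
    A sketch of that dispatch rule (names are illustrative, not the actual
    TVM implementation):

    ```python
    def select_impl(available_versions, opset):
        # Mimics the rule described above: pick the largest registered
        # _impl_v* version that does not exceed the model opset.
        candidates = [v for v in available_versions if v <= opset]
        if not candidates:
            raise ValueError(f"no converter for opset {opset}")
        return max(candidates)

    # Before the fix, Clip only registered versions {1, 13}:
    assert select_impl({1, 13}, 12) == 1      # opset 11-12 wrongly fell to v1
    # After adding _impl_v11:
    assert select_impl({1, 11, 13}, 12) == 11
    ```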
    
    ### Fix
    
    Add `_impl_v11` that delegates to `_impl_v13`.
    
    ### Results (MobileNetV2, opset 12)
    
    | Metric | Before | After |
    |--------|:---:|:---:|
    | max abs diff vs ORT | 1.72e+06 | **8.58e-06** |
    | cosine similarity | 0.222 | **1.000** |
    | Top-5 match | No | **Yes** |
    
    ## Testing
    
    ```bash
    pytest tests/python/relax/test_frontend_onnx.py -k "clip" -v
    ```
    
    All 6 existing Clip tests pass (opset 6 and 13+). The fix only affects
    opset 11-12 dispatch.
---
 python/tvm/relax/frontend/onnx/onnx_frontend.py | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/python/tvm/relax/frontend/onnx/onnx_frontend.py b/python/tvm/relax/frontend/onnx/onnx_frontend.py
index 64fbf94076..940022de41 100644
--- a/python/tvm/relax/frontend/onnx/onnx_frontend.py
+++ b/python/tvm/relax/frontend/onnx/onnx_frontend.py
@@ -1087,6 +1087,11 @@ class Clip(OnnxOpConverter):
         results = bb.emit_te(topi.minimum, results, max)
         return results
 
+    @classmethod
+    def _impl_v11(cls, bb, inputs, attr, params):
+        # Opset 11 changed Clip from attribute-based min/max to input-based.
+        return cls._impl_v13(bb, inputs, attr, params)
+
     @classmethod
     def _impl_v13(cls, bb, inputs, attr, params):
         results = inputs[0]
@@ -3769,13 +3774,13 @@ def _argreduce_select_last_index(bb, data, axis, keepdims, op):
         offset = relax.const(int(axis_size) - 1, "int64")
     else:
         # dynamic: get axis size at runtime and subtract 1
-        shape_tensor = bb.normalize(relax.op.shape_to_tensor(
-            bb.normalize(relax.op.shape_of(data))
-        ))
-        offset = bb.normalize(relax.op.subtract(
-            bb.normalize(relax.op.take(shape_tensor, relax.const(axis, "int64"), axis=0)),
-            relax.const(1, "int64"),
-        ))
+        shape_tensor = bb.normalize(relax.op.shape_to_tensor(bb.normalize(relax.op.shape_of(data))))
+        offset = bb.normalize(
+            relax.op.subtract(
+                bb.normalize(relax.op.take(shape_tensor, relax.const(axis, "int64"), axis=0)),
+                relax.const(1, "int64"),
+            )
+        )
     return relax.op.subtract(offset, flipped_idx)
 
 
@@ -4353,7 +4358,7 @@ class SplitToSequence(OnnxOpConverter):
                 for i in range(n)
             ]
             return relax.Tuple(squeezed)
-        
+
         return output
 
 
