masahi commented on issue #12524:
URL: https://github.com/apache/tvm/issues/12524#issuecomment-1223566223

   Yeah, that PR is very old, but their current implementation in 
https://github.com/pytorch/pytorch/blob/a85d1f0bcdd02cf18d3b0517337458cb51a18cdb/aten/src/ATen/native/cuda/ActivationLogSigmoidKernel.cu#L32-L35
 is notably not a simple composition of log and sigmoid.
   
   I tried replacing 
https://github.com/apache/tvm/blob/f64a3bda253f4220d66eeb3348f93f486392cb8e/python/tvm/relay/frontend/pytorch.py#L912-L914
 with 
   
   ```python
       def log_sigmoid(self, inputs, input_types):
           data = inputs[0]
           mn = _op.minimum(_op.const(0, dtype=input_types[0]), data)
           z = _op.exp(-_op.abs(data))
           return mn - self.log1p([z], input_types)
   
   ```
   
   following the PT code. TVM's results are now also large, but they still don't agree 
with PyTorch's. Can you take it from here?

