ekalda commented on PR #13645: URL: https://github.com/apache/tvm/pull/13645#issuecomment-1377502376
> > (1) We should still check whether TFLite's RELU6 is correctly executed on the NPU; some basic conv2d + RELU6 tests should do
>
> in my opinion it will be enough to add separate tests for RELU6 and RELU_N1_TO_1, since the RELU, RELU6 and RELU_N1_TO_1 activations are all converted to a clip operation in the tflite frontend (https://github.com/apache/tvm/blob/main/python/tvm/relay/frontend/tflite.py#L534) and other cases with basic operations will be covered by the cases with a RELU activation. Or is it necessary to add tests for RELU6 and RELU_N1_TO_1 with other operations?

We don't have a pattern matcher for a solitary Relay clip operation, so I suspect that if the graph consists of only a RELU and nothing else, it doesn't get offloaded to the NPU at all. That's why I was suggesting testing it in conjunction with another operator that matches an optional RELU, but I agree that there is no need to test it with all of them. However, since both RELU6 and RELU_N1_TO_1 are distinct TFLite operators, it would be good to have some testing for them in order to claim that they are supported.

> > (2) I'm confused about the min/max changes - is it an unrelated bugfix or necessary for the RELU_N1_TO_1 implementation?
>
> this is adding restrictions according to the hardware

To me this looks like a fix for an oversight in the initial implementation of binary elementwise, and it is not related to adding support for the two new activation functions (which is what the title of this PR claims). In general it should be a separate PR, but I don't think it is worth the trouble to split this PR up since it is quite small.
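For context (this is just an illustration, not code from the PR): the reason a single test per activation is plausibly enough is that all three TFLite activations are the same clip operation with different bounds, which a quick NumPy sketch can demonstrate:

```python
import numpy as np

def relu6(x):
    # TFLite RELU6: clamp activations to the range [0, 6]
    return np.minimum(np.maximum(x, 0.0), 6.0)

def relu_n1_to_1(x):
    # TFLite RELU_N1_TO_1: clamp activations to the range [-1, 1]
    return np.minimum(np.maximum(x, -1.0), 1.0)

x = np.array([-2.5, -0.5, 0.0, 3.0, 7.0])

# Both activations are equivalent to a clip with the corresponding bounds,
# which is why the TFLite frontend lowers them to Relay's clip operator.
assert np.array_equal(relu6(x), np.clip(x, 0.0, 6.0))
assert np.array_equal(relu_n1_to_1(x), np.clip(x, -1.0, 1.0))
```

Since the three operators only differ in their clip bounds, the open question above reduces to whether the pattern matcher handles each pair of bounds, not to per-operator behaviour.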
