jikechao opened a new issue, #15338:
URL: https://github.com/apache/tvm/issues/15338
For input dtype `int32` or `uint32`, the graph executor gives wrong
inference results. In the statement below, replacing `"graph"` with
`"vm"` or `"aot"` produces correct results.

`model = relay.build_module.create_executor("graph", mod, tvm.cpu(0), 'llvm', params).evaluate()`

### Steps to reproduce
```python
import numpy as np
import tvm
import tvm.relay as relay
from tensorflow import keras
from tensorflow.keras import layers, models

input_shape = (1, 16)
dtype = 'int32'
input_data = np.random.randint(4, size=input_shape)

# Dropout acts as an identity at inference time, so the expected
# output equals the input.
x = layers.Input(shape=input_shape[1:], dtype=dtype)
y = keras.layers.Dropout(rate=0.2)(x)
model = models.Model(x, y)
res_keras = model(input_data)

shape_dict = {'input_1': input_shape}
mod, params = relay.frontend.from_keras(model, shape_dict)
with tvm.transform.PassContext(opt_level=3):
    model = relay.build_module.create_executor(
        "graph", mod, tvm.cpu(0), 'llvm', params
    ).evaluate()
res_tvm = model(tvm.nd.array(input_data.astype(dtype))).numpy()
np.testing.assert_allclose(res_keras, res_tvm, atol=1e-3, rtol=1e-3)
```
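Since `Dropout` is a no-op at inference, the correct output is simply the input itself, which makes the expected result easy to state without any framework. A TVM-free sketch of the final check (the `check` helper is hypothetical; the input array stands in for a result):

```python
import numpy as np

input_shape = (1, 16)
dtype = 'int32'
input_data = np.random.randint(4, size=input_shape).astype(dtype)

# Dropout(rate=0.2) is an identity at inference time, so the
# correct model output equals the input unchanged.
expected = input_data

def check(result, expected):
    # Same tolerances as the reproduction script above.
    np.testing.assert_allclose(result, expected, atol=1e-3, rtol=1e-3)

# Sanity check with a stand-in result; per the report, the real
# res_tvm from the graph executor fails this assertion.
check(expected, expected)
```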
### Actual behavior

The `assert_allclose` check fails: the graph executor's output does not
match the Keras reference, while the `vm` and `aot` executors produce
the correct result.

@echuraev @Hzfengsy @shingjan
Could you help confirm whether this is a bug in TVM? Thank you in advance!