zhuzilin commented on pull request #8056:
URL: https://github.com/apache/tvm/pull/8056#issuecomment-863697486


@tkonolige I've updated the test, but there seems to be an error with the CUDA target. Could you give me some help? Thank you! Part of the error message from the CI is shown below:
   
```
dev = cuda(0), target = 'cuda'

    @tvm.testing.parametrize_targets
    def test_nll_loss(dev, target):
>       verify_nll_loss(dev, target, (10, 5))

tests/python/topi/python/test_topi_loss.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/python/topi/python/test_topi_loss.py:40: in verify_nll_loss
    fn = tvm.build(s, [predictions, targets, weights, nll_loss_result], target, name="nll_loss")
python/tvm/driver/build_module.py:353: in build
    mod_host, mdev = _build_for_device(input_mod, tar, target_host)
python/tvm/driver/build_module.py:177: in _build_for_device
    mod_mixed = tvm.transform.Sequential(opt_mixed)(mod_mixed)
python/tvm/ir/transform.py:161: in __call__
    return _ffi_transform_api.RunPass(self, mod)
...
E     Did you forget to bind?
E       Variable `T_divide` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E       Variable `targets` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E       Variable `weights` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E       Variable `targets` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E       Variable `targets` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E       Variable `weights` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E       Variable `targets` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E       Variable `predictions` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E       Variable `targets` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
E     File "/workspace/src/tir/analysis/verify_memory.cc", line 202
E   RuntimeError: Memory verification failed with the following errors:
E   PrimFunc([predictions, targets, weights, T_divide]) attrs={"global_symbol": "nll_loss", "tir.noalias": (bool)1, "target": cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32} {
E     // attr [nll_loss] storage_scope = "global"
E     allocate nll_loss[float32 * 10]
E     // attr [nll_loss_red] storage_scope = "global"
E     allocate nll_loss_red[float32 * 1]
E     // attr [nll_loss_red] storage_scope = "global"
E     allocate nll_loss_red[float32 * 1]
E     for (ax0, 0, 10) {
E       nll_loss[ax0] = tir.if_then_else((targets[ax0] != -100), ((0f - predictions[((ax0*5) + targets[ax0])])*weights[targets[ax0]]), 0f)
E     }
E     nll_loss_red[0] = 0f
E     for (k0, 0, 10) {
E       nll_loss_red[0] = (nll_loss_red[0] + nll_loss[k0])
E     }
E     for (ax0, 0, 10) {
E       nll_loss[ax0] = tir.if_then_else((targets[ax0] != -100), weights[targets[ax0]], 0f)
E     }
E     nll_loss_red[0] = 0f
E     for (k0, 0, 10) {
E       nll_loss_red[0] = (nll_loss_red[0] + nll_loss[k0])
E     }
E     T_divide[0] = (nll_loss_red[0]/nll_loss_red[0])
E   }
```

