eaten-cake commented on issue #17989:
URL: https://github.com/apache/tvm/issues/17989#issuecomment-2889903708

   Sorry, the reproduction code above is incorrect. Here's the corrected one:
   ```python
   import tvm
   from tvm import dlight as dl
   from tvm.relax.frontend.torch import from_exported_program

   import torch


   class Unbind(torch.nn.Module):
       def forward(self, x):
           # unbind splits the tensor along dim=0 and returns a tuple of slices
           return torch.unbind(x, dim=0)


   # Export the PyTorch model and import it into Relax.
   model = Unbind()
   x = torch.randn(2, 3, 4)
   exported_program = torch.export.export(model, (x,))
   mod = from_exported_program(exported_program)

   dev = tvm.cuda(0)
   target = tvm.target.Target.from_device(dev)

   # Legalize the Relax ops and apply the default GPU schedule.
   with target:
       mod = tvm.relax.transform.LegalizeOps()(mod)
       mod = dl.ApplyDefaultSchedule(
           dl.gpu.Fallback(),
       )(mod)

   mod.show()

   # Build the module and run it on the GPU through the Relax VM.
   ex = tvm.relax.build(mod, target)

   x_tvm = tvm.nd.from_dlpack(x)
   x_tvm = x_tvm.copyto(dev)

   vm = tvm.relax.VirtualMachine(ex, dev)
   out = vm["main"](x_tvm)
   print(out)
   ```
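   For context, a minimal eager-mode reference check (not part of the repro above, just showing what the VM output can be compared against): `torch.unbind(x, dim=0)` returns a tuple of two `(3, 4)` tensors for this input.
   ```python
   import torch

   x = torch.randn(2, 3, 4)
   # Eager PyTorch reference: unbind along dim=0 yields a tuple of two
   # tensors, each of shape (3, 4).
   ref = torch.unbind(x, dim=0)
   print(len(ref))       # 2
   print(ref[0].shape)   # torch.Size([3, 4])
   ```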

