dpankratz opened a new pull request #4650: [Bugfix] Support for placeholders 
defining tensor shapes
URL: https://github.com/apache/incubator-tvm/pull/4650
 
 
   When defining a tensor with a dynamic shape, a typical approach is to use a var as follows:
   ```
   A = tvm.var(dtype='int32')
   C = tvm.compute((A, A), lambda i, j: 0)
   sch = tvm.create_schedule(C.op)
   f = tvm.build(sch, [A, C])
   ```
   An equivalent program using a placeholder instead of a var can be written as 
follows:
   ```
   A = tvm.placeholder((1, 1), dtype='int32', name="A")
   C = tvm.compute((A[0, 0], A[0, 0]), lambda i, j: 0)
   sch = tvm.create_schedule(C.op)
   f = tvm.build(sch, [A, C])
   ```
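   Conceptually, both programs describe the same thing: a function that, given a size `n` (taken from the var in the first snippet, or from `A[0, 0]` in the second), fills an `n`-by-`n` int32 output with zeros. A plain-Python sketch of that behavior (a hypothetical stand-in for the compiled function, not TVM output):
   ```
   def zero_fill(n):
       """Mimic the compiled kernel: produce an n x n grid of int32 zeros.

       The extent n is only known at call time, just as the var or the
       placeholder element is only known at runtime in the TVM snippets.
       """
       return [[0 for _col in range(n)] for _row in range(n)]

   out = zero_fill(3)  # a 3 x 3 grid of zeros
   ```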
   
   However, despite the two programs being functionally identical, the placeholder version fails to build because assertions of the following form are added:
   ```
   assert((A(0, 0) == float32(arg2.shape[0])), "Argument arg2.shape[0] has an 
unsatisfied constraint")
   ```
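   In plain Python, the check such a generated assertion performs amounts to comparing each expected extent against the shape of the incoming tensor at runtime; a hedged sketch with hypothetical names, not TVM's actual binder:
   ```
   def bind_shape(expected, actual_shape):
       """Roughly what the generated argument-binding assertions check:
       each expected extent must match the runtime tensor's extent."""
       for axis, (want, got) in enumerate(zip(expected, actual_shape)):
           if want != got:
               raise AssertionError(
                   "Argument shape[%d] has an unsatisfied constraint" % axis)

   bind_shape((4, 4), (4, 4))  # matching shapes pass silently
   ```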
   The salient piece of the assertion is `A(0, 0)`, a Call node with `call_type=Call::Halide` that codegen is unable to handle. Normally such calls are replaced with loads during the StorageFlatten pass. In this case, because `A(0, 0)` is one of the expressions defining the buffer shape, it never passes through StorageFlatten: it is only materialized in the assertions created by `ArgBinder::BindDLTensor`, which runs after the lowering passes.
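   The core of what StorageFlatten does to such a call can be sketched in a few lines: a multi-dimensional access like `A(i, j)` becomes a load at a single linear offset computed from the buffer's extents (row-major here). A simplified illustration, not TVM's actual implementation:
   ```
   def flatten_index(indices, shape):
       """Row-major linearization: A(i, j) on a (R, C) buffer becomes a
       load at offset i * C + j, generalized to any rank via Horner's rule."""
       flat = 0
       for idx, extent in zip(indices, shape):
           flat = flat * extent + idx
       return flat

   # A(0, 0) on a (1, 1) buffer loads offset 0; A(2, 3) on (4, 8) loads 19.
   ```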
   
   The goal of this pull request is to illustrate a fix: flatten the shapes of buffers as they are created. With this change, the placeholder snippet above compiles and runs as expected. I'm not convinced this is the ideal way to handle the issue, so I would appreciate any suggestions for improving the approach.
   
   The original thread stating this bug can be found 
[here](https://discuss.tvm.ai/t/tvm-compute-with-variable-shape-fails-to-build/4648).
   
   @yzhliu @tqchen @zhiics
