dpankratz opened a new pull request #4892: [Bugfix] Fixed: Bitwise ops on floats causing wrong code generation and crashes.
URL: https://github.com/apache/incubator-tvm/pull/4892
 
 
   # Bitwise Ops on Floats
   
   I encountered a few bugs when using bitwise operators on floating-point types. The first bug is that the `<<` and `>>` operators are essentially no-ops when applied to floats, which can be misleading. For example:
   ```python
   import numpy
   import tvm

   shape = (1, 1)
   c = tvm.compute(shape, lambda i, j: tvm.const(10) << 2.0)
   s = tvm.create_schedule([c.op])
   f = tvm.build(s, [c])
   c_tvm = tvm.nd.array(numpy.zeros(shape, dtype='float32'))
   f(c_tvm)
   print(c_tvm)  # Prints [[10.]], not the expected [[40.]]
   ```
   
   The other bitwise operators `|`, `&`, `^`, and `~` throw an error in the LLVM backend when applied to floats. For reference, I tested using LLVM 8.
   
   These bugs are fixed by adding type checks that reject float operands for the `|`, `&`, `^`, `~`, `<<`, and `>>` operators.
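   As a minimal sketch of the kind of guard involved (the helper name and error message here are hypothetical, not the actual TVM code), the check simply inspects operand dtypes and raises before any code generation happens:

   ```python
   def _check_bitwise_operands(lhs_dtype, rhs_dtype):
       """Reject floating-point operands for bitwise operators (illustrative helper)."""
       for dtype in (lhs_dtype, rhs_dtype):
           if dtype.startswith("float"):
               raise TypeError(
                   "bitwise operators are only defined for integer types, got %s" % dtype)

   # Integer operands pass through silently; float operands raise early.
   _check_bitwise_operands("int32", "int32")
   try:
       _check_bitwise_operands("int32", "float32")
   except TypeError as err:
       print(err)
   ```

   Failing early with a `TypeError` is preferable to the previous behavior, where the shift silently became a no-op or crashed inside the LLVM backend.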
   
   # TypeError Crashes
   
   I also encountered a few more cases where operators failed to promote Python types to TVM `ExprOp`s, causing a Python `TypeError`:
   
   ```python
   a = tvm.var()
   10 % a   # crashes with TypeError
   10 << a  # crashes with TypeError
   10 >> a  # crashes with TypeError
   ```
   
   This bug is fixed by adding the missing operators to `expr.py`.
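   The crashes come from Python's reflected-operator protocol: for `10 % a`, `int.__mod__` returns `NotImplemented`, so Python falls back to `a.__rmod__(10)`, and if that method is missing the expression raises `TypeError`. A standalone sketch of the idea, using a hypothetical stand-in class rather than the real `ExprOp`:

   ```python
   class ExprOp:
       """Illustrative stand-in for TVM's ExprOp, showing reflected operators."""

       def __init__(self, name):
           self.name = name

       def __mod__(self, other):      # handles a % 10
           return "(%s %% %s)" % (self.name, other)

       def __rmod__(self, other):     # handles 10 % a
           return "(%s %% %s)" % (other, self.name)

       def __lshift__(self, other):   # handles a << 10
           return "(%s << %s)" % (self.name, other)

       def __rlshift__(self, other):  # handles 10 << a
           return "(%s << %s)" % (other, self.name)

       def __rshift__(self, other):   # handles a >> 10
           return "(%s >> %s)" % (self.name, other)

       def __rrshift__(self, other):  # handles 10 >> a
           return "(%s >> %s)" % (other, self.name)

   a = ExprOp("a")
   print(10 % a)   # (10 % a) instead of TypeError
   print(10 << a)  # (10 << a)
   print(10 >> a)  # (10 >> a)
   ```

   In the real fix the reflected methods build the corresponding TVM expression nodes instead of strings; the mechanism of defining `__rmod__`, `__rlshift__`, and `__rrshift__` is the same.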
   
   This pull request also includes regression tests for both fixes in `test_lang_basic.py`.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
