u99127 commented on pull request #8368:
URL: https://github.com/apache/tvm/pull/8368#issuecomment-876714877


   Thanks @mbaret for pulling me into this. 
   
   My views on this topic are the following.
   
   - The unit of interchange between Tensorflow Lite and TVM is the flat 
buffer. If TVM is able to consume everything represented in the flat buffer, it 
doesn’t matter what APIs created that as it is the ultimate of what can be 
represented by the higher level APIs. That produces an argument that this kind 
of approach can be used to reason about coverage of Tensorflow Lite operators 
at the flatbuffer level i.e. if TVM is able to consume and test the 150 
operators that are representable in the schema, then the frontend is pretty 
robust in what it can consume. 
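   The coverage reasoning above amounts to a set comparison between the operators the schema can represent and the operators the frontend handles. A minimal sketch of that idea follows; the operator lists are illustrative stand-ins, not the real TFLite schema or the real TVM frontend tables.

```python
# Hypothetical sketch of schema-level coverage reasoning.
# Both sets below are made-up examples, NOT the real TFLite
# builtin-operator list or the real TVM frontend support table.
BUILTIN_OPS = {"ADD", "CONV_2D", "FULLY_CONNECTED", "RESHAPE", "SOFTMAX"}
SUPPORTED_BY_FRONTEND = {"ADD", "CONV_2D", "FULLY_CONNECTED", "SOFTMAX"}

def coverage_report(schema_ops, frontend_ops):
    """Return (covered, missing) operator sets for a frontend."""
    covered = schema_ops & frontend_ops
    missing = schema_ops - frontend_ops
    return covered, missing

covered, missing = coverage_report(BUILTIN_OPS, SUPPORTED_BY_FRONTEND)
print(sorted(missing))  # operators the frontend cannot yet consume
```

   In practice the schema-side set would be enumerated from the flatbuffer schema's builtin operator codes, which is exactly why testing against the frozen flatbuffer gives a well-defined coverage target.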
   
   - The point about semantic changes between versions is possibly a valid one, but I would personally expect that TensorFlow Lite does not break backwards compatibility for a standard operator (i.e. one outside the custom-op space), modulo actual bug fixes and fixes to correctness issues with existing operators, in the way described in the custom-operator example. Such a break would require consumers to update their tflite readers and their runtime / deployment scenarios independently of each other.
   
   - Finally, the notion that this somehow makes life harder for TVM developers is debatable as far as I am concerned. I expect the complexity of understanding how the semantics of an operator have changed in the TensorFlow Lite framework to be the same in both approaches, and I think it is easier to explore that with a frozen flatbuffer between two versions than to dig into the TF and Keras APIs and understand their vagaries across versions.
   
   
   Regards,
   Ramana


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
