ANSHUMAN87 commented on pull request #8368:
URL: https://github.com/apache/tvm/pull/8368#issuecomment-871625248


   > > > This patch adds infrastructure to directly generate TFLite model 
buffers
   > > > by using flatc, the flatbuffers command line tool. This gives us more
   > > > freedom in creating the models for testing since we don't have to
   > > > rely on any of the converters.
   > > > 
   > > > * Add classes and helper functions to create the model buffer
   > > > * Add some convolution tests that test TFLite2 models in int8
   > > >   with per channel and per tensor quantization and remove the
   > > >   orphaned Keras tests
   > > > 
   > > > Co-authored with @NicolaLancellotti
   > > 
   > > 
   > > Hi @ekalda, I am unable to see the need for these changes. As I understand it, the TFLite framework's behaviour is not something we should control in TVM.
   > > Model buffers should be created using the standard TFLite APIs. We should not use a custom mechanism to validate our requirements, as that may cause the entire TFLite frontend parser to fail.
   > > If you share the actual motivation for this change, we can discuss the solution better. Thanks!
   > 
   > Hi @ANSHUMAN87, see this RFC for some more motivation: 
https://discuss.tvm.apache.org/t/rfc-tflite-frontend-create-models-for-frontend-testing-by-directly-writing-tflite-buffers/9811
   > 
   > The gist is that the existing converters to TFLite are just not 
flexible enough when it comes to creating single-operator models with 
various properties (e.g. different fused activations). We have found that 
writing the buffers directly is the most convenient, fast and debuggable way 
to consistently generate single-operator models with the desired properties.
   > 
   > As for whether the models created like that are valid TFLite models: since 
we use the TFLite schema to create the buffers, all models created this way 
are valid TFLite models, and if the TVM frontend fails to parse them, that 
indicates a problem with TVM's TFLite frontend parser.
   > 
   > Also tagging @mbaret @FrozenGene @manupa-arm @anijain2305 @leandron
   
   Thanks @ekalda for the detailed response.
   I will look a little deeper into the point you mentioned about 
flexibility. 
   
   But at a high level, I am still not convinced by the approach. Please find 
below a few points for thought.
   1. TFLite was not initially designed for creating models first-hand, 
though it is now coming up with a Model Maker of its own. The basic idea was 
to convert existing TF and Keras models to TFLite, apply certain optimizations 
such as layer fusion and post-training quantization or fine-tuning, and 
prepare the result for inference. If the conversion to TFLite does not support 
a certain feature, then there will not be any TFLite models with that feature.
   2. Hence, if you want a feature to be part of TFLite, it first has to be 
part of the TensorFlow project, not TVM.
   3. You mentioned that you need operators with fused activations, which 
again raises the question: for WHICH framework? TFLite, TensorFlow or Keras? 
If TFLite already supports it, there is no point to this change. But if 
TensorFlow or Keras supports it and TFLite does not, then why does the TVM 
TFLite frontend need to support it?
   
   I hope my points are clear. Please get back to me if anything is unclear. I 
may well be missing a point that you have observed, so please enlighten me so 
that we can be on the same page. TIA! 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

