ANSHUMAN87 commented on pull request #8368:
URL: https://github.com/apache/tvm/pull/8368#issuecomment-873047771
Thanks @ekalda for such a detailed response; your POV is quite enlightening.
I believe we concur on most of the points. However, below are some points
where my POV differs.
Since you have taken "TFLite_PostProcess_Detection" as an example, I will use
it to state my point.
I agree this op is not part of the mainstream TensorFlow op set and that a
flatbuffer is used to create it. But from a user's perspective this op is
created through the standard APIs; today it is created with features
[x, y, h, w] (just to illustrate) when the SSD operations are present in the
TF model.
Now suppose that later this SSD support has to be upgraded from [x, y, h, w]
to [x, y, w, h] because of a limitation in the SSD op support. This
limitation would be visible in the higher-layer operations, not in the model
creation.
In that case, the flatbuffer-based "TFLite_PostProcess_Detection"
implementation in TVM would become obsolete, and it would be very difficult
to find out why unless you look into the flatbuffer custom op implementation
in the TensorFlow / TFLite MLIR compiler project.
The above scenario is the kind of TFLite parser failure I am trying to
highlight if the parser is validated against TVM's own flatbuffer
implementation. This issue can easily be avoided if we use the standard APIs
and the standard models created through them.
Also, the TFLite converters are the standard entry points into the world of
TFLite, and they are stable within their own version domain (for some
operators the development takes time and stretches across versions). But
there is no standard TFLite model that has bypassed them.
NOTE: I clearly understand that testing through the standard APIs (creating
ops in TF/Keras and converting to TFLite) is quite expensive, but this is the
necessary price we have to pay :). If we do not, there will always be a
possibility of gaps in our CI evaluation.
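For concreteness, the "standard API" path I am referring to can be sketched
as below. This is a minimal illustration, assuming TensorFlow is installed;
the trivial Dense model is a hypothetical stand-in for whatever operator is
under test, and the resulting flatbuffer is exactly what a user-produced
model would contain:

```python
import tensorflow as tf

# Build a tiny model through the standard Keras APIs.
# (The Dense layer is just a placeholder for the op under test.)
inp = tf.keras.Input(shape=(4,))
out = tf.keras.layers.Dense(2)(inp)
model = tf.keras.Model(inp, out)

# Go through the standard TFLite converter, the same entry point a
# real user-produced model would pass through.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # serialized TFLite flatbuffer (bytes)

# Sanity-check that the flatbuffer is loadable; in a TVM test, these
# bytes would then be fed to the TFLite frontend.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
```

A test built this way exercises the converter's real output rather than a
hand-constructed flatbuffer, which is the point of the argument above.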
If we really need this flatbuffer implementation because it offers a
comparatively easier testing approach, we should treat it as a last resort:
we should not use it unless a scenario is genuinely difficult to reproduce
through the standard APIs. But I am afraid that if we start writing test
cases with this flatbuffer framework, we might not be able to maintain that
sanctity.
NOTE: The TFLite schema is updated very frequently, so if we follow this
approach, the maintenance cost of each version upgrade can at times be
higher.
I really appreciate all the hard work you have done; it is quite clearly
visible. I am very sorry that I am not able to vote in favor of the change,
but I leave it to the other TVM members for their expert opinions.
Thanks again for your efforts!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]