saudet commented on issue #17783:
URL:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-725100275
Here's another potential benefit of going with a tool like JavaCPP. I've
started publishing packages for TVM that bundle its Python API and also wrap
its C/C++ API:
* https://github.com/bytedeco/javacpp-presets/tree/master/tvm
Currently, the builds have CUDA/cuDNN, LLVM, MKL, and MKL-DNN/DNNL/oneDNN
enabled on Linux, Mac, and Windows, but users do not need to install anything
at all, not even CPython! All dependencies get downloaded automatically by
Maven (although manually installed ones can be used as well). It also
works out of the box with GraalVM Native Image and Quarkus this way:
*
https://github.com/bytedeco/sample-projects/tree/master/opencv-stitching-native
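For illustration, pulling in those builds amounts to a single Maven dependency; the coordinates below follow the javacpp-presets naming convention, but the version shown is only an example and should be checked against the current release:

```xml
<!-- Bundles the TVM libraries and their dependencies for all supported
     platforms; version number is illustrative only -->
<dependency>
  <groupId>org.bytedeco</groupId>
  <artifactId>tvm-platform</artifactId>
  <version>0.7.0-1.5.4</version>
</dependency>
```

Maven then downloads the native binaries for Linux, Mac, and Windows automatically at build time, with no separate installation step for users.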
For deployment, the TVM Runtime gets built separately, so it's easy to
filter everything out and get JAR files that are less than 1 MB, without having
to recompile anything at all! It's also easy enough to set up the build so
that it offers a user-friendly interface for generating just the right amount
of JNI (in addition to enabling only the backends we are interested in) to get
even smaller JAR files. The manually written JNI code currently in TVM's
repository doesn't support that. Moreover, it is written inefficiently, in a
similar fashion to the original JNI code in TensorFlow, see above
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-662994965,
so we can assume that using JavaCPP is going to provide a similar boost in
performance there as well.
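As a sketch of that kind of filtering, a deployment build can depend on a single platform classifier instead of the catch-all `-platform` artifact; the classifier name follows the javacpp-presets convention, and the version is again only an example:

```xml
<!-- Pulls in only the Linux x86_64 binaries, skipping all other platforms;
     coordinates and version are illustrative only -->
<dependency>
  <groupId>org.bytedeco</groupId>
  <artifactId>tvm</artifactId>
  <version>0.7.0-1.5.4</version>
  <classifier>linux-x86_64</classifier>
</dependency>
```

Combined with a separately built TVM Runtime, this kind of dependency selection keeps the deployed JAR files small without recompiling anything.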
If TVM is eventually integrated into MXNet as per, for example, #15465, this
might be worth thinking about right now. In most AI projects, Java is used
mainly at deployment time, and neither manually written JNI nor automatically
generated JNA is going to help much in that case.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]