tmoreau89 opened a new pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698
 
 
   This PR extends the TFLite runtime to support EdgeTPU-equipped Coral boards, so that the inference time of models running on the EdgeTPU can be measured over TVM RPC.
   
   ## Instructions to run the EdgeTPU runtime experiments
   
   ### Coral Board setup
   You'll need to first follow the official setup instructions: https://coral.ai/docs/dev-board/get-started/. Then, on the Coral board:
   ```
   # Clone TensorFlow, and prepare the library dir
   # Note the older version of TF that we'll need to use
   git clone https://github.com/tensorflow/tensorflow --recursive --branch=1.8.0
   cd tensorflow
   mkdir -p tensorflow/lite/tools/make/gen/generic-aarch64_armv8-a/lib
   
   # TF dependency: flatbuffers
   cd ~ && git clone https://github.com/google/flatbuffers.git
   cd flatbuffers && cmake -G "Unix Makefiles" && make && sudo make install
   
   # EdgeTPU lib
   cd ~ && git clone https://github.com/google-coral/edgetpu.git
   ```
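   
   Optionally, sanity-check that the board sees the EdgeTPU before going further. A minimal check, assuming the stock Mendel image (which exposes the EdgeTPU through the PCIe apex driver):
   ```
   # The EdgeTPU should show up as an apex character device
   ls /dev/apex_0
   ```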
   
   ### Cross-compile the TFLite static library on an x86 machine
   ```
   # Prerequisites 
   sudo apt-get update
   sudo apt-get install crossbuild-essential-arm64
   
   # cross-compile tflite library (note you need to use older version)
   git clone https://github.com/tensorflow/tensorflow.git --recursive --branch=1.8.0
   cd tensorflow
   ./tensorflow/lite/tools/make/download_dependencies.sh
   ./tensorflow/lite/tools/make/build_aarch64_lib.sh
   # Copy the tensorflow lib over to your coral board
   scp tensorflow/lite/tools/make/gen/generic-aarch64_armv8-a/lib/libtensorflow-lite.a mendel@coral:/home/mendel/tensorflow/tensorflow/lite/tools/make/gen/generic-aarch64_armv8-a/lib/
   ```
   
   ### Build TVM runtime on Coral Board
   ```
   cd ~ && git clone --recursive --branch=master https://github.com/apache/incubator-tvm.git tvm
   cd tvm && mkdir build && cp cmake/config.cmake build
   echo 'set(USE_GRAPH_RUNTIME_DEBUG ON)' >> build/config.cmake
   echo 'set(USE_TFLITE ON)' >> build/config.cmake
   echo 'set(USE_TENSORFLOW_PATH /home/mendel/tensorflow)' >> build/config.cmake
   echo 'set(USE_EDGETPU /home/mendel/edgetpu)' >> build/config.cmake
   cd build && cmake ..
   make runtime -j4
   ```
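   
   To sanity-check that TFLite support was actually compiled in, one option (just a sketch: it greps the runtime library for the name string of the registered creation function, assumed to be `tvm.tflite_runtime.create`) is:
   ```
   # Should print the registered TFLite runtime function name(s), e.g. tvm.tflite_runtime.create
   strings libtvm_runtime.so | grep tflite_runtime
   ```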
   
   ### Execute the RPC server on Coral
   First, follow this guide to set up a tracker for your remote devices: https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html#start-rpc-tracker.
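   For example, on the host machine (the port is arbitrary, but it must match what the Coral board and the measurement script use; 9191 matches the example script below):
   ```
   python3 -m tvm.exec.rpc_tracker --host=0.0.0.0 --port=9191
   ```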
   On the Coral, once the TVM runtime has been built, execute the following (with `TVM_TRACKER_HOST` and `TVM_TRACKER_PORT` pointing at the tracker started above):
   ```
   PYTHONPATH=/home/mendel/tvm/python:$PYTHONPATH python3 -m tvm.exec.rpc_server --tracker $TVM_TRACKER_HOST:$TVM_TRACKER_PORT --key coral
   ```
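   
   Back on the host, you can confirm that the board registered with the tracker under the `coral` key, e.g.:
   ```
   python3 -m tvm.exec.query_rpc_tracker --host=$TVM_TRACKER_HOST --port=$TVM_TRACKER_PORT
   ```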
   
   ### Evaluate MobileNet on Coral board
   
   Execute the following Python script from the host machine (it connects to the Coral through the tracker):
   ```python
   import numpy as np
   
   import tvm
   from tvm import autotvm, relay
   from tvm.contrib import tflite_runtime
   
   target_edgetpu = False  # set to True to offload to the EdgeTPU
   
   # Note: replace "tracker" and 9191 with your tracker host name and port
   remote = autotvm.measure.request_remote("coral", "tracker", 9191, timeout=60)
   ctx = remote.cpu(0)
   
   tflite_fp = "mobilenet_v2_1.0_224_quant_edgetpu.tflite" if target_edgetpu else "mobilenet_v2_1.0_224_quant.tflite"
   input_data = np.random.rand(1,224,224,3).astype("uint8")
   with open(tflite_fp, 'rb') as f:
       runtime = tflite_runtime.create(f.read(), ctx, target_edgetpu=target_edgetpu)
       runtime.set_input(0, tvm.nd.array(input_data, ctx))
       ftimer = runtime.module.time_evaluator("invoke", ctx,
               number=10,
               repeat=3)
       times = np.array(ftimer().results) * 1000
       print("It took {0:.2f}ms to run mobilenet".format(np.mean(times)))
   ```
   
   Upon running it with `target_edgetpu = False` (CPU execution), you'll get:
   `It took 143.74ms to run mobilenet`
   
   Now, set `target_edgetpu = True` and you'll get:
   `It took 3.22ms to run mobilenet`
   
   ## Notable interface changes
   
   * The TFLite runtime API no longer exposes the `allocate()` method; tensor allocation is now done as part of the initialization process.
