(1) Running DeepLabV3 Model
This tutorial explains the process of setting up the SNPE SDK and running inference on the RB5 using TensorFlow and PyTorch segmentation models.
Note: This process can be extended to any deep learning model.
TensorFlow
Running Inference on Ubuntu 18.04:
This section guides you through setting up SNPE on an Ubuntu system and running inference using the TensorFlow DeepLabV3 model.
- Download the pre-trained DeepLabV3 model trained using TensorFlow:

```shell
wget http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz
tar -xzvf deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz  # Extract to obtain frozen_inference_graph.pb
```
- Set up the Qualcomm Snapdragon Neural Processing Engine (SNPE) SDK on the system using the tutorial linked below.

```shell
sudo snap install --classic android-studio  # Android Studio installation is necessary for the SNPE SDK to work
```
Note: Make sure all the path variables are set properly according to the tutorial provided in the links above. Failing to set the paths will cause the following commands to fail.
- Set the environment path for TensorFlow

```shell
cd $SNPE_ROOT
source bin/envsetup.sh -t $TENSORFLOW_DIR  # Assuming $TENSORFLOW_DIR points to your TensorFlow installation
```
- Convert the model to the .dlc format using the following command

```shell
snpe-tensorflow-to-dlc --input_dim sub_7 1,513,513,3 --out_node ArgMax --input_network ./deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb --output_path deeplabv3.dlc
```

Note: “./deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb” is the downloaded TF model, and the input size is set to 513x513x3 as an example.
- Running Inference:
Preprocess the image using the Python script below. An example image is provided.
```python
import numpy as np
import cv2

frame = cv2.imread('Example.jpeg')
# Resize the frame to the required input size
frame_resized = cv2.resize(frame, (513, 513))
# Subtract the mean (127.5) and scale by 0.007843 (i.e. 1/127.5); swapRB converts BGR to RGB
blob = cv2.dnn.blobFromImage(frame_resized, 0.007843, (513, 513), (127.5, 127.5, 127.5), swapRB=True)
# blobFromImage returns NCHW; transpose to the NHWC shape (1, 513, 513, 3) the model expects
blob = np.transpose(blob, (0, 2, 3, 1))
# Store the array in a raw binary file
blob.tofile('blob.raw')
```

Prepare a text file that contains all the images you would like to run inference on:
- Create a file named “raw_list.txt” in the current directory
- Enter the path of the “blob.raw” file that was generated by the Python script
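The two bullets above amount to a single shell command; this is a sketch assuming “blob.raw” sits in the current directory:

```shell
# raw_list.txt lists one input .raw file per line for snpe-net-run
echo "blob.raw" > raw_list.txt
```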
Run the following command to run inference using the generated dlc model

```shell
snpe-net-run --container deeplabv3.dlc --input_list ./raw_list.txt
```
This command generates a file called “ArgMax:0.raw” under the “output/Result_0/” path, which is used as input to the post-processing step.
- Run the Python script below on the output to obtain the segmentation masks and modify the image

```python
import cv2
```
Running Inference on RB5:
The SNPE SDK provides binaries for the RB5’s architecture. To check the list of supported architectures, run:
On Ubuntu:

```shell
cd $SNPE_ROOT/lib  # Ensure the path was exported during the SNPE installation
ls
```
For the Qualcomm RB5 platform, we are interested in the following folders:
- aarch64-ubuntu-gcc7.5
- dsp
These folders need to be copied over to the RB5 using the “adb push” or “scp” command.
- Select the architecture aarch64-ubuntu-gcc7.5

```shell
export SNPE_TARGET_ARCH=aarch64-ubuntu-gcc7.5
```
- Push the binaries to the target

```shell
adb shell "mkdir -p /data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin"  # Creates a folder with the architecture's name
adb push $SNPE_ROOT/bin/$SNPE_TARGET_ARCH/snpe-net-run /data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin
```

Push the “aarch64-ubuntu-gcc7.5” and “dsp” library folders to the target in the same way.
Once the libraries are copied over, log into the RB5 using the “adb shell” command, or use a monitor (preferred, as the final result involves visualization).
On RB5:
- Set up the target architecture, library path and environment variables for the “snpe-net-run” command to run successfully

```shell
export SNPE_TARGET_ARCH=aarch64-ubuntu-gcc7.5
```
Note: These commands need to be run every time a new terminal is opened. To avoid this, add them to the ~/.bashrc file and run “source ~/.bashrc”.
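A plausible full set of exports, assuming the binaries and libraries were pushed under /data/local/tmp/snpeexample as in the previous section (adjust the paths if you copied them elsewhere):

```shell
export SNPE_TARGET_ARCH=aarch64-ubuntu-gcc7.5
# Make the pushed snpe-net-run binary visible on the PATH
export PATH=$PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin
# Point the dynamic linker at the pushed SNPE libraries
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib
# Search paths for the DSP runtime libraries
export ADSP_LIBRARY_PATH="/data/local/tmp/snpeexample/dsp/lib;/system/lib/rfsa/adsp;/usr/lib/rfsa/adsp;/dsp"
```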
- Verify the snpe-net-run copy

```shell
snpe-net-run -h  # This command should run successfully and list the available options
```
Copy the Python scripts, the “.dlc” file and the images from the “Running Inference on Ubuntu 18.04” section to run inference.
Follow Step 4 from the previous section to run inference. (Feel free to skip the blob.raw generation if it is already copied over.)
Note: The inference step involves running “snpe-net-run”.
Note: Once the masks are obtained, they can be used for any application. We show a simple background blur in this example.
PyTorch
A PyTorch model can be converted to the dlc format and run on the RB5 as described in the following sections.
Important: PyTorch models need to be converted to ONNX before they are converted to the dlc format.
- Run the following script to generate the DeepLabV3 ONNX model. Here we use the pre-trained DeepLabV3 model available in TorchHub

```python
import torch
```
- Install ONNX on the Ubuntu system

```shell
pip install onnx
```
- Set the environment path for ONNX

```shell
cd $SNPE_ROOT
source bin/envsetup.sh -o $ONNX_DIR  # Assuming $ONNX_DIR points to your ONNX installation
```
- Run the ONNX to DLC conversion command

```shell
snpe-onnx-to-dlc --input_dim sub_7 1,513,513,3 --out_node ArgMax --input_network /deeplabv3_onnx_model.onnx --output_path deeplab_pt.dlc
```

Note: The node names passed to --input_dim and --out_node must match the input and output names in the exported ONNX graph.
- Once the dlc file is generated successfully, follow the “Running Inference” sections above to run inference on both Ubuntu and the RB5.