Introduction

Qualcomm Robotics RB5 is a powerful edge computing device for robotics applications. We created this tutorial website to guide users through setting up and running robotics applications on it.

The tutorials can be found under Categories. They are divided into four sections: initial setup, accessing devices, robotics applications, and ML at the edge.

The initial setup tutorials guide you through flashing a system, connecting to WiFi, and connecting a monitor. If you want, you can also follow the tutorial to set up a GNOME desktop.

The tutorials on accessing devices then show you how to use the cameras and the IMU on the Qualcomm Robotics RB5. Your Qualcomm Robotics RB5 can also work with Bluetooth joystick controllers if you build and load the kernel modules following the tutorials.

Building on that sensor access, the robotics application tutorials teach you how to install ROS and demonstrate AprilTag detection, control of a mobile robot platform, and running ORB-SLAM3 on the Qualcomm Robotics RB5.

Finally, the last set of tutorials introduces the process of converting a machine learning model for semantic segmentation to run on the Qualcomm Robotics RB5.


The RB5 comes with an Inertial Measurement Unit (IMU), which measures linear acceleration and angular velocity. These measurements enable applications such as motion estimation and visual-inertial SLAM, which are essential to many robotics tasks. The IMU can be accessed through a ROS2 node.

Assuming you have ROS2 Dashing installed, you can run the IMU node as follows.

# source the ros dashing environment
source /opt/ros/dashing/setup.bash

# run the node
ros2 run imu-ros2node imu-ros2node

Check the published IMU messages in a new terminal.

# In a new terminal, source the environment.
source /opt/ros/dashing/setup.bash

ros2 topic list # list the topic
ros2 topic echo /imu # print the messages
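
If you want to consume the IMU data programmatically instead of echoing it, the sketch below shows one way to subscribe to the topic from Python with rclpy. It is a minimal example under the assumption that the node publishes sensor_msgs/Imu messages on /imu, as the commands above suggest.

import rclpy
from sensor_msgs.msg import Imu

def on_imu(msg):
    a, w = msg.linear_acceleration, msg.angular_velocity
    print(f"accel=({a.x:.3f}, {a.y:.3f}, {a.z:.3f})  gyro=({w.x:.3f}, {w.y:.3f}, {w.z:.3f})")

def main():
    rclpy.init()
    node = rclpy.create_node('imu_listener')
    # queue depth of 10; adjust the topic name if your setup differs
    node.create_subscription(Imu, '/imu', on_imu, 10)
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()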


As we described in our previous tutorial on building and loading kernel modules on the Qualcomm Robotics RB5, the Ubuntu 18.04 installation that is part of the Qualcomm Robotics RB5 LU build includes minimal package support to reduce OS complexity. With this in mind, if we wish to install custom drivers, we need to perform the process described there. Thankfully, if you have built and loaded the kernel modules described in that tutorial, you will be able to interface with a standard joystick controller over USB as well as with serial over USB.

With the preliminaries completed, we will use the Megabot robot as an example of interfacing with a Qualcomm Robotics RB5. In this case, we will use the megapi Python module designed for the Megabot to communicate with the robot. It can be readily installed with pip install megapi.
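
As a quick sanity check that the serial link is up, the following minimal sketch connects to the board and briefly spins one wheel. The serial device path is an assumption (it depends on how the USB-serial adapter enumerates), so adjust it if needed.

from megapi import MegaPi
import time

bot = MegaPi()
bot.start('/dev/ttyUSB0')  # assumed device path for the USB-serial adapter
time.sleep(1)              # give the serial connection a moment to come up

bot.motorRun(10, 30)       # slowly spin the front-left wheel (port 10)
time.sleep(1)
bot.motorRun(10, 0)        # stop it again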

Here we define a set of primitive actions that can control this four-mecanum-wheel robot. The primitive control actions are: move left, right, forward, and in reverse; rotate clockwise and counter-clockwise; and stop. Yes, it may come as a surprise to many, but the wheels on the robot contain a number of angled rollers that ultimately influence the kinematics of the robot and jointly provide very interesting properties, such as rotating in place and moving sideways!

The inverse kinematics of the robot can be described below.

Inverse kinematics
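
For reference, a commonly used textbook inverse-kinematics model for a four-mecanum-wheel platform is sketched below; the sign conventions may differ from the motor wiring used in the code that follows.

\omega_{fl} = \frac{1}{r}\left(v_x - v_y - (l_x + l_y)\,\omega_z\right)
\omega_{fr} = \frac{1}{r}\left(v_x + v_y + (l_x + l_y)\,\omega_z\right)
\omega_{rl} = \frac{1}{r}\left(v_x + v_y - (l_x + l_y)\,\omega_z\right)
\omega_{rr} = \frac{1}{r}\left(v_x - v_y + (l_x + l_y)\,\omega_z\right)

Here r is the wheel radius, l_x and l_y are half the wheelbase and half the track width, v_x and v_y are the desired forward and lateral body velocities, and \omega_z is the desired yaw rate.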

The following code snippet implements the control conditions for each of the actions defined.

def move(self, direction, speed):
    # port1: front right (wheel 1)
    # port2: rear left (wheel 0)
    # port9: rear right (wheel 3)
    # port10: front left (wheel 2)

    if direction == "left":
        # fl wheel
        self.bot.motorRun(10, speed)
        # fr wheel
        self.bot.motorRun(1, speed)
        # rl wheel
        self.bot.motorRun(2, -speed)
        # rr wheel
        self.bot.motorRun(9, -speed)
        return
    elif direction == "right":
        # fl wheel
        self.bot.motorRun(10, -speed)
        # fr wheel
        self.bot.motorRun(1, -speed)
        # rl wheel
        self.bot.motorRun(2, speed)
        # rr wheel
        self.bot.motorRun(9, speed)
        return
    elif direction == "forward":
        # fl wheel
        self.bot.motorRun(10, -speed)
        # fr wheel
        self.bot.motorRun(1, speed)
        # rl wheel
        self.bot.motorRun(2, -speed)
        # rr wheel
        self.bot.motorRun(9, speed)
        return
    elif direction == "reverse":
        # fl wheel
        self.bot.motorRun(10, speed)
        # fr wheel
        self.bot.motorRun(1, -speed)
        # rl wheel
        self.bot.motorRun(2, speed)
        # rr wheel
        self.bot.motorRun(9, -speed)
        return
    elif direction == "ccwise":
        # fl wheel
        self.bot.motorRun(10, speed)
        # fr wheel
        self.bot.motorRun(1, speed)
        # rl wheel
        self.bot.motorRun(2, speed)
        # rr wheel
        self.bot.motorRun(9, speed)
    elif direction == "cwise":
        # fl wheel
        self.bot.motorRun(10, -speed)
        # fr wheel
        self.bot.motorRun(1, -speed)
        # rl wheel
        self.bot.motorRun(2, -speed)
        # rr wheel
        self.bot.motorRun(9, -speed)
    else:
        # fl wheel
        self.bot.motorRun(10, 0)
        # fr wheel
        self.bot.motorRun(1, 0)
        # rl wheel
        self.bot.motorRun(2, 0)
        # rr wheel
        self.bot.motorRun(9, 0)
        return

This script has been implemented for both ROS1 and ROS2. While the robot can be controlled through the standard joy node with a USB joystick controller, it can also be controlled via a standard ROS message for higher-level planning strategies. The process of building and running the ROS nodes is outlined below.
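
To give an idea of how a velocity command can drive these primitives, here is a minimal ROS1 sketch that maps a geometry_msgs/Twist message onto the actions above. The /cmd_vel topic name, the fixed speed, and the thresholds are illustrative assumptions; the actual nodes in the repositories may expose a different interface.

import rospy
from geometry_msgs.msg import Twist
from megapi import MegaPi

bot = MegaPi()
bot.start('/dev/ttyUSB0')  # assumed serial device path

def move(direction, speed):
    # per-wheel signs mirroring the primitives above, ordered (fl, fr, rl, rr)
    signs = {"forward": (-1, 1, -1, 1), "reverse": (1, -1, 1, -1),
             "left": (1, 1, -1, -1), "right": (-1, -1, 1, 1),
             "ccwise": (1, 1, 1, 1), "cwise": (-1, -1, -1, -1),
             "stop": (0, 0, 0, 0)}
    fl, fr, rl, rr = signs.get(direction, (0, 0, 0, 0))
    bot.motorRun(10, fl * speed)  # front left
    bot.motorRun(1, fr * speed)   # front right
    bot.motorRun(2, rl * speed)   # rear left
    bot.motorRun(9, rr * speed)   # rear right

def on_cmd_vel(msg):
    speed = 40
    if abs(msg.angular.z) > 0.1:
        move("ccwise" if msg.angular.z > 0 else "cwise", speed)
    elif abs(msg.linear.x) > 0.1:
        move("forward" if msg.linear.x > 0 else "reverse", speed)
    elif abs(msg.linear.y) > 0.1:
        move("left" if msg.linear.y > 0 else "right", speed)
    else:
        move("stop", 0)

rospy.init_node('twist_to_primitives')
rospy.Subscriber('/cmd_vel', Twist, on_cmd_vel)
rospy.spin()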

ROS1

Create a workspace, clone the ROS1 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/melodic/setup.bash.

mkdir -p rb5_ws/src && cd rb5_ws/src
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros.git
cd ..
catkin_make
source devel/setup.bash

Start the control node

rosrun rb5_control rb5_mpi_control.py

ROS2

Create a workspace, clone the ROS2 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/dashing/setup.bash.

mkdir -p rb5_ws/src && cd rb5_ws/src
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros2.git
cd ..
colcon build
source install/setup.bash

Start the control node

ros2 run rb5_ros2_control rb5_mpi_control.py

MegaBot



This tutorial outlines the process of installing ROS, which is short for Robot Operating System. While ROS is not an operating system in the traditional sense, it is an ecosystem that provides support for a wide variety of sensor drivers and software libraries to aid the fast development of robotic applications. Whether it is processing camera data or executing a motion plan to reach a target destination, ROS handles message passing to enable communication across multiple software modules.

In the following two subsections, we outline the installation process for two ROS versions, namely ROS1 (Melodic) and ROS2 (Dashing), which were specifically designed to run on native Ubuntu 18.04 systems. As the naming suggests, ROS1 precedes ROS2; however, each comes in newer and older flavors (e.g., Kinetic, Melodic, and Noetic are ROS1 versions, while Dashing, Foxy, and Galactic are ROS2 versions). While operating system support varies across releases, the key differences between ROS1 and ROS2 involve package support and features.

A key benefit of ROS2 is its focus on security, stability, and compatibility with industrial robotic applications that require reliability. However, some open-source packages that were previously available in ROS1 are not yet fully ported to ROS2. This is quickly changing, but it is a tradeoff to consider during development.

In future tutorials, we will explore applications that use Melodic and Dashing, since both are specific to Ubuntu 18.04.

Install ROS1 - Melodic

Setup sources.list

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

Set up keys

sudo apt install curl # if you haven't already installed curl
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -

Install

sudo apt update
sudo apt install ros-melodic-desktop-full

Source ROS environment

echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Install Additional Dependencies for managing workspaces

sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential

Install and initialize rosdep to help resolve package dependencies

sudo apt install python-rosdep
sudo rosdep init
rosdep update

Verify installation by running RViz (visualization GUI)

rviz


Install ROS2 - Dashing

Install host operating system dependencies

sudo apt-get install usbutils git bc
sudo apt-get -y install locales
sudo apt-get update && sudo apt-get install curl gnupg2 lsb-release

Setup sources.list

sudo sh -c 'echo "deb [arch=amd64,arm64] http://packages.ros.org/ros2/ubuntu `lsb_release -cs` main" > /etc/apt/sources.list.d/ros2-latest.list'

Set up keys

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F42ED6FBAB17C654

Install

sudo apt-get install ros-dashing-desktop

Source ROS environment

echo "source /opt/ros/dashing/setup.bash" >> ~/.bashrc
source ~/.bashrc

Install Additional Dependencies for managing workspaces

sudo apt-get install python3-argcomplete
sudo apt-get install python3-colcon-common-extensions

Verify installation by running Rviz2 (visualization GUI)

rviz2
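
As an optional extra check beyond RViz2, you can run a minimal rclpy publisher; this is only a sketch to confirm that the Python client library works and is not part of the official installation steps.

import rclpy
from std_msgs.msg import String

rclpy.init()
node = rclpy.create_node('install_check')
pub = node.create_publisher(String, 'chatter', 10)

def tick():
    msg = String()
    msg.data = 'ROS2 Dashing is working'
    pub.publish(msg)

node.create_timer(0.5, tick)
rclpy.spin(node)  # Ctrl+C to stop

In another terminal, you can watch the messages with ros2 topic echo /chatter.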



A GNOME desktop is desirable because it is more familiar to most of us than the default Wayland desktop.

To get the GNOME desktop, you first need to unminimize the system. The stock system is a minimal Ubuntu 18.04 server, and unminimizing it adds more tools.

Then you can install the GNOME desktop with the following commands:

apt install gdm3 tasksel
tasksel install ubuntu-desktop

When this is done, you need to run the following command every time you want to use the GNOME desktop.

service gdm3 start

Notice that in order to successfully log in to the GNOME desktop, make sure you choose the Wayland option for Ubuntu on the login page; otherwise your username and password might not work.

Notice that even with this GNOME desktop, many GUI applications with X11 support might still not work. However, gnome-terminal and RViz work properly.

Also, we do not recommend using this desktop routinely because it consumes a lot of computation (300-400% CPU!). It should only be used for visualization and debugging for short periods of time.

Reference:
[1] https://linuxconfig.org/how-to-install-gnome-on-ubuntu-18-04-bionic-beaver-linux


The LU build outlined in the bring-up process comprises a minimal Ubuntu 18.04 installation. For this reason, various device kernel modules need to be built from source and loaded. In this tutorial, we document the process of building and loading the kernel modules for a USB joystick (joydev) and USB-to-serial (ch341). The source code associated with these modules is open source and available as part of the Linux kernel. We suggest using a USB-C cable to connect your RB5 to your computer so that you can copy large blocks of code over; make sure the formatting survives the copy.

joydev

The kernel version used for this tutorial is 4.19.125. If a different version is in use, you can find the version that matches your kernel with uname -r.

Download and extract the kernel source, then copy the code associated with this module to a temporary directory; for example, we use the directory joydev.

wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.125.tar.gz
tar xvzf linux-4.19.125.tar.gz

mkdir joydev
cp -r linux-4.19.125/drivers/input/* joydev/ && cd joydev/

The following lines need to be appended to the end of the Makefile that was copied into the temporary directory joydev.

KVERS = $(shell uname -r)

# kernel modules
obj-m := joydev.o

EXTRA_CFLAGS=-g -O0 -Wno-vla -Wframe-larger-than=4496

build: kernel_modules

kernel_modules:
	make -C /usr/src/header M=$(CURDIR) modules

clean:
	make -C /usr/src/header M=$(CURDIR) clean

Build and Load kernel module

make
insmod joydev.ko

To avoid loading the module every time, create a script outside of the directory.

cd ..
vim joydev.sh

then copy the following script into the file.

#!/bin/bash

KERNEL_VERSION=$(uname -r)
MODINFO=$(modinfo ./joydev/joydev.ko | grep vermagic)
MODULE_VERSION=$(echo $MODINFO | cut -d " " -f 2)

if [ $KERNEL_VERSION != $MODULE_VERSION ]
then
    echo "Versions incompatible"
    echo ".ko file compiled with " $MODULE_VERSION
    echo "System kernel is " $KERNEL_VERSION
else
    mkdir -p /lib/modules/$(uname -r)/kernel/drivers/input/
    cp ./joydev/joydev.ko /lib/modules/$(uname -r)/kernel/drivers/input/
    depmod -a
    echo "JOYDEV loaded"
fi

save the file and execute it with

bash joydev.sh

The joydev module will be copied into the kernel directory and dynamically loaded when a joystick device is found.
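
Once the module is loaded and a joystick is plugged in, you can sanity-check the device from Python by reading raw joydev events. This is a minimal sketch that assumes the controller enumerates as /dev/input/js0; each joydev event is an 8-byte record (u32 timestamp in ms, s16 value, u8 type, u8 number).

import struct

EVENT_FORMAT = 'IhBB'                      # u32 time, s16 value, u8 type, u8 number
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

with open('/dev/input/js0', 'rb') as js:   # assumed device node
    for _ in range(20):                    # print the first 20 events
        t, value, ev_type, number = struct.unpack(EVENT_FORMAT, js.read(EVENT_SIZE))
        print(f"t={t}ms type={ev_type} number={number} value={value}")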

ch341

Extract the source code associated with this module to a temporary directory, in this case ch341

# Skip these two steps if you already did it
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.125.tar.gz
tar xvzf linux-4.19.125.tar.gz

mkdir ch341
cp -r linux-4.19.125/drivers/usb/serial/* ch341 && cd ch341

The following lines need to be appended to the end of the Makefile that was copied into ch341.

KVERS = $(shell uname -r)

# kernel modules
obj-m := ch341.o

EXTRA_CFLAGS=-g -O0 -Wno-vla

build: kernel_modules

kernel_modules:
	make -C /usr/src/header M=$(CURDIR) modules

clean:
	make -C /usr/src/header M=$(CURDIR) clean

Build and Load kernel module

make
insmod ch341.ko

To avoid loading the module every time, create a script outside of the directory.

cd ..
vim ch341.sh

then copy the following script into the file.

#!/bin/bash

KERNEL_VERSION=$(uname -r)
MODINFO=$(modinfo ./ch341/ch341.ko | grep vermagic)
MODULE_VERSION=$(echo $MODINFO | cut -d " " -f 2)

if [ $KERNEL_VERSION != $MODULE_VERSION ]
then
    echo "Versions incompatible"
    echo ".ko file compiled with " $MODULE_VERSION
    echo "System kernel is " $KERNEL_VERSION
else
    mkdir -p /lib/modules/$(uname -r)/kernel/drivers/usb/serial/
    cp ./ch341/ch341.ko /lib/modules/$(uname -r)/kernel/drivers/usb/serial/
    depmod -a
    echo "CH341 loaded"
fi

save the file and execute it with

bash ch341.sh

The Makefiles and kernel modules can be found on Github.


This tutorial will guide you through running ORB_SLAM3 on the Qualcomm Robotics RB5. ORB_SLAM3 is a popular software package that can perform visual SLAM and visual-inertial SLAM. The algorithm is fast and therefore suitable for the Qualcomm Robotics RB5 platform.

Install ORB_SLAM3 Library

The code from the original ORB_SLAM3 repository doesn’t work out of the box on the Qualcomm Robotics RB5. We created a version that we tested on the Qualcomm Robotics RB5 and that works well. You can download the code from https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5.

After downloading the code, follow the README.md to compile the ORB_SLAM3 library. A simplified version is also given below.

# Install the required dependencies
git clone https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5
cd ORB_SLAM3_RB5
chmod +x build.sh
./build.sh

ROS wrapper for ORB_SLAM3 on Qualcomm Robotics RB5

We created wrappers for the ORB_SLAM3 API in both ROS and ROS2. These wrappers handle communication and data conversion. The ROS version was tested on ROS Melodic and the ROS2 version was tested on ROS2 Dashing.

The communication part includes receiving sensor messages published by other ROS nodes. The sensors include the camera and the IMU. If both are used, the wrapper also synchronizes them before passing them to the ORB_SLAM3 library. Once the ORB_SLAM3 library outputs a pose, we convert it into a transformation and publish it to ROS TF.
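
To illustrate the last step, the snippet below is a minimal ROS1 sketch of broadcasting a pose as a TF transform. The frame names and the placeholder pose are assumptions for illustration; the wrapper itself derives them from the ORB_SLAM3 output.

import rospy
import tf

rospy.init_node('pose_to_tf_example')
br = tf.TransformBroadcaster()
rate = rospy.Rate(30)

while not rospy.is_shutdown():
    position = (0.0, 0.0, 0.0)          # placeholder translation (x, y, z)
    orientation = (0.0, 0.0, 0.0, 1.0)  # placeholder quaternion (x, y, z, w)
    br.sendTransform(position, orientation, rospy.Time.now(), 'camera', 'map')
    rate.sleep()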

We separate the ORB_SLAM3 library and the ROS wrapper so that we don’t need to include the library in the ROS workspace. You will need to modify line 4 of the CMakeLists.txt file in the ROS and ROS2 packages to give the correct ORB_SLAM3 library path so that these nodes can be built successfully.

For ROS1:

# If you don't already have a workspace, create one first.
mkdir -p rosws/src

# Go to the source folder
cd rosws/src

# Clone the repository
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros

Before you build your package, set the path in the CMakeLists.txt file in the ORB_SLAM3_RB5 package from the repository that you just cloned.

set(ORB_SLAM3_SOURCE_DIR "path/to/ORB_SLAM3_RB5_LIB/")

Note that this path refers to the folder where the customized ORB_SLAM3 library from https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5 is cloned to.

Then, build your package.

# Return to the root folder of the workspace.
cd ..

# Include the ros tools, this assumes you have ROS1 melodic installed
source /opt/ros/melodic/setup.bash

# Build only this package
catkin_make --only-pkg-with-deps ORB_SLAM3_RB5

To run the package, first start the roscore in a new terminal

source /opt/ros/melodic/setup.bash
roscore

Then in another terminal, go into the workspace folder

# source the ros workspace
source devel/setup.bash

# run the program
rosrun ORB_SLAM3_RB5 Mono /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular/Euroc.yaml

Notice that you should use a different yaml file to reflect the parameters of your camera.

When running the ROS node, you will need to access the camera using the ROS package described in the basic tutorial on accessing cameras.

For the ROS version, the wrapper allows you to run the monocular version of ORB-SLAM3 on the Qualcomm Robotics RB5. The text interface works on the Wayland desktop.

ORB-SLAM3 running in a Gnome-Terminal

In the terminal, you can see the translation vector and rotation matrix being printed. A trajectory file named “KeyFrameTrajectory.txt” will be saved to the root of the workspace when the program is stopped with Ctrl+C.
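
If you copy KeyFrameTrajectory.txt to a machine with matplotlib available, a small sketch like the one below can plot the estimated path. It assumes the file follows the TUM trajectory format (timestamp tx ty tz qx qy qz qw); for the monocular case, the scale of the trajectory is arbitrary.

import numpy as np
import matplotlib.pyplot as plt

# each row: timestamp tx ty tz qx qy qz qw
data = np.loadtxt('KeyFrameTrajectory.txt')
x, y = data[:, 1], data[:, 2]

plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('ORB-SLAM3 keyframe trajectory (top-down view)')
plt.axis('equal')
plt.show()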

You can also launch RViz on the GNOME desktop that you set up following the earlier tutorial.

ORB-SLAM3 pose displayed in RViz

For the ROS2 version, the wrapper allows you to run visual-inertial SLAM with a single camera and the IMU on the Qualcomm Robotics RB5. The IMU data can be accessed through a prebuilt ROS2 package that was tested on ROS2 Dashing.

Similarly, first clone it to your workspace.

# If you don't already have a workspace, create one first.
mkdir -p ros2ws/src

# Go to the source folder
cd ros2ws/src

# Clone the repository
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros2

Before you build your package, set the path in the CMakeLists.txt file in the ORB_SLAM3_RB5 package from the repository that you just cloned.

set(ORB_SLAM3_SOURCE_DIR "path/to/ORB_SLAM3_RB5_LIB/")

Note that this path refers to the folder where the customized ORB_SLAM3 library from https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5 is cloned to.

Then, build your package.

# Return to the root folder of the workspace.
cd ..

# Include the ros tools, this assumes you have ROS2 dashing installed
source /opt/ros/dashing/setup.bash

# Build only this package
colcon build --packages-select orb_slam3_rb5_ros2

Then you can run the package.

# Source the ROS2 workspace
source install/setup.bash

# run the Monocular version
ros2 run orb_slam3_rb5_ros2 Mono /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular/Euroc.yaml

# run the Monocular-Inertial version
ros2 run orb_slam3_rb5_ros2 Mono_Inertial /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular-Inertial/Euroc.yaml

If you encounter errors while loading shared libraries, add the library locations to the path, for example:

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/path/to/ORB_SLAM3_RB5_LIB/lib:/path/to/Pangolin/build


This tutorial explains the process of setting up the SNPE SDK and running inference on the RB5 using TensorFlow and PyTorch segmentation models.

Note: This can be extended to any deep learning model.

TensorFlow

Running Inference on Ubuntu 18.04:

This section will guide you through setting up SNPE on an Ubuntu system and running inference using the TensorFlow DeepLabV3 model.

  1. Download pre-trained DeepLabV3 model trained using TensorFlow:
wget http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz
tar -xzvf deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz
  2. Set up the Qualcomm Snapdragon Neural Processing Engine SDK on the system using the tutorial linked below.
sudo snap install --classic android-studio # Android Studio installation is necessary for the SNPE SDK to work
# Set up the SDK by following: https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk/getting-started

Note: Make sure all the path variables are set properly according to the tutorial provided in the link above. Failing to set the paths will cause the following commands to fail.

  3. Set the environment path for TensorFlow
cd $SNPE_ROOT
export TENSORFLOW_DIR="your_tensorflow_installation_dir"
source bin/envsetup.sh -o $TENSORFLOW_DIR
  4. Convert the model to .dlc format using the following command
snpe-tensorflow-to-dlc --input_dim sub_7 1,513,513,3 --out_node ArgMax --input_network ./deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb --output_path deeplabv3.dlc

Note: “./deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb” is the downloaded TF model, and the input size is set to 513x513x3 as an example.

  5. Running Inference:
  • Preprocess the image using the Python script below. Example image is provided.

import numpy as np
import cv2

frame = cv2.imread('Example.jpeg')
# Resize the frame to the required input size
frame_resized = cv2.resize(frame, (513, 513))
# Subtract the mean (127.5) and scale by 0.007843 so pixel values fall in [-1, 1]
blob = cv2.dnn.blobFromImage(frame_resized, 0.007843, (513, 513), (127.5, 127.5, 127.5), swapRB=True)

# blobFromImage returns NCHW; transpose to the NHWC layout (1, 513, 513, 3) the model expects
blob = np.transpose(blob, (0, 2, 3, 1))

# Store the result as a raw file
blob.astype(np.float32).tofile('blob.raw')
  • Prepare a text file that contains all the images you would like to run inference on

    • Create a file named “raw_list.txt” in the current directory
    • Enter the path of the “blob.raw” file that was generated using the Python script
  • Run the following command to use the generated dlc model to run inference

snpe-net-run --container deeplabv3.dlc --input_list ./raw_list.txt

This command generates a file called “ArgMax:0.raw” under the “output/Result_0/” path; it holds the raw model output and is used as input to the post-processing script below.

  • Run the Python script below on this raw output to obtain the segmentation mask and modify the image
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Load the raw output; class 15 corresponds to "person" in PASCAL VOC
arr = np.fromfile('ArgMax:0.raw', dtype=np.float32)
arr = np.reshape(arr, (513, 513, 1))
segment = arr[342:, 342:]
arr[arr == 15] = 255
original_img = cv2.imread('deeplab-check.jpeg')
arr2 = cv2.resize(segment, (original_img.shape[1], original_img.shape[0]))
print(arr.shape)
for i in range(arr2.shape[0]):
    for j in range(arr2.shape[1]):
        if arr2[i][j] != 255:
            # collapse background pixels to a single channel value (gray them out)
            original_img[i][j] = original_img[i][j][0] = original_img[i][j][1] = original_img[i][j][2]
plt.imshow(original_img)
plt.show()
plt.imshow(arr, cmap="gray")
plt.show()

Running Inference on RB5:

The SNPE SDK provides binaries for RB5’s architecture. To check out the list of supported architectures, run

On Ubuntu:

cd $SNPE_ROOT/lib #Ensure the path export from SNPE installation
ls

For the Qualcomm RB5 platform, we are interested in the following folders:

  • aarch64-ubuntu-gcc7.5
  • dsp

These folders need to be copied over to the RB5 using the “adb shell”, “adb push”, or “scp” commands.

  1. Select the architecture aarch64-ubuntu-gcc7.5
export SNPE_TARGET_ARCH=aarch64-ubuntu-gcc7.5
  2. Push the binaries to the target
adb shell "mkdir -p /data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin" # Create the bin folder for the target architecture
adb shell "mkdir -p /data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib" # Create the lib folder for the architecture libraries
adb shell "mkdir -p /data/local/tmp/snpeexample/dsp/lib" # Create the lib folder for the DSP libraries
adb push $SNPE_ROOT/lib/$SNPE_TARGET_ARCH/*.so /data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib # Copy the architecture libraries
adb push $SNPE_ROOT/lib/dsp/*.so /data/local/tmp/snpeexample/dsp/lib
adb push $SNPE_ROOT/bin/$SNPE_TARGET_ARCH/snpe-net-run /data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin

Once the libraries are copied over, log into the RB5 using “adb shell” or use a monitor (preferred, as the final result involves visualization).

On RB5:

  1. Set up the target architecture, library path, and environment variables needed for the “snpe-net-run” command to run successfully
export SNPE_TARGET_ARCH=aarch64-ubuntu-gcc7.5
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib
export PATH=$PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin

Note: These commands need to be run every time a new terminal is opened. To avoid this, add them to the ~/.bashrc file and run “source ~/.bashrc”.

  2. Verify that snpe-net-run was copied correctly
snpe-net-run -h #This command should run successfully and list the available options
  3. Copy the Python scripts, the “.dlc” file, and the images from the “Running Inference on Ubuntu 18.04” section over to the RB5

  4. Follow the “Running Inference” step from the previous section to run inference. (Feel free to skip the blob.raw generation if the file has already been copied over.)

Note: The inference step involves running “snpe-net-run”.

Note: Once the masks are obtained, they can be used for any application. We have shown a simple background blur in this example.
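
As one illustration of using the mask, the sketch below blurs everything outside the person class. The file names are assumptions, and the mask is expected to be a single-channel image at the original resolution with the person class set to 255 (for example, the resized arr2 array from the post-processing script above saved with cv2.imwrite).

import cv2
import numpy as np

original_img = cv2.imread('deeplab-check.jpeg')
mask = cv2.imread('person_mask.png', cv2.IMREAD_GRAYSCALE)  # assumed mask file
mask = cv2.resize(mask, (original_img.shape[1], original_img.shape[0]))

blurred = cv2.GaussianBlur(original_img, (51, 51), 0)
keep = (mask == 255)[:, :, None]               # boolean mask, broadcast over channels
composite = np.where(keep, original_img, blurred)
cv2.imwrite('background_blur.jpg', composite)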

PyTorch

A PyTorch model can be converted to dlc format to be run on RB5 as mentioned in the following sections.

Important: PyTorch models need to be converted to ONNX before they are converted to dlc format.

  1. Run the following script to generate the DeepLabV3 ONNX model. Here we use the pre-trained DeepLabV3 model available on TorchHub
import torch
import torchvision

BEST_MODEL_PATH_ONNX = "deeplabv3_onnx_model.onnx"
# Load the pre-trained model
model = torch.hub.load('pytorch/vision:v0.7.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)
y = model(x)
torch_out = torch.onnx._export(model,                 # model being run
                               x,                     # model input (or a tuple for multiple inputs)
                               BEST_MODEL_PATH_ONNX,  # where to save the model (can be a file or file-like object)
                               export_params=True,    # store the trained parameter weights inside the model file
                               input_names=['Conv2d0_3-64'],     # name of the input layer in the onnx model
                               output_names=['Linear2_4096-2'])  # name of the output layer
print("Successfully generated ONNX model at", BEST_MODEL_PATH_ONNX)
  2. Install ONNX on the Ubuntu system
pip install onnx
  3. Set the environment path for ONNX
cd $SNPE_ROOT
export ONNX_DIR="your_onnx_installation_dir"
source bin/envsetup.sh -o $ONNX_DIR
  4. Run the ONNX to DLC conversion command
snpe-onnx-to-dlc --input_dim sub_7 1,513,513,3 --out_node ArgMax --input_network ./deeplabv3_onnx_model.onnx --output_path deeplab_pt.dlc
  5. Once the dlc file is generated successfully, follow the “Running Inference” sections of the TensorFlow part to run inference on both Ubuntu and the RB5
Example Input Image
Generated Mask
Post Processed Image


A key sensor for many robotics applications is the camera. It enables applications such as object detection, semantic segmentation, and visual SLAM. There are two cameras on the Qualcomm Robotics RB5.

This tutorial covers a few ways to access these cameras. Before we start, note that these cameras cannot be read by OpenCV directly, but a tool called GStreamer can bridge the gap.

OpenCV access through GStreamer and TCP

The easiest way to access a camera is through a TCP port created by GStreamer. You can then use OpenCV to read data from that port.

On Qualcomm Robotics RB5, run the following command:

gst-launch-1.0 -e qtiqmmfsrc name=qmmf ! video/x-h264,format=NV12,width=1280,height=720,framerate=30/1 ! h264parse config-interval=1 ! mpegtsmux name=muxer ! queue ! tcpserversink port=8900 host=192.168.1.120

Note that you will need to change the host IP to your Qualcomm Robotics RB5’s IP address, which you can find by running the following command.

sudo apt install net-tools # if you don't have ifconfig
ifconfig

The IP address of the Qualcomm Robotics RB5 is listed after inet and looks something like 192.168.0.xxx.

Then you can access the camera with the help of the OpenCV library. A Python example is given below.

import cv2

cap = cv2.VideoCapture("tcp://192.168.1.120:8900")  # rb5 ip & port (same as in the command above)
while True:
    ret, frame = cap.read()
    cv2.imwrite("captured_image_opencv.jpg", frame)
    # you can process the image by using frame
    break
cap.release()

Again, make sure you change the host ip to your Qualcomm Robotics RB5 IP.

Accessing the cameras using ROS or ROS2 packages

Another way is to use the packages we provide for both ROS1 and ROS2.

ROS1: https://github.com/AutonomousVehicleLaboratory/rb5_ros
ROS2: https://github.com/AutonomousVehicleLaboratory/rb5_ros2

We provide launch files for both the ROS and ROS2 packages so that it is easy to configure the camera node.

Access the Camera in ROS1

For the ROS1 package, first clone it into your workspace.

# If you don't already have a workspace, create one first.
mkdir -p rosws/src

# Go to the source folder
cd rosws/src

# Clone the repository
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros

Then, build your package.

# Return to the root folder of the workspace.
cd ..

# Include the ros tools, this assumes you have ROS1 melodic installed
source /opt/ros/melodic/setup.bash

# Build only this package
catkin_make --only-pkg-with-deps rb5_vision

Then you can run the package. For example, you can run the RGB camera by

# source the ros workspace
source devel/setup.bash

# start the program with a set of parameters in the launch file
roslaunch rb5_vision rb_camera_main_ocv.launch

This will publish images to the topic /camera_0.
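
If you want to consume these images from your own Python node rather than just inspecting the topic, the sketch below subscribes to /camera_0 and saves a single frame using cv_bridge. It assumes the topic carries sensor_msgs/Image messages; adjust the encoding if your pipeline publishes a different format.

import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    # convert the ROS image to an OpenCV BGR array and save one frame
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imwrite('camera_0_frame.jpg', frame)
    rospy.signal_shutdown('saved one frame')

rospy.init_node('camera_0_listener')
rospy.Subscriber('/camera_0', Image, on_image)
rospy.spin()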

Start a new terminal and check with the following commands.

source /opt/ros/melodic/setup.bash

rostopic list # list all the topics
rostopic hz /camera_0 # get the frequency of the topic

Finally, you can stop the process by pressing Ctrl + C in the terminal where it is running. Notice that it will take a few seconds for it to stop.

Similarly, you can run the tracking camera by

roslaunch rb5_vision rb_camera_side_ocv.launch

And this will publish images to the topic /camera_1.

Access the camera in ROS2

For the ROS2 package, first clone it into your workspace.

# If you don't already have a workspace, create one first.
mkdir -p ros2ws/src

# Go to the source folder
cd ros2ws/src

# Clone the repository
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros2

Then, build your package.

# Return to the root folder.
cd ..

# Include the ros tools, this assumes you have ROS2 dashing installed
source /opt/ros/dashing/setup.bash

# Build only this package
colcon build --packages-select rb5_ros2_vision

Then you can run the package.

# Source the ROS2 workspace
source install/setup.bash

# Run the RGB camera
ros2 launch rb5_ros2_vision rb_camera_main_ocv_launch.py

# Or run the tracking camera
ros2 launch rb5_ros2_vision rb_camera_side_ocv_launch.py

Again, you can verify that the message is being published using the following command in another new terminal.

source /opt/ros/dashing/setup.bash
ros2 topic list # check if the topic is being published
ros2 topic hz /camera_0 # check the frequency of the RGB image message

Reference:
[1]: https://developer.qualcomm.com/comment/18637#comment-18637


The Qualcomm Robotics RB5 is a powerful edge computing device for robotics applications. Before we can use it to drive applications, we need to flash an operating system onto it. In this tutorial, we show you how to set up Ubuntu 18.04 on the Qualcomm Robotics RB5, connect it to WiFi to enable SSH, and configure an HDMI display. We enhanced the official setup guide with some additional tips. The setup steps require a computer running Ubuntu.

Install Ubuntu 18.04 on Qualcomm Robotics RB5

a) Install adb and fastboot by using the following command in Linux Terminal:

sudo apt-get install android-tools-adb android-tools-fastboot

b) Download the Qualcomm Robotics SDK Manager from here. You will need to create an account for this.

c) The download should be a Zip file that contains the SDK Manager installation package and a Readme file. Follow the instructions in the Readme file to install the prerequisites.

d) Install the SDK manager on the Linux workstation. Refer to the process given as step 2 in the following link

e) Before running the SDK manager, if you are using a Linux workstation, run the following command in the Terminal:

sudo systemctl stop ModemManager

f) Run the SDK manager. Follow step 3 in the following link

g) Follow step 4 from the same link to download resources and generate system image. This could take slightly more than 30 minutes.

h) Choose LU or LE flash. We have tried the LU flash and it was successful. This forum answer offers a possible explanation of the difference between LU and LE flash.

i) Now, start the process of flashing the generated system images on the Qualcomm Robotics RB5 by following step 5 from the same link

j) Follow steps 1-3 from this link to continue and complete the flashing process successfully.

k) If flashing is successful, adb should be working. To check this, keep your Qualcomm Robotics RB5 connected to your workstation and open your workstation’s terminal and type:

adb devices

and you should see a device ID shown as an attached device.

l) If not, please power cycle the development kit. Since the system images are flashed, there is no need to press the F_DL key to force the device to enter the Emergency Download Mode.

Setup WiFi and SSH

Once the OS is flashed, the next step involves setting up WiFi and SSH connections.
To set up WiFi connectivity on Qualcomm Robotics RB5, follow steps 1-4 from this link

To access the Qualcomm Robotics RB5 terminal, either adb shell or SSH can be used. To set up an SSH connection:

a) Type the following commands in a new terminal:

adb shell
sh-4.4# ifconfig

The ‘ifconfig’ command gives you the IP address of your connection.
Then use the following command:

sh-4.4# ssh root@<IP address>

This will ask you for a password, which is ‘oelinux123’

This will successfully complete the SSH connection, through which you can remotely access the Qualcomm Robotics RB5 terminal.

HDMI Display

The next step involves connecting to an HDMI monitor. The following is the procedure:

a) Refer to the ‘Check HDMI’ section in this link

b) Instead of the 5 commands given in the link given in a), you could try just this single command after connecting the HDMI:

1
weston --connector=29

c) If b) doesn’t work, then use the following 5 commands every time you connect the HDMI:
adb shell
sh-4.4# mkdir -p /usr/bin/weston_socket
sh-4.4# export XDG_RUNTIME_DIR=/usr/bin/weston_socket
sh-4.4# export LD_LIBRARY_PATH=/usr/lib:/usr/lib/aarch64-linux-gnu/
sh-4.4# weston --tty=1 --connector=29 --idle-time=0



RB5 ROBOTICS TUTORIALS

A set of Robotics Tutorials developed for the RB5 Robotics Development Platform from Qualcomm. Authors are from the Contextual Robotics Institute at UC San Diego.

Contributors

Henrik I. Christensen, David Paz, Henry Zhang, Anirudh Ramesh.