As described in our previous tutorial on building and loading kernel modules on the Qualcomm Robotics RB5, the Ubuntu 18.04 installation that ships with the Qualcomm Robotics RB5 LU build includes minimal package support to reduce OS complexity. With this in mind, installing custom drivers requires the process described there. Thankfully, if you have built and loaded the kernel modules described in that tutorial, you will be able to interface with a standard joystick controller over USB and use serial over USB.

With the preliminaries completed, we will use the MegaBot robot as an example of interfacing with the Qualcomm Robotics RB5. In this case, we will use the megapi Python module designed for the MegaBot to communicate with the robot. It can be readily installed with pip install megapi.
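Before defining the control primitives, it helps to verify the serial link to the robot. The following is a minimal connection sketch; the serial device path is an assumption and may differ on your system.

from megapi import MegaPi
import time

# Minimal connection test for the MegaPi board over serial-over-USB.
# The device path below is an assumption (it may be /dev/ttyACM0 or similar).
bot = MegaPi()
bot.start('/dev/ttyUSB0')
time.sleep(1.0)          # give the serial link a moment to come up

bot.motorRun(10, 30)     # spin the motor on port 10 at a low speed
time.sleep(0.5)
bot.motorRun(10, 0)      # stop it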

Here we define a set of primitive actions to control this four-mecanum-wheel robot. The primitive control actions include moving left, right, forward, and in reverse, rotating clockwise and counter-clockwise, and stopping. It may come as a surprise to many, but the wheels on the robot contain a number of rollers that ultimately influence the kinematics of the robot and jointly provide very interesting properties such as rotating in place and moving sideways!

The inverse kinematics of the robot are illustrated below.

Inverse kinematics
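For reference, one common formulation of the mecanum-drive inverse kinematics uses wheel radius $r$, half wheelbase $l_x$, half track width $l_y$, body velocities $v_x$ (forward) and $v_y$ (left), and yaw rate $\omega_z$; sign conventions vary between sources and depend on roller orientation and motor wiring, so this is a textbook form rather than the exact convention used by our node:

$$
\begin{aligned}
\omega_{fl} &= \tfrac{1}{r}\big(v_x - v_y - (l_x + l_y)\,\omega_z\big)\\
\omega_{fr} &= \tfrac{1}{r}\big(v_x + v_y + (l_x + l_y)\,\omega_z\big)\\
\omega_{rl} &= \tfrac{1}{r}\big(v_x + v_y - (l_x + l_y)\,\omega_z\big)\\
\omega_{rr} &= \tfrac{1}{r}\big(v_x - v_y + (l_x + l_y)\,\omega_z\big)
\end{aligned}
$$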

The following code snippet implements the control conditions for each of the actions defined.

def move(self, direction, speed):
    # port1:  front right (wheel 1)
    # port2:  rear left   (wheel 0)
    # port9:  rear right  (wheel 3)
    # port10: front left  (wheel 2)

    if direction == "left":
        self.bot.motorRun(10, speed)    # fl wheel
        self.bot.motorRun(1, speed)     # fr wheel
        self.bot.motorRun(2, -speed)    # rl wheel
        self.bot.motorRun(9, -speed)    # rr wheel
        return
    elif direction == "right":
        self.bot.motorRun(10, -speed)   # fl wheel
        self.bot.motorRun(1, -speed)    # fr wheel
        self.bot.motorRun(2, speed)     # rl wheel
        self.bot.motorRun(9, speed)     # rr wheel
        return
    elif direction == "forward":
        self.bot.motorRun(10, -speed)   # fl wheel
        self.bot.motorRun(1, speed)     # fr wheel
        self.bot.motorRun(2, -speed)    # rl wheel
        self.bot.motorRun(9, speed)     # rr wheel
        return
    elif direction == "reverse":
        self.bot.motorRun(10, speed)    # fl wheel
        self.bot.motorRun(1, -speed)    # fr wheel
        self.bot.motorRun(2, speed)     # rl wheel
        self.bot.motorRun(9, -speed)    # rr wheel
        return
    elif direction == "ccwise":
        self.bot.motorRun(10, speed)    # fl wheel
        self.bot.motorRun(1, speed)     # fr wheel
        self.bot.motorRun(2, speed)     # rl wheel
        self.bot.motorRun(9, speed)     # rr wheel
        return
    elif direction == "cwise":
        self.bot.motorRun(10, -speed)   # fl wheel
        self.bot.motorRun(1, -speed)    # fr wheel
        self.bot.motorRun(2, -speed)    # rl wheel
        self.bot.motorRun(9, -speed)    # rr wheel
        return
    else:
        # any other direction stops all wheels
        self.bot.motorRun(10, 0)        # fl wheel
        self.bot.motorRun(1, 0)         # fr wheel
        self.bot.motorRun(2, 0)         # rl wheel
        self.bot.motorRun(9, 0)         # rr wheel
        return
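A minimal usage sketch is shown below, assuming move() lives in a small controller class that owns the MegaPi connection (the class name and serial port here are illustrative, not the exact names used in our node).

from megapi import MegaPi
import time

class MegaPiController:
    def __init__(self, port='/dev/ttyUSB0'):
        self.bot = MegaPi()
        self.bot.start(port)

    # paste the move() method from the listing above into this class

    def stop(self):
        # stop all four wheels
        for p in (10, 1, 2, 9):
            self.bot.motorRun(p, 0)

if __name__ == "__main__":
    ctrl = MegaPiController()
    time.sleep(1.0)   # let the serial link come up
    ctrl.stop()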

This script has been implemented using ROS1 and ROS2. While the robot can be controlled with the standard joy node and a USB joystick controller, it can also be commanded with standard ROS messages for higher-level planning strategies. The process of building and running the ROS nodes is outlined below.
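Before the build steps, the sketch below illustrates how joystick input could be mapped to these primitives in ROS1. The axis indices, speed value, module name, and the MegaPiController wrapper are assumptions; the actual rb5_control node may map them differently.

import rospy
from sensor_msgs.msg import Joy

# MegaPiController is the hypothetical wrapper sketched above; import it from
# whatever file you saved it in.
from megapi_controller import MegaPiController

def joy_cb(msg):
    speed = 40
    if msg.axes[1] > 0.5:        # left stick up
        ctrl.move("forward", speed)
    elif msg.axes[1] < -0.5:     # left stick down
        ctrl.move("reverse", speed)
    elif msg.axes[0] > 0.5:      # left stick left
        ctrl.move("left", speed)
    elif msg.axes[0] < -0.5:     # left stick right
        ctrl.move("right", speed)
    else:
        ctrl.move("stop", 0)

if __name__ == "__main__":
    rospy.init_node("joy_teleop_sketch")
    ctrl = MegaPiController()
    rospy.Subscriber("/joy", Joy, joy_cb)
    rospy.spin()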

ROS1

Create a workspace, clone the ROS1 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/melodic/setup.bash.

mkdir -p rb5_ws/src && cd rb5_ws/src
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros.git
cd ..
catkin_make
source devel/setup.bash

Start the control node

rosrun rb5_control rb5_mpi_control.py

ROS2

Create a workspace, clone the ROS2 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/dashing/setup.bash.

mkdir -p rb5_ws/src && cd rb5_ws/src
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros2.git
cd ..
colcon build
source install/setup.bash

Start the control node

ros2 run rb5_ros2_control rb5_mpi_control.py


This tutorial outlines the process of installing ROS which is short for Robot Operating System. While ROS is not an operating system in the traditional sense, this is an ecosystem that provides support for a wide variety of sensor drivers and software libraries to aid in the fast development of robotic applications. Whether it is to process camera data or to execute a motion plan to reach a target destination, ROS handles message passing between modules to enable communication across multiple software modules.

In the following two subsections, we outline the installation process for two ROS versions, ROS1 (Melodic) and ROS2 (Dashing), which were specifically designed to run on native Ubuntu 18.04 systems. As the naming suggests, ROS1 precedes ROS2; however, each comes in several releases (e.g., Kinetic, Melodic, and Noetic are ROS1 releases, while Dashing, Foxy, and Galactic are ROS2 releases). While operating system support varies across releases, the key differences between ROS1 and ROS2 involve package support and features.

A key benefit of ROS2 is its focus on security, stability, and compatibility with industrial robotic applications that require reliability. However, some open-source packages that were previously available in ROS1 have not yet been fully ported to ROS2. This is quickly changing, but it is a tradeoff to consider during development.

In future tutorials, we will use Melodic and Dashing as the basis for our applications since they specifically target Ubuntu 18.04.

Install ROS1 - Melodic

Setup sources.list

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

Set up keys

sudo apt install curl # if you haven't already installed curl
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -

Install

sudo apt update
sudo apt install ros-melodic-desktop-full

Source ROS environment

echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Install Additional Dependencies for managing workspaces

sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential

Install and initialize rosdep to help resolve package dependencies

sudo apt install python-rosdep
sudo rosdep init
rosdep update

Verify installation by running RViz (visualization GUI)

rviz


Install ROS2 - Dashing

Install host operating system dependencies

sudo apt-get install usbutils git bc
sudo apt-get -y install locales
sudo apt-get update && sudo apt-get install curl gnupg2 lsb-release

Setup sources.list

sudo sh -c 'echo "deb [arch=amd64,arm64] http://packages.ros.org/ros2/ubuntu `lsb_release -cs` main" > /etc/apt/sources.list.d/ros2-latest.list'

Set up keys

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F42ED6FBAB17C654

Install

sudo apt-get install ros-dashing-desktop

Source ROS environment

echo "source /opt/ros/dashing/setup.bash" >> ~/.bashrc
source ~/.bashrc

Install Additional Dependencies for managing workspaces

sudo apt-get install python3-argcomplete
sudo apt-get install python3-colcon-common-extensions

Verify installation by running RViz2 (visualization GUI)

rviz2



This tutorial will guide you through running ORB_SLAM3 on the Qualcomm Robotics RB5. ORB_SLAM3 is a popular software package that can perform visual SLAM and visual-inertial SLAM. The algorithm is fast and therefore suitable for the Qualcomm Robotics RB5 platform.

Install ORB_SLAM3 Library

The code from the original ORB_SLAM3 repository doesn't work out of the box on the Qualcomm Robotics RB5. We created a version that we tested on the Qualcomm Robotics RB5 and that works well. You can download the code from https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5.

After downloading the code, follow the README.md to compile the ORB_SLAM3 library. A simplified version is also given below.

# First install the required dependencies listed in the repository README, then:
git clone https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5
cd ORB_SLAM3_RB5
chmod +x build.sh
./build.sh

ROS wrapper for ORB_SLAM3 on Qualcomm Robotics RB5

We created wrappers for the ORB_SLAM3 API in both ROS and ROS2. These wrappers handle communication and data conversion. The ROS version was tested on ROS Melodic and the ROS2 version was tested on ROS2 Dashing.

The communication part includes receiving sensor messages published by other ROS nodes. The sensors include a camera and an IMU. If both are used, the wrapper also handles synchronization before passing the data to the ORB_SLAM3 library. Once a pose is output by the ORB_SLAM3 library, we convert it into a transform and publish it to ROS TF. A simplified sketch of this flow is given below.
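The sketch below illustrates, in Python, the kind of buffering the C++ wrapper performs: IMU messages are accumulated and handed over in a batch together with each camera frame. The topic names are assumptions; the real wrapper passes the batch to the ORB_SLAM3 tracking API and publishes the resulting pose to TF.

import rospy
from sensor_msgs.msg import Image, Imu

class VioSyncSketch:
    def __init__(self):
        self.imu_buffer = []
        rospy.Subscriber("/imu", Imu, self.imu_cb)
        rospy.Subscriber("/camera_0", Image, self.image_cb)

    def imu_cb(self, msg):
        self.imu_buffer.append(msg)

    def image_cb(self, img):
        stamp = img.header.stamp
        # IMU samples up to (and including) the image timestamp
        batch = [m for m in self.imu_buffer if m.header.stamp <= stamp]
        self.imu_buffer = [m for m in self.imu_buffer if m.header.stamp > stamp]
        # the real wrapper would pass (img, batch) to ORB_SLAM3 here and
        # publish the returned pose as a TF transform
        rospy.loginfo("frame at %.3f with %d IMU samples",
                      stamp.to_sec(), len(batch))

if __name__ == "__main__":
    rospy.init_node("vio_sync_sketch")
    VioSyncSketch()
    rospy.spin()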

We separate the ORB_SLAM3 library from the ROS wrapper so that we don't need to include the library in the ROS workspace. You will need to modify line 4 of the CMakeLists.txt file in the ROS and ROS2 packages to point to the correct ORB_SLAM3 library path so that these nodes can be built successfully.

For ROS1:

# If you don't already have a workspace, create one first.
mkdir -p rosws/src

# Go to the source folder
cd rosws/src

# Clone the repository
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros

Before you build your package, set the path in the CMakeLists.txt file in the ORB_SLAM3_RB5 package from the repository that you just cloned.

set(ORB_SLAM3_SOURCE_DIR "path/to/ORB_SLAM3_RB5_LIB/")

Note that this path refers to the folder where the customized ORB_SLAM3 library from https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5 is cloned to.

Then, build your package.

# Return to the root folder of the workspace (e.g. cd .. if you are still in rosws/src).
# Include the ROS tools; this assumes you have ROS1 Melodic installed
source /opt/ros/melodic/setup.bash

# Build only this package
catkin_make --only-pkg-with-deps ORB_SLAM3_RB5

To run the package, first start the roscore in a new terminal

source /opt/ros/melodic/setup.bash
roscore

Then in another terminal, go into the workspace folder

# source the ros workspace
source devel/setup.bash

# run the program
rosrun ORB_SLAM3_RB5 Mono /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular/Euroc.yaml

Note that you should use a yaml file that reflects the parameters of your own camera.

When running the ROS node, you will need to access the camera using the ROS package we mentioned in the basic tutorial on accessing cameras.

For the ROS version, the wrapper allows you to run the Monocular version of ORB-SLAM3 on the Qualcomm Robotics RB5. The text interface will work with the Wayland desktop.

ORB-SLAM3 running in a Gnome Terminal

In the terminal, you can see the translation vector and rotation matrix being printed. A trajectory file named "KeyFrameTrajectory.txt" will be saved to the root of the workspace when the program is stopped with "Ctrl+C".
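If you want to inspect the result, the saved file is plain text with one keyframe per line (typically timestamp tx ty tz qx qy qz qw; the column layout used below is an assumption you can verify by opening the file). A small plotting sketch, assuming matplotlib is available on the machine where you copy the file:

import matplotlib.pyplot as plt

xs, ys = [], []
with open("KeyFrameTrajectory.txt") as f:
    for line in f:
        fields = line.split()
        if len(fields) >= 4:
            xs.append(float(fields[1]))   # tx
            ys.append(float(fields[2]))   # ty

plt.plot(xs, ys, marker=".")
plt.axis("equal")
plt.xlabel("x [m]")
plt.ylabel("y [m]")
plt.title("ORB-SLAM3 keyframe trajectory")
plt.show()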

You can also launch RViz on the GNOME desktop that you set up following the tutorial.

ORB-SLAM3 pose displayed in RViz

For the ROS2 version, the wrapper allows you to run visual-inertial SLAM with a single camera and the IMU on the Qualcomm Robotics RB5. The IMU data can be accessed through a prebuilt ROS2 package that was tested on ROS2 Dashing.

Similarly, first clone it to your workspace.

# If you don't already have a workspace, create one first.
mkdir -p ros2ws/src

# Go to the source folder
cd ros2ws/src

# Clone the repository
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros2

Before you build your package, set the path in the CMakeLists.txt file in the ORB_SLAM3_RB5 package from the repository that you just cloned.

set(ORB_SLAM3_SOURCE_DIR "path/to/ORB_SLAM3_RB5_LIB/")

Note that this path refers to the folder where the customized ORB_SLAM3 library from https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5 is cloned to.

Then, build your package.

# Return to the root folder of the workspace (e.g. cd .. if you are still in ros2ws/src).
# Include the ROS tools; this assumes you have ROS2 Dashing installed
source /opt/ros/dashing/setup.bash

# Build only this package
colcon build --packages-select orb_slam3_rb5_ros2

Then you can run the package.

# Source the ROS2 workspace
source install/setup.bash

# run the Monocular version
ros2 run orb_slam3_rb5_ros2 Mono /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular/Euroc.yaml

# run the Monocular-Inertial version
ros2 run orb_slam3_rb5_ros2 Mono_Inertial /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular-Inertial/Euroc.yaml

If you encounter errors while loading shared libraries, add the library locations to your path, for example:

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/path/to/ORB_SLAM3_RB5_LIB/lib:/path/to/Pangolin/build


An important task in robotics is to uniquely identify features and landmarks over time as a robot navigates through the environment. This process can help estimate the robot's position (localization) and ultimately enables robust path planning and navigation. In this tutorial, we will explore a popular open-source library for AprilTag detection and 3D pose estimation from a single monocular camera.

AprilTags (shown in the figure below) are a form of fiducial marker designed with predefined sizes and patterns. The benefit of knowing their size is pose estimation: a perspective mapping between two planes can be described by a homography matrix $\mathbf{H}$. In this case, $\mathbf{H}$ describes a mapping between the camera frame and the marker, since both can be assumed to be flat surfaces. This makes AprilTags an easy way of performing 3D pose estimation of objects with a single monocular camera by simply placing an AprilTag on the object. In addition, the unique pattern that defines each marker helps differentiate multiple markers that may be observable at a particular instant, and by leveraging the concept of Hamming distance and dictionaries, error detection and correction can be performed in the event that part of a marker is occluded.
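To make the planar relationship concrete (this is standard projective-geometry notation, not anything specific to the AprilTag library): for tag points on the plane $Z=0$, with camera intrinsics $\mathbf{K}$, rotation columns $\mathbf{r}_1, \mathbf{r}_2$, and translation $\mathbf{t}$,

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{K}\,[\,\mathbf{r}_1 \;\; \mathbf{r}_2 \;\; \mathbf{t}\,]\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}, \qquad \mathbf{H} = \mathbf{K}\,[\,\mathbf{r}_1 \;\; \mathbf{r}_2 \;\; \mathbf{t}\,], $$

so once $\mathbf{H}$ is estimated from the detected tag corners and both $\mathbf{K}$ and the physical tag size are known, the tag pose can be recovered.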

Multiple AprilTag fiducial markers detected within an image with IDs 7 and 237. For marker 7, an error is detected and corrected. [1]

The AprilTag3 Library

Using the AprilTag library is quite simple, and it can be installed from source. Compatible versions exist for C++ as well as Python3. While our implementation is in C++, the Python version is even easier to get up and running: simply run pip install apriltag.
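For completeness, a minimal detection sketch with the pip-installed Python binding might look like the following; the option and field names follow that package and are worth double-checking against its documentation, and the image path is an assumption.

import cv2
import apriltag

image = cv2.imread("tag_image.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detector configured for the common tag36h11 family
detector = apriltag.Detector(apriltag.DetectorOptions(families="tag36h11"))
detections = detector.detect(gray)

for det in detections:
    print("id:", det.tag_id, "center:", det.center)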

For convenience, we have provided two implementations for AprilTag detection, in ROS1 and ROS2. Given that both are C++ implementations, the logic remains the same. First, we convert our image to grayscale and populate an image_u8_t struct using the OpenCV cv::Mat image format.

cv::Mat image_gray;
cv::cvtColor(image, image_gray, cv::COLOR_BGR2GRAY);

image_u8_t im = { .width  = image_gray.cols,
                  .height = image_gray.rows,
                  .stride = image_gray.cols,
                  .buf    = image_gray.data };

Once the image has been converted to grayscale, we can instantiate an apriltag_detector_t instance and call apriltag_detector_detect(). This is the primary function in charge of performing marker detection on camera data. It is worth noting that this detector object can be reused, so it can be allocated in the class constructor of your implementation.

apriltag_detector_t *a_detector = apriltag_detector_create();
zarray_t * detections = apriltag_detector_detect(a_detector, &im);

As can be seen, apriltag_detector_detect() returns an array of type zarray_t containing the list of detections. In the following code block we extract individual detections into instances of type apriltag_detection_t and perform a perspective mapping using the calculated homography matrix. This is handled by estimate_tag_pose(). However, two important considerations are that i) we know the size of the markers in advance, and ii) we know the intrinsic parameters of the camera, including the image center and focal length. These are attributes of the first argument of type apriltag_detection_info_t that is passed to estimate_tag_pose().

apriltag_detection_t *det;
vector<apriltag_pose_t> poses;
vector<int> ids;

for (int i = 0; i < zarray_size(detections); i++){
    zarray_get(detections, i, &det);
    info.det = det;
    apriltag_pose_t pose;

    // estimate SE(3) pose
    estimate_tag_pose(&info, &pose);
    poses.push_back(pose);
    ids.push_back(det->id);
}

The marker size used in our implementation corresponds to $0.162m$ (this depends on the printed AprilTag marker size), and the camera parameters are estimated for the wide-angle lens of the Qualcomm Robotics RB5 using the OpenCV calibration tool (or its ROS version) and a standard checkerboard target. Notice that after you calibrate your camera and undistort the image, the undistorted image will have a new camera matrix. We set the AprilTag info with this new camera matrix so that pose estimation remains accurate.
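For those calibrating with OpenCV in Python, a minimal checkerboard sketch is shown below; the board dimensions, square size, and image location are assumptions to adapt to your own target and captures.

import glob
import cv2
import numpy as np

pattern = (8, 6)    # inner corners of the checkerboard (columns, rows)
square = 0.025      # square size in meters

# 3D coordinates of the corners on the (flat) checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in glob.glob("calib/*.png"):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, d, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("K =\n", K)
print("d =", d.ravel())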

cv::Mat rectify(const cv::Mat image){
    cv::Mat image_rect = image.clone();

    // get new camera matrix after undistort
    // alpha is set to 0.0 so all pixels are valid
    const cv::Mat new_K = cv::getOptimalNewCameraMatrix(K, d, image.size(), 0.0);
    cv::undistort(image, image_rect, K, d, new_K);

    // set info for pose estimation using new camera matrix
    det.setInfo(tagSize, new_K.at<double>(0,0), new_K.at<double>(1,1),
                new_K.at<double>(0,2), new_K.at<double>(1,2));

    return image_rect;
}

void AprilDetection::setInfo(double tagSize, double fx, double fy, double cx, double cy){
    info.tagsize = tagSize;
    info.fx = fx;
    info.fy = fy;
    info.cx = cx;
    info.cy = cy;
}

The components described above have been wrapped into ROS1 and ROS2 implementations and can be evaluated using the steps below. You will need to set the following parameters in the node.

// TODO: Replace these parameters using your calibration results
double distortion_coeff[] =
    {0.008549, -0.016273, -0.002954, -0.003708, 0.000000};
double intrinsics[] = {683.558755, 0.        , 961.607445,
                       0.        , 680.809134, 547.701668,
                       0.        , 0.        , 1.        };

const cv::Mat d(cv::Size(1, 5), CV_64FC1, distortion_coeff);
const cv::Mat K(cv::Size(3, 3), CV_64FC1, intrinsics);

// TODO: Set tagSize for pose estimation, assuming same tag size.
// details from: https://github.com/AprilRobotics/apriltag/wiki/AprilTag-User-Guide#pose-estimation
const double tagSize = 0.162; // in meters

ROS1

Create a workspace, clone the ROS1 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/melodic/setup.bash.

# install dependency
apt install ros-melodic-apriltag

mkdir -p rb5_ws/src && cd rb5_ws/src
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros.git
cd ..

# build all packages in the workspace
catkin_make
source devel/setup.bash

# if building all packages fails, try building only these packages
catkin_make --only-pkg-with-deps rb5_vision
catkin_make --only-pkg-with-deps april_detection

Start the camera node

roslaunch rb5_vision rb_camera_main_ocv.launch

Start the AprilTag detection node

rosrun april_detection april_detection_node

ROS2

Create a workspace, clone the ROS2 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/dashing/setup.bash.

mkdir -p rb5_ws/src && cd rb5_ws/src
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros2.git
cd ..
colcon build
source install/setup.bash

Start the camera node

ros2 launch rb5_ros2_vision rb_camera_main_ocv_launch.py

Start the AprilTag detection node

ros2 run ros2_april_detection april_detection_node

Visualizing the markers and poses

For convenience, our ROS implementations publish messages of type geometry_msgs::PoseArray (ROS1) and geometry_msgs::msg::PoseStamped (ROS2). These messages are timestamped and include a unique ID that corresponds to the marker ID as part of the message header. The main component of each message is the marker's pose in 3D, represented as a position (a point) and an orientation (here represented as a quaternion). While we won't go into the math of how quaternions represent orientations in 3D space, ROS has good tools for handling such transformations with ease. Below is a video of each marker being visualized using RViz, the 3D visualization tool for ROS. The video demos AprilTag3 detection and real-time 3D pose estimation running onboard the Qualcomm Robotics RB5.
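As a quick check without RViz, the ROS1 sketch below subscribes to the pose array and prints each marker's position and yaw. The topic name is an assumption; check the april_detection node for the name it actually publishes.

import rospy
import tf.transformations as tft
from geometry_msgs.msg import PoseArray

def cb(msg):
    for pose in msg.poses:
        q = [pose.orientation.x, pose.orientation.y,
             pose.orientation.z, pose.orientation.w]
        _, _, yaw = tft.euler_from_quaternion(q)
        rospy.loginfo("marker at (%.2f, %.2f, %.2f), yaw %.2f rad",
                      pose.position.x, pose.position.y, pose.position.z, yaw)

if __name__ == "__main__":
    rospy.init_node("april_pose_listener")
    rospy.Subscriber("/april_poses", PoseArray, cb)   # assumed topic name
    rospy.spin()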

References

[1] AprilTag: A robust and flexible visual fiducial system



RB5 ROBOTICS TUTORIALS

A set of Robotics Tutorials developed for the RB5 Robotics Development Platform from Qualcomm. Authors are from the Contextual Robotics Institute at UC San Diego.

Contributors

Henrik I. Christensen, David Paz, Henry Zhang, Anirudh Ramesh.