As we described in our previous tutorial on building and loading kernel modules on the Qualcomm Robotics RB5, the Ubuntu 18.04 installation that is part of the Qualcomm Robotics RB5 LU build includes minimal package support to reduce OS complexity. With this in mind, installing custom drivers requires the process described there. Thankfully, if you have built and loaded the kernel modules described in that tutorial, you will be able to interface with a standard joystick controller over USB, as well as serial over USB.
With the preliminaries completed, we will use the Megabot Robot as an example of interfacing with a Qualcomm Robotics RB5. In this case, we will use the megapi Python module designed for the Megabot to communicate with the robot. It can be readily installed using pip install megapi.
Here we define a set of primitive actions that can control this four-mecanum-wheel robot. The set of primitive control actions includes moving left, right, forward, and in reverse, rotating clockwise and counter-clockwise, and stopping. It may come as a surprise to many, but the wheels on the robot contain a number of rollers that ultimately influence the kinematics of the robot and jointly provide very interesting properties such as rotating in place and moving sideways!
The inverse kinematics of the robot are described below.
Inverse kinematics
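For reference, a standard formulation of the inverse kinematics for a four-mecanum-wheel platform maps a desired body velocity $(v_x, v_y, \omega_z)$ to the four wheel angular velocities, where $r$ is the wheel radius and $l_x$, $l_y$ are half the wheelbase and half the track width. This is the textbook form; verify that the wheel ordering and signs match your robot's port mapping:

$$
\begin{bmatrix} \omega_{fl} \\ \omega_{fr} \\ \omega_{rl} \\ \omega_{rr} \end{bmatrix}
= \frac{1}{r}
\begin{bmatrix}
1 & -1 & -(l_x + l_y) \\
1 & 1 & (l_x + l_y) \\
1 & 1 & -(l_x + l_y) \\
1 & -1 & (l_x + l_y)
\end{bmatrix}
\begin{bmatrix} v_x \\ v_y \\ \omega_z \end{bmatrix}
$$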
The following code snippet shows the structure of the control conditions; only the first case is excerpted here, and a fuller sketch follows below.
def move(self, direction, speed):
    # port1:  front right (wheel 1)
    # port2:  rear left (wheel 0)
    # port9:  rear right (wheel 3)
    # port10: front left (wheel 2)
    if direction == "left":
        # fl wheel
        self.bot.motorRun(10, speed)
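Since the snippet above only shows the first case, here is a minimal sketch of how the full set of primitives could look under the same port mapping. The wheel-direction signs follow the standard mecanum pattern and are assumptions; on a real robot they depend on how the motors are mounted and wired, so verify each direction before use.

from megapi import MegaPi

FR, RL, RR, FL = 1, 2, 9, 10  # ports: front right, rear left, rear right, front left

class MegaPiController:
    def __init__(self, port='/dev/ttyUSB0'):
        self.bot = MegaPi()
        self.bot.start(port)

    def set_wheels(self, v_fl, v_fr, v_rl, v_rr):
        # Signs are wiring-dependent; left-side motors are often mirrored.
        self.bot.motorRun(FL, v_fl)
        self.bot.motorRun(FR, v_fr)
        self.bot.motorRun(RL, v_rl)
        self.bot.motorRun(RR, v_rr)

    def move(self, direction, speed):
        if direction == "forward":
            self.set_wheels(speed, speed, speed, speed)
        elif direction == "reverse":
            self.set_wheels(-speed, -speed, -speed, -speed)
        elif direction == "left":    # strafe left
            self.set_wheels(-speed, speed, speed, -speed)
        elif direction == "right":   # strafe right
            self.set_wheels(speed, -speed, -speed, speed)
        elif direction == "ccw":     # rotate counter-clockwise in place
            self.set_wheels(-speed, speed, -speed, speed)
        elif direction == "cw":      # rotate clockwise in place
            self.set_wheels(speed, -speed, speed, -speed)
        elif direction == "stop":
            self.set_wheels(0, 0, 0, 0)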
This script has been implemented using ROS1 and ROS2. While the robot can be controlled using the standard joy node with a USB joystick controller, it can additionally be controlled via standard ROS messages for higher-level planning strategies. The process of building and running the ROS nodes is outlined below.
ROS1
Create a workspace, clone the ROS1 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/melodic/setup.bash.
mkdir -p rb5_ws/src && cd rb5_ws/src
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros.git
cd ..
catkin_make
source devel/setup.bash
Start the control node
rosrun rb5_control rb5_mpi_control.py
ROS2
Create a workspace, clone the ROS2 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/dashing/setup.bash.
mkdir -p rb5_ws/src && cd rb5_ws/src
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros2.git
cd ..
colcon build
source install/setup.bash
This tutorial outlines the process of installing ROS, short for Robot Operating System. While ROS is not an operating system in the traditional sense, it is an ecosystem that provides support for a wide variety of sensor drivers and software libraries to aid in the fast development of robotic applications. Whether it is to process camera data or to execute a motion plan to reach a target destination, ROS handles message passing to enable communication across multiple software modules.
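As a concrete illustration of this message passing, the minimal ROS1 sketch below publishes a string message from one node; the node and topic names are placeholders.

#!/usr/bin/env python
# Minimal ROS1 (Melodic) publisher; 'talker' and 'chatter' are placeholder names.
import rospy
from std_msgs.msg import String

rospy.init_node('talker')
pub = rospy.Publisher('chatter', String, queue_size=10)
rate = rospy.Rate(10)  # publish at 10 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='hello from the RB5'))
    rate.sleep()
# Any node that creates rospy.Subscriber('chatter', String, callback)
# will receive these messages; ROS handles the transport in between.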
In the following two subsections, we outline the installation process for two ROS versions, namely ROS1 (Melodic) and ROS2 (Dashing), that were specifically designed to run on native Ubuntu 18.04 systems. As the naming suggests, ROS1 precedes ROS2; however, each comes in several flavors (e.g., Kinetic, Melodic, and Noetic are ROS1 versions, while Dashing, Foxy, and Galactic are ROS2 flavors). While operating system support varies across releases, the key differences between ROS1 and ROS2 involve package support and features.
A key benefit of using ROS2 involves security, stability, and its focus on compatibility with industrial robotic applications that require reliability. However, some may find that open-source packages that were previously available in ROS1 have not been entirely ported to ROS2. This is quickly changing, but it is a tradeoff to consider during development.
In future tutorials, we will explore applications that utilize Melodic and Dashing as the basis for our applications since they are Ubuntu 18.04 specific.
This tutorial will guide you through running ORB_SLAM3 on the Qualcomm Robotics RB5. ORB_SLAM3 is a popular software package that can perform visual SLAM and visual-inertial SLAM. The algorithm is fast and therefore suitable for the Qualcomm Robotics RB5 platform.
Install ORB_SLAM3 Library
The code from the original ORB_SLAM3 repository doesn't work out of the box on the Qualcomm Robotics RB5. We created a version that we tested on the Qualcomm Robotics RB5 and that works well. You can download the code from https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5.
After downloading the code, follow the README.md to compile the ORB_SLAM3 library. A simplified version is also given below.
# Install the required dependencies
git clone https://github.com/AutonomousVehicleLaboratory/ORB_SLAM3_RB5
cd ORB_SLAM3_RB5
chmod +x build.sh
./build.sh
ROS wrapper for ORB_SLAM3 on Qualcomm Robotics RB5
We created wrappers for the ORB_SLAM3 API in both ROS1 and ROS2. These wrappers handle communication and data conversion. The ROS1 version was tested on ROS1 Melodic and the ROS2 version was tested on ROS2 Dashing.
The communication part includes receiving sensor messages published by other ROS nodes. The sensors include a camera and an IMU. If both are used, the wrapper also handles synchronization before passing the data to the ORB_SLAM3 library. Once a pose is produced by the ORB_SLAM3 library, we convert it into a transformation and publish it to ROS TF, as sketched below.
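As a rough Python analogue of that last step (the wrapper itself is C++), publishing a SLAM pose to TF looks roughly like the sketch below. The frame names map and camera are assumptions; match them to your setup.

# Publish a SLAM pose as a TF transform; 'map' and 'camera' are assumed frame names.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

def publish_pose_as_tf(broadcaster, position, quaternion):
    # position: (x, y, z), quaternion: (qx, qy, qz, qw) from the SLAM output
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'map'     # assumed parent frame
    t.child_frame_id = 'camera'   # assumed child frame
    t.transform.translation.x, t.transform.translation.y, t.transform.translation.z = position
    t.transform.rotation.x, t.transform.rotation.y, t.transform.rotation.z, t.transform.rotation.w = quaternion
    broadcaster.sendTransform(t)

# usage inside a node: broadcaster = tf2_ros.TransformBroadcaster()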
We separate the ORB_SLAM3 library from the ROS wrapper so that we don't need to include the library in the ROS workspace. You will need to modify line 4 of the CMakeLists.txt file in the ROS and ROS2 packages to give the correct ORB_SLAM3 library path so that these nodes can be built successfully.
For the ROS1 version:
# If you don't already have a workspace, create one first.
mkdir -p rosws/src

# Go to the source folder
cd rosws/src

# Clone the repository
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros
Before you build your package, set the path in the CMakeLists.txt file in the ORB_SLAM3_RB5 package from the repository that you just cloned.
# Return to the root folder of the workspace.
cd ..

# Include the ROS tools; this assumes you have ROS1 Melodic installed
source /opt/ros/melodic/setup.bash

# Build only this package
catkin_make --only-pkg-with-deps ORB_SLAM3_RB5
To run the package, first start the roscore in a new terminal
source /opt/ros/melodic/setup.bash
roscore
Then in another terminal, go into the workspace folder
# source the ros workspace
source devel/setup.bash

# run the program
rosrun ORB_SLAM3_RB5 Mono /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular/Euroc.yaml
Note that you should use a yaml file that reflects the parameters of your own camera.
When running the ROS node, you will need to access the camera using the ROS package we mentioned in the basic tutorial on accessing cameras.
For the ROS1 version, the wrapper allows you to run the Monocular version of ORB-SLAM3 on the Qualcomm Robotics RB5. The text interface will work with the Wayland desktop.
ORB-SLAM3 running in a Gnome-Terminal
In the terminal, you can see the translation vector and rotation matrix being printed. A trajectory file named "KeyFrameTrajectory.txt" will be saved into the root of the workspace if the program is stopped with "Ctrl+C".
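If you want a quick look at the saved trajectory, a few lines of Python are enough to plot it. This is a convenience sketch, not part of the wrapper, and it assumes the TUM-style format used by the ORB_SLAM3 examples (one line per keyframe: timestamp tx ty tz qx qy qz qw).

# Quick plot of KeyFrameTrajectory.txt, assuming the TUM-style format
# of the ORB_SLAM3 examples: timestamp tx ty tz qx qy qz qw per line.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt('KeyFrameTrajectory.txt')
plt.plot(data[:, 1], data[:, 2])  # tx vs ty
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.title('ORB-SLAM3 keyframe trajectory')
plt.show()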
You can also launch RViz on the GNOME desktop that you set up following the tutorial.
ORB-SLAM3 pose displayed in RViz
For the ROS2 version, the wrapper allows you to run visual-inertial SLAM with a single camera and the IMU on the Qualcomm Robotics RB5. The IMU data can be accessed through a prebuilt ROS2 package that has been tested on ROS2 Dashing.
Similarly, first clone it to your workspace.
# If you don't already have a workspace, create one first.
mkdir -p ros2ws/src

# Go to the source folder
cd ros2ws/src

# Clone the repository
git clone https://github.com/AutonomousVehicleLaboratory/rb5_ros2
Before you build your package, set the path in the CMakeLists.txt file in the ORB_SLAM3_RB5 package from the repository that you just cloned.
# Return to the root folder of the workspace.
cd ..

# Include the ROS2 tools; this assumes you have ROS2 Dashing installed
source /opt/ros/dashing/setup.bash

# Build only this package
colcon build --packages-select orb_slam3_rb5_ros2
Then you can run the package.
# Source the ROS2 workspace
source install/setup.bash

# run the Monocular version
ros2 run orb_slam3_rb5_ros2 Mono /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular/Euroc.yaml

# run the Monocular-Inertial version
ros2 run orb_slam3_rb5_ros2 Mono_Inertial /path/to/ORB_SLAM3_RB5_library/Vocabulary/ORBvoc.yaml /path/to/ORB_SLAM3_RB5_library/Examples/Monocular-Inertial/Euroc.yaml
If you encounter an error while loading shared libraries, add the library directory to the loader path, for example:
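# Point the loader at the ORB_SLAM3 shared library (adjust the path to your build output)
export LD_LIBRARY_PATH=/path/to/ORB_SLAM3_RB5_library/lib:$LD_LIBRARY_PATH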
An important task in robotics is to uniquely identify features and landmarks over time as a robot navigates through the environment. This process can help estimate the robot's position (localization) and ultimately enables robust path planning and navigation. In this tutorial, we will explore a popular open-source library for AprilTag detection and 3D pose estimation from a single monocular camera.
AprilTags (shown in the figure below) are a form of fiducial marker designed with predefined sizes and patterns. The benefit of knowing their size is pose estimation: a perspective mapping between two planes can be described by a homography matrix $\mathbf{H}$. In this case, $\mathbf{H}$ can describe a mapping between the camera frame and the marker, since both can be assumed to be flat surfaces. This makes AprilTags an easy way of performing 3D pose estimation of objects with a single monocular camera, by simply placing an AprilTag on the object. In addition, the unique pattern that defines each marker can help differentiate multiple markers that may be observable at a particular instant, and by leveraging the concepts of Hamming distance and dictionaries, error detection and correction can be performed in the event that part of a marker is occluded.
Multiple AprilTag fiducial markers detected within an image with IDs 7 and 237. For marker 7, an error is detected and corrected. [1]
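For intuition about why a known planar pattern pins down pose: if the tag points lie on the plane $Z = 0$ in the tag's own frame, the pinhole projection collapses to a plane-to-plane homography. This standard relation (stated here for reference, not taken from the library's code) is what the pose estimator inverts:

$$
\mathbf{H} \simeq \mathbf{K}\,[\,\mathbf{r}_1 \;\; \mathbf{r}_2 \;\; \mathbf{t}\,]
$$

where $\mathbf{K}$ holds the camera intrinsics, $\mathbf{r}_1$ and $\mathbf{r}_2$ are the first two columns of the camera-to-tag rotation, and $\mathbf{t}$ is the translation; with the physical tag size and $\mathbf{K}$ known, decomposing $\mathbf{H}$ recovers the marker's 3D pose at metric scale.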
The AprilTag3 Library
Using the AprilTag library is actually quite simple, and it can be installed from source. This includes compatible versions for both C++ and Python3. While our implementation is in C++, the Python version is even easier to get up and running. To install, simply run pip install apriltag.
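For reference, a minimal detection-and-pose sketch with the pip package might look like the following; the tag family, the intrinsics (fx, fy, cx, cy), the image path, and the tag size are placeholder assumptions to replace with your own values.

# Minimal sketch using the pip-installed apriltag package; values are placeholders.
import cv2
import apriltag

detector = apriltag.Detector(apriltag.DetectorOptions(families='tag36h11'))
gray = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)

for det in detector.detect(gray):
    # detection_pose() needs the intrinsics and the physical tag size in meters;
    # (600.0, 600.0, 320.0, 240.0) is an assumed (fx, fy, cx, cy) - use your calibration.
    pose, e0, e1 = detector.detection_pose(det, camera_params=(600.0, 600.0, 320.0, 240.0), tag_size=0.162)
    print(det.tag_id, pose[:3, 3])  # marker ID and its translation in the camera frame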
For convenience, we have provided two implementations for AprilTag detection, in ROS1 and ROS2. Given that both are C++ implementations, the logic remains the same. First, we convert our image to grayscale and populate an image_u8_t struct using the OpenCV cv::Mat image format.
Once the image has been converted to grayscale, we can instantiate an apriltag_detector_t and call apriltag_detector_detect(). This is the primary function in charge of performing marker detection on camera data. It is worth noting that this detector object can be reused, so memory can be allocated in the class constructor of your implementation.
As can be seen, apriltag_detector_detect() returns an array of type zarray_t containing the list of detections. In the following code block, we extract individual detections as instances of type apriltag_detection_t and perform a perspective mapping using the computed homography matrix. This is handled by estimate_tag_pose(). Two important considerations are that i) we know the size of the markers in advance, and ii) we know the intrinsic parameters of the camera, including the image center and focal length. These attributes are part of the first argument of type apriltag_detection_info_t that is passed to estimate_tag_pose().
The marker size used in our implementation corresponds to $0.162m$ (depending on the printed AprilTag marker size), and the camera parameters are estimated for the wide-angle lens of the Qualcomm Robotics RB5 using the OpenCV calibration tool or its ROS version with a standard checkerboard target. Notice that after you calibrate your camera and undistort the image, the undistorted image will have a new camera matrix. We pass this new camera matrix to the AprilTag info struct so that pose estimation remains accurate.
cv::Mat rectify(const cv::Mat image){
    cv::Mat image_rect = image.clone();
    // get new camera matrix after undistort
    // alpha is set to 0.0 so all pixels are valid
    const cv::Mat new_K = cv::getOptimalNewCameraMatrix(K, d, image.size(), 0.0);
    cv::undistort(image, image_rect, K, d, new_K);

    // set info for pose estimation using new camera matrix
    det.setInfo(tagSize, new_K.at<double>(0,0), new_K.at<double>(1,1), new_K.at<double>(0,2), new_K.at<double>(1,2));

    return image_rect;
}
The components described above have been wrapped into ROS1 and ROS2 implementations and can be evaluated using the steps below. You will need to set the following parameters in the node.
// TODO: Set tagSize for pose estimation, assuming same tag size.
// details from: https://github.com/AprilRobotics/apriltag/wiki/AprilTag-User-Guide#pose-estimation
const double tagSize = 0.162; // in meters
ROS1
Create a workspace, clone the ROS1 implementation, and build the package. Make sure ROS is in your path, i.e. source /opt/ros/melodic/setup.bash.
ROS2

Likewise, create a workspace, clone the ROS2 implementation, and build the package (source /opt/ros/dashing/setup.bash), then run the detection node:

ros2 run ros2_april_detection april_detection_node
Visualizing the markers and poses
For convenience, our ROS implementations publish messages of type geometry_msgs::PoseArray (ROS1) and geometry_msgs::msg::PoseStamped (ROS2). These messages are timestamped and include a unique ID that corresponds to the marker ID as part of the message header. Nonetheless, the main component of each message is the marker's pose in 3D, represented as a position (a point) and an orientation (here represented as a quaternion). While we won't go into the math of how quaternions represent orientations in 3D space, ROS has good tools for handling such transformations with ease. Below is a video of each marker being visualized using RViz, the 3D visualization tool for ROS. The video demos AprilTag3 detection and real-time 3D pose estimation running onboard the Qualcomm Robotics RB5.
A set of Robotics Tutorials developed for the RB5 Robotics Development Platform from Qualcomm. Authors are from the Contextual Robotics Institute at UC San Diego.
Contributors
Henrik I. Christensen, David Paz, Henry Zhang, Anirudh Ramesh.