3D SLAM and Object Detection for Jackal Navigation

Overview

In this project, I equipped the Clearpath Jackal robot with perception, localization, and mapping capabilities for navigation. The robot can autonomously navigate, plan paths, and avoid obstacles while mapping its environment, and it can detect people and other objects in real time. In addition, point cloud processing was implemented to filter noise from the LiDAR data and downsample the resulting point cloud.

Hardware

  • Clearpath Jackal UGV
  • Intel RealSense D435i
  • Velodyne LiDAR VLP-16

Software

  • ROS 2 Humble
  • RTAB-Map
  • Nav2
  • YOLOv7
  • Point Cloud Library (PCL)

Setting Up the Jackal on ROS 2 Humble

The first step of the project was to get the Jackal running on ROS 2 Humble. The Jackal packages do not currently have official ROS 2 Humble support, so the (officially supported) ROS 2 Foxy packages had to be built from source and adjusted for compatibility with Humble. Detailed instructions are provided in the project's GitHub repository, linked at the bottom of this page. Below is a summarized process:

  1. Install Ubuntu 22.04 LTS
  2. Set up the wireless network
  3. Build packages from source and install dependencies on the Jackal
  4. Build packages from source and install dependencies on your computer
  5. Set up ROS 2 communication between your computer and the Jackal (a quick connectivity check is sketched after this list)
  6. Set up PS4 controller
  7. Set up Velodyne VLP-16
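
As a sanity check for step 5, a small rclpy script can confirm that your computer discovers the Jackal's topics over the network. This is a minimal sketch, not part of the Jackal packages; the topic names (/cmd_vel, /odom, /velodyne_points) are assumptions that depend on the driver configuration.

```python
# verify_connection.py -- quick check that the laptop sees the Jackal's topics.
import rclpy
from rclpy.node import Node


def main():
    rclpy.init()
    node = Node('connection_check')
    # Give DDS discovery a moment to find publishers running on the Jackal.
    rclpy.spin_once(node, timeout_sec=2.0)
    topics = dict(node.get_topic_names_and_types())
    for expected in ('/cmd_vel', '/odom', '/velodyne_points'):
        status = 'OK' if expected in topics else 'MISSING'
        node.get_logger().info(f'{expected}: {status}')
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```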

3D SLAM and Autonomous Navigation

For this part of the project, I used RTAB-Map to implement 3D simultaneous localization and mapping (SLAM) on the Jackal. RTAB-Map takes a graph-based optimization approach to SLAM, which lets it detect loop closures and correct drift in the robot's localization, resulting in robust mapping and localization. It generates a 3D point cloud map, from which a 2D occupancy grid map is derived.
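
Below is a minimal launch-file sketch of bringing up RTAB-Map with the Velodyne point cloud. The package, executable, and parameter names follow a typical rtabmap_ros setup on Humble and may differ between versions; the topic remapping is an assumption for this robot.

```python
# slam_launch.py -- sketch of launching RTAB-Map fed by the VLP-16 cloud.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_slam',
            executable='rtabmap',
            parameters=[{
                'frame_id': 'base_link',
                'subscribe_depth': False,
                'subscribe_scan_cloud': True,   # use the LiDAR point cloud
                'Reg/Strategy': '1',            # ICP registration for LiDAR
                'Grid/FromDepth': 'false',      # build the grid from the scan cloud
            }],
            remappings=[('scan_cloud', '/velodyne_points')],
            arguments=['-d'],                   # delete the previous database on start
        ),
    ])
```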

Nav2 was then used to achieve autonomous navigation. It uses the 2D occupancy grid map to plan the Jackal's path, avoid obstacles, and reach a desired goal, updating the map in real time as the robot moves and perceives its environment. A minimal example of sending a navigation goal is sketched below, and the flowchart that follows shows how the Jackal, RTAB-Map, and Nav2 fit together.
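
Here is a minimal sketch of sending a single goal through the nav2_simple_commander Python API that ships with Humble. The goal coordinates are placeholders, and because RTAB-Map (not AMCL) provides localization here, the localizer argument of waitUntilNav2Active may need adjusting.

```python
# send_goal.py -- sketch of sending one Nav2 goal via nav2_simple_commander.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult


def main():
    rclpy.init()
    navigator = BasicNavigator()
    # Blocks until the Nav2 lifecycle nodes are active; with RTAB-Map
    # localization, localizer='rtabmap' may be needed instead of the default.
    navigator.waitUntilNav2Active()

    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 2.0       # placeholder: 2 m ahead in the map frame
    goal.pose.orientation.w = 1.0

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()  # e.g. feedback.distance_remaining

    if navigator.getResult() == TaskResult.SUCCEEDED:
        print('Goal reached')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```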

Below is a video of the Jackal navigating in the real world:

Real-Time Object Detection

To achieve real-time object detection, I integrated YOLOv7 into a ROS 2 node fed by the RealSense D435i camera. A sketch of the node is shown below, followed by a video demonstration:
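
The sketch below shows one way such a node can be structured. The torch.hub call assumes YOLOv7's hubconf interface and a local yolov7.pt checkpoint, and the RealSense topic name depends on the realsense2_camera configuration.

```python
# yolo_node.py -- sketch of wrapping YOLOv7 inference in a ROS 2 node.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import torch


class YoloNode(Node):
    def __init__(self):
        super().__init__('yolo_node')
        self.bridge = CvBridge()
        # Load YOLOv7 through torch.hub (assumed interface; a direct
        # checkpoint load also works if hubconf is unavailable).
        self.model = torch.hub.load('WongKinYiu/yolov7', 'custom', 'yolov7.pt')
        self.sub = self.create_subscription(
            Image, '/camera/color/image_raw', self.on_image, 10)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')
        results = self.model(frame)               # run detection on one frame
        # Each row is [x1, y1, x2, y2, confidence, class index].
        for *box, conf, cls in results.xyxy[0].tolist():
            label = results.names[int(cls)]
            self.get_logger().info(f'{label}: {conf:.2f} at {box}')


def main():
    rclpy.init()
    rclpy.spin(YoloNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```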

Point Cloud Processing

Although Nav2 accepts the occupancy map generated by RTAB-Map, the raw map is unsuitable for navigation because noisy LiDAR data fills it with spurious occupied cells. To address this, I used PCL to do the following (sketched in code after the list):

  • Remove points above the robot's height, below the ground, or in close proximity to the robot
  • Remove points that have no neighboring points within a given radius (radius outlier removal)
  • Downsample the point cloud data
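
The project implements these steps with PCL filters (pass-through/crop, radius outlier removal, and voxel-grid downsampling). Purely for illustration, the NumPy/SciPy sketch below reproduces the same three steps on an N×3 point array; all thresholds are placeholder values.

```python
# filter_cloud.py -- illustrative NumPy/SciPy version of the three PCL steps.
import numpy as np
from scipy.spatial import cKDTree


def filter_cloud(points, robot_height=0.4, ground_z=-0.05,
                 min_range=0.3, radius=0.5, min_neighbors=5, leaf=0.05):
    """points: (N, 3) array of x, y, z in the robot's base frame."""
    # 1. Drop points above the robot, below the ground, or too close to it.
    z = points[:, 2]
    dist = np.linalg.norm(points[:, :2], axis=1)
    points = points[(z <= robot_height) & (z >= ground_z) & (dist >= min_range)]

    # 2. Radius outlier removal: keep points with enough nearby neighbors
    #    (the count returned by the KD-tree includes the point itself).
    tree = cKDTree(points)
    counts = tree.query_ball_point(points, r=radius, return_length=True)
    points = points[counts > min_neighbors]

    # 3. Voxel-grid downsampling: keep one representative point per voxel.
    voxels = np.floor(points / leaf).astype(np.int64)
    _, idx = np.unique(voxels, axis=0, return_index=True)
    return points[idx]
```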

Below is a point cloud map generated by RTAB-Map:

Below is a point cloud map after the data was filtered:

In addition, point cloud filtering enabled the Jackal to go under the table. Instead of detecting the table as one large obstacle, the robot detected it as two obstacles (table legs) and determined that there was enough space available to go through.

Future Work

  • Implement an exploration algorithm, such as frontier exploration, or utilize reinforcement learning to develop an exploration policy
  • Perform LiDAR-camera fusion to create a more complete and accurate representation of the environment

View Project on GitHub