In autonomous mobile robotics, sensor technology and data fusion largely determine how well a robot can move around and find its way in its environment. Here, we explore the mapping, navigation and localisation technologies that give our autonomous security robots their agility and precision.
Data fusion involves combining information from different sources to build a single dataset that is richer and more reliable than any individual source on its own.
For our outdoor surveillance robots, this means fusing multimodal sensor data. By combining visual, LIDAR, ultrasonic and other sensors, the robots compensate for the limitations of each individual technology.
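To illustrate the principle, here is a minimal sketch (not the GR100's production code) of the simplest form of fusion: inverse-variance weighting of two noisy range readings, where the more reliable sensor naturally dominates the result.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Inverse-variance weighted fusion of independent noisy estimates.

    Sensors with lower variance (higher confidence) contribute more,
    so the fused estimate is never worse than the best single sensor.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_mean = np.sum(weights * means) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_mean, fused_variance

# Example: a LIDAR return (accurate) and an ultrasonic reading (noisy)
# measuring the same obstacle
mean, var = fuse_estimates(means=[4.02, 4.35], variances=[0.01, 0.25])
print(f"fused range: {mean:.2f} m, variance: {var:.4f}")
```

The fused variance is always smaller than the smallest input variance, which is exactly why combining sensors pays off even when one of them is much noisier than the others.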
The first cornerstone of autonomous mobility is robotic mapping. Our autonomous security robots rely on a technology that lets them build a map of their environment while locating themselves within it: SLAM, or Simultaneous Localization and Mapping.
SLAM merges data from several complementary sensors, typically LIDAR, cameras, inertial units and wheel odometry.
On our GR100 robot, high-definition LIDAR sensors combined with point-cloud processing algorithms deliver a three-dimensional perception of the environment. This real-time mapping, built on multi-sensor data fusion, gives the robot the precise spatial representation it needs for advanced navigation.
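As a simplified illustration of the mapping half of SLAM, the sketch below (hypothetical code, not the GR100's software) accumulates LIDAR returns into a flat 2D occupancy grid; a production system would work with 3D point clouds and estimate the robot's pose at the same time.

```python
import numpy as np

def update_occupancy_grid(grid, pose, ranges, angles, resolution=0.1):
    """Mark LIDAR returns as occupied cells in a 2D occupancy grid.

    grid       -- 2D numpy array of occupancy counts (map origin at centre)
    pose       -- (x, y, heading) of the robot in metres / radians
    ranges     -- measured distance of each LIDAR beam, in metres
    angles     -- beam angles relative to the robot heading, in radians
    resolution -- size of one grid cell, in metres
    """
    x, y, heading = pose
    cx, cy = grid.shape[1] // 2, grid.shape[0] // 2  # grid origin
    for r, a in zip(ranges, angles):
        # Project the beam endpoint into world coordinates
        wx = x + r * np.cos(heading + a)
        wy = y + r * np.sin(heading + a)
        # Convert to grid indices and accumulate evidence of occupancy
        gx, gy = cx + int(wx / resolution), cy + int(wy / resolution)
        if 0 <= gx < grid.shape[1] and 0 <= gy < grid.shape[0]:
            grid[gy, gx] += 1
    return grid

grid = np.zeros((200, 200))               # 20 m x 20 m map at 0.1 m/cell
angles = np.linspace(-np.pi, np.pi, 360)  # one full LIDAR sweep
ranges = np.full(360, 5.0)                # toy data: walls 5 m away all round
grid = update_occupancy_grid(grid, (0.0, 0.0, 0.0), ranges, angles)
```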
Precise localisation is the holy grail of autonomous robotics. As with mapping, it relies on merging data from several sensors so the robot can position itself accurately: GPS, inertial sensors, a geo-referenced map, odometry, LIDAR and cameras.
The localisation data is processed with an extended Kalman filter, a data fusion algorithm that minimises error by estimating the state of the system from noisy measurements, weighting each source according to its uncertainty.
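The sketch below shows the core predict/update cycle of an extended Kalman filter in a deliberately simplified form: a planar pose state driven by wheel odometry (the nonlinear part, hence the Jacobian) and corrected by GPS position fixes. All noise values are illustrative assumptions, not the GR100's actual tuning.

```python
import numpy as np

class LocalisationEKF:
    """Minimal extended Kalman filter fusing odometry with GPS fixes.

    State: [x, y, heading]. The unicycle motion model is nonlinear in
    the heading, so the prediction step linearises it with a Jacobian.
    """

    def __init__(self):
        self.x = np.zeros(3)                  # state estimate
        self.P = np.eye(3)                    # state covariance
        self.Q = np.diag([0.05, 0.05, 0.01])  # process noise (odometry drift)
        self.R = np.diag([2.0, 2.0])          # GPS noise (~1.4 m std dev)

    def predict(self, v, omega, dt):
        """Propagate the state with odometry (v: speed, omega: yaw rate)."""
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + omega * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_gps(self, z):
        """Correct the prediction with a GPS position fix z = [x, y]."""
        H = np.array([[1, 0, 0], [0, 1, 0]])  # GPS observes position only
        innovation = z - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = LocalisationEKF()
ekf.predict(v=1.0, omega=0.1, dt=0.1)   # dead-reckon between fixes
ekf.update_gps(np.array([0.12, 0.01]))  # correct when a GPS fix arrives
```

When GPS drops out, only the prediction step runs and uncertainty grows; the next fix pulls the estimate back into line. That behaviour is what gives the tolerance to GPS losses described in the next paragraph.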
The GR100’s localisation comes from a high-precision GPS receiver coupled with inertial sensors, both feeding this data fusion algorithm. The approach lets the robot position itself with a minimal margin of error, and because the estimate draws on multiple sources, positioning stays accurate even through GPS outages.
The navigation of autonomous robots is based on a clever combination of sensory data and sophisticated algorithms. Intelligent navigation systems integrate multiple layers of information to make informed decisions.
Key elements include a continuously updated map of the surroundings, trajectory planning towards the assigned goal, and real-time detection and avoidance of obstacles.
The GR100’s autonomous navigation is based on advanced trajectory-planning algorithms running on dedicated processing units. High-precision IMU motion sensors, integrated with visual perception systems, feed a predictive navigation model. Fusing these sources lets the robot make dynamic decisions, optimising its trajectory around detected obstacles and constantly changing environmental conditions.
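The GR100's own planner is not detailed here, but grid-based search gives a feel for how trajectory planning works. The sketch below runs the classic A* algorithm over an occupancy grid; when a new obstacle is detected, marking its cells and re-running the search yields the dynamic re-planning behaviour described above. The toy map and all names are illustrative.

```python
import heapq

def astar(grid, start, goal):
    """A* path planning on a 2D occupancy grid.

    grid  -- list of lists, 0 = free cell, 1 = obstacle
    start -- (row, col) start cell
    goal  -- (row, col) goal cell
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]  # (f, g, cell, path)
    visited = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(frontier, (g + 1 + heuristic((r, c)), g + 1,
                                          (r, c), path + [(r, c)]))
    return None

# Toy map: a wall with one gap; the planner routes around it
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, start=(0, 0), goal=(2, 0)))
```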
Data fusion is the nerve centre that powers the intelligence of our autonomous robots. This approach enables us to combine heterogeneous data to obtain a complete view of the environment.
As technology and data fusion continue to advance, the future of autonomous mobile robotics promises even more revolutionary advances, opening up new prospects for securing private, industrial and high-risk sites.
At Running Brains Robotics, advanced mapping, precise localisation and intelligent navigation, backed by the expertise of our engineers, combine into a proactive and cost-effective safety solution for our customers.
Our promise? To keep advancing the technologies in our robots, guaranteeing ongoing reliability and laying the foundations for a new era in private security and safety. Together, we can anticipate tomorrow’s challenges with precision and responsiveness!
Head of Marketing & Communication at Running Brains Robotics