Localization

Localization is mandatory for all autonomous systems: a robot has to know its own position within the given environment. Furthermore, it must distinguish between obstacles and traversable paths.

Map and Localize

With Simultaneous Localization and Mapping (SLAM), the robot perceives its environment and creates a map. The robot localizes itself relative to its starting pose; as it moves and acquires new data, its knowledge of the environment grows and is fused into a map. When a LIDAR is used, the resulting map resembles a ground plan.
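
To illustrate the basic principle described above (not the Evo Localizer SDK or any Evocortex product), the following Python sketch shows how LIDAR scan endpoints, combined with the robot's estimated pose relative to its starting pose, can be fused cell by cell into a 2D occupancy grid that resembles a ground plan. All names and parameters are illustrative assumptions.

```
import numpy as np

def integrate_scan(grid, pose, ranges, angles, resolution=0.05, origin=(0.0, 0.0)):
    """Mark the endpoints of one LIDAR scan as occupied cells in a 2D grid.

    pose:           (x, y, theta) of the robot relative to the starting pose
    ranges, angles: polar scan measurements in the robot frame
    resolution:     cell size in metres; origin: map origin in metres
    """
    x, y, theta = pose
    # Transform scan endpoints from the robot frame into the map frame.
    px = x + ranges * np.cos(theta + angles)
    py = y + ranges * np.sin(theta + angles)
    # Convert metric coordinates to grid cell indices.
    ix = ((px - origin[0]) / resolution).astype(int)
    iy = ((py - origin[1]) / resolution).astype(int)
    # Keep only endpoints that fall inside the map bounds.
    valid = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
    grid[iy[valid], ix[valid]] = 1  # 1 = occupied
    return grid

# Example: a 20 m x 20 m map at 5 cm resolution, one scan taken 10 m from the origin.
grid = np.zeros((400, 400), dtype=np.uint8)
angles = np.linspace(-np.pi, np.pi, 360)
ranges = np.full(360, 3.0)  # walls measured 3 m away in every direction
grid = integrate_scan(grid, (10.0, 10.0, 0.0), ranges, angles)
```

In a full SLAM system the pose itself is also estimated (for example by matching each new scan against the map built so far) rather than taken from odometry alone; the sketch only covers the map-building step.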

Once the map has been created, it can be used by a single robot as well as by a whole robot fleet. If a 2D LIDAR is used for localization, the environment should change only slightly; otherwise the robustness of the localization cannot be guaranteed.

With its camera-based localization approach, Evocortex opens up a new path: LIDARs account for a large share of the cost of today's autonomous robots when they are used for localization. Camera systems, in contrast, are comparatively inexpensive and, combined with intelligent software, can provide higher precision and robustness.

For mapping and localization, Evocortex offers different solutions: the Evo Localizer SDK enables LIDAR-based mapping and localization for a fleet of robots. The camera-based solution, the Evocortex Localization Module (ELM), uses two cameras facing the ground and the ceiling to ensure high precision and robustness.

Related Topics

EvoLocalization Module