
Localization

Where am I?

Localization is a mandatory task for every mobile autonomous system: a robot has to know its own position within a given environment in order to interact with it. Furthermore, in order to move, it must be able to distinguish between obstacles and traversable paths.

Localize and Map

Using Simultaneous Localization and Mapping (SLAM), a robot perceives its environment and localizes itself relative to its starting pose. While the robot moves and acquires new data, it accumulates knowledge about its environment. This data is then fused into a map of the environment. When a LIDAR is used, the resulting map resembles a floor plan.
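
As a rough illustration of the map-fusion step described above, the sketch below updates a 2D occupancy grid with a single LIDAR scan, assuming the robot pose in the map frame is already known. The grid size, resolution, and log-odds increments are illustrative values and are not tied to any Evocortex product.

```python
import numpy as np

# Grid parameters (illustrative values only).
GRID_SIZE = 200          # cells per side
RESOLUTION = 0.05        # metres per cell
ORIGIN = GRID_SIZE // 2  # map origin sits at the grid centre

log_odds = np.zeros((GRID_SIZE, GRID_SIZE))  # 0 = unknown
L_OCC, L_FREE = 0.85, -0.4                   # log-odds increments (tuning values)

def world_to_cell(x, y):
    """Convert metric map coordinates to grid indices."""
    return int(round(x / RESOLUTION)) + ORIGIN, int(round(y / RESOLUTION)) + ORIGIN

def integrate_scan(pose, ranges, angles, max_range=10.0):
    """Fuse one LIDAR scan into the grid, given the robot pose (x, y, heading)."""
    px, py, ptheta = pose
    for r, a in zip(ranges, angles):
        if not np.isfinite(r) or r > max_range:
            continue  # skip invalid or out-of-range beams
        # Beam endpoint in the map frame.
        hx = px + r * np.cos(ptheta + a)
        hy = py + r * np.sin(ptheta + a)
        # Cells along the beam (excluding the endpoint) are observed as free.
        for f in np.linspace(0.0, 1.0, int(r / RESOLUTION), endpoint=False):
            cx, cy = world_to_cell(px + f * (hx - px), py + f * (hy - py))
            if 0 <= cx < GRID_SIZE and 0 <= cy < GRID_SIZE:
                log_odds[cy, cx] += L_FREE
        # The cell the beam ended in is observed as occupied.
        cx, cy = world_to_cell(hx, hy)
        if 0 <= cx < GRID_SIZE and 0 <= cy < GRID_SIZE:
            log_odds[cy, cx] += L_OCC
```

In a full SLAM system the pose itself is estimated jointly with the map, for example by scan matching against the grid built so far; the sketch only covers the fusion of an already registered scan.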

Once created, the map can be used either by a single robot or by an entire robot fleet. If a 2D LIDAR is used for localization, future changes to the environment should be kept to a minimum to ensure robust localization; otherwise, it might become necessary to repeat the mapping process.
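
Localization against an existing map is commonly done with scan matching or a particle filter. The sketch below illustrates one step of generic Monte Carlo localization against a stored map; it is not the Localizer SDK API, and the `scan_likelihood` callback as well as the noise levels are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_step(particles, odometry, scan_likelihood):
    """One predict / update / resample cycle of Monte Carlo localization.

    particles:       (N, 3) array of pose hypotheses (x, y, heading) in the map frame.
    odometry:        (dx, dy, dtheta) motion since the last step, in the robot frame.
    scan_likelihood: callable scoring how well the current scan matches the
                     stored map at a given pose (assumed to be provided).
    """
    dx, dy, dtheta = odometry
    n = len(particles)

    # Predict: apply the odometry to every hypothesis, with added noise to
    # model motion uncertainty (noise levels are illustrative).
    cos_t, sin_t = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += cos_t * dx - sin_t * dy + rng.normal(0.0, 0.02, n)
    particles[:, 1] += sin_t * dx + cos_t * dy + rng.normal(0.0, 0.02, n)
    particles[:, 2] += dtheta + rng.normal(0.0, 0.01, n)

    # Update: re-weight each hypothesis by how well the scan fits the map there.
    weights = np.array([scan_likelihood(p) for p in particles]) + 1e-12
    weights /= weights.sum()

    # Resample: keep likely hypotheses, drop unlikely ones.
    return particles[rng.choice(n, size=n, p=weights)]
```

The more the real environment diverges from the stored map, the lower the scan likelihoods become, which is why large changes eventually make remapping necessary.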

LIDAR and Camera

With its camera-based localization approach, Evocortex breaks new ground: LIDARs used for localization account for a significant portion of the cost of a contemporary autonomous robot. Camera systems, in contrast, are comparatively cheap and can, combined with intelligent software, provide higher precision and robustness.

Evocortex provides two solutions for mapping and localization: the Localizer SDK enables LIDAR-based mapping and localization for a fleet of robots, while the camera-based Evocortex Localization Module (ELM) uses two cameras, one facing the ground and one facing the ceiling, to ensure high precision and robustness.

Observing a SLAM process from an outside perspective

Observing a SLAM process from the robot's perspective

Localizer SDK, ELM, Navigation, Robotics