Chapter 2
Advanced Sensing and AI-Driven Perception
The effectiveness of AMRs in unstructured environments depends on their ability to continuously perceive, interpret, and respond to complex inputs. These systems must detect and classify objects, extract spatial geometry, track motion, respond to dynamic changes in their surroundings, and execute navigational decisions with low latency. As such, robust perception demands a layered architecture of sensing technologies that each address different needs.
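To make the layered idea concrete, the sketch below outlines one perception-and-decision cycle in Python. The camera, depth_sensor, tracker, and planner interfaces, along with the 50 ms latency budget, are hypothetical placeholders used only for illustration, not a specific robot framework or vendor API:

import time

def perception_cycle(camera, depth_sensor, tracker, planner, budget_s=0.05):
    """Run one layered perception cycle; every argument is a hypothetical interface."""
    start = time.monotonic()

    frame = camera.read()                       # image layer: raw pixels for detection
    detections = camera.detect(frame)           # classify objects and label regions
    ranges = depth_sensor.read()                # depth layer: spatial geometry
    obstacles = depth_sensor.localize(detections, ranges)  # lift detections to 3D
    tracked = tracker.update(obstacles)         # temporal layer: estimate motion
    planner.replan(tracked)                     # respond to dynamic changes

    # Enforce the low-latency requirement: fall back to a safe behaviour
    # if this cycle exceeded its time budget.
    if time.monotonic() - start > budget_s:
        planner.fallback_stop()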
Rolling shutter image sensors provide high-resolution and high dynamic range (HDR) visual data and are widely used for object recognition and semantic segmentation. However, their sequential exposure mechanism introduces geometric distortion during rapid motion. In high-speed or variable lighting conditions, global shutter sensors offer clear advantages. These sensors expose all pixels simultaneously to eliminate motion artifacts and enable accurate shape detection under fast motion or flickering artificial light.
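The size of that distortion can be estimated from the sensor's line readout time, since each row is exposed slightly later than the one above it. The short Python sketch below illustrates the calculation; the line time, row count, and object speed are assumed example values rather than the specification of any particular sensor:

def rolling_shutter_skew_px(object_speed_px_per_s: float,
                            line_readout_time_s: float,
                            image_height_rows: int) -> float:
    """Horizontal offset, in pixels, between the first and last rows of the frame.

    Each row starts its exposure line_readout_time_s after the previous one, so a
    vertical edge moving sideways at object_speed_px_per_s appears sheared.
    """
    frame_readout_time_s = line_readout_time_s * image_height_rows
    return object_speed_px_per_s * frame_readout_time_s

# Assumed example: an edge sweeping across the image at 2,000 px/s on a 1,080-row
# sensor with a 10-microsecond line time shears by roughly 21.6 px top to bottom.
# A global shutter sensor exposes every row at once, so the shear is zero.
print(f"Skew: {rolling_shutter_skew_px(2000.0, 10e-6, 1080):.1f} px")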
For depth perception, AMRs typically integrate one or more of four methods: stereo vision, indirect time-of-flight (iTOF), direct time-of-flight (dTOF, as used in LiDAR), and ultrasonic sensing.
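Each of these methods reduces to a simple ranging relation: stereo vision triangulates from disparity, iTOF infers distance from the phase shift of a modulated signal, and dTOF and ultrasonic sensing time a pulse's round trip. The Python sketch below illustrates the four relations; the focal length, baseline, modulation frequency, and timing values are assumed examples, not sensor specifications:

import math

C_LIGHT = 299_792_458.0   # speed of light, m/s
V_SOUND = 343.0           # speed of sound in air at about 20 degrees C, m/s

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo vision: Z = f * B / d, with f in pixels, B in metres, d in pixels."""
    return focal_px * baseline_m / disparity_px

def itof_depth_m(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Indirect time-of-flight: distance from the phase shift of a modulated wave."""
    return C_LIGHT * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def dtof_depth_m(round_trip_s: float) -> float:
    """Direct time-of-flight (LiDAR): half the round-trip time of a light pulse."""
    return C_LIGHT * round_trip_s / 2.0

def ultrasonic_depth_m(round_trip_s: float) -> float:
    """Ultrasonic: half the round-trip time of a sound pulse."""
    return V_SOUND * round_trip_s / 2.0

# Assumed example values: 700 px focal length, 0.12 m baseline, 20 px disparity;
# pi/2 phase shift at 20 MHz modulation; 20 ns light round trip; 6 ms sound round trip.
print(stereo_depth_m(700, 0.12, 20))       # ~4.2 m
print(itof_depth_m(math.pi / 2, 20e6))     # ~1.87 m
print(dtof_depth_m(20e-9))                 # ~3.0 m
print(ultrasonic_depth_m(6e-3))            # ~1.03 m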
“With sensor fusion, adaptive safety zones, predictive AI, and emergency routines, AMRs can navigate efficiently while maintaining a high standard of human safety.”
José Carlos García Moreno
Autonomous Navigation Engineer, PAL Robotics