Sensors and Sensing in Robotics
Prerequisites
This page has no specific prerequisites beyond knowing what a robot consists of.
General Motivation
From collaborative factory arms to drones and humanoids, every robot relies on sensing to perceive its environment and to control its own actions. Sensors can acquire and process information from a variety of sources, from recording motor displacement to detecting light, sound and force. They convert this information into (usually) digital signals that a computer can then further process and analyse.
Regardless of the task, meaningful robot actions begin with accurate perception of both the robot’s own state and its surroundings. Without reliable sensory feedback, the most sophisticated control algorithm degenerates into blind open-loop commands. Conversely, well-designed sensing turns a simple robotic platform into a situationally aware agent that can:
- Estimate its own state (proprioception) – joint encoders, IMUs and force sensors provide the data to infer pose, velocities and loads, yielding an internal state estimate that closes the control loop.
- Perceive the external world (exteroception) – cameras, lidars, radars and tactile arrays reveal obstacles, objects and humans, enabling navigation, manipulation and safe collaboration.
- Adapt to uncertainty – no mathematical model is perfect; sensors observe the difference between expected and actual behaviour and let the controller correct in real time.
- Share information with higher-level reasoning – mapping, planning and learning modules all begin with raw observations turned into meaningful features.
Early robotics tried to side-step sensing by assuming perfectly known environments. Modern applications, from warehouse fulfilment to planetary exploration, demonstrate that autonomy becomes feasible only when perception, estimation and control form a tight feedback cycle.
Conceptual Questions
Question 1: What is the PRIMARY reason every robot needs sensing?
Question 2: Which pair correctly matches the type of perception with WHAT it measures?
Question 3: Which of these sensors is MOSTLY used for proprioception?
Question 4: Why is sensor feedback essential even when we have a precise mathematical model of the robot?
Here are two examples of how sensors are used on state-of-the-art robots.

Examples of sensors mounted on an industrial arm; Credit: EPFL/LASA Laboratory
An industrial robot arm tasked to maneuver a shovel must be endowed with motor encoders for accurately positioning and orienting the shovel, force/torque sensors at its end-effector to sense and react to changes in the stiffness of the material, and tactile sensors at its fingertips to guarantee a tight grip on the shovel.

This ICub Humanoid Robot is endowed with high resolution binocular cameras for 3-dimensional rendering of the world and tactile sensors to perceive touch at its fingertips. All these sensors are necessary to reach and grab the red ball. Credit: EPFL/LASA Laboratory
A humanoid robot may be tasked to interact with its environment in more ways than would an industrial robot. In addition to motor encoders, force/torque and tactile sensors, it needs an IMU to measure its global orientation in space. Cameras and microphones are, on the other hand, crucial to allow the robot to interact in human-inhabited environments.
Course Content
What is a sensor
A sensor is a device that detects or measures a physical property, the measurand (e.g., distance, light, temperature, pressure, motion), and converts it into a signal that can be read, interpreted, and used by a computer.

Illustration of the sensing principle: a physical phenomenon (light) interacts with a sensor, which converts it into an electrical signal that can be processed. Image source : https://fity.club/lists/suggestions/types-of-electrical-sensors/
Example:
- A light sensor detects the intensity of light and converts it into a varying electrical signal.
- An ultrasonic sensor measures the time it takes for a sound pulse to bounce back from an object, then converts that into a distance value.
- An accelerometer measures acceleration and outputs a voltage proportional to the force it experiences.
In robotics, sensors are essential because they provide the link between the robot and its environment. Without them, a robot would be “blind” and unable to adapt.
Video introduction
Here is a short video explaining what sensors are and how they are used.
What is a Sensor? Different Types of Sensors, Applications . YouTube video, 19 August 2020. Available at: https://www.youtube.com/watch?v=XI49uFm5HRE&t
The Ideal Sensor
To understand real sensors, it helps to imagine the ideal sensor, a theoretical device that:
| Property | Ideal Behaviour |
|---|---|
| Perfect accuracy | Measures the true value with no error |
| Noise-free | Output has zero noise (no random fluctuations) |
| Infinite resolution | Detects the smallest possible change in the measurand |
| Instantaneous response | Responds to changes with no delay or lag |
| Selectivity | Responds only to the target measurand |
| Immunity | Ignores all other influences (temperature, vibrations …) |
| Non-invasiveness | Leaves the measurand unchanged |
| Perfect model | Known, usually linear, $y \propto x$ |
| Universal conditions | Operates in all environments (temperature, lighting, etc.) |
| Unlimited lifetime | Never degrades or wears out over time |
The ideal sensor does not exist, but it is a useful reference. When designing a robot, it is useful to compare real sensors against this “perfect” baseline to reason about range, resolution, noise, latency, linearity, drift, and environmental robustness.
Sensor encoding: analog to digital converter
Sensors usually employ an Analog-to-Digital Converter (ADC) to convert a continuous analog input signal into a discrete digital value that a computer can interpret. An ADC samples an analog quantity (such as a voltage), then quantizes that value into one of a finite number of digital codes. For an N-bit ADC, there are $2^N$ possible digital codes. For instance, a 3-bit ADC produces 8 codes: 000, 001, 010, 011, 100, 101, 110, 111. Each code corresponds to a small range of the analog input, often called a quantization level or bin.
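As an illustration, the quantization step can be sketched in a few lines of Python. The function name and the truncating (floor) convention are illustrative assumptions, not part of any particular ADC's specification.

```python
def adc_code(v, v_ref=5.0, n_bits=3):
    """Quantize an analog voltage into an N-bit ADC code (truncating ADC)."""
    n_codes = 2 ** n_bits
    lsb = v_ref / n_codes                  # width of one quantization bin
    code = int(v / lsb)                    # bin index the voltage falls into
    return min(max(code, 0), n_codes - 1)  # clamp to the valid code range

# A 3-bit ADC over 0-5 V has 8 codes, each 0.625 V wide.
print(adc_code(0.0))  # -> 0 (binary 000)
print(adc_code(2.6))  # -> 4 (binary 100)
print(adc_code(5.0))  # saturates at 7 (binary 111)
```

Note how any voltage at or above full scale saturates at the top code: information outside the ADC's range is simply lost.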
Sensor imperfections
Real sensors are always imperfect. They come with limitations and trade-offs, such as:
- Noise : Random variations in the signal, making readings uncertain.
  Example: an IMU yaw reading jitters even while the robot is stationary.
- Limited range : Every sensor has minimum and maximum values it can detect.
  Example: an ultrasonic module may only work from ~2 cm to ~4 m.
- Finite resolution : Sensors digitized through an Analog-to-Digital Converter (ADC) can only detect changes above a threshold (quantization).
  Example: a 10-bit ADC over 3.3 V has ≈3.2 mV per LSB.
- Accuracy vs precision : A sensor may be consistent but biased, or accurate on average but inconsistent.
  Example: readings are tightly clustered but offset by +0.5 °C.
- Latency : Some sensors take time to respond or update slowly.
  Example: GPS typically updates at 1–10 Hz; barometers often require filtering.
- Environmental sensitivity : Performance may drop under certain conditions (lighting, temperature, materials, EMI, vibrations).
  Example: cameras in low light; sonar on soft or angled surfaces.
Trade-off examples
- A LiDAR provides very accurate distance maps but is expensive and power-hungry.
- An ultrasonic sensor is cheap and robust, but has low resolution and can be confused by certain materials.
- A camera captures rich information but requires heavy processing power (and favorable lighting).
Key takeaway: A sensor is the robot’s window into the physical world. The “ideal” sensor helps us define what we want, but real sensors always involve trade-offs. Understanding those trade-offs is the first step to choosing the right sensor for a robotic application.
Conceptual Questions
Question 1: What is the primary role of a sensor in robotics?
Question 2: In the chain Physical Quantity → Sensor → Signal → A/D → Digital data, what does the A/D step represent?
Question 3: Which of the following is not an example of a sensor?
Question 4: Which statement correctly distinguishes accuracy and precision?
Question 5: A camera struggles in low light. Which limitation best describes this?
Characteristics of Sensors
Units & Scales
Every sensor output is ultimately expressed in a physical unit defined by the International System of Units (SI).
| Quantity | Base unit (symbol) | Typical sensor example |
|---|---|---|
| Length | metre (m) | Laser range-finder |
| Mass | kilogram (kg) | Load cell |
| Time | second (s) | Real-time clock |
| Electric current | ampere (A) | Current probe |
| Temperature | kelvin (K) or °C | Thermistor, RTD |
| Luminous intensity | candela (cd) | Photodiode |
| Amount of substance | mole (mol) | Gas sensor |
For robots, it’s important to always check what units a sensor outputs and whether conversion or calibration is needed.
Example:
A temperature sensor might output a voltage that corresponds to °C, but you need to apply a formula (e.g., 10 mV per °C).
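Such a conversion can be sketched in code. The snippet below assumes a TMP36-style scaling (500 mV output at 0 °C, changing by 10 mV per °C); the function name is illustrative.

```python
def tmp36_celsius(v_out):
    """TMP36-style conversion: 500 mV offset at 0 degC, 10 mV per degC."""
    return (v_out - 0.5) * 100.0  # 10 mV/degC means 100 degC per volt

print(tmp36_celsius(0.75))  # 750 mV -> 25.0 degC
```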
Measurement Range
Range is the interval $$[x_{\min},\,x_{\max}]$$ within which the sensor maintains its specified performance.
Example A TMP36 analog temperature sensor typically has a range of
$$x_{\min} = -40\,^\circ\mathrm{C} \quad \text{to} \quad x_{\max} = 125\,^\circ\mathrm{C}.$$
Temperatures beyond this window may cause incorrect readings or permanent damage.
Key rules:
- A wider range prevents saturation but often reduces resolution.
- Outside the range, data is invalid.
Resolution
Resolution is the smallest input increment $\Delta x_{\text{min}}$ a system can detect.
- For an ADC-based sensor, the resolution is determined by the number of bits $N$ used to encode the signal:
$$\Delta x_{\text{min}} = \tfrac{\text{FS}}{2^N}$$
where $N$ is the number of bits and FS is the Full Scale of the sensor measurement.
A measurement change smaller than $\Delta x_{\text{min}}$ cannot be perceived by the sensor.
Example A potentiometer placed on a motor shaft can measure the displacement of the robot joint moved by that motor. The potentiometer outputs a continuous analog voltage (e.g., 0–5 V) proportional to the angle, which can then be converted into a digital signal with an ADC. If one uses an 8-bit ADC and the joint can move ±90 degrees, the resolution is: $$\Delta x_{\text{min}} = \frac{180^\circ}{2^{8}} \approx 0.7^\circ.$$ A resolution of about 0.7 degrees may be insufficient for generating highly accurate displacements. In a robot arm, such angular imprecision accumulates along the kinematic chain, resulting in significant positional error at the end effector.
- For an analog sensor, the resolution is determined by the sensor’s physics and construction. For instance, the sensor’s material properties (e.g. resistive or capacitive elements) may limit what it can respond to. Additionally, mechanical tolerances (e.g., friction, backlash) may further reduce resolution.
The resolution and measurement range of a sensor are usually documented in the manufacturer's datasheet.
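The resolution formula can be checked numerically; the sketch below (illustrative function name) reproduces the two ADC examples from this section.

```python
def adc_resolution(full_scale, n_bits):
    """Smallest detectable input change: FS / 2**N."""
    return full_scale / 2 ** n_bits

# Joint-angle example: a +/-90 deg span through an 8-bit ADC.
print(adc_resolution(180.0, 8))        # 0.703125 deg per LSB
# Voltage example from earlier: a 10-bit ADC over 3.3 V.
print(adc_resolution(3.3, 10) * 1000)  # ~3.22 mV per LSB
```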
Accuracy & Precision
When evaluating a sensor, two related but distinct concepts often come up: accuracy and precision. These terms are sometimes confused, but they describe different aspects of measurement quality.
-
Accuracy is about how close the average measurement is to the true or reference value.
$$ \text{Accuracy Error} = \big|\bar{y} - y_{\text{true}}\big| $$
-
Precision (Repeatability) is about how close repeated measurements are to one another, regardless of whether they are correct on average.
$$ \sigma_{\text{rep}} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(y_i-\bar{y})^2}\,. $$
The dartboard analogy below is a classic way to illustrate this difference:

Precision and accuracy. Website image. Available at: https://www.antarcticglaciers.org/glacial-geology/dating-glacial-sediments-2/precision-and-accuracy-glacial-geology/
| Quadrant | Accuracy | Precision | Interpretation |
|---|---|---|---|
| Top-left | Low | Low | Measurements scattered & far from truth. |
| Top-right | Low | High | Tight cluster offset from truth → systematic bias. |
| Bottom-left | High | Low | Centred on truth but large scatter → high random noise. |
| Bottom-right | High | High | Ideal sensor: tight cluster around true value. |
Key take-aways:
- A sensor can be precise yet inaccurate (systematic bias) or accurate yet imprecise (large random scatter).
- Calibration removes bias to improve accuracy; filtering averages out noise to improve precision.
- While a sensor may be highly accurate, the finite resolution of its ADC can limit the effective accuracy to one quantization step.
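The two definitions above translate directly into code. The sketch below (illustrative names, simulated readings) computes the accuracy error and the repeatability standard deviation for the "precise but biased" case:

```python
import math

def accuracy_error(readings, true_value):
    """Accuracy error: |mean(readings) - true value|."""
    return abs(sum(readings) / len(readings) - true_value)

def precision_std(readings):
    """Precision (repeatability): sample standard deviation of the readings."""
    n = len(readings)
    mean = sum(readings) / n
    return math.sqrt(sum((y - mean) ** 2 for y in readings) / (n - 1))

# Tight cluster offset from the truth: precise but inaccurate (systematic bias).
biased = [25.49, 25.51, 25.50, 25.50]  # true temperature is 25.0 degC
print(accuracy_error(biased, 25.0))    # ~0.5 degC bias
print(precision_std(biased))           # ~0.008 degC scatter
```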
Video Explaining Accuracy and Precision
A short video that illustrates the concepts of accuracy and precision in the context of sensors.
Accuracy and Precision | It’s Easy! . YouTube video, 06.11.2017. Available at: https://www.youtube.com/watch?v=KEeSQvMCPLg
When controlling a robot, knowing the sensor's resolution, precision and accuracy is crucial, as it can constrain or even rule out certain actions. For example, if distances to objects cannot be measured with a resolution or accuracy finer than 10 cm, the robot must operate with great caution when moving close to objects.
Noise
Noise is any undesired variation added to a measurement. It limits how well we can estimate the true value, even when the sensor is otherwise “perfect.”
We model a measured signal $y(t)$ as: $$ y(t) \;=\; x(t) \;+\; b \;+\; \varepsilon(t) $$ where $x(t)$ is the true signal, $b$ is a (possibly time/temperature-dependent) bias (systematic error), and $\varepsilon(t)$ is random noise.
To understand and manage noise effectively, it is important to distinguish between random noise, which is unpredictable and varies from reading to reading, and systematic errors, which are repeatable biases built into the measurement process.
Random noise (stochastic, zero-mean)
Unpredictable jitter that causes repeated readings to fluctuate around the true value.
Examples
- A distance sensor reports 100.2, 99.8, 100.5, 100.1 cm while the target is fixed at 100 cm.
- A light sensor varies slightly due to mains flicker or transient shadows.
- An IMU yaw estimate wanders by about ±0.2° when the platform is stationary.
Mitigation
Averaging or low-pass filtering. If the standard deviation of single readings is $\sigma$, averaging $M$ independent readings yields approximately $$ \sigma_{\text{avg}} \approx \frac{\sigma}{\sqrt{M}}. $$ This reduces scatter but increases latency.
Systematic errors (deterministic, repeatable)
Consistent deviations that bias measurements in a fixed direction. Averaging does not remove these; calibration is required.
Examples
- Bias/offset: A thermometer consistently reads $+2\,^\circ\mathrm{C}$ high.
- Scale factor error: Wheel odometry overestimates distance because the wheel diameter is set too large, reporting $1.02\times$ the true travel.
- Misalignment: A range sensor tilted upward returns longer distances than actual.
- Drift: A sensor's output shifts gradually with warm-up or supply voltage changes.
Mitigation
Zeroing and multi-point calibration (to remove bias and correct scale), improved mounting/alignment, temperature compensation, stable power, and appropriate warm-up time.
Rule of thumb
Use calibration to remove systematic errors; use filtering/averaging to reduce random noise.
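The rule of thumb can be illustrated numerically. The sketch below (simulated data, illustrative names) shows that averaging shrinks random scatter by roughly $\sqrt{M}$ while a systematic bias survives untouched.

```python
import math
import random

random.seed(0)
true_value, bias, sigma = 100.0, 2.0, 0.5  # cm; fixed bias plus random noise

def read(m):
    """Average m noisy readings of the form x + b + eps."""
    return sum(true_value + bias + random.gauss(0, sigma) for _ in range(m)) / m

def sample_std(xs):
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / (len(xs) - 1))

singles = [read(1) for _ in range(2000)]
averaged = [read(25) for _ in range(2000)]

# Averaging 25 samples shrinks random scatter by about sqrt(25) = 5x ...
print(sample_std(singles), sample_std(averaged))
# ... but the +2 cm systematic bias survives; only calibration removes it.
print(sum(averaged) / len(averaged) - true_value)
```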
Videos
Two short videos provide additional context: the first introduces the concept of noise in sensors, while the second explains the distinction between random and systematic errors.
Whats’s the Noise about Sensor Technology? . YouTube video, 17.11.2016. Available at: https://www.youtube.com/watch?v=cPm3ii1ngmw
Random and systematic error explained: from fizzics.org . YouTube video, 15.02.2021. Available at: https://www.youtube.com/watch?v=huDRfgbc1HA
Response Time & Bandwidth
A sensor’s dynamic performance determines how well it tracks changes over time. Two core notions are used:
- Response time: how quickly the output reacts to a change at the input (latency, rise/settling time).
- Bandwidth: the highest signal frequency the sensor and its electronics can follow with acceptable attenuation and phase lag.
First-order model
Many sensors can be approximated by a first-order low-pass system with time constant $\tau$: $$ y(t)=x_0\big(1-e^{-t/\tau}\big)\quad\text{(step input)}. $$ Common timing metrics include:
- Rise time $t_r$: time to move from 10% to 90% of the final value (about $2.2\tau$ for a first-order system).
- Settling time $t_s$: time to enter and remain within $\pm2\%$ of the final value (about $4\tau$).
- Latency: total end-to-end delay (sensor physics, internal filtering, communication, and processing).
The corresponding $-3$ dB bandwidth is $$ f_{3\text{dB}}\approx\frac{1}{2\pi\tau}, $$ the frequency at which amplitude falls to roughly $70\%$ and phase lag becomes appreciable.
Update rate, bandwidth, and latency are distinct
- Update/sample rate $f_s$: how often samples are produced (e.g., 100 Hz IMU, 30 fps camera).
- Bandwidth $f_{3\text{dB}}$: how rapidly the content may vary without being smoothed away.
- Latency: the delay before a change appears in the data stream.
A device may output at $f_s=1\,\text{kHz}$ yet exhibit a small bandwidth due to internal filtering, with several milliseconds of latency.
Sampling and aliasing
To represent a signal with highest relevant frequency $f_{\text{signal}}$, the sampling frequency should satisfy $$ f_s\ge 2\,f_{\text{signal}}\quad\text{(Nyquist criterion)}. $$ In feedback control, $f_s$ in the range $5\text{–}10\times f_{\text{signal}}$ is commonly selected to preserve phase margin. An anti-alias filter is typically applied so that content above $f_s/2$ is attenuated prior to sampling.
Effect of filtering
Filters reduce noise at the cost of delay. A simple $M$-point moving average introduces a group delay $$ \text{delay}\approx\frac{M-1}{2\,f_s}, $$ and lowers effective bandwidth roughly in proportion to $M$. Digital filter cutoffs should therefore be configured to avoid excessive lag when responsiveness is critical.
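The delay formula can be checked numerically. The sketch below (illustrative function names and sample rate) applies a causal $M$-point moving average to a step input: a longer window smooths more but the output settles later, consistent with the $(M-1)/(2 f_s)$ group delay.

```python
fs = 100.0  # sample rate in Hz (illustrative)

def moving_average_delay(m, fs):
    """Group delay of an M-point moving average: (M - 1) / (2 fs) seconds."""
    return (m - 1) / (2.0 * fs)

def moving_average(x, m):
    """Causal M-point moving average (shorter window while filling up)."""
    return [sum(x[max(0, i - m + 1):i + 1]) / min(i + 1, m)
            for i in range(len(x))]

# Step input: a longer window smooths more but settles later.
step = [0.0] * 10 + [1.0] * 40
for m in (1, 5, 21):
    y = moving_average(step, m)
    settled = next(i for i, v in enumerate(y) if v >= 0.99)
    print(m, moving_average_delay(m, fs), settled)
```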
Example 1 — Temperature probe
For $\tau=5\,\text{s}$, $t_s\approx20\,\text{s}$ and $f_{3\text{dB}}\approx\frac{1}{2\pi\cdot5}\approx0.032\,\text{Hz}$ (period $\sim31\,\text{s}$). Suitable for slow variations; unsuitable for fast control.
Example 2 — Quadcopter attitude
If body-rate content extends to $f_{\text{signal}}\approx10\,\text{Hz}$, an IMU bandwidth $\ge100\,\text{Hz}$ and sample rate $f_s\approx1\,\text{kHz}$ are typical. End-to-end latency $\lesssim10\,\text{ms}$ (preferably a few ms) supports stable control.
Example 3 — Ultrasonic ranging on a rover
Round-trip time to $4\,\text{m}$ is approximately $2\cdot4/343\approx0.023\,\text{s}$, plus processing, leading to tens of milliseconds latency and update rates around $10\text{–}20\,\text{Hz}$. Adequate for slow navigation; inadequate for high-speed avoidance.
Practical guidelines
- The highest relevant signal frequency $f_{\text{signal}}$ should be identified from task dynamics.
- Sensor bandwidth should satisfy $f_{3\text{dB}}\gtrsim2\,f_{\text{signal}}$ (preferably higher), and the sample rate $f_s\gtrsim5\text{–}10\,f_{\text{signal}}$.
- Latency should be budgeted across the entire pipeline (sensor → bus → driver → estimator).
- Dynamic performance should be validated on the platform by injecting steps, ramps, or sinusoids and measuring $t_s$, $\tau$, and phase lag.
Key takeaway
High-speed robotic systems require sensors with sufficient bandwidth and low end-to-end latency. Bandwidth and sampling choices should reflect task dynamics, and filtering should be treated as a trade-off between noise reduction and delayed response.
Videos
Two short videos provide additional context: the first introduces the concept of aliasing and the Nyquist theorem, while the second explains what the term Bandwidth means.
What is aliasing and the Nyquist theorem? . YouTube video, 04.03.2022. Available at: https://www.youtube.com/watch?v=IZJQXlbm2dU&t
What is Bandwidth? (Bandwidth and Signal Processing) . YouTube video, 21.08.2017. Available at: https://www.youtube.com/watch?v=whUkZUORix0
Conceptual questions
Question 3: The measurement range of a sensor is best described as:
Question 4: True or False : Data measured outside the specified range should be considered invalid.
Question 5: An ADC spans 0–5 V with $N=10$ bits. What is the ideal voltage resolution?
Question 6: True or False : Higher resolution guarantees higher accuracy.
Question 7: Which statement best defines precision?
Question 8: True or False : Averaging many measurements removes systematic bias.
Question 9: Which of the following is a systematic error?
Question 10: True or False : Averaging $M$ independent samples reduces the standard deviation approximately by $1/\sqrt{M}$.
Question 11: For a first-order sensor with time constant $\tau$, which relation holds for the $-3$ dB bandwidth?
Question 12: True or False : A 1 kHz sample rate implies a 1 kHz sensor bandwidth.
Mathematical Problem
Comprehensive Problem (Range, Resolution, Accuracy/Precision, Noise, Response Time & Bandwidth)
A temperature measurement chain is designed as follows:
- Physical span: maps from −40 °C to +125 °C into 0–5 V (span 165 °C).
- ADC: ideal, 12-bit, full-scale 0–5 V.
- Uncalibrated sensor bias: +0.6 °C (systematic; constant over the range).
- Random noise per sample: σ = 0.30 °C (zero-mean, independent between samples).
- Sensor dynamics: first-order low-pass with time constant τ = 0.8 s.
- Sampling rate: fs = 10 Hz. A simple moving average of M samples is applied before logging.
- Application: the control loop may contain temperature oscillations up to 0.20 Hz.
Tasks: Enter numerical answers (rounded sensibly). The checker accepts small tolerances.
Show Full Worked Solution
1) ADC resolution (°C/LSB):
The temperature span is $165\,^\circ\mathrm{C}$ over $2^{12}=4096$ codes, so
$$ \Delta T_{\text{LSB}} = \frac{165}{4096}\,^\circ\mathrm{C} \;\approx\; 0.0403\,^\circ\mathrm{C/LSB}. $$
2) Quantization noise (std):
For uniform quantization, $\sigma_q = \Delta/\sqrt{12}$, hence
$$ \sigma_q \;\approx\; \frac{0.0403}{\sqrt{12}}\,^\circ\mathrm{C} \;\approx\; 0.0116\,^\circ\mathrm{C}. $$
3) Minimum $M$ for post-filter $\leq 0.10\,^\circ\mathrm{C}$:
Single-sample random standard deviation combines as
$$ \sigma_{\text{single}} = \sqrt{\sigma^2 + \sigma_q^2}, \qquad \sigma = 0.30\,^\circ\mathrm{C}, $$
so
$$ \sigma_{\text{single}} \;\approx\; \sqrt{0.30^2 + 0.0116^2} \;\approx\; 0.3002\,^\circ\mathrm{C}. $$
Averaging $M$ independent samples gives
$$ \sigma_{\text{avg}} = \frac{\sigma_{\text{single}}}{\sqrt{M}}. $$
Require $\sigma_{\text{avg}} \leq 0.10$, so
$$ M \;\geq\; \left(\frac{0.3002}{0.10}\right)^2 \approx 9.01 \;\;\Rightarrow\;\; M_{\min}=10. $$
4) Moving-average delay:
For a length-$M$ moving average at $f_s=10\,\mathrm{Hz}$, group delay is
$$ \text{delay} \;\approx\; \frac{M-1}{2 f_s}. $$
With $M=10$:
$$ \text{delay} = \frac{9}{20} = 0.45\,\mathrm{s}. $$
5) Sensor $-3$ dB bandwidth:
For a first-order system,
$$ f_{3\text{dB}} = \frac{1}{2\pi\tau}. $$
With $\tau=0.8\,\mathrm{s}$:
$$ f_{3\text{dB}} = \frac{1}{2\pi\cdot 0.8}\,\mathrm{Hz} \;\approx\; 0.199\,\mathrm{Hz}. $$
6) First-order amplitude ratio at $f=0.20\,\mathrm{Hz}$:
Let $r = f/f_{3\text{dB}}$, with magnitude
$$ |H(j2\pi f)| = \frac{1}{\sqrt{1+r^2}}. $$
For $f=0.20\,\mathrm{Hz}$:
$$ r = \frac{0.20}{0.199} \approx 1.005, \qquad |H| \approx \frac{1}{\sqrt{1+1.005^2}} \approx 0.705. $$
Thus about $70.5\%$ of the amplitude passes (≈ $-3.0$ dB).
7) Post-average random std with $M=10$:
$$ \sigma_{\text{avg}} = \frac{0.3002}{\sqrt{10}}\,^\circ\mathrm{C} \;\approx\; 0.0949\,^\circ\mathrm{C}. $$
8) Uncalibrated accuracy error:
The systematic bias contributes an absolute error of
$$ |\text{bias}| = 0.6\,^\circ\mathrm{C}. $$
Notes & cross-checks:
- The measurement range includes the expected operating window (e.g., $[-20,110]\,^\circ\mathrm{C}$) since it lies within $[-40,125]\,^\circ\mathrm{C}$.
- Nyquist for $0.20\,\mathrm{Hz}$ content is $f_s \geq 0.40\,\mathrm{Hz}$; here $f_s=10\,\mathrm{Hz}$ is ample.
- The total latency includes the sensor’s lag (from $\tau$) plus digital filtering delay; both influence controller stability (see §1.6).
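As a cross-check, the worked solution can be reproduced numerically. The short script below (a sketch; variable names are illustrative) computes answers 1–7 directly from the problem data; answer 8 is simply the 0.6 °C bias.

```python
import math

span, n_bits = 165.0, 12                       # degC span, ADC bits
lsb = span / 2 ** n_bits                       # 1) resolution, degC per LSB
sigma_q = lsb / math.sqrt(12)                  # 2) quantization noise std
sigma_single = math.hypot(0.30, sigma_q)       # combined single-sample std
m_min = math.ceil((sigma_single / 0.10) ** 2)  # 3) samples for <= 0.10 degC
delay = (m_min - 1) / (2 * 10.0)               # 4) moving-average delay, s
f3db = 1 / (2 * math.pi * 0.8)                 # 5) -3 dB bandwidth, Hz
ratio = 1 / math.sqrt(1 + (0.20 / f3db) ** 2)  # 6) amplitude ratio at 0.20 Hz
sigma_avg = sigma_single / math.sqrt(m_min)    # 7) post-average std, degC

print(lsb, sigma_q, m_min, delay, f3db, ratio, sigma_avg)
```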
Proprioceptive Sensors
Proprioceptive sensors measure a robot’s internal state (joint positions/velocities, body rates, torques/currents, temperatures, power). In contrast, exteroceptive sensing observes the external environment (e.g., range to obstacles, images of the scene).
In the following figure, a humanoid robot demonstrates the use of multiple proprioceptive sensors to estimate its internal state. The robot employs an inertial measurement unit (IMU) to determine its orientation and body motion, joint encoders to measure joint positions, and force–torque sensors to monitor internal loads and interaction forces. Tactile sensors embedded in the fingertips provide additional feedback on contact conditions. Together, these sensors enable precise estimation and control of the robot’s posture and movement. In addition to these proprioceptive sensors, the robot is also equipped with cameras and microphones, exteroceptive sensors that capture visual and auditory information from the surrounding environment, allowing it to perceive and respond to external stimuli.

This ICub Humanoid Robot is endowed with high resolution binocular cameras for 3-dimensional rendering of the world and tactile sensors to perceive touch at its fingertips. All these sensors are necessary to reach and grab the red ball. Credit: EPFL/LASA Laboratory
Common proprioceptive signals.
Here are some of the typical proprioceptive sensors used in robotics, measuring key internal quantities that describe the robot’s mechanical and electrical state. These signals form the foundation for accurate estimation, feedback, and control.
| Quantity | Typical sensor | Units |
|---|---|---|
| Joint/shaft position | Incremental/absolute encoder, potentiometer | rad, deg, counts |
| Joint/shaft velocity | Derived from encoder or tachometer | rad/s, rpm |
| Linear position/force (links/structures) | Strain gauge (bridge), Linear variable differential transformer (LVDT) | m, N, Nm |
| Body acceleration/rotation | IMU (accelerometers, gyroscopes) | m/s², °/s |
| Electrical current/voltage | Shunt/Hall sensor, ADC | A, V |
| Torque (estimate) | From current: $\tau \approx k_t I$; torque sensor | Nm |
| Temperature | Thermistor/RTD/IC sensor | °C |
| Battery state | Voltage, current (Coulomb counting) | V, A, Ah |
Difference between proprioceptive and exteroceptive sensors
Proprioceptive and exteroceptive sensing form two complementary views of a robot’s perception system. Proprioceptive sensors describe the robot’s own internal state, while exteroceptive sensors capture information about the surrounding environment. The table below summarizes their main distinctions.
| Aspect | Proprioceptive | Exteroceptive |
|---|---|---|
| What is measured | Internal state (joint angles, velocities, forces, body motion, actuator/electrical variables) | External world (terrain, obstacles, objects, features, lighting, sound) |
| Typical sensors | Encoders, IMUs, current/voltage sensors, strain gauges, torque sensors, thermistors | Cameras (mono/stereo/RGB-D), LiDAR, radar, sonar, GPS/UWB, microphones |
| Primary use | Low-level control, odometry, state estimation, diagnostics, health monitoring | Mapping, localization relative to the world, object and scene perception |
| Latency / bandwidth | Low latency, high update rate, directly used in feedback loops | Higher latency, lower update rate, requires heavier processing |
| Dependence | Independent of environment; depends on internal calibration | Strongly dependent on environmental conditions (lighting, texture, clutter) |
Together, these two sensing modalities provide the foundation for robust robotic behavior: proprioceptive sensors keep the robot stable and aware of itself, while exteroceptive sensors keep it situated and responsive to the world.
Odometry
Odometry estimates a robot’s change in pose by integrating proprioceptive motion measurements over time (e.g., wheel/track motion, joint motion, IMU). Historically known as dead reckoning, odometry develops a kinematic model relating actuator motions to body motion, then integrates that model to produce pose as a function of time. Errors from modeling and sensing accumulate and must be managed or corrected with additional measurements.
Differential-drive wheel odometry

Differential drive kinematics. Source : Springer Handbook of Robotics, Chapter : 20.1
One of the most common forms of odometry is wheel odometry. Consider a planar robot with two powered wheels mounted on a common axle, separated by track width $b=2d$ (so $d$ is the half-baseline). Let the right/left wheel linear speeds be $v_{r}, v_{\ell}$ (positive forward) and the corresponding incremental travels over a sample be $\Delta s_{r}, \Delta s_{\ell}$. The body’s instantaneous motion is a rigid twist about an instantaneous center of curvature (ICC) on the axle line.
Kinematic relations . With body angular rate $\omega$ and ICC radius $R$ (signed, measured from the body center), the outer wheel lies farther from the ICC and travels faster: $$ \omega (R+d)=v_{r}, \qquad \omega (R-d)=v_{\ell}. $$
We can rearrange these two equations to solve for $\omega$, the rate of rotation about the ICC, and $R$, the distance from the center of the robot to the ICC: $$ V=\tfrac{1}{2}(v_{r}+v_{\ell}), \qquad \omega=\frac{v_{r}-v_{\ell}}{b}, \qquad R=\frac{V}{\omega}=\frac{b}{2}\,\frac{v_{r}+v_{\ell}}{\,v_{r}-v_{\ell}\,}. $$
Now, since $v_{r}$ and $v_{\ell}$ are functions of time, we can generate a set of equations of motion for the differential drive robot. Using the point midway between the wheels as the origin of the robot, and writing $\theta$ for the orientation of the robot with respect to the x-axis of a global Cartesian coordinate system, one obtains
$$ x(t) = \int V(t)\cos(\theta(t))\,dt, \qquad y(t) = \int V(t)\sin(\theta(t))\,dt, \qquad \theta(t) = \int \omega(t)\,dt . $$
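These integrals can be approximated numerically. The sketch below uses a simple Euler step (an illustrative discretization, not the only choice) to update the pose one sample at a time from the wheel speeds.

```python
import math

def integrate_pose(pose, v_r, v_l, b, dt):
    """One Euler step of differential-drive odometry.

    pose = (x, y, theta); v_r, v_l are wheel speeds in m/s, b the track width in m.
    """
    x, y, theta = pose
    v = 0.5 * (v_r + v_l)    # forward speed of the midpoint
    omega = (v_r - v_l) / b  # body yaw rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Equal wheel speeds: straight-line motion along x.
pose = (0.0, 0.0, 0.0)
for _ in range(100):  # 1 s at dt = 0.01 s
    pose = integrate_pose(pose, 0.5, 0.5, 0.5, 0.01)
print(pose)  # ~ (0.5, 0.0, 0.0): 0.5 m travelled, no rotation
```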
From encoders to wheel travel.
Encoders report counts as the wheel (or motor) turns. Over one sample, let the right/left counts be $\Delta N_r,\ \Delta N_\ell$. If each wheel revolution produces $\text{CPR}$ (Counts Per Rotation) counts and the wheel radius is $r$, then
$$ \Delta \phi = 2\pi\,\frac{\Delta N}{\text{CPR}}\quad(\text{rad}),\qquad \Delta s = r\,\Delta \phi = \frac{2\pi r}{\text{CPR}}\,\Delta N. $$
Apply the same to each side: $$ \Delta s_r = \frac{2\pi r}{\text{CPR}}\,\Delta N_r,\qquad \Delta s_\ell = \frac{2\pi r}{\text{CPR}}\,\Delta N_\ell. $$
A simple velocity estimate uses the sample time $\Delta t$: $$ v \approx \frac{\Delta s}{\Delta t}. $$
Example $r=0.05\,\text{m}$, $\text{CPR}=8000$. One count corresponds to
$$ \Delta s_{\text{per count}} = \frac{2\pi r}{\text{CPR}} \approx \frac{2\pi\cdot 0.05}{8000} \approx 0.0000393\,\text{m} = 0.039\,\text{mm}. $$ If $\Delta N_r=+300$ and $\Delta N_\ell=+280$ over $\Delta t=0.02\,\text{s}$, then
$\Delta s_r\approx 11.8\,\text{mm}$, $\Delta s_\ell\approx 11.0\,\text{mm}$ and $v_r\approx 0.59\,\text{m/s}$, $v_\ell\approx 0.55\,\text{m/s}$.
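The count-to-travel conversion can be sketched in code; the numbers below reproduce the example (the function name is illustrative).

```python
import math

def wheel_travel(delta_counts, r, cpr):
    """Incremental wheel travel: (2 pi r / CPR) * delta_counts, in metres."""
    return 2 * math.pi * r / cpr * delta_counts

r, cpr, dt = 0.05, 8000, 0.02     # wheel radius (m), counts/rev, sample time (s)
ds_r = wheel_travel(300, r, cpr)  # right wheel travel this sample
ds_l = wheel_travel(280, r, cpr)  # left wheel travel this sample
print(ds_r * 1000, ds_l * 1000)   # ~11.8 mm and ~11.0 mm
print(ds_r / dt, ds_l / dt)       # ~0.59 m/s and ~0.55 m/s
```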
Calibration & error sources (typical)
- Wheel radius / scale factor. Misestimated radius scales $\Delta s_{\ell},\Delta s_{r}$ ⇒ linear drift.
- Baseline $2d$. Misestimated track width biases $\Delta\theta$ ⇒ heading drift.
- Encoder quantization & missed counts. Sets resolution and adds random noise (cf. Ch. 1.3, 1.5).
- Backlash & compliance. Reversals cause transient under/over-counting; mount encoders on motor vs. output shaft accordingly.
- Wheel slip & terrain effects. Slip, sinkage, uneven contact violate the no-slip model; systematic curvature error accumulates.
- Time synchronization. Pose errors arise if encoder/IMU samples are integrated with inconsistent time stamps.
- Integration drift. Dead reckoning accumulates error; pose maintenance requires fusing with external references (e.g., vision, LiDAR, GPS) or loop closures.
Odometry in the estimation stack
Odometry provides a high-rate, low-latency motion prior for controllers and filters; drift is bounded by fusing with exteroceptive/global measurements (e.g., GPS outdoors, visual landmarks indoors) in extended Kalman filters or factor-graph optimizers. GPS–IMU fusion is a canonical example of complementary sensors combined via Kalman filtering. The same principle applies to wheel/IMU/vision fusion for terrestrial robots.
Key takeaway.
Odometry turns local actuator/IMU readings into an integrated pose estimate using a kinematic model. It is indispensable for short-term motion tracking and control, but uncorrected errors inevitably accumulate; calibration, careful time stamping, and sensor fusion are essential to maintain accuracy over distance.
Conceptual Questions
Question 1: What is the core idea of odometry?
Question 2: If $v_r=v_\ell\neq 0$ for a differential drive with track width $b=2d$, what is the angular rate $\omega$?
Question 3: With the sign convention $v_r,v_\ell>0$ forward and $\omega=(v_r-v_\ell)/b$, if $v_r>v_\ell$ the robot turns:
Question 4: If $\mathrm{CPR}$ doubles and all else is unchanged, the distance represented by one count:
Question 5: True or False: Mounting encoders on the motor shaft removes the influence of gearbox backlash on wheel odometry.
Question 6: True or False: Even with perfect encoders and calibration, persistent wheel slip can create systematic curvature errors in odometry.
Question 7: For $v_r=0.60$ m/s, $v_\ell=0.40$ m/s, and $b=0.50$ m, what is $\omega$?
Question 8: Which statement best describes odometry in a modern fusion system?
Further exploration
Video explaining differential drive odometry in more detail.
Wheeled robot control and odometry. YouTube video, Sep 11, 2019. Available at: https://www.youtube.com/watch?v=LrsTBWf6Wsc
Article explaining differential drive odometry:
#### Rotary & Linear Position Sensing (Encoders & Potentiometers)
Position sensing provides joint/shaft angle and linear travel for feedback control, odometry, and safety. Common technologies include incremental encoders, absolute encoders, resolvers/synchros, and potentiometers. Selection should be guided by the characteristics in Ch. 1 (range, resolution, accuracy, noise, bandwidth/latency) and by mechanical integration constraints.
Incremental encoders.

Sketch of the quadrature encoder disc, and output from photodetectors placed over each of the two patterns. The corresponding state changes are shown on the right.
Incremental encoders measure relative motion by generating a series of pulses as the shaft rotates. The position is obtained by counting pulses from a reference point.
A typical incremental encoder produces two output signals, Channel A and Channel B, which are square waves shifted by 90° (in quadrature). By observing the phase relationship between these two signals, the direction of rotation can be determined:
- If Channel A leads Channel B, the shaft is rotating in one direction.
- If Channel B leads Channel A, the shaft is rotating in the opposite direction.
Each pair of transitions (rising and falling edges of A and B) defines a state. By cycling through four distinct states $(S_1, S_2, S_3, S_4)$, one complete quadrature period is formed. Counting all four edges per cycle provides 4× resolution compared to a single channel.
Many incremental encoders also include an Index (I) signal, which generates a single pulse per revolution. This provides a reference or “home” position for absolute alignment.
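The 4× edge-counting logic described above can be sketched as a small state machine (a minimal illustration; real decoders usually run in hardware counters or interrupt handlers):

```python
# Gray-code sequence of (A, B) states over one quadrature cycle,
# in the order seen when rotating "forward".
_SEQ = [(0, 0), (1, 0), (1, 1), (0, 1)]

def quadrature_count(samples):
    """Count all four edges per A-B cycle (4x decoding).
    `samples` is a sequence of (A, B) logic-level pairs."""
    count = 0
    prev = _SEQ.index(samples[0])
    for a, b in samples[1:]:
        cur = _SEQ.index((a, b))
        step = (cur - prev) % 4
        if step == 1:      # one state forward
            count += 1
        elif step == 3:    # one state backward
            count -= 1
        # step == 2 would mean a missed transition (direction unknown)
        prev = cur
    return count

# One full forward cycle yields +4 counts; one backward cycle yields -4.
fwd = quadrature_count([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)])
```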
Video
This short video explains how an incremental encoder works.
Incremental Encoder (Shaft Encoder)- how it works. YouTube video, Mar 22, 2017. Available at: https://www.youtube.com/watch?v=zzHcsJDV3_o
Absolute encoders.

Each concentric track on the encoder disk represents one bit of resolution. Note that each track, starting from the inside of the disk, has double the number of light-and-dark bands of the previous track. The encoder shown here has 4 tracks, so it has 4 bits of resolution and can measure $2^4 = 16$ positions per rotation of the encoder. Source: https://www.linearmotiontips.com/when-is-encoder-resolution-specified-in-bits-and-what-does-that-tell-us/
Absolute encoders, whether rotary or linear, track the position of an axis by assigning a unique value to each position on the encoder. This means that no matter where the axis is located, its exact position can always be determined. Because each position is uniquely identified, this remains true even if the encoder has been powered off and restarted; there is no need to re-home the encoder upon power-up to determine its position.
For most absolute rotary encoders, resolution is defined in terms of bits. The encoder disk is patterned with concentric tracks around its circumference (and a corresponding number of sensors, one for each track), with each track representing one bit of resolution.
To convert bits of resolution into the number of positions the encoder can detect in one shaft revolution, raise 2 to the power of the number of bits:
- An 8-bit encoder can measure $2^8 = 256$ positions per revolution.
- A 16-bit encoder can measure $2^{16} = 65{,}536$ positions per revolution.
Video
This short video explains how an absolute encoder works.
Absolute Encoder (Shaft Encoder, Rotary encoder) - how it works!. YouTube video, Mar 22, 2017. Available at: https://www.youtube.com/watch?v=yOmYCh_i_JI
Potentiometers.

A linear potentiometer: a wiper slides along a resistive track (A–C). The output at B is a fraction of the excitation proportional to displacement.
Operating principle.
A potentiometer (rotary or linear) forms a voltage divider. With excitation $V_{\text{ref}}$ across the end terminals and the wiper at normalized position $0\le \alpha \le 1$ (measured from the low end), the ideal output is $$ V_{\text{out}} = \alpha\,V_{\text{ref}} \quad \text{(no load).} $$ Therefore the reading is absolute (no homing needed after power cycles). Rotary devices map angle $\theta$ to $\alpha=\theta/\theta_{\max}$; linear devices map travel $x$ to $\alpha=x/L$.
Key specs.
- Range (mechanical/electrical travel). Rotary parts often provide $\theta_{\max}\approx 300^\circ$; multi-turn (e.g., $5$–$10$ turns) extends range. Linear parts specify stroke $L$ and electrical travel (slightly less than mechanical).
- Resolution. Set by the ADC and noise, not by “bits” in the pot: $$ \Delta \alpha = \frac{1}{2^N},\qquad \Delta \theta = \theta_{\max}\,\Delta\alpha,\qquad \Delta x = L\,\Delta\alpha. $$
- Accuracy/linearity & hysteresis. Typical linearity $\pm(0.5\%\text{–}2\%)$ FS; small hysteresis from wiper and bearings.
Loading & ratiometric readout.
Finite input impedance $R_{\text{in}}$ of the ADC/load pulls down $V_{\text{out}}$ and introduces gain error. With total track resistance $R_{\text{pot}}$, the loaded divider is $$ V_{\text{out,loaded}} \;=\; V_{\text{ref}}\, \frac{(\alpha R_{\text{pot}} \parallel R_{\text{in}})} {(1-\alpha)R_{\text{pot}} + (\alpha R_{\text{pot}} \parallel R_{\text{in}})}\,, $$ which reduces to $V_{\text{out}}\approx\alpha V_{\text{ref}}$ when $R_{\text{in}}\gg R_{\text{pot}}$.
Integration notes.
- Buffer the wiper with a high-impedance amplifier if $R_{\text{in}}$ is not large.
- Add a small RC near the ADC to tame contact noise; keep leads short or shielded.
- Avoid mechanical end-stops in normal operation; select stroke so the application stays inside the electrical travel.
Examples
1) ADC-limited resolution (rotary): $\theta_{\max}=300^\circ$, $N=12$.
$$\Delta\theta = \frac{300^\circ}{2^{12}} \approx 0.073^\circ \text{ per LSB}.$$
2) Loading error check: $R_{\text{pot}}=10\,\text{k}\Omega$, $R_{\text{in}}=1\,\text{M}\Omega$. At mid-travel ($\alpha=0.5$), the error relative to $\alpha V_{\text{ref}}$ is $\approx 0.25\%$; with $R_{\text{in}}=100\,\text{k}\Omega$ it rises to a few percent.
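The loaded-divider formula and the numbers in the loading-error example can be verified directly (a minimal sketch; helper names are illustrative):

```python
def loaded_output_ratio(alpha, r_pot, r_in):
    """Fraction of V_ref at the wiper with a finite-impedance load."""
    lower = (alpha * r_pot * r_in) / (alpha * r_pot + r_in)  # alpha*R_pot || R_in
    return lower / ((1.0 - alpha) * r_pot + lower)

def loading_error(alpha, r_pot, r_in):
    """Relative error vs. the ideal (unloaded) output alpha*V_ref."""
    return (alpha - loaded_output_ratio(alpha, r_pot, r_in)) / alpha

print(loading_error(0.5, 10e3, 1e6))    # ~0.0025 (0.25 %)
print(loading_error(0.5, 10e3, 100e3))  # ~0.024  (a few percent)
```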
Conceptual questions
Question 1: Incremental vs. absolute encoders. Which statement best differentiates the two?
Question 2: Quadrature resolution. An incremental encoder outputs 1000 quadrature cycles per revolution (i.e., 1000 A–B state periods). With 4× edge counting, the counts per revolution are:
Question 3: Direction from A/B. In a quadrature encoder, if Channel A leads Channel B, the direction is:
Question 4: Index pulse. The Index (I or Z) signal on many incremental encoders is used primarily to:
Question 5: Angle per count. If a shaft yields 4096 counts per revolution, the ideal angular resolution is approximately:
Question 6: Absolute encoder bits. A 14-bit absolute rotary encoder can uniquely report how many positions per revolution?
Question 7: Potentiometer ideal output. A linear potentiometer is excited with $V_{\text{ref}}=5\,$V. Ignoring loading, at $\alpha=0.25$ the output is:
Question 8: Loading effect. True or False: With $R_{\text{pot}}=10\,\text{k}\Omega$ and ADC input $R_{\text{in}}=100\,\text{k}\Omega$, loading error is negligible across the stroke.
Question 9: Ratiometric readout. True or False: Driving a potentiometer with the same $V_{\text{ref}}$ used by the ADC reference makes the reading insensitive (ideally) to supply variation.
Question 10: Resolution origin (potentiometers). Which statement is most accurate?
Question 11: Range and travel. Which statement is correct for typical devices?
Question 12: Integration best practice. When $R_{\text{in}}$ cannot be made $\gg R_{\text{pot}}$, the recommended interface is to:
Inertial Sensing
Gyroscopic Systems
The goal of gyroscopic systems is to measure changes in vehicle orientation by taking advantage of physical laws that produce predictable effects under rotation. Effectively they measure how fast a robot is rotating about an axis (angular rate). By integrating this rate, we can track changes in orientation over time. In practice, every real gyro has noise and bias, so orientation from pure integration will drift and must be calibrated and often fused with other sensors.
How a Gyroscope Works (YouTube, 9 min). A visual refresher on the mechanical intuition:
How a Gyroscope Works. What a Gyroscope Is . YouTube video, Aug 25, 2022. Available at: https://www.youtube.com/watch?v=V6XSsNAWg00
Main classes of gyroscopes
1) Mechanical gyroscopes and gyrocompasses
- Principle. Gyroscopes and gyrocompasses rely on the principle of the conservation of angular momentum $L=I\omega$. Angular momentum is the tendency of a rotating object to keep rotating at the same angular speed about the same axis of rotation in the absence of an external torque. A rapidly spinning rotor maintains its orientation; torques cause precession perpendicular to both spin and applied torque. Classical gyrocompasses exploit precession with a pendulous weight and damping so the spin axis aligns with true north in the Earth frame.
- Notes for robots. Pure mechanical gyrocompasses are bulky, need careful damping (often oil reservoirs), and are sensitive to vehicle motions and latitude corrections. They are now uncommon in mobile robots compared to optical or MEMS devices.

Simple gyrocompass. (a) Pendulous gyro. (b) Precessional motion. Source: Springer Handbook of Robotics, Chapter 20.1
2) Optical gyroscopes
- Principle (Sagnac effect). Send light both ways around a closed loop (see Fig below) of length $D=2\pi R$. If the loop is stationary, both pulses traverse the same distance at speed $c$ and arrive together after $$ t = \frac{D}{c} = \frac{2\pi R}{c}. $$ Now suppose the loop rotates clockwise at angular speed $\omega$. The clockwise pulse must travel farther to “catch” the moving end point, while the counterclockwise pulse travels a shorter distance.
Distances while the loop rotates.
- Clockwise path length: $D_c = 2\pi R + \omega R t_c$
- Counterclockwise path length: $D_a = 2\pi R - \omega R t_a$
Because the speed is $c$ for both beams, $$ c\,t_c = D_c \Rightarrow t_c=\frac{2\pi R}{c-\omega R},\qquad c\,t_a = D_a \Rightarrow t_a=\frac{2\pi R}{c+\omega R}. $$
Time difference (Sagnac delay). $$ \Delta t \equiv t_c - t_a = 2\pi R\left(\frac{1}{c-\omega R}-\frac{1}{c+\omega R}\right). $$
This $\Delta t$ is what RLGs and FOGs convert into a measurable phase or frequency shift to estimate the rotation rate $\omega$.

Circular light path. (a) Stationary path. (b) Moving path. Source: Springer Handbook of Robotics, Chapter 20.2.3
Fiber-optic gyros (FOG) use long polarization-maintaining fiber loops; ring-laser gyros (RLG) use a laser cavity and measure the beat frequency between the two standing waves. Optical gyros are accurate, with no spinning mass.
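The Sagnac delay can be evaluated numerically; for realistic loop sizes and rotation rates it is extremely small, which is why practical devices convert it into a phase or beat-frequency measurement. A minimal sketch (illustrative radius and rate), also checking the small-rate approximation $\Delta t \approx 4A\omega/c^2$ with loop area $A=\pi R^2$:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def sagnac_delay(radius, omega):
    """Exact time difference between co- and counter-rotating beams."""
    t_cw = 2 * math.pi * radius / (C - omega * radius)
    t_ccw = 2 * math.pi * radius / (C + omega * radius)
    return t_cw - t_ccw

R, w = 0.1, 1.0  # 10 cm loop rotating at 1 rad/s
approx = 4 * math.pi * R**2 * w / C**2  # small-rate approximation
print(sagnac_delay(R, w), approx)       # both on the order of 1e-18 s
```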
3) MEMS (micro-electromechanical) gyroscopes
- Principle (Coriolis). A vibrating proof mass with velocity $\mathbf{v}$ inside a frame rotating at rate $\boldsymbol{\Omega}$ experiences Coriolis acceleration, the apparent acceleration that arises in a rotating frame of reference. Suppose an object moves along a straight line in a rotating frame of reference. To an outside observer in an inertial frame, the object’s path is curved, so some force must act on the object to maintain the straight-line motion seen by the rotating observer. An object moving with local velocity $\mathbf{v}$ in a frame rotating at rate $\boldsymbol{\Omega}$ relative to an inertial frame experiences a Coriolis acceleration given by $$ \mathbf{a}_{\text{Coriolis}} = 2\mathbf{v}\times \boldsymbol{\Omega}. $$ By driving a known vibration and sensing the orthogonal motion induced by Coriolis forces, the device estimates angular rate. Common structures: tuning-fork, vibrating-wheel, and wine-glass resonators. Compact, low-power, and inexpensive, MEMS gyros dominate robotics platforms.

MEMS gyroscope: principle of operation. Source: Springer Handbook of Robotics, Chapter 20.2.3
Wine-glass resonator gyroscopes use the effect of Coriolis forces on the position of nodal points of a resonating structure to estimate the external rotation. Because MEMS gyroscopes have no rotating parts, have low power-consumption requirements, and are very small, they are quickly replacing mechanical and optical gyroscopes in robotic applications.
What gyros actually deliver
- Rate gyros (RG). Output angular rate $\dot{\theta}$ directly.
- Rate-integrating gyros (RIG). Internally integrate to report angle, though most robotic pipelines still integrate rate in software to keep timing consistent with other sensors.
Why fusion is essential. All gyros exhibit drift due to bias and noise. Drift causes orientation error; in an IMU this misorients gravity removal for accelerometers, so residual gravity integrates to large position error over time. Robots therefore combine gyro data with other references (accelerometers, magnetometers, GPS, vision) using filters or factor graphs.
Important Performance metrics of Inertial measurement units
- Bias repeatability / stability. How much the zero-rate output wanders over time at constant conditions; dominates long-term drift.
- Angle Random Walk (ARW). Noise-induced angle error growth when integrating rate; sets short-term orientation precision.
- Scale factor. Mapping from physical rate to volts or counts (e.g., mV per deg/s); errors here scale the estimate.
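The effect of bias on an integrated orientation estimate can be illustrated with a short simulation (illustrative bias and noise values; `integrate_gyro` is a hypothetical helper):

```python
import random

def integrate_gyro(rates, dt, bias=0.0, noise_std=0.0, seed=0):
    """Integrate noisy, biased rate samples into an angle estimate."""
    rng = random.Random(seed)
    angle = 0.0
    for w in rates:
        meas = w + bias + rng.gauss(0.0, noise_std)  # sensor model
        angle += meas * dt
    return angle

# A stationary robot (true rate 0) with a 0.01 rad/s bias drifts by
# 0.6 rad (~34 deg) after one minute, even with zero noise.
drift = integrate_gyro([0.0] * 6000, dt=0.01, bias=0.01)
```

With `noise_std > 0`, the angle additionally performs a random walk whose standard deviation grows with the square root of time, which is exactly what the ARW specification quantifies.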
Practical selection and integration tips
- Match range and bandwidth to dynamics. Choose full-scale so saturation is unlikely during worst maneuvers, and pick bandwidth high enough for control needs without excessive noise or latency.
- Mounting and alignment. Keep axes orthogonal, rigidly mount near the robot’s center to reduce vibration coupling, and include axis misalignment in calibration.
- Bias handling. Estimate bias at startup while the robot is still; track slowly varying bias in your estimator during operation.
- Thermal behavior. Expect temperature-dependent bias and scale factors; if possible, calibrate across temperature.
- Triads and IMUs. Three orthogonal gyros are ganged for full 3-D rotation; in practice they live with accelerometers in an IMU.
Conceptual Questions
Question 1: Why does integrating gyro rate to get orientation drift over time?
Question 2: What physical principle do MEMS gyroscopes use to sense rotation?
Question 3: The Sagnac effect used in optical gyros (FOG/RLG) is best described as:
Question 4: Which spec mainly limits short-term orientation precision when integrating gyro rate?
Question 5: What is a good practice to reduce orientation drift in an IMU-based estimator?
Further exploration
- Optical gyros. Read about the Sagnac effect and ring laser gyros on Wikipedia – Sagnac effect.
- MEMS gyro basics. Short primer on tuning-fork MEMS designs: Wikipedia – MEMS gyroscope.
Accelerometer
Just as gyroscopes can be used to measure changes in the orientation of a robot, other inertial sensors, known as accelerometers, can be used to measure external forces acting on the vehicle. One important factor concerning accelerometers is that they are sensitive to all external forces acting upon them, including gravity. Accelerometers use one of a number of different transduction mechanisms, but the underlying idea is the same: an external force (e.g., gravity) acts on a proof mass and displaces the spring that supports it.

Accelerometers. (a) Mechanical accelerometer. (b) Piezoelectric accelerometer. Source: Springer Handbook of Robotics, Chapter 20.3
Physical model (spring–mass–damper).
A basic accelerometer can be idealized as a proof mass $m$ attached to a spring $k$ with damping $c$; external force produces displacement $x$ measured by the readout: $$ \begin{array}{rl} F_{\text{applied}} &= F_{\text{inertial}} + F_{\text{damping}} + F_{\text{spring}} \\ &= m\ddot{x} + c\dot{x} + kx \,. \end{array} $$
Under a constant acceleration $a$ (e.g., gravity component), static equilibrium gives $k\,x \approx m\,a$ (ignoring damping), so displacement is proportional to acceleration; dynamics (bandwidth, settling) follow from the second-order system above. Mechanical implementations are sensitive to vibration and may converge slowly if under-damped.
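The static relation $kx \approx ma$ and the second-order dynamics give the two quantities a designer often checks first: static sensitivity and natural frequency. A sketch with made-up, MEMS-scale values (the mass and stiffness below are illustrative, not from a real part):

```python
import math

def static_displacement(m, k, a):
    """Steady-state proof-mass displacement under acceleration a: x = m*a/k."""
    return m * a / k

def natural_frequency_hz(m, k):
    """Undamped natural frequency f_n = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

m, k = 1e-9, 1.0                          # 1e-9 kg proof mass, 1 N/m spring
x_1g = static_displacement(m, k, 9.81)    # ~1e-8 m deflection at 1 g
f_n = natural_frequency_hz(m, k)          # ~5 kHz natural frequency
```

Stiffer springs raise bandwidth but shrink the deflection per g, so range, resolution, and bandwidth trade off through the same two parameters.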
Common transduction mechanisms.
- Mechanical (displacement-measured). Uses the spring–mass–damper directly; simple but vibration-prone and slower to settle.
- Piezoelectric. A crystal stressed by the proof mass generates a measurable voltage; well suited to dynamic acceleration.
- (Modern MEMS devices often use capacitive sensing of the proof-mass displacement; principles still map to the model above.)
Link to the inertial pipeline.
In an IMU, tri-axial gyros integrate attitude, accelerometer readings are rotated to the navigation frame, gravity is subtracted, and the result is integrated to velocity and then position. Any gyro/accel bias mis-orients gravity removal, so residual gravity integrates to large position drift over time, hence the need for sensor fusion.
Key specifications
- Range (e.g., $\pm2g,\ \pm16g$): prevent saturation during maneuvers.
- Scale factor / sensitivity (e.g., mV/$(\mathrm{m/s^2})$): maps output to acceleration; accuracy matters for bias/scale calibration.
- Bias & bias stability / drift: dominant long-term error; characterize across temperature and time.
- Bandwidth / response time: choose high enough for platform dynamics; avoid excessive internal filtering that adds latency.
- Alignment & orthogonality: small axis misalignments couple motions; include in calibration.
Calibration & usage notes.
- Six-position “1 g” check. Place each axis alternately up/down to estimate per-axis bias and scale ($\lVert a\rVert\approx g$ at rest).
- Ratiometric, low-noise readout. Stable reference and clean analog path reduce noise; average multiple samples with care (filtering adds delay, Ch. 1.6).
- Mounting & temperature. Rigid mounting minimizes parasitics; allow warm-up and compensate temperature coefficients.
- Gravity handling. For motion estimation, subtract gravity using the best available attitude estimate before integration.
Key takeaway.
Accelerometers convert proof-mass deflection into acceleration, inherently sensing gravity as well as motion. Their usefulness in robotics hinges on proper range selection, noise/bias management, bandwidth/latency budgeting, and calibration, and on fusing with other sensors to prevent integrated drift.
Conceptual questions
Question 1: Gravity sensitivity. True or False: An accelerometer at rest on a table will measure a nonzero acceleration magnitude of approximately $g$ because it senses gravity.
Question 2: Static equilibrium model. In the spring–mass–damper model with mass $m$, spring $k$, damping $c$, under constant acceleration $a$ (steady state), which relation best describes the displacement $x$?
Question 3: Piezoelectric use case. Which statement best describes a piezoelectric accelerometer?
Question 4: Six-position calibration. The primary goal of a six-position “1 g” test is to estimate per-axis:
Question 5: Gravity removal in an IMU. True or False: Accurate attitude from gyros is important because any tilt error misprojects gravity, leaving a residual that integrates to large velocity/position drift.
Further exploration
This short video explains how an accelerometer works.
How a Smartphone Knows Up from Down (accelerometer). YouTube video, May 22, 2012. Available at: https://www.youtube.com/watch?v=KZVgKu6v808
Force, Torque, and Strain Sensing
Force, torque, and strain sensing enable a robot to perceive its own interactions with the environment. These measurements close the loop for compliant control, grasp stability, slip detection, and safe physical human–robot interaction. In practice, measurements are combined from multiple points along the actuation chain: motor currents (effort), joint or wrist force–torque (F/T) sensors, and tactile sensors on the skin or fingertips. Each measurement location captures a different portion of the system’s mechanics and noise characteristics, making the intended application of the data the central consideration in sensor design.
Measurement Location: From Effort to Contact
- Actuator effort (motor current). In many electric drives, torque is approximately proportional to current, $\tau \approx k_t I$. This relationship is useful for fast inner-loop control; however, gearbox losses, friction, and compliance make current an imperfect indicator of external contact forces at the output.
- Joint or wrist F/T sensors. Multi-axis load cells or flexure-based sensors mounted at the wrist or fingertip directly measure forces and moments with high bandwidth. With a known fingertip geometry, the contact point can also be inferred from the measured $[\mathbf{f},\ \boldsymbol{\tau}]$, a capability often referred to as intrinsic tactile sensing.
Actuator effort: motor current as a torque sensor
In most electric drives, electromagnetic torque is proportional to motor current. This makes the drive itself a built-in torque sensor.
Core relation. For a motor with torque constant $k_t$, $$ \tau_m \approx k_t I \qquad \text{(SI units: } k_t\,[\mathrm{N\,m/A}]\text{).} $$ With a gear ratio $g$ (output torque is $g$ times motor shaft torque) and efficiency $\eta$, $$ \tau_{\text{joint}} \approx \eta g k_t I - \tau_f(\dot{q}) - J_{\text{refl}} \ddot{q}, $$ where $\tau_f(\dot{q})$ captures friction and cogging effects, $J_{\text{refl}}$ is the reflected inertia, and $q$ is the joint angle.
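A quasi-static estimate following the relation above might look like this (illustrative $k_t$, gear ratio, and efficiency values; the reflected-inertia term is dropped, which is only reasonable at low accelerations):

```python
def joint_torque_from_current(i_amps, k_t, gear_ratio, efficiency=0.85,
                              tau_friction=0.0):
    """Rough joint-torque estimate from motor current:
    tau_joint ~ eta * g * k_t * I - tau_f (quasi-static)."""
    return efficiency * gear_ratio * k_t * i_amps - tau_friction

# Example with made-up numbers: k_t = 0.05 Nm/A, 100:1 gearbox, 2 A
tau = joint_torque_from_current(2.0, k_t=0.05, gear_ratio=100)  # 8.5 Nm
```

In practice, `tau_friction` would come from the velocity-dependent friction model identified during calibration, and the result would be treated as an estimate, not a measurement.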
Why it is popular.
- Zero added mechanics or wiring; readings arrive at drive rates with minimal latency.
- Sufficient for many inner-loop controllers, collision detection, and coarse force regulation.
Implementation notes.
- Current measurement. Shunt resistor (precise, adds burden voltage) or Hall-effect/isolated sensors (galvanic isolation, lower insertion loss).
- Calibration. Identify $k_t$ from datasheet then verify under load; characterize $\tau_f(\dot{q})$ via slow sweeps; measure $\eta$ under representative speeds/loads.
Limits and pitfalls.
- Gear friction, stiction, and cogging bias the estimate at low speeds.
- Thermal drift of phase resistance and $k_t$ changes the mapping over temperature.
- Current loops and PWM add ripple; bandwidth and filtering trade latency against noise.
- Backlash/compliance decouple motor torque from external interaction torque during reversals.
When to add a true torque sensor. If precise low-force regulation, contact transients, or model uncertainties dominate, joint torque sensors, series elastic elements, or wrist F/T sensors provide more reliable interaction measurements.
Strain-based sensing
Strain-based sensing measures tiny elastic deformations in a compliant mechanical element and infers the applied force or torque through a known stiffness model. It is the workhorse behind joint torque sensors, six-axis wrist force–torque (F/T) sensors, weigh-scale load cells, and many tactile skins.
What is measured
- Strain is the relative change in length, $\varepsilon = \Delta L / L$ (unitless). In metals operating in the linear elastic regime, stress $\sigma$ and strain relate by $\sigma = E \varepsilon$, where $E$ is Young’s modulus.
- Strain gauges convert strain to an electrical signal. The most common are metal-foil resistive gauges; alternatives include piezoresistive silicon and piezoelectric ceramics.
Core transducer physics
- Foil (resistive) strain gauges. Electrical resistance $R$ changes approximately linearly with strain: $$ \frac{\Delta R}{R} \approx \mathrm{GF}\,\varepsilon, $$ where $\mathrm{GF}$ is the gauge factor (typically 2.0 for metal foil). Gauges are bonded to the elastic element with adhesive; alignment sets sensitivity to axial, bending, or torsional strain.
- Piezoresistive silicon. Doped silicon has a larger effective gauge factor (10–150), enabling compact, low-noise sensors, often integrated on diaphragms or micro-flexures.
- Piezoelectric. Generates charge proportional to dynamic strain. Very high bandwidth but poor at true DC; best for vibration or impact sensing (dynamic tactile).
From strain to force/torque
A compliant element (beam, ring, cross-shape, diaphragm, or torsion tube) concentrates strain where gauges are placed. With a linear elastic model, $$ \mathbf{v} = \mathbf{S}\,\mathbf{w} + \mathbf{b}, $$ where $\mathbf{v}$ collects bridge voltages, $\mathbf{w} = [F_x, F_y, F_z, \tau_x, \tau_y, \tau_z]^\top$ is the wrench (forces and torques) at a reference point, $\mathbf{S}$ is the sensitivity matrix determined by geometry and gauge placement, and $\mathbf{b}$ is an offset. Calibration identifies $\mathbf{S}$ (and $\mathbf{b}$) by applying known loads and solving a linear regression; the inverse then maps voltages back to forces and torques.
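The regression idea can be shown in its simplest form: one bridge channel and a single load axis, fitted with ordinary least squares (synthetic, noiseless data; `fit_sensitivity` is an illustrative helper). The full six-axis case estimates the matrix $\mathbf{S}$ the same way, by stacking many known wrenches and voltage readings:

```python
def fit_sensitivity(loads, volts):
    """Least-squares fit of v = s*w + b for one bridge channel,
    from known applied loads (N) and measured voltages (V)."""
    n = len(loads)
    mw = sum(loads) / n
    mv = sum(volts) / n
    s = sum((w - mw) * (v - mv) for w, v in zip(loads, volts)) \
        / sum((w - mw) ** 2 for w in loads)
    b = mv - s * mw
    return s, b

# Synthetic calibration data: true sensitivity 2 mV/N, offset 1 mV
loads = [0.0, 5.0, 10.0, 20.0]
volts = [0.001 + 0.002 * w for w in loads]
s, b = fit_sensitivity(loads, volts)  # recovers s ~ 0.002, b ~ 0.001
```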
Bridge circuits and signal conditioning
- Wheatstone bridge. Gauges are wired as quarter-, half-, or full-bridges. Full-bridges place gauges in tension and compression, doubling sensitivity and providing temperature compensation.
- Excitation. Constant-voltage (e.g., $V_\mathrm{ex}=2$–10 V) is common; constant-current can reduce self-heating drift.
- Amplification. Instrumentation amplifiers provide high common-mode rejection ratio (CMRR). Typical strain signals are millivolts, so gain of 100–1000 is routine.
- Filtering and sampling. Anti-alias filters and low-latency digitization (16–24 bit ADCs) preserve bandwidth while controlling noise.
- Ratiometric readout. Measuring $V_\mathrm{out}/V_\mathrm{ex}$ cancels excitation drift.
Back-of-the-envelope. Quarter-bridge, 120 Ω, $\mathrm{GF}=2$, $\varepsilon=1000\,\mu\varepsilon$ gives $\Delta R/R = 0.002$. Approximate bridge output $V_\mathrm{out} \approx (V_\mathrm{ex}/4)(\Delta R/R)$, so with $V_\mathrm{ex}=5$ V, $V_\mathrm{out}\approx 2.5$ mV. An instrumentation amplifier is thus required.
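The back-of-the-envelope figure can be reproduced with the small-signal approximation $V_\mathrm{out} \approx (V_\mathrm{ex}/4)\,\mathrm{GF}\,\varepsilon$ (valid only for small $\Delta R/R$; the helper name is illustrative):

```python
def quarter_bridge_output(v_ex, gauge_factor, strain):
    """Approximate quarter-bridge output for small dR/R:
    V_out ~ (V_ex / 4) * GF * strain."""
    return (v_ex / 4.0) * gauge_factor * strain

# 5 V excitation, GF = 2, 1000 microstrain -> 2.5 mV
v_out = quarter_bridge_output(5.0, 2.0, 1000e-6)
```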
Exteroceptive Sensors
Exteroceptive sensors provide the measurements that allow a robot to build an understanding of what surrounds it, not just what it is doing internally. This is the sensing foundation for tasks like avoiding obstacles, following corridors, recognizing objects, estimating position in a map, and adapting behavior to changing environments.

The iCub humanoid robot is endowed with high-resolution binocular cameras for 3-dimensional rendering of the world and tactile sensors to perceive touch at its fingertips. All these sensors are necessary to reach and grab the red ball. Credit: EPFL/LASA Laboratory
In the iCub humanoid robot shown in the image, the exteroceptive sensors complement the internal sensing used for control. Its RGB cameras provide rich visual information about nearby objects, their relative position in the scene, and motion cues from frame-to-frame changes. Its microphones provide auditory information, enabling detection and localization of sound sources (for example, a human voice or an alarm) and supporting interaction. Together, these exteroceptive sensors give the robot a more complete and task-relevant picture of the world around it, extending perception beyond what can be inferred from internal measurements alone.
A key feature of exteroceptive sensing is that the raw signals often describe the world indirectly. A camera produces images, a range sensor produces distances, and a satellite receiver produces global position estimates. Turning these signals into actionable information usually requires a processing pipeline that may include filtering, feature extraction, geometric reasoning, and sometimes machine learning. As a result, exteroceptive sensing is typically more computationally demanding and more sensitive to measurement conditions than internal sensing.
Exteroceptive sensors also come in a wide range of “data shapes” and trade-offs:
- Single-value measurements (for example, distance-to-obstacle from an ultrasonic sensor)
- Structured arrays (for example, depth images from a time-of-flight camera)
- High-dimensional observations (for example, RGB images and 3D point clouds from LiDAR)
- Global references (for example, Global Navigation Satellite System (GNSS) position outdoors)
Each modality brings different strengths and failure modes. Cameras can provide rich semantic information but depend strongly on lighting and texture. Ultrasonic sensors are inexpensive and robust at close range but struggle with soft materials and angled surfaces. LiDAR provides accurate geometry but can be affected by reflective or absorbing surfaces and weather. Practical robot designs often combine multiple exteroceptive sensors to reduce blind spots and improve robustness.
This chapter introduces common exteroceptive sensor families, the physical principles behind their measurements, and the practical considerations that determine real-world performance.
Conceptual Questions
Question 1: Why do many exteroceptive sensors require substantial processing before their outputs can guide robot decisions?
Question 2: Which scenario best describes a case where a range sensor might work well but a camera might fail?
Question 3: What is the most practical meaning of a sensor’s “field of view” (FoV) for a mobile robot?
Question 4: Why is time alignment between different exteroceptive sensor measurements important when the robot is moving?
Question 5: What is a strong reason to use multiple different exteroceptive sensing modalities on the same robot?
Contact Sensors (Touch and Tactile Sensing)
Contact sensors measure the environment through physical interaction. Unlike cameras or rangefinders that observe at a distance, contact sensing becomes informative only when the robot touches something. This makes contact sensors especially important for tasks where “knowing by touching” is unavoidable, such as grasping an object reliably, detecting collisions, or walking on uncertain terrain.
In robotics curricula, contact sensing is often discussed together with force perception, because many contact sensors ultimately aim to estimate contact forces, torques, pressure distributions, and slip events. The detailed sensor principles (resistive, capacitive, piezoelectric, optical tactile sensors), calibration procedures, and force-control use cases are covered in Force Perception, so this section focuses on how contact sensing fits into exteroceptive perception.

Uses of tactile sensing in robotics. Source: Springer Handbook of Robotics, Fig. 28.1.
Contact sensing is typically used in three recurring interaction modes: manipulation, exploration, and response. During manipulation, contact measurements help regulate grasp force, infer contact constraints, and assess stability. During exploration, the robot deliberately touches surfaces to estimate properties like texture, friction, or hardness. During response, contact sensing serves safety and robustness by detecting unexpected contact and triggering fast reactions.
Contact sensing spans a spectrum from simple “touch happened” signals to rich measurements of how contact is distributed:
- Binary contact sensors (contact switches, bumpers): output a yes/no signal indicating contact. These are common in mobile robots for low-cost collision detection and as safety bumpers.
- Local force or pressure sensors (force-sensitive resistors, pressure pads): output a continuous value related to contact intensity. These are often placed in grippers, fingertips, or foot soles.
- Tactile arrays (tactile skin, fingertip taxels): measure pressure over many small sensing elements (often called taxels, short for tactile pixels), producing a “pressure image” of contact. This supports estimating contact location, contact area, and detecting slip or rolling.
- Force/torque sensing at an end-effector (force–torque sensor): measures the net interaction at a mounting point, typically providing forces and torques along three axes each (often called 6-axis force/torque sensing). This is widely used for compliant manipulation, surface following, and safe physical interaction.

Miniature fingertip force–torque sensor for a prosthetic hand. Source: Springer Handbook of Robotics, Fig. 28.2.
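As a concrete sketch of the tactile-array idea above, the following plain-Python helper reduces a small "pressure image" to a total contact intensity, a contact area, and a pressure-weighted contact centroid. The function name `contact_summary`, the taxel pitch, and the threshold are illustrative assumptions, not from the source:

```python
def contact_summary(pressure, taxel_pitch_mm=2.0, threshold=0.05):
    """Summarize a tactile 'pressure image' (2D list of taxel readings).

    Returns total contact intensity, contact area (number of active taxels),
    and the pressure-weighted contact centroid in millimetres.
    """
    total, area, mx, my = 0.0, 0, 0.0, 0.0
    for r, row in enumerate(pressure):
        for c, p in enumerate(row):
            if p > threshold:          # taxel considered in contact
                total += p
                area += 1
                mx += c * p            # pressure-weighted column moment
                my += r * p            # pressure-weighted row moment
    if total == 0.0:
        return total, area, None       # no contact detected
    centroid = (mx / total * taxel_pitch_mm, my / total * taxel_pitch_mm)
    return total, area, centroid

# Example: a 4x4 taxel patch with contact concentrated in one corner.
patch = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.2, 0.4],
    [0.0, 0.0, 0.4, 0.8],
]
total, area, centroid = contact_summary(patch)
```

Tracking how such a centroid and area evolve over time is one way slip-onset detectors are built on top of raw taxel data.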
Why contact sensing matters in practice
- Robust manipulation: vision can suggest where an object is, but contact sensing confirms when the object is actually grasped, whether it is slipping, and how firmly it is held.
- Safe physical interaction: contact sensors can trigger fast reflexes (stop, retract, compliant behavior) when unexpected contact occurs.
- Locomotion and terrain adaptation: foot contact sensors help detect touch-down, estimate load distribution, and improve balance on uneven or deformable ground.
- Exploration of unknown objects: tactile arrays can reveal local geometry and material cues (edges, ridges, softness) that may be ambiguous visually.
Practical design trade-offs
When selecting or integrating a contact sensor, typical engineering trade-offs include:
- Sensitivity vs durability: soft compliant skins can detect light touch but may wear out faster in harsh environments.
- Spatial resolution vs wiring and computation: high-resolution tactile arrays provide richer information but increase cabling complexity, data rate, and processing demands.
- Bandwidth (speed) vs noise: fast contact events (taps, slip onset) require higher sampling rates, which can amplify noise and require filtering.
- Calibration and drift: many tactile and force sensors exhibit offset drift, hysteresis, and temperature dependence, so periodic calibration and compensation can be necessary.
- Placement: fingertip sensors give precise local contact details, while wrist-mounted force–torque sensing gives a global interaction measurement but cannot directly localize contact along a finger without additional modeling.
How contact sensing complements other exteroceptive sensors
Contact sensing is often the “last meter” of perception: cameras and range sensors guide the robot toward a target, and contact sensors confirm and regulate the final interaction. A practical example is grasping: vision estimates an object pose and plans an approach, while tactile sensing confirms contact timing, corrects grasp alignment, and detects slip during lifting.
Conceptual Questions
Question 1: Why can a contact sensor be considered exteroceptive even though it is mounted on the robot?
Question 2: A bumper switch on a mobile robot is most directly used for which purpose?
Question 3: Which task benefits most from a tactile array (many taxels) compared to a single force sensor?
Question 4: A wrist-mounted force–torque (F/T) sensor reports a large net force during a grasp. Which situation could produce a similar reading even if the object is not securely grasped?
Question 5: Why do high-resolution tactile skins often create system-level challenges beyond the sensor physics?
Further exploration
Rangefinders
Rangefinders are a family of exteroceptive sensors that provide measurements of the distance between the robot and objects in its environment. These sensors are vital for tasks such as obstacle avoidance, mapping, localization, and autonomous navigation. Unlike contact sensors, which rely on physical interaction, rangefinders gather data from a distance and are typically used to understand the environment beyond the immediate vicinity of the robot.

LiDAR-based rangefinders are used in various robotic applications for obstacle detection, navigation, and spatial mapping. Source: https://news.panasonic.com/global/stories/805
In the image above, we see the principle of LiDAR technology, where the sensor emits a laser beam towards an object. The time it takes for the laser to travel to the object and return to the sensor is measured, and this time delay is used to calculate the distance. This technology is commonly used in autonomous vehicles and industrial robots for tasks like obstacle detection and spatial mapping. By measuring distances to surrounding objects, it helps robots understand their environment and make decisions about their movement.
Rangefinders operate on the principle of time-of-flight (ToF), where the sensor measures the time it takes for a signal (typically infrared or ultrasonic) to travel to an object and back. By knowing the speed of the signal and the round-trip time, the distance can be computed using the formula:
$$ \text{Distance} = \frac{c \cdot t}{2} $$
Where:
- $c$ is the propagation speed of the signal (the speed of light for laser and radio sensors, the speed of sound for ultrasonic sensors),
- $t$ is the round-trip travel time of the signal.
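The time-of-flight formula translates directly into code. The constants and the `tof_distance` helper below are illustrative, not a specific sensor API:

```python
SPEED_OF_LIGHT = 299_792_458.0   # m/s, for laser/radio time-of-flight
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 C (ultrasonic sensors)

def tof_distance(round_trip_time_s, signal_speed):
    """Distance = c * t / 2: the signal travels to the target and back."""
    return signal_speed * round_trip_time_s / 2.0

# An ultrasonic echo returning after 5.83 ms corresponds to about 1 m.
d_ultra = tof_distance(5.83e-3, SPEED_OF_SOUND)   # ~= 1.0 m
# A laser echo from the same 1 m target returns in only ~6.7 ns,
# which is why LiDAR electronics need very precise timing.
d_laser = tof_distance(2.0 / SPEED_OF_LIGHT, SPEED_OF_LIGHT)
```

The nanosecond-scale timing required for light-based ToF is one reason LiDAR hardware is more expensive than ultrasonic rangefinders.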
The key advantage of rangefinders is their ability to provide absolute distance measurements directly, without requiring complex image processing or external references, making them well-suited for both indoor and outdoor navigation tasks.
Types of Rangefinders
Rangefinders come in various technologies, each with its strengths and limitations. The most common types include:
- Ultrasonic Rangefinders: Use sound waves to measure distances. They are widely used in robotics for short-range applications due to their low cost and simplicity.
- Infrared (IR) Rangefinders: Measure distance using infrared light. They are compact, inexpensive, and often used in small robots and consumer devices.
- Laser Rangefinders (LiDAR): Use laser beams to measure distances with high precision and are widely used in autonomous vehicles and drones. They provide highly accurate 3D distance measurements and are often used in mapping and localization tasks.
Rangefinder Applications
Rangefinders are utilized in a variety of tasks across different robotic applications:
- Obstacle Detection and Avoidance: Robots use rangefinders to measure the distance to surrounding objects and avoid collisions by navigating around them.
- Mapping and Localization: Robots equipped with rangefinders can create detailed maps of their environment by collecting distance data from multiple points, allowing them to localize themselves within the map.
- Autonomous Navigation: Rangefinders support path planning and trajectory control by continuously measuring distances to obstacles and ensuring safe navigation through dynamic environments.
Practical Design Trade-offs
When selecting a rangefinder for a robot, there are several design considerations and trade-offs to take into account:
- Range vs. Resolution: High-resolution rangefinders can measure small distance variations, but their range may be limited. Conversely, long-range sensors may offer less fine-grained resolution.
- Accuracy vs. Cost: Ultrasonic and IR sensors are less expensive but provide lower accuracy compared to more advanced technologies like LiDAR.
- Environmental Sensitivity: Some rangefinders are affected by environmental conditions such as lighting (for IR sensors) or weather (for LiDAR), so robustness to different conditions is a key factor in sensor selection.
- Speed vs. Noise: Fast measurement systems can capture rapid changes but may suffer from higher noise, while slower, averaged measurements offer better stability but may be unsuitable for fast-moving robots.
Why Rangefinders Matter in Practice
Rangefinders are essential for building an accurate and reliable model of the robot’s surroundings. By providing real-time distance data, they allow the robot to make informed decisions about its movement and interactions. For example, LiDAR enables high-precision navigation in environments that are too complex or dynamic for simpler sensors like ultrasonic or IR rangefinders.
Additionally, rangefinders support sensor fusion, where their data is combined with information from other exteroceptive sensors (e.g., cameras, IMUs) to create a richer, more reliable understanding of the environment. This fusion improves performance in complex scenarios such as indoor navigation, dynamic obstacle avoidance, and multi-robot coordination.
Conceptual Questions
Question 1: What is the primary advantage of using a rangefinder compared to a camera in robotic applications?
Question 2: Which type of rangefinder is most commonly used for precise 3D mapping in autonomous vehicles?
Question 3: What is the main limitation of ultrasonic rangefinders in practical robotics applications?
Question 4: Why is sensor fusion important when using rangefinders on robots?
Question 5: What trade-off must be considered when choosing between ultrasonic and LiDAR rangefinders for a robot?
Further exploration
Satellite-Based Positioning: GPS and GNSS

Source: ESA, https://www.esa.int/Applications/Satellite_navigation/How_satellite_navigation_works
Satellite-based positioning is an exteroceptive sensing modality because it estimates a robot’s position by observing external signals transmitted from satellites. The global navigation satellite system (GNSS) is the umbrella term for satellite constellations that provide this service, while the global positioning system (GPS) is the most widely used instance (NAVSTAR). GNSS provides an estimate of 3D position in absolute coordinates, plus a precise time and date reference, as long as satellite signals can be received reliably.
Robotics uses GNSS heavily in outdoor navigation because it provides a globally referenced position that does not drift with time in the same way that pure inertial sensing does. For many field robots, GNSS is the primary external reference used to correct accumulated drift from inertial measurement units (IMUs).
Video introduction
Here is a short optional video explaining GPS.
How GPS Works 🛰️ What is GPS? YouTube video, 19.04.2023. Available at: https://www.youtube.com/watch?v=AlHPDRQ08jU
Core idea: position from time-of-flight
GNSS works by measuring how long radio signals take to travel from satellites to a receiver. If signal propagation time were known perfectly, the distance to each satellite could be computed and the receiver position could be determined by trilateration (intersection of spheres in 3D). In practice, the receiver does not carry an atomic clock like the satellites do, so the measured distances are pseudo-ranges that include clock bias and other errors.
A common measurement model for satellite $i$ is:
$$ \rho_i = \lVert \mathbf{r} - \mathbf{s}_i \rVert + c\,\delta t + \varepsilon_i . $$
- $\rho_i$: pseudo-range measurement to satellite $i$ (meters)
- $\mathbf{r}$: receiver position (meters, in a chosen reference frame)
- $\mathbf{s}_i$: satellite position (meters, same frame as $\mathbf{r}$)
- $c$: speed of light (meters per second)
- $\delta t$: receiver clock bias (seconds)
- $\varepsilon_i$: residual errors (atmosphere, multipath, noise, ephemeris uncertainty)
Because there are four unknowns in the simplest case (3D position plus clock bias), a position fix typically requires at least four satellites in view. GNSS receivers solve this using estimation algorithms (commonly Kalman-filter-based) and satellite broadcast information (including satellite position and timing).

GPS trilateration concept (2D sketch). In 3D, each pseudo-range constrains the receiver to a sphere; multiple spheres intersect at the receiver position. Source: Springer Handbook of Robotics, Fig. 29.8.
From geometry to equations (trilateration).
In the figure above, each emitter (satellite) at known position $\mathbf{s}_i$ defines a sphere with radius equal to the measured distance $d_i$ to the receiver. Ignoring errors for intuition, the idealized relation is
$$ \lVert \mathbf{r} - \mathbf{s}_i \rVert = d_i . $$
Including receiver clock bias and other effects, the measured distance becomes a pseudo-range,
$$ d_i = \rho_i = \lVert \mathbf{r} - \mathbf{s}_i \rVert + c\,\delta t + \varepsilon_i . $$
Rearranging for the geometric range gives
$$ \lVert \mathbf{r} - \mathbf{s}_i \rVert = \rho_i - c\,\delta t - \varepsilon_i. $$
Squaring both sides gives a quadratic equation in the unknown receiver position $\mathbf{r} = [x\ y\ z]^T$:
$$ (x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2 = (\rho_i - c\,\delta t - \varepsilon_i)^2 . $$
With multiple satellites, this yields a system of nonlinear equations, one per satellite. Subtracting one equation from another eliminates the squared terms and leads to equations that are approximately linear in $(x, y, z, \delta t)$ around an initial guess. In practice, receivers solve this system using iterative least-squares or Kalman filtering.
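The iterative least-squares idea can be sketched as a small Gauss-Newton solver on synthetic, error-free pseudo-ranges. The satellite positions and the `solve_gnss_fix` helper are invented for illustration; real receivers add weighting, outlier rejection, and filtering:

```python
import numpy as np

def solve_gnss_fix(sat_positions, pseudo_ranges, iterations=20):
    """Gauss-Newton solve for receiver position and clock-bias range c*dt.

    sat_positions: (N, 3) satellite positions in meters.
    pseudo_ranges: (N,) measured pseudo-ranges in meters.
    Returns (position (3,), clock_bias_range in meters).
    """
    x = np.zeros(4)  # unknowns [x, y, z, c*dt], initial guess at Earth center
    for _ in range(iterations):
        diffs = x[:3] - sat_positions            # receiver - satellite vectors
        ranges = np.linalg.norm(diffs, axis=1)   # predicted geometric ranges
        predicted = ranges + x[3]                # add clock-bias term
        residuals = pseudo_ranges - predicted
        # Jacobian: unit line-of-sight vectors plus a column of ones for c*dt.
        H = np.hstack([diffs / ranges[:, None], np.ones((len(ranges), 1))])
        x += np.linalg.lstsq(H, residuals, rcond=None)[0]
    return x[:3], x[3]

# Synthetic example: four satellites at ~20,000 km, receiver near the surface.
sats = np.array([
    [20.2e6, 0.0, 0.0],
    [0.0, 20.2e6, 0.0],
    [0.0, 0.0, 20.2e6],
    [12.0e6, 12.0e6, 12.0e6],
])
true_pos = np.array([6.371e6, 1.0e5, 2.0e5])
true_bias = 9000.0   # c * receiver clock offset, expressed in meters
rho = np.linalg.norm(sats - true_pos, axis=1) + true_bias
pos, bias = solve_gnss_fix(sats, rho)
```

With exactly four satellites and noise-free measurements the solver recovers both the position and the clock-bias term, matching the four-unknowns argument above.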
Geometric intuition.
- One satellite → receiver lies somewhere on a sphere
- Two satellites → intersection of two spheres (a circle)
- Three satellites → two possible points (in 3D)
- Four satellites → unique solution for position and clock bias
This geometric requirement is why a minimum of four satellites is needed for a full 3D GNSS fix.
Conceptual Questions
Question 1: Why does GNSS need at least four satellites to solve for 3D position?
Question 2: What makes a pseudo-range different from a true geometric range?
Question 3: In the pseudo-range equation, what physical effect does the term $c\,\delta t$ represent?
Satellite constellations and signals

NAVSTAR constellation. Source: https://www.defenseindustrydaily.com/the-gps-constellation-now-and-future-01069/.
The NAVSTAR GPS constellation is built around a baseline of 24 satellites (with additional operational satellites often present) in medium Earth orbit, arranged so that most locations have four or more satellites visible when the sky is unobstructed. GNSS receivers may also use other constellations such as GLONASS and Galileo, and multi-constellation reception is now common in consumer and robotic hardware.
GPS satellites broadcast navigation signals on specific radio frequency bands. The primary civilian signal is the coarse-acquisition (C/A) code transmitted on the L1 band, centered at 1575.42 MHz. The term L1 simply refers to this designated carrier frequency used by GPS satellites for timing and ranging measurements.

GNSS frequency bands and signals. Source: Tualcom, https://www.tualcom.com/gnss-frequency-bands-and-signals/
Historically, most civilian receivers relied only on the L1 signal. Modern GNSS receivers often track multiple frequency bands, which allows them to estimate and compensate for ionospheric delay by comparing how different signal frequencies experience different amounts of propagation delay as they pass through the ionosphere. This frequency dependence makes it possible to reduce a major source of range error and improves positioning accuracy and robustness.
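A minimal sketch of the dual-frequency idea: because first-order ionospheric delay scales with $1/f^2$, a weighted combination of measurements on two frequencies cancels it. The helper name and the synthetic delay values are assumptions for illustration:

```python
F_L1 = 1575.42e6  # Hz, GPS L1 carrier frequency
F_L2 = 1227.60e6  # Hz, GPS L2 carrier frequency

def ionosphere_free_range(p1, p2, f1=F_L1, f2=F_L2):
    """Ionosphere-free pseudo-range combination of two-frequency measurements."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Synthetic check: same geometric range, frequency-dependent iono delay.
true_range = 22_000_000.0                # meters
iono_l1 = 5.0                            # meters of delay on L1
iono_l2 = iono_l1 * (F_L1 / F_L2) ** 2   # larger delay on the lower frequency
p_if = ionosphere_free_range(true_range + iono_l1, true_range + iono_l2)
# p_if recovers the geometric range: the ionospheric term cancels.
```

The cancellation is exact only for the first-order (1/f^2) part of the delay; higher-order terms and other errors remain.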
A key geometric idea is that accuracy depends not only on noise level, but also on satellite geometry. Satellites spread widely across the sky provide a better-conditioned solution than satellites clustered in one direction.
Conceptual Questions
Question 1: Why does satellite geometry matter even if measurement noise is unchanged?
Question 2: Why can dual-frequency or multi-frequency reception improve accuracy?
Question 3: What practical environmental feature most directly breaks the GNSS assumption of line-of-sight reception?
Performance, failure modes, and practical limitations
Baseline accuracy. Under typical conditions and without specialized enhancements, standard GPS accuracy is on the order of 20 to 25 m horizontally and about 43 m vertically, with a typical fix rate of 1 Hz (though faster or slower rates are possible).
Dominant error sources in robotics deployments:
- Line-of-sight obstruction: buildings, trees, mountains, canyons, and indoor environments can block satellites or reduce usable geometry.
- Atmospheric effects: ionosphere and troposphere introduce propagation delays that vary with conditions.
- Multipath: reflections from buildings, canyon walls, terrain, or the ground can delay signals and bias pseudo-ranges. Specialized receiver techniques and antennas can mitigate some multipath, but short-delay ground reflections are particularly difficult.
- Geometry metrics (DOP/PDOP): dilution of precision (DOP), especially positional DOP (PDOP), captures how measurement errors map into position errors. Receivers often use PDOP-driven satellite selection and periodically recompute it.
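The DOP idea can be sketched by building the linearized geometry matrix and inverting its normal equations. The satellite directions below are synthetic, chosen only to contrast a spread versus a clustered sky view:

```python
import numpy as np

def pdop(receiver_pos, sat_positions):
    """Position dilution of precision from receiver/satellite geometry.

    Builds the linearized geometry matrix (unit line-of-sight vectors plus a
    clock column) and returns the square root of the position part of
    (G^T G)^{-1}.
    """
    los = np.asarray(sat_positions) - np.asarray(receiver_pos)
    units = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([units, np.ones((len(units), 1))])   # one row per satellite
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2]))

rx = np.zeros(3)
# Satellites spread across the sky at varied elevations (good geometry).
spread = 2.0e7 * np.array([
    [ 1.0,  0.0, 0.3],
    [-1.0,  0.0, 0.3],
    [ 0.0,  1.0, 0.3],
    [ 0.0, -1.0, 0.3],
    [ 0.0,  0.0, 1.0],
])
# Satellites clustered in one direction (poor geometry).
clustered = 2.0e7 * np.array([
    [1.00,  0.00, 0.30],
    [0.95,  0.05, 0.30],
    [0.90,  0.10, 0.35],
    [0.92, -0.05, 0.28],
    [0.97,  0.02, 0.33],
])
p_good = pdop(rx, spread)
p_bad = pdop(rx, clustered)
```

Even with identical per-measurement noise, the clustered configuration yields a much larger PDOP, i.e. the same ranging errors map into much larger position errors.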
Important implication for robot safety: GNSS can sometimes produce wildly incorrect estimates when conditions degrade (for example due to multipath or poor geometry). Robust navigation stacks therefore treat GNSS as one input among several, with sanity checks and estimator consistency tests.
Conceptual Questions
Question 1: Why is vertical GNSS accuracy typically worse than horizontal accuracy?
Question 2: What is multipath, and why can it bias position estimates rather than just adding noise?
Question 3: What does a high PDOP value suggest about the current satellite configuration?
Augmentation and “enhanced GPS” options
Many systems improve GNSS accuracy by providing external corrections or by using carrier-phase information.
- Satellite-based augmentation systems (SBAS)
A satellite-based augmentation system (SBAS) is an enhancement layer built on top of GNSS. SBAS uses a network of ground reference stations at precisely surveyed locations to monitor GNSS satellite signals. These stations estimate common error sources, such as satellite clock errors, orbit (ephemeris) errors, and ionospheric delay, and broadcast correction messages to users via geostationary satellites.
One example of SBAS is the Wide Area Augmentation System (WAAS), operated in North America. When SBAS corrections are available, horizontal positioning accuracy can improve from roughly 10–12 m (standalone GPS) to about 1–2 m within the system’s coverage region.
- Differential GPS (DGPS)
Differential GPS (DGPS) improves positioning accuracy by using a reference receiver placed at a precisely surveyed, fixed location. Because the true position of this reference is known, it can estimate the current GNSS errors affecting its measurements (such as satellite timing and atmospheric delays). These estimated errors are then transmitted to nearby robot receivers, which apply the same corrections to their own measurements.
This approach works well only when the robot is close to the reference station, since many GNSS errors vary gradually with location. As the distance increases, the reference errors no longer match the robot’s local errors, and correction effectiveness decreases.
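A minimal sketch of the DGPS correction flow, with synthetic numbers and invented helper names; it assumes the rover is close enough to the reference station that the common-mode errors match:

```python
def dgps_corrections(ref_true_ranges, ref_measured_pseudo_ranges):
    """Per-satellite corrections estimated at a surveyed reference station."""
    return [m - t for t, m in zip(ref_true_ranges, ref_measured_pseudo_ranges)]

def apply_corrections(rover_pseudo_ranges, corrections):
    """Rover subtracts the reference station's estimated common-mode errors."""
    return [r - c for r, c in zip(rover_pseudo_ranges, corrections)]

# Synthetic example: the same 7 m error (clock + atmosphere) affects both
# receivers because they are close together.
common_error = 7.0
ref_true = [20_500_000.0, 21_200_000.0]
ref_meas = [t + common_error for t in ref_true]
rover_true = [20_500_120.0, 21_199_950.0]
rover_meas = [t + common_error for t in rover_true]
corrected = apply_corrections(rover_meas, dgps_corrections(ref_true, ref_meas))
# corrected now matches the rover's true geometric ranges.
```

If the rover were far from the reference station, `common_error` would differ between the two receivers and the subtraction would leave a residual, which is exactly the distance-dependent degradation described above.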
- Receiver Autonomous Integrity Monitoring (RAIM)
Receiver Autonomous Integrity Monitoring (RAIM) is a technique that allows a GNSS receiver to detect faulty measurements without relying on external corrections. The receiver computes multiple position solutions using different combinations of visible satellites and checks whether these solutions are mutually consistent.
If one satellite is providing incorrect data, the solutions will disagree, allowing the receiver to detect (and sometimes exclude) the faulty measurement. RAIM requires more satellites than the minimum needed for positioning, since this redundancy is essential for consistency checks and integrity monitoring.
- Real-Time Kinematic positioning (RTK)
Real-Time Kinematic positioning (RTK) is a high-precision GNSS technique that uses not only the navigation code, but also the carrier signal itself. GNSS signals are transmitted as radio waves at a known frequency. This underlying radio wave is called the carrier, and its oscillation is much faster than the navigation code modulated on top of it.
The carrier phase is the position within this repeating wave cycle (for example, whether the wave is at a peak, trough, or somewhere in between) when it arrives at the receiver. By tracking this phase very precisely, the receiver can measure changes in distance with millimeter-level resolution.
However, the receiver does not know how many full carrier wavelengths lie between the satellite and the receiver; only the fractional part of the current wave cycle can be observed directly. Determining this unknown whole-number count is known as resolving the integer ambiguity.
RTK combines carrier-phase measurements with corrections from a nearby base station to determine this integer number of wavelengths. Once the integer ambiguity is resolved, the distance between satellite and receiver can be estimated with very high precision, enabling centimeter-level horizontal positioning accuracy under good signal conditions. This level of precision is why RTK is often referred to as “survey grade.”
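The carrier-phase range model can be written down directly. The cycle count and fractional phase below are invented purely to illustrate scale; the L1 wavelength of roughly 19 cm is what makes sub-centimeter resolution possible once the integer ambiguity is fixed:

```python
SPEED_OF_LIGHT = 299_792_458.0
F_L1 = 1575.42e6
WAVELENGTH_L1 = SPEED_OF_LIGHT / F_L1   # ~0.19 m per carrier cycle

def carrier_phase_range(n_integer, fractional_phase, wavelength=WAVELENGTH_L1):
    """Range implied by a carrier-phase measurement once the integer
    ambiguity (whole number of wavelengths) has been resolved."""
    return (n_integer + fractional_phase) * wavelength

# The receiver directly observes only the fractional phase (say 0.37 of a
# cycle). If ambiguity resolution determines N = 105,263,158 whole cycles
# (an illustrative number), the range follows at sub-centimeter resolution:
rng = carrier_phase_range(105_263_158, 0.37)
```

A change of 0.01 cycle in the observed phase corresponds to about 2 mm of range, which is why carrier-phase tracking is so much more precise than code-based pseudo-ranging.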
Conceptual Questions
Question 1: What additional information does RTK exploit that basic GNSS pseudo-range methods do not?
Question 2: Why do DGPS and RTK accuracy degrade as the robot moves farther from the reference station?
Question 3: What is the difference between “accuracy” and “integrity,” and which does RAIM primarily address?
GNSS in a full navigation system: integration with IMU
GNSS provides strong absolute position references but has three major limitations for robot state estimation:
- Orientation is not directly measured (yaw, and often pitch/roll require other sensors or motion-based inference).
- Measurements are discrete and can be delayed, so they do not provide a continuous state estimate at control rate.
- Fixes can be unavailable (indoors, underwater, heavy canopy, deep urban canyons).
A common solution is a GPS-aided inertial navigation system (GPS/INS), where an extended Kalman filter (EKF) or factor-graph estimator fuses:
- IMU: high-rate motion propagation (but drifting)
- GNSS: low-rate absolute position correction (but occasionally unavailable or biased)
The IMU “bridges” between GNSS updates, and GNSS constrains long-term drift. This complementarity is a standard pattern in mobile robotics.
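A deliberately simplified 1D sketch of this complementarity, using a fixed-gain correction rather than a full EKF (the bias value, rates, and gain are assumptions for illustration): the IMU propagates the state at high rate and accumulates drift from an accelerometer bias, while periodic GNSS fixes pull the position estimate back:

```python
def fuse_step_imu(state, accel, dt):
    """High-rate propagation: integrate IMU acceleration into (pos, vel)."""
    pos, vel = state
    return (pos + vel * dt + 0.5 * accel * dt**2, vel + accel * dt)

def fuse_step_gnss(state, gnss_pos, gain=0.5):
    """Low-rate correction: pull the drifting estimate toward the GNSS fix."""
    pos, vel = state
    return (pos + gain * (gnss_pos - pos), vel)

# Robot moving at a constant 1 m/s; the IMU has a small accelerometer bias
# that would cause unbounded position drift without GNSS corrections.
state = (0.0, 1.0)        # initial (position m, velocity m/s)
bias = 0.05               # m/s^2 accelerometer bias (assumed, for illustration)
for step in range(1, 501):               # 500 IMU steps at 100 Hz (5 s total)
    state = fuse_step_imu(state, 0.0 + bias, 0.01)
    if step % 100 == 0:                  # GNSS fix arrives at 1 Hz
        true_pos = step * 0.01 * 1.0     # ground-truth position at this time
        state = fuse_step_gnss(state, true_pos)
```

Even with this crude fixed gain, the position error stays bounded between fixes instead of growing quadratically; a real GPS/INS estimator would also correct velocity and estimate the bias itself.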
Practical note: antenna lever arm. GNSS measures the position of the antenna, not the robot body origin. If the antenna is far from the IMU or vehicle reference point, that offset (lever arm) must be modeled, or apparent position changes can create orientation and stability issues in the fused solution.
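The lever-arm compensation can be sketched in the planar (yaw-only) case; the function name and offsets are illustrative:

```python
import math

def antenna_to_body(antenna_pos, yaw, lever_arm_body):
    """Convert a GNSS antenna position to the body origin (2D, yaw-only).

    lever_arm_body: antenna offset expressed in the robot body frame.
    """
    c, s = math.cos(yaw), math.sin(yaw)
    lx, ly = lever_arm_body
    # Rotate the body-frame lever arm into the world frame, then subtract it.
    return (antenna_pos[0] - (c * lx - s * ly),
            antenna_pos[1] - (s * lx + c * ly))

# Antenna mounted 0.5 m ahead of the body origin; robot heading 90 degrees.
body = antenna_to_body((10.0, 5.5), math.pi / 2, (0.5, 0.0))
# body ~= (10.0, 5.0): the apparent GNSS position depends on heading,
# so ignoring the lever arm injects heading-dependent position error.
```

In 3D the same idea applies with a full rotation matrix, and the offset must be re-applied whenever the estimated orientation changes.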
Conceptual Questions
Question 1: Why does GNSS not directly provide full robot pose (position and orientation)?
Question 2: In a GNSS-IMU EKF, what role does the IMU play when GNSS is temporarily unavailable?
Question 3: Why does antenna placement relative to the IMU matter in fused navigation?
Further exploration
- Overview of GNSS concepts: Wikipedia - GNSS
- Differential corrections and augmentation: Wikipedia - Differential GPS
- High-precision methods: Wikipedia - Real-time kinematic positioning
- Receiver output format often seen in robotics: Wikipedia - NMEA 0183
Chapter wrap-up conceptual questions
Question 1: A robot drives from open sky into a street canyon between tall buildings. Which GNSS error mechanisms become more severe, and why?
Question 2: Why can GNSS provide long-term global accuracy while IMU-only dead reckoning drifts over time?
Question 3: Why is RTK often described as “survey grade,” and what hidden assumptions must hold for centimeter-level accuracy?
Question 4: Which GNSS outputs and metadata should a navigation stack log for debugging estimator failures (at minimum)?
- Cameras
- Environmental sensors (temperature, light, gas, chemicals)
Multisensor Data Fusion
- Probabilistic grids
- The Kalman Filter
- Sequential Monte Carlo Methods
Sensor Selection and Integration
- Defining requirements
- Mechanical, electrical & software integration
- EMI, thermal, and environmental considerations
- Safety, redundancy, fail‑safe design
- Maintenance & recalibration schedules
Programming
Credits
Resources
Books
- Springer Handbook of Robotics (Sensing and Estimation)
- Springer Handbook of Robotics (Range Sensors)
- Springer Handbook of Robotics (Multisensor Data Fusion)
Videos
- Sensors and Perception (University of Cambridge)