The approach phase is one of the most safety-critical segments of a civil aircraft flight. Within the framework of Performance-Based Navigation (PBN), navigation systems must satisfy strict requirements in terms of accuracy, availability, continuity and integrity. These constraints become particularly stringent during the final segment of a precision approach, which extends from the Final Approach Point, approximately 7 nautical miles from the runway threshold, down to the decision altitude.
Aircraft guidance during this phase traditionally relies on the Instrument Landing System (ILS) or on Global Navigation Satellite Systems (GNSS) augmented by space-based (SBAS) or ground-based (GBAS) augmentation systems. However, conventional radionavigation infrastructures are progressively being reduced to a Minimum Operational Network intended to mitigate large-scale GNSS outages. As a result, modern precision approaches increasingly depend on augmented GNSS solutions. In practice, the radio-frequency environment around airports may be affected by Radio Frequency Interference (RFI), which can degrade or interrupt GNSS signals. Such disruptions may force the aircraft to abandon the approach and revert to the remaining conventional navigation aids. Ensuring operational continuity therefore requires complementary sensors that are passive and robust to RF disturbances.
Optical sensors constitute promising candidates, particularly during the approach phase when the aircraft operates close to the ground and the visual environment provides rich navigation data. Although commercial aircraft are already equipped with onboard cameras to enhance pilot situational awareness during approach, landing and taxiing, these sensors rarely provide operational credit, and their potential remains largely underutilised.
Vision-based navigation relative to the runway has attracted increasing research interest. The European Japanese VISION project developed a hybrid inertial-GNSS-vision navigation system based on an error-state Kalman filter accounting for image processing delays. The C2Land project, led by the Institute of Flight Guidance at Technische Universität Braunschweig, investigates autonomous landing at airports without ground infrastructure by fusing optical and inertial data with non-augmented GNSS. Flight experiments conducted within this project represent some of the most advanced demonstrations of vision-based navigation systems.
Despite these developments, integrating cameras into safety-critical navigation architectures raises important integrity challenges. Vision sensors introduce new failure modes that must be incorporated into the integrity monitoring framework with appropriate risk allocation. However, integrity monitoring methods for vision-based navigation remain relatively limited. Many approaches adapt algorithms originally designed for GNSS, such as RAIM-based techniques using synthetic measurements derived from visual landmarks, batch implementations, or extensions of AIME using multiple optical sensors. More recent work proposed protection level formulations for hybrid inertial-vision-GNSS systems considering multiple fault modes.
However, as highlighted in earlier research, the direct application of GNSS integrity methods to vision measurements is generally suboptimal due to the specific characteristics of optical observations and the limited availability of statistical models describing their integrity behaviour. This lack of operational experience complicates compliance with the stringent integrity requirements of civil aviation precision approaches, as it requires conservative assumptions.
This study builds upon the hybrid inertial-vision-GNSS system introduced in earlier research, which is designed to ultimately comply with the performance requirements of a PBN CAT I precision approach. It aims to characterise the impact of vision integration on continuity and integrity requirements and to derive false alarm and missed detection probabilities that an integrity monitoring algorithm must verify.
The navigation system considered in this article is designed to support PBN CAT I precision approach operations. The system is set in the context of a radio frequency environment potentially disturbed by jamming or spoofing, resulting in potential GNSS service loss of continuity or unavailability. In the PBN framework, any such GNSS event during a precision approach would trigger a navigation system alert, requiring the pilot to initiate a missed approach procedure.
The hybrid navigation system integrates measurements from four distinct sensors:
• A navigation-grade inertial measurement unit (IMU) providing high-quality angular and velocity increments
• A GNSS receiver processing satellite signals (Signal-In-Space) and SBAS corrections to compute a 3D position
• A barometric altimeter used to stabilise the IMU’s vertical channel, supplying altitude information
• A vision system composed of one or more imaging sensors (e.g. monocular, stereo, infrared) and an image processing unit
The selected vision-based navigation approach relies on landmark-based positioning. The optical sensors observe the aircraft’s environment, referred to as the scene, and specifically detect the runway, from which one or more landmarks are extracted. The 3D positions of these landmarks are assumed to be known a priori and retrieved from the Aeronautical Information Publication (AIP). By associating each landmark with its known coordinates, a line-of-sight vector between the camera and the landmark can be reconstructed. This line-of-sight serves as the vision measurement input to the estimation process. A tightly coupled integration scheme is therefore considered in this architecture. The data fusion and state estimation process is based on an error-state Kalman filter. The filter’s structure, along with the mathematical modelling of its propagation and measurement equations, are detailed in earlier research.
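The landmark line-of-sight reconstruction described above can be sketched as follows, assuming a calibrated pinhole camera. The intrinsic matrix, pixel coordinates, poses and function names are purely illustrative assumptions, not taken from the actual system.

```python
import numpy as np

def los_from_pixel(u, v, K):
    """Unit line-of-sight vector in the camera frame, back-projected from
    a detected landmark's pixel coordinates (u, v) with a calibrated
    pinhole camera of intrinsic matrix K (illustrative interface)."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # back-project the pixel
    return ray / np.linalg.norm(ray)                 # normalise to unit length

def predicted_los(p_cam, p_landmark, R_cam_from_nav):
    """Predicted unit line-of-sight in the camera frame, from the estimated
    camera position p_cam and the known landmark position p_landmark (both
    in the navigation frame, e.g. from the AIP), with R_cam_from_nav the
    rotation from navigation frame to camera frame."""
    d = p_landmark - p_cam
    return R_cam_from_nav @ (d / np.linalg.norm(d))

# Illustrative example: runway-threshold landmark at the origin, camera
# 1000 m short of the threshold and 100 m above it, identity attitude.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
z_meas = los_from_pixel(700.0, 400.0, K)
z_pred = predicted_los(np.array([-1000.0, 0.0, 100.0]),
                       np.zeros(3),
                       np.eye(3))
innovation = z_meas - z_pred  # drives the error-state Kalman filter update
```

In a tightly coupled scheme such as the one described above, it is this innovation (measured minus predicted line-of-sight) that enters the error-state Kalman filter measurement update.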
The hybrid navigation system provides the guidance system with estimates of key navigation parameters, including position, velocity and attitude. It is also designed to provide integrity monitoring and issue alerts in the event of a continuity loss. In parallel, a predefined flight path is derived from a waypoint database and provided to aircraft guidance. This guidance is ultimately used by the flight crew.
A straightforward extension of an SBAS-augmented inertial-GNSS navigation system consists of integrating vision measurements within a triple inertial-GNSS-vision hybrid architecture. Such integration can significantly improve continuity of service because vision measurements can compensate for temporary GNSS outages. In this configuration, a loss of continuity would only occur if both GNSS and vision measurements become unavailable simultaneously. This capability is particularly valuable given the increasing vulnerability of GNSS to RFI.
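Under the assumption that GNSS outages (e.g. RFI) and vision outages (e.g. optical degradation) are statistically independent, the combined continuity risk is simply the product of the individual outage probabilities. A minimal sketch, using illustrative per-approach values consistent with the allocations discussed later in this article:

```python
# Continuity loss requires simultaneous GNSS and vision unavailability,
# assuming independent outage mechanisms (RFI vs. optical degradation).
p_gnss_out = 1e-3    # illustrative GNSS SIS continuity risk per approach
p_vision_out = 8e-2  # illustrative vision-observation continuity risk
p_loss = p_gnss_out * p_vision_out  # combined risk per approach
```

The product structure is what allows each individual sensor budget to be relaxed while the combined requirement is still met.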
However, the introduction of vision also brings additional failure modes that must be considered in the integrity risk allocation. When these failure modes are incorporated into the integrity framework, they may inadvertently tighten the integrity requirements associated with the SBAS-augmented GNSS subsystem. Consequently, improving continuity through sensor redundancy does not automatically translate into improved system integrity and may even degrade it if failure dependencies are not properly managed. The limitations of such triple-hybrid architectures are discussed in earlier research.
The integration of vision into an inertial-GNSS hybrid navigation system introduces a fundamental technical challenge arising from two partially conflicting objectives:
• To increase the continuity of service by leveraging vision measurements to bridge potential GNSS service losses
• To ensure this integration does not increase the integrity requirements allocated to the SBAS-augmented GNSS system
To resolve this trade-off, this work proposes a dual-navigation architecture in which vision measurements are integrated without increasing the integrity constraints imposed on the GNSS subsystem. The core principle of this architecture lies in the implementation of two parallel navigation solutions.
The first, referred to as the Main Navigation, relies solely on measurements from the GNSS, the navigation-grade IMU and the barometric altimeter, deliberately excluding any vision data. As such, this navigation chain corresponds to a state-of-the-art SBAS-augmented inertial-GNSS navigation system.
In contrast, the second solution, referred to as the Vision Navigation, uses only the IMU, barometric altimeter and vision-based measurements, excluding any GNSS inputs. It thus forms a pure inertial-vision navigation system.
During a precision approach conducted by a civil aircraft, the navigation outputs, comprising the estimated navigation states (position, velocity and attitude) as well as the associated integrity monitoring functions and alerts, are provided by either the Main Navigation or the Vision Navigation subsystem. By default, the system delivers navigation outputs from the Main Navigation as long as the SBAS-augmented GNSS service is available. When the GNSS service becomes unavailable and is formally declared out of service, the navigation outputs are transferred to those generated by the Vision Navigation. This transition is handled by a dedicated switching mechanism whose operation is governed by the availability status of the augmented GNSS service.
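The switching mechanism can be sketched as a simple selector driven by the declared GNSS service status. The types and field names below are hypothetical, intended only to illustrate the rule that Vision Navigation outputs are exposed only after a formal GNSS service loss.

```python
from dataclasses import dataclass
from enum import Enum

class ActiveBranch(Enum):
    MAIN = "inertial-GNSS (SBAS-augmented)"
    VISION = "inertial-vision"

@dataclass
class NavOutput:
    position: tuple          # estimated 3D position
    velocity: tuple          # estimated 3D velocity
    attitude: tuple          # estimated attitude angles
    protection_level: float  # from the branch's own integrity monitor

def select_output(gnss_in_service: bool,
                  main_out: NavOutput,
                  vision_out: NavOutput) -> tuple[ActiveBranch, NavOutput]:
    """Switching rule: Main Navigation is used whenever the SBAS-augmented
    GNSS service is available; only a formally declared GNSS service loss
    transfers the outputs to the Vision Navigation branch."""
    if gnss_in_service:
        return ActiveBranch.MAIN, main_out
    return ActiveBranch.VISION, vision_out
```

Consistent with the transparency property described later, the returned NavOutput carries no indication of which branch produced it.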
The use of two parallel navigation solutions therefore enables a clear separation of integrity risks associated with GNSS and vision within their respective navigation chains.
The proposed architecture benefits from the well-established performance of the inertial-GNSS hybrid system as long as GNSS signals are available, thereby maintaining the integrity of a navigation solution that has already been extensively validated. At the same time, it ensures continuity of service in the event of a GNSS outage by incorporating vision-based measurements into the overall navigation process. From the user’s perspective, the system continues to provide the required navigation information without indicating whether it originates from the main or vision-based navigation branch.
Integrity and continuity allocations for the hybrid navigation system are analysed using fault/risk allocation trees that describe the logical relationships between failure modes and their causes. The interpretation and computation rules of these trees are defined as described earlier.
Integrity represents the level of trust in the correctness of the navigation information and includes the system’s ability to provide timely alerts. An integrity failure occurs when the Navigation System Error (NSE) exceeds the horizontal or vertical alert limits, producing a Hazardous Misleading Information (HMI) event. This event can be expressed as: P_HMI = P(|e| > AL, y ∈ Ω)
where e denotes navigation error, AL the operational alert limit, y the vector of measurements, and Ω the set of measurements considered consistent with the integrity monitor. The integrity risk corresponds to the probability that this event occurs without triggering an alert within the specified time-to-alert.
Continuity refers to the system’s ability to perform its function without interruption, assuming it is available at the beginning of the operation. Although a precision approach typically lasts about 150 seconds, the continuity risk defined as described earlier only concerns the final 15 seconds of the approach. Continuity loss events include integrity monitor alerts, unscheduled GNSS outages and RFI disturbances. From a fault detection perspective, these events are primarily driven by detection alarms, which are generally dominated by false alarms. GNSS outages occurring earlier in the approach are instead classified as losses of availability.
To derive the fault allocation tree for the proposed hybrid navigation system, a reference allocation model is first established based on an SBAS-augmented inertial-GNSS architecture. The resulting structure follows the fault allocation framework developed for SBAS-based APV and CAT I approaches as described in earlier research.
The top-level metric is the Target Level of Safety (TLS), defined as the acceptable probability of a hull-loss accident. For approach operations, the TLS is 1×10⁻⁸ per approach, assuming a standardised approach duration of 150 seconds. Considering that one catastrophic accident is associated with approximately 10 incidents, the incident-level risk budget becomes 1×10⁻⁷ per approach, which is allocated equally between the continuity and integrity branches.
To derive system-level requirements, an additional breakdown is required. This refinement incorporates the mitigating influence of the flight crew. Operational analyses indicate reduction factors of seven for integrity and 2,000 for continuity, reflecting the fact that continuity losses occurring during the final seconds of an approach can often be managed visually, whereas integrity failures may generate misleading guidance.
After applying these factors, the navigation system requirements for PBN CAT I approaches are 1×10⁻⁴ for continuity and 3.5×10⁻⁷ for integrity per approach.
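The allocation chain from the TLS down to the PBN CAT I navigation requirements reduces to a few multiplications, sketched below with the figures given above:

```python
tls = 1e-8                    # target level of safety per approach (hull loss)
risk_budget = 10 * tls        # ~10 incidents per catastrophic accident
integrity_branch = risk_budget / 2    # equal split between branches
continuity_branch = risk_budget / 2

# Crew mitigation relaxes the navigation-system requirements:
integrity_req = integrity_branch * 7        # factor 7  -> 3.5e-7 per approach
continuity_req = continuity_branch * 2000   # factor 2000 -> 1e-4 per approach
```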
These requirements are allocated between aircraft and non-aircraft subsystems.
• Aircraft subsystems include all onboard navigation components, such as the GNSS receiver hardware, timing modules and processing software. Failures originate from internal causes (hardware faults, power interruptions, interface failures). Compliance with the continuity and integrity requirements is the responsibility of the aircraft manufacturer or avionics supplier, who must demonstrate their equipment satisfies the allocated risk budgets. In certification, continuity compliance is commonly shown using Mean Time Between Failure (MTBF) analysis, whereas the integrity requirement may be validated through design assurance processes and fault detection mechanisms as defined by applicable certification standards.
• Non-aircraft subsystems correspond to external contributors affecting navigation performance. In an SBAS-augmented architecture, this branch is limited to SIS, including GNSS signals and SBAS corrections. The navigation system must therefore ensure compliance with these requirements through appropriate integrity monitoring. Because these non-aircraft requirements relate solely to the external environment, the onboard equipment, specifically the GNSS receiver, is assumed to be ideal, i.e. operating nominally without introducing failures within the measurements. Under this assumption, responsibility for meeting the allocated performance requirements resides with the onboard navigation system, specifically through its integrity monitoring algorithms. Consequently, the non-aircraft continuity and integrity requirements define the performance thresholds the navigation system must meet.
Following the same methodology, the vision subsystem is decomposed into aircraft and non-aircraft branches to clearly delineate responsibility boundaries. This classification enables the identification of risks that fall under the scope of the aircraft manufacturer versus those that must be addressed by the onboard navigation monitoring functions.
Aircraft continuity risks originate from failures of onboard hardware or software involved in the vision processing chain. Vision measurements are produced through two main stages: image acquisition by optical sensors and landmark detection using onboard image-processing algorithms.
Failures affecting either stage may interrupt the generation of vision measurements. Optical sensors can be affected by hardware faults such as lens contamination, power interruption or optical degradation, while the processing chain may suffer from processor failures or software crashes. In this study, an aircraft-level continuity loss is defined as any failure of the onboard vision subsystem to produce a runway landmark measurement, assuming the scene observability allows it.
The continuity requirement allocated to the vision function is 10⁻¹ per approach. This relatively relaxed constraint reflects common image degradation mechanisms such as lens contamination or water droplets. Compliance is verified through equipment reliability analysis (e.g. MTBF), and redundancy such as sensor triplication can be used to improve overall continuity performance.
Non-aircraft continuity risks correspond to environmental effects that degrade vision measurements while the onboard equipment operates nominally. In this context, the vision subsystem is assumed to produce at least one valid measurement. Under this assumption, continuity loss may occur when the navigation system monitoring declares an alarm, for instance when protection levels exceed the alert limits or when a measurement anomaly cannot be excluded.
For GNSS, environmental disturbances are captured within the SIS concept. In vision-based navigation, the equivalent disturbances arise from the optical environment, which affects the propagation of visible or infrared radiation between the runway and the camera. Environmental perturbations increasing measurement noise are generally referred to as photometric noise, and include poor illumination conditions or strong reflections from the runway surface. These effects increase measurement variance and protection levels, whereas large biases or outliers are addressed within the integrity monitoring framework.
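By analogy with GNSS fault-free protection levels, the effect of photometric noise can be illustrated with a simple PL = k·σ model; the k-factor, alert limit and σ values below are illustrative assumptions, not figures from this study.

```python
def protection_level(sigma, k=5.33):
    """Fault-free protection level by analogy with GNSS: PL = k * sigma,
    where k is a Gaussian quantile set by the allocated integrity risk
    (k = 5.33 is an illustrative value)."""
    return k * sigma

AL = 10.0            # illustrative alert limit, metres
sigma_nominal = 1.0  # nominal vision position error, metres
sigma_glare = 2.5    # inflated by photometric noise (e.g. strong reflections)

for s in (sigma_nominal, sigma_glare):
    pl = protection_level(s)
    status = "alarm (continuity loss)" if pl > AL else "available"
    print(f"sigma={s:.1f} m -> PL={pl:.2f} m: {status}")
```

This illustrates the mechanism described above: photometric noise does not by itself create an integrity threat, but the inflated protection level can exceed the alert limit and trigger a continuity loss.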
For the hybrid navigation system, the non-aircraft continuity risk allocation is 8×10⁻⁵ per approach.
Operational conditions may prevent the vision subsystem from producing any measurement, for instance during night operations with visible-spectrum cameras or under adverse meteorological conditions. Such situations must be explicitly considered in the continuity allocation.
For simplicity, meteorological conditions preventing vision measurements are classified in this article as non-aircraft continuity risks, as they originate from the external sensing environment rather than from failures of the onboard equipment. This treatment is consistent with the modelling of GNSS outages caused by radio frequency disturbances.
Two modelling strategies can be considered. The first assumes that complete vision unavailability due to environmental conditions is negligible compared with continuity losses caused by monitoring false alarms. However, this assumption is unrealistic, since no existing optical system can guarantee a negligible probability of total vision unavailability.
The adopted approach therefore explicitly accounts for weather effects by decomposing the vision observation continuity risk into two contributions:
• Losses caused by adverse meteorological conditions
• Losses caused by false alarms of the fault detection function
Assuming that one approach out of 20 (5×10⁻²) is affected by weather conditions preventing optical measurements, the residual continuity risk available for detection false alarms is 3×10⁻² per approach. For a fault detection rate of 1 Hz over the 15-second continuity exposure window, this corresponds to a false alarm probability of:
P_FA = 2×10⁻³ per fault detection test
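This decomposition can be checked numerically. The sketch below assumes a total vision-observation continuity budget of 8×10⁻² per approach (inferred from the stated weather share and residual false alarm figure) and a 15-second exposure at 1 Hz:

```python
vision_obs_budget = 8e-2  # assumed total vision-observation continuity budget
p_weather = 1 / 20        # approaches lost to weather preventing measurements
fa_budget = vision_obs_budget - p_weather  # 3e-2 left for false alarms
n_tests = 15              # 1 Hz detection over the final 15 s exposure
p_fa_per_test = fa_budget / n_tests        # 2e-3 per fault detection test
```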
The introduction of vision into an inertial-GNSS navigation architecture affects both aircraft-level equipment continuity risks and non-aircraft continuity risks driven by the external environment. In this representation, scene observation is explicitly placed within the non-aircraft domain, as it inherently accounts for environmental effects, including meteorological conditions. The aircraft-level vision function is represented by its two main components: the optical sensors and the image-processing unit.
The introduction of vision-based navigation substantially alleviates the continuity requirements previously imposed on the GNSS Signal-In-Space. In both the aircraft and non-aircraft branches, a continuity loss now occurs only when vision and GNSS are simultaneously unavailable. This architectural change yields multiple benefits. First, it relaxes equipment-level continuity requirements. Second, it mitigates the risk of radio frequency interference, as the continuity risk allocated to the GNSS SIS is relaxed by a factor of 12.5, up to 1×10⁻³ per approach.
Quantifying the integrity associated with airborne vision equipment is challenging. Integrity failures associated with airborne vision equipment occur when erroneous measurements produced by the onboard vision subsystem are accepted as valid by the navigation system and lead to navigation errors exceeding the alert limits. As with continuity risks, compliance with integrity requirements is primarily ensured through equipment certification.
Aircraft-level integrity threats originate from two components of the vision subsystem:
• Optical sensors may experience hardware failures such as calibration errors, lens defects, geometric distortions or failures of the imaging elements
• Image processing failures arise from abnormal behaviour of the onboard processing chain, including feature detection errors, computing faults, radiation-induced bit errors or errors in optical multi-sensor fusion
A core assumption is adopted: in the absence of sensor or processing failures, the produced measurement would be correct.
Failures affecting optical sensors can reasonably be considered random and statistically independent. Under this assumption, and in addition to integrity loss rates guaranteed by the manufacturer, these risks can be mitigated through redundancy and internal fault detection mechanisms.
These strategies may reduce the integrity loss probability associated with airborne vision equipment to levels that are either negligible (≈10⁻⁹ per approach) or sufficiently small to remain within the aircraft-level integrity allocation already assigned to the inertial-GNSS navigation system (10⁻⁷ per approach).
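The effect of redundancy can be illustrated with a k-out-of-n model: assuming independent sensor failures, a 2-out-of-3 voting scheme is defeated only when at least two sensors fail. The per-sensor failure probability below is an illustrative assumption.

```python
from math import comb

def k_of_n_failure(p, n=3, k=2):
    """Probability that at least k of n independent sensors fail,
    e.g. defeating a 2-out-of-3 voting scheme (illustrative model)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_single = 1e-4  # illustrative per-approach integrity failure of one sensor
p_voted = k_of_n_failure(p_single)  # ~3e-8 per approach after triplication
```

With these illustrative numbers, triplication brings the risk close to the negligible range cited above, consistent with the claim that redundancy can keep vision equipment within the existing aircraft-level allocation.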
A non-aircraft integrity failure occurs when a vision measurement is corrupted by abnormal errors induced by the external environment. Measurement errors consist of:
• Photometric noise (nominal stochastic error)
• Deterministic biases (abnormal errors)
Integrity-threatening events correspond to deterministic biases affecting the estimated line-of-sight. Two main sources are:
• Incorrect feature detection
• Incorrect landmark association
The use of two parallel navigation solutions enables a clear dissociation between GNSS and vision integrity risks. Each navigation mode has its own fault tree and monitoring strategy.
A conservative assumption is adopted whereby the full integrity risk of 2×10⁻⁷ per approach is allocated to each navigation solution.
For Vision Navigation, the missed detection probability is defined as:
P_MD = P(|e| > AL, y ∈ Ω | H₁)
The required value is given by:
P_MD,req = IR_req / P(H₁)
Assuming a vision failure rate of 1.6×10⁻⁴ per approach, the maximum allowable missed detection probability becomes:
P_MD,req = 10⁻³
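The derivation of the missed detection requirement is a one-line computation; with the stated figures the exact value is 1.25×10⁻³, consistent with the conservatively rounded requirement of 10⁻³:

```python
ir_req = 2e-7  # integrity risk allocated to the Vision Navigation, per approach
p_h1 = 1.6e-4  # assumed vision fault rate per approach
p_md_req = ir_req / p_h1  # 1.25e-3, conservatively rounded down to 1e-3
```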
The modified integrity allocation tree includes separate subtrees for Main and Vision Navigation, with a switching mechanism selecting the active branch.
This study examined the contribution of vision-based measurements to improving navigation continuity during precision approach operations. A dual-navigation architecture was proposed that separates GNSS and vision constraints while maintaining continuity during GNSS outages. Integrity constraints were then defined for vision-based navigation, and the associated false alarm and missed detection requirements were derived. These results provide guidance for the development of fault detection and integrity monitoring algorithms for vision-based navigation systems in safety-critical aviation applications.