Vision Systems For Flight Guidance

Goals and Objectives

This research program seeks to develop and demonstrate a completely passive, independent, vision-based navigation system for use in aerial environments. The objective is for the system to be independent of external infrastructure (e.g. GPS) and passive, to minimise the risk of detection in hostile environments. The system would be applicable to autonomous vehicles as well as to aiding navigation in manned systems.

Significance

There has been much recent focus in robotics on developing robust navigation systems for GPS-denied and/or unstructured environments. This proposal does not seek to solve the generic problem of navigation in an unstructured environment, but rather to provide a reliable solution for the common and well-ordered aerial situation, where visual navigation information is rich, while remaining independent of external navigation infrastructure for robust and low-observable operations. Such a system is realisable, and promises a great leap forward in aerial navigation for manned and unmanned systems.

Strategy

The basic principle is to use vision cameras to provide localisation of the vehicle by visual detection and tracking of natural and man-made ground features en route. The system will combine the best features of SLAM (Simultaneous Localisation and Mapping) for short-term navigation precision with TAN (Terrain Aided Navigation, defined here in terms of a knowledge base of identifiable ground-based features rather than terrain matching) where appropriate, as well as visual attitude determination. A critical element in precision feature-based navigation is attitude precision, because any attitude uncertainty gives rise to altitude-dependent position estimation errors. While most systems rely solely on integrating rotation rate measurements from an IMU (which is subject to drift), this system inherently incorporates a visual horizon detection system for precision attitude measurement.
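To make the altitude dependence concrete, a minimal Python calculation (illustrative numbers only, not project results) of the ground position error caused by a small attitude error:

```python
import math

def position_error_m(altitude_m: float, attitude_error_deg: float) -> float:
    """Ground position error from projecting a feature to the ground
    with a slightly wrong attitude estimate (flat-terrain assumption)."""
    return altitude_m * math.tan(math.radians(attitude_error_deg))

# A 0.1 degree attitude error maps to ~1.7 m of position error at
# 1000 m altitude, and ~17 m at 10000 m: the same attitude uncertainty
# costs ten times the position accuracy at ten times the altitude.
for alt in (1000.0, 10000.0):
    print(f"{alt:8.0f} m altitude -> {position_error_m(alt, 0.1):6.2f} m error")
```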

The strategy is, on the basis of a good attitude fusion result, to integrate all sources of visual navigation information into a robust navigation solution. The first step is to use SLAM, via temporary tracking of less significant features in the environment, to limit the normal quadratic dead-reckoning drift of an IMU. These features are objects or regions that have definite and repeatably trackable visual characteristics, but which are not significant enough to log in a database. When more definite features are detected, such as roads, road junctions and water bodies that are repeatably detectable and have definite location characteristics, these are integrated into the SLAM database if they are not already logged. If the features are significant and already exist in a database of known navigation features, then their known position (and precision) is integrated into the navigation fusion algorithm to dramatically improve the precision of the navigation solution (feature-matched terrain aiding). An additional source of localisation information is obtained via horizon profile shape matching against terrain database information whenever a sufficient horizon profile is detectable. This pragmatic integration approach gives rise to a realisable visual navigation system that, for all intents and purposes, mimics human pilot navigation performance under VFR (Visual Flight Rules).
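As a loose structural sketch of this routing logic, the following Python fragment (all class and method names are hypothetical placeholders, not the project's interfaces) shows how detections might be dispatched between the SLAM and known-feature updates:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Feature:
    """Hypothetical detection record; field names are illustrative."""
    image_xy: Tuple[float, float]                            # pixel coordinates
    world_xyz: Optional[Tuple[float, float, float]] = None   # database position, if matched

class NavigationFilter:
    """Stand-in for the fusion filter; method bodies are deliberately empty."""
    def propagate_imu(self, gyro, accel, dt):
        pass  # dead-reckoning prediction (drifts quadratically over time)
    def update_horizon_attitude(self, bank, pitch):
        pass  # absolute bank/pitch correction from the visual horizon
    def update_slam(self, feature):
        pass  # relative constraint from a temporarily tracked feature
    def update_known_feature(self, feature):
        pass  # absolute fix from a database-matched feature (terrain aiding)

def route_detections(nav: NavigationFilter, features):
    """Dispatch each detection as described above: unmapped features bound
    IMU drift via SLAM; database-matched features give absolute fixes."""
    for f in features:
        if f.world_xyz is None:
            nav.update_slam(f)
        else:
            nav.update_known_feature(f)
```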




Research Areas

Horizon Detection for Attitude Determination

The appearance of the horizon is a strong visual indication of the attitude of an aircraft, so a vision-based system should be able to detect the horizon and use its appearance to extract attitude measurements. Past methods have assumed that the horizon is straight, which neglects possible navigational and attitude information. This research project developed a real-time-capable horizon detection method that extracts the actual horizon profile shape. The horizon profile shape can then be used for attitude determination and other localisation processes.

Past methods have focused on either edge-based or image-segmentation-based horizon detection. These techniques work when the vehicle is close to the ground (as with MAVs/UAVs) and when there is a strong horizon edge, but they fail whenever these conditions are not satisfied. The developed horizon detection method is able to detect both weak and strong horizon responses, allowing the algorithm to operate under a wider range of conditions.
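For contrast, a minimal edge-based baseline of the kind critiqued above (not the developed method) can be sketched with OpenCV; it returns only a straight line, and returns nothing in the weak-horizon cases where such methods fail:

```python
import cv2
import numpy as np

def straight_horizon_baseline(image_bgr):
    """Edge-based straight-horizon baseline: Canny edges followed by a
    Hough transform, keeping the strongest near-horizontal line.
    Returns the line in Hesse normal form (rho, theta), or None in the
    weak-horizon cases where edge-based methods break down."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:
        # Accept only lines within ~45 degrees of horizontal.
        if np.pi / 4 < theta < 3 * np.pi / 4:
            return float(rho), float(theta)
    return None
```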

The following figures show results of the horizon detection algorithm. These images can be difficult for past approaches, as in most of them the horizon is not strongly defined and the sky and ground regions display similar colour and texture properties.

HorizonDetection1
HorizonDetection2
HorizonDetection3
HorizonDetection4
HorizonDetection5
HorizonDetection6

Video

A video of the horizon detection algorithm operating on a test flight video is available by clicking on the image below. The extracted horizon profile is used in the infinite-horizon-line model to estimate the bank and pitch of the aircraft.
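One common formulation of the infinite-horizon-line model recovers bank and pitch from a straight horizon line under a pinhole camera assumption; the sketch below is a generic version of that model (sign conventions and the boresight alignment are assumptions, not the project's implementation):

```python
import math

def bank_pitch_from_horizon(rho, theta, cx, cy, f_px):
    """Bank and pitch from a straight horizon line given in Hesse form
    x*cos(theta) + y*sin(theta) = rho, for a pinhole camera with focal
    length f_px (pixels) and principal point (cx, cy). Assumes the
    camera boresight is aligned with the aircraft body axis; signs
    depend on image and body-axis conventions."""
    # Bank: tilt of the line's direction relative to the image x-axis.
    bank = math.atan2(-math.cos(theta), math.sin(theta))
    # Pitch: signed perpendicular offset of the principal point from the
    # line, converted to an angle through the focal length.
    offset = rho - (cx * math.cos(theta) + cy * math.sin(theta))
    pitch = math.atan2(offset, f_px)
    return bank, pitch
```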

HorizonDetectionVideo

Horizon-Based Terrain Aided Attitude Determination and Localisation

The developed horizon detection algorithm extracts the horizon profile shape in the image. The attitude of an aircraft can be calculated from the observed horizon in a camera image if the horizon profile is known. Past methods have assumed that the horizon is straight, which ignores the horizon profile and the yaw and positional information it may contain. This commonly used straight-line assumption can also introduce attitude biases, as both the altitude of the platform and the shape of the terrain affect the shape of the horizon profile. Terrain-aided attitude determination matches the observed horizon profile to the estimated terrain profile, accurately constraining the complete attitude triplet of the platform rather than just the bank and pitch.

Calculating the estimated horizon profile from a digital terrain map can be very computationally expensive. This research project developed an efficient, real-time-capable method of using terrain-aided horizon profile information to accurately determine the full attitude of the aircraft. This terrain-aided method produces attitude measurements at least an order of magnitude more precise than the commonly used horizon-line method.
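A brute-force way to generate the predicted horizon profile from a terrain grid is to march rays outward and keep the maximum elevation angle per azimuth; the sketch below illustrates the computation (and its cost), not the efficient method developed here. Earth curvature and atmospheric refraction are neglected for simplicity:

```python
import numpy as np

def horizon_profile(dem, cell_m, pos_rc, alt_m, azimuths_rad,
                    max_range_m=50_000.0, step_m=90.0):
    """Predicted horizon elevation angle for each azimuth, found by
    marching a ray across the terrain grid and keeping the maximum
    elevation angle. dem: 2-D height array (m); cell_m: grid spacing (m);
    pos_rc: observer (row, col); alt_m: observer height (m)."""
    r0, c0 = pos_rc
    ranges = np.arange(step_m, max_range_m, step_m)
    profile = np.full(len(azimuths_rad), -np.pi / 2)
    for i, az in enumerate(azimuths_rad):
        rows = r0 - (ranges / cell_m) * np.cos(az)   # north = decreasing row
        cols = c0 + (ranges / cell_m) * np.sin(az)   # east  = increasing column
        ok = (rows >= 0) & (rows < dem.shape[0]) & (cols >= 0) & (cols < dem.shape[1])
        if not ok.any():
            continue
        heights = dem[rows[ok].astype(int), cols[ok].astype(int)]
        profile[i] = np.arctan2(heights - alt_m, ranges[ok]).max()
    return profile
```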

Video

A video of the terrain-aided attitude determination algorithm operating on a test flight video is available by clicking on the image below. The extracted horizon profile is matched to the terrain-aided horizon profile to accurately estimate the bank, pitch and yaw of the aircraft.

TerrainAidedAttitudeDeterminationVideo

The developed algorithm for efficiently generating the horizon profile from a terrain map has a second use beyond attitude determination: it can also be used inside an optimisation process to find the position of the aircraft. Past localisation methods that use the horizon profile take minutes to process a single frame. By using the developed horizon profile generation method, a real-time-feasible and accurate attitude and position estimation algorithm is realised.
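The following is a minimal sketch of such an optimisation, reusing the horizon_profile function from the earlier sketch and a generic derivative-free optimiser (an illustration of the idea, not the developed real-time algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def localise_from_horizon(observed_profile, azimuths_rad, dem, cell_m, x0):
    """Estimate position and heading by fitting the terrain-predicted
    horizon profile to the observed one. Reuses horizon_profile() from
    the sketch above. x0 is an initial guess [row, col, alt_m, yaw_rad];
    a reasonable prior (e.g. dead reckoning) is assumed to be available."""
    def cost(x):
        row, col, alt, yaw = x
        predicted = horizon_profile(dem, cell_m, (row, col), alt,
                                    azimuths_rad + yaw)
        return float(np.sum((observed_profile - predicted) ** 2))
    # Derivative-free search; each cost evaluation regenerates the profile,
    # which is exactly why an efficient profile generator matters.
    return minimize(cost, x0, method="Nelder-Mead")
```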

Video

A video of the terrain-aided localisation and attitude determination algorithm operating on a simulated video is available by clicking on the image below. The extracted horizon profile is matched to the terrain-aided horizon profile to estimate the attitude and position of the aircraft.

TerrainAidedLocalisationVideo

Terrain-Aided Navigation using Human-Recognisable Features

In recent years there has been a large focus on developing vision-aided systems that provide aiding measurements for navigation systems. Vision-based methods have the advantages of being cheap and passive, and they are not subject to the same limitations as GPS. The use of optical sensors can increase the autonomy and reliability of navigation systems. Systems have been developed that use various features as the reference navigation feature, including SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features) key points, blob features, image template matching, and the detection of artificial markers placed in the environment. Most of the visual features used are either artificial (placed in the environment to facilitate the experiment) or computer-recognisable features such as SIFT, SURF, blob or image template features (all of which are sometimes referred to as natural features). It is advantageous to investigate human-recognisable features (such as rivers, roads, and buildings), as these are the features which pilots successfully use to navigate in VFR (Visual Flight Rules) conditions. The use of higher-level features can increase the robustness of the navigation process by providing additional information for the data association stage.

Higher-level features such as road intersections or junctions were selected for investigation as navigation features in this research project. Road intersections are very common, stable, and obvious landmarks in urban areas and are readily mapped in GIS databases. The intersection road branches provide very useful data association information, and the centre of the intersection provides a discrete point feature.
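As an illustration of how branch bearings could support data association, the following hypothetical scoring function compares a detected intersection's branch bearings against a database entry (the tolerance value and the pairing-by-sorting are assumptions for illustration, not the project's method):

```python
import numpy as np

def branch_match_score(detected_deg, database_deg, tol_deg=15.0):
    """Fraction of branch bearings that agree within tol_deg, after
    sorting both bearing sets (simple illustrative association test;
    differences are taken modulo 360 degrees)."""
    if len(detected_deg) != len(database_deg):
        return 0.0  # different branch counts cannot be the same junction
    det = np.sort(np.asarray(detected_deg, dtype=float) % 360.0)
    db = np.sort(np.asarray(database_deg, dtype=float) % 360.0)
    # Smallest signed angular difference between paired branches.
    diff = np.abs((det - db + 180.0) % 360.0 - 180.0)
    return float(np.mean(diff < tol_deg))

# A four-way junction observed with a few degrees of bearing error still
# matches its database entry:
print(branch_match_score([2, 91, 178, 272], [0, 90, 180, 270]))  # 1.0
```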

A road intersection detection algorithm was developed and flight tests of the visual fusion process were undertaken to prove the usefulness of this visual navigation method. The following images show an example of the road intersection image processing method used to extract the intersections.

RoadInterserctionDetectionExample
RoadInterserctionDetectionProcess
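A simplified version of such a pipeline can be sketched as follows, assuming a binary road mask has already been extracted and that opencv-contrib's cv2.ximgproc.thinning is available (an illustrative pipeline, not the developed algorithm): skeletonise the mask, then flag skeleton pixels with three or more skeleton neighbours as junction candidates.

```python
import cv2
import numpy as np

def junction_candidates(road_mask):
    """Candidate intersection points from a binary road mask: thin the
    mask to a one-pixel skeleton, then flag skeleton pixels that have
    three or more skeleton neighbours (i.e. where branches meet).
    Returns an array of (row, col) candidates."""
    skeleton = cv2.ximgproc.thinning(road_mask)      # opencv-contrib module
    skel01 = (skeleton > 0).astype(np.uint8)
    kernel = np.ones((3, 3), np.uint8)
    kernel[1, 1] = 0                                 # exclude the centre pixel
    neighbours = cv2.filter2D(skel01, -1, kernel)    # 8-neighbour count
    return np.argwhere((skel01 == 1) & (neighbours >= 3))
```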

Video

A video of the road intersection navigation system in operation, recorded during a test flight with the Jabiru test platform, is available below.

TerrainAidedNavigationVideo

Simultaneous Localisation and Mapping using Human-Recognisable Features

This research is currently in progress. More details will be provided as the research matures.


Publications