Engineering & Technology
August 10, 2022

Improving the safety of unmanned aerial systems: A fuzzy logic–AI approach

With their ever-increasing capabilities and applications, the use of unmanned aerial systems (UAS) is set to soar. This creates an urgent need for solutions that safely manage UAS operations in congested low-altitude airspace. Some advanced UAS-control methods are based on reinforcement learning techniques, but these are not yet fully validated. Taking a fuzzy logic-based approach to this problem are Dr Timothy Arnett and colleagues at Thales Avionics Inc, USA. The team is developing important safety standards for the implementation of AI-controlled UAS.

An unmanned aerial system (UAS), or drone, is an aircraft designed to operate autonomously or be piloted remotely. Recent advances in UAS technology have led to an increase in their use, with applications in areas including delivery services, firefighting, surveillance, and reconnaissance tasks. As our reliance on the technology grows, there is now a crucial need for advanced control systems that can optimise performance and computational efficiency while adapting robustly to changing scenarios.

‘Many advanced UAS control methods use artificial intelligence and machine learning reinforcement learning techniques,’ explains Dr Timothy Arnett at Thales Avionics Inc, USA. These systems improve their performance by learning from new data and reinforcing the behaviours that lead to better outcomes, as specified by a user. Yet in many scenarios, there is some way to go before the level of control Arnett describes can be achieved.


Together with Thales Avionics colleagues including Dr Nicholas Ernest, Arnett employs an advanced set of tools to address this complex problem. The team aims to develop new control methods, and then verify them to ensure that they adhere to strict safety specifications. The team uses mathematically rigorous techniques known as ‘formal methods’. As Arnett continues, ‘these formal methods can facilitate more widespread adoption and implementation of advanced AI control of UAS in mission or safety-critical systems.’

Formal methods

Many techniques are available to analyse how well a system adheres to its specifications. AI verification can be carried out through numerical approaches, but these have their drawbacks. For example, Monte Carlo techniques rely on repeated random sampling to obtain numerical results, but it isn’t clear how much sampling is enough to verify a system. In addition, numerical evaluations can only identify cases where the specification fails; they cannot guarantee that the system is correct every time.
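To see why, consider the short Python sketch below, which tests a made-up controller against a made-up safety rule by random sampling. The controller, the rule, and all of the numbers are hypothetical stand-ins invented for illustration; the point is that finding no violations among the sampled cases still says nothing about inputs that were never tried.

    import random

    def controller(distance_to_wall):
        # Hypothetical controller: turn away once the wall gets close.
        return 'turn_away' if distance_to_wall < 50.0 else 'hold_course'

    def violates_safety(distance_to_wall, action):
        # Hypothetical safety rule: within 50 m of a wall, the UAS must turn away.
        return distance_to_wall < 50.0 and action != 'turn_away'

    # Monte Carlo-style testing: sample many random scenarios and count violations.
    violations = 0
    for _ in range(100_000):
        distance = random.uniform(0.0, 500.0)   # metres to the nearest corridor wall
        if violates_safety(distance, controller(distance)):
            violations += 1

    # Zero violations here does NOT prove the controller is safe for every possible
    # input; it only shows that none were found in the sampled cases.
    print(f'violations found in sampled scenarios: {violations}')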

Arnett and colleagues have designed a system that generates optimised fuzzy trees capable of navigating complex and unfamiliar environments.

When higher levels of correctness are required, such as in systems where safety is critical, formal methods are worthwhile. Arnett explains, ‘formal methods provide mathematically rigorous techniques for the design, specification, and verification of these systems.’ In their previous research, Arnett and his team have used these methods to evaluate the safety of control systems for UASs.

Navigational safety

In a recent study, Arnett and colleagues established new safety specifications for the behaviour of a reinforcement learning-based AI system for controlling a UAS. Here, the UAS is programmed to navigate within a virtual, pre-defined boundary, or ‘corridor’. The walls of this corridor represent the bounds in which the UAS can safely operate as it moves along its intended path. This allows the device to deviate slightly from its path as it avoids obstacles such as birds and other drones, or is blown off course in windy conditions.


Within this corridor, the UAS manoeuvres and logs designated targets scattered throughout the environment. To track its distance from the corridor boundaries, it uses five distance sensors, or whiskers, positioned on its front and sides, while targets are collected using a capture sensor positioned at the front of the UAS. With these sensing constraints established, the team could then create safety specifications to accommodate them.

Genetic fuzzy trees

As the behaviour of a UAS is not known before it attempts to navigate its corridor, the team employed reinforcement learning to develop their advanced control system. For this, they used an innovative AI method named a genetic fuzzy tree (GFT). Requiring little-to-no prior knowledge of the optimal behaviour needed to navigate a UAS, GFTs combine fuzzy logic with a genetic algorithm: a technique for finding optimal solutions to a problem, inspired by Darwin’s theory of natural selection.

Fuzzy logic accounts for the fact that a certain conclusion may not be completely true or false: instead, it will often have some degree of vagueness, or fuzziness, associated with it. With this in mind, it can mathematically quantify the validity of a conclusion as lying somewhere between 0 and 1 – with 0 being completely false, and 1 being completely true.
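As a concrete, purely illustrative example in Python, the function below assigns such a degree of truth to the statement ‘the UAS is close to the corridor wall’; the 20 m and 100 m thresholds are invented for the sketch rather than taken from the team’s system.

    def degree_close_to_wall(distance_m):
        # Illustrative membership function: fully true (1.0) below 20 m,
        # fully false (0.0) above 100 m, and a sliding scale in between.
        if distance_m <= 20.0:
            return 1.0
        if distance_m >= 100.0:
            return 0.0
        return (100.0 - distance_m) / 80.0

    for d in (10.0, 50.0, 90.0, 150.0):
        print(f'{d:6.1f} m -> degree of close = {degree_close_to_wall(d):.2f}')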

Based on this logic, fuzzy trees are made up of a number of fuzzy inference systems, which are known for their robustness, their ability to approximate outcomes, and their ability to represent inputs and outputs in linguistic terms. By incorporating these systems into branching networks, the researchers significantly enhanced their scalability, allowing many more inputs to be handled than with previous techniques. Detailed approximations can therefore be carried out extremely efficiently, enabling users to solve far more complex problems.
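To give a flavour of a single, tiny fuzzy inference system of the kind such a tree might contain, the Python sketch below maps one side-whisker reading onto a turn command using two linguistic rules and a weighted-average blend of their outputs. It is not the team’s controller: the memberships, rules, and numbers are all assumptions made for the example.

    def close(distance_m):
        # Degree to which the wall is 'close' (illustrative membership).
        return max(0.0, min(1.0, (100.0 - distance_m) / 80.0))

    def far(distance_m):
        # Degree to which the wall is 'far'.
        return 1.0 - close(distance_m)

    def turn_command(side_whisker_m):
        # Two-rule fuzzy inference (illustrative only):
        #   Rule 1: IF wall is close THEN turn sharply away (30 degrees)
        #   Rule 2: IF wall is far   THEN hold course        (0 degrees)
        w1, w2 = close(side_whisker_m), far(side_whisker_m)
        # Blend the rule outputs by a weighted average (defuzzification).
        return (w1 * 30.0 + w2 * 0.0) / (w1 + w2)

    for d in (25.0, 60.0, 120.0):
        print(f'whisker reads {d:5.1f} m -> turn {turn_command(d):5.1f} degrees')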


To optimise their fuzzy trees, Arnett’s team used a genetic algorithm, which first assesses the performance, or ‘fitness’, of a set, or ‘generation’, of possible solutions. Afterwards, the fittest solutions are selected as ‘parents’ for a new generation, and their characteristics are slightly altered, or mutated, to produce a new set of possible solutions – some of which may be better than their parents. The process then repeats until the best possible solution is found.
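A minimal Python sketch of this loop is given below. It is purely schematic: rather than evolving a full fuzzy tree, it tunes a single made-up controller parameter, and the population size, mutation strength, and fitness function are illustrative assumptions.

    import random

    def fitness(turn_gain):
        # Hypothetical fitness: how well a controller with this gain would perform.
        # For the sketch, the (unknown to the algorithm) ideal value is 0.7.
        return -abs(turn_gain - 0.7)

    population = [random.uniform(0.0, 2.0) for _ in range(20)]   # generation 0

    for generation in range(50):
        # Rank the current generation by fitness and keep the best few as parents.
        parents = sorted(population, key=fitness, reverse=True)[:5]
        # Each parent is copied and slightly mutated to produce the next generation.
        population = [p + random.gauss(0.0, 0.05) for p in parents for _ in range(4)]

    best = max(population, key=fitness)
    print(f'best turn gain found: {best:.3f}')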

With these combined techniques, Arnett and colleagues have designed a system that generates high-performance fuzzy trees capable of navigating complex and unfamiliar environments. Their approach enables these systems to make high-performing decisions that are robust to potential uncertainties while remaining computationally efficient. It also inherently provides brief, high-level explanations for rapid decisions made in the face of these uncertainties, while maintaining thorough explainability for longer-timescale actions and auditing purposes.

UAS behaviour specification

By employing their GFTs, the team showed that a UAS can successfully navigate its operational corridor while collecting points of interest in many different scenarios. The resulting control system, however, needed to be verified against behavioural specifications to ensure that it could operate safely. Arnett recalls that this led the team to ask: how much testing is sufficient? To answer this question, the researchers set out four key specifications describing the desired behaviour of the UAS, together with procedures for deciding whether or not these rules are being broken.

Through formal verification, Arnett and colleagues have confidence that their system will behave as intended, while avoiding safety violations.

Firstly, if the UAS is travelling at either its minimum or maximum speed, it should never be instructed to carry out tasks beyond its capabilities – as set by its current airspeed, load, and altitude. Secondly, if the length of the front whisker is less than the vehicle’s available turning space, then a turning manoeuvre should be avoided. Thirdly, if a whisker on the side of the vehicle senses a distance of less than 50 m from the corridor wall, the UAS should turn away from it. Finally, if the UAS is flying straight along the middle of a straight corridor, its level flight should be maintained.
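One way to make such rules machine-checkable is to write each one as a predicate over the vehicle’s state and the controller’s command, as in the illustrative Python sketch below. The state fields, sign conventions, and the precise reading of each rule are assumptions made for this example (only the 50 m threshold comes from the description above); they are not the team’s actual formal specifications.

    from dataclasses import dataclass

    @dataclass
    class UASState:
        # Simplified, illustrative state; the field names are assumptions for this sketch.
        speed: float                # current airspeed (m/s)
        min_speed: float
        max_speed: float
        front_whisker: float        # sensed distance straight ahead (m)
        turn_space: float           # space the vehicle needs to complete a turn (m)
        left_whisker: float         # sensed distance to the left corridor wall (m)
        right_whisker: float        # sensed distance to the right corridor wall (m)
        centred_and_straight: bool  # flying along the middle of a straight corridor

    @dataclass
    class Command:
        turn: float        # commanded turn (degrees; positive = turn right, by assumption)
        accelerate: float  # commanded change in speed (m/s)

    def spec_1(s: UASState, c: Command) -> bool:
        # At minimum or maximum speed, never command a further change in that direction.
        if s.speed >= s.max_speed and c.accelerate > 0:
            return False
        if s.speed <= s.min_speed and c.accelerate < 0:
            return False
        return True

    def spec_2(s: UASState, c: Command) -> bool:
        # If the front whisker reads less than the available turning space, avoid turning.
        return not (s.front_whisker < s.turn_space and c.turn != 0)

    def spec_3(s: UASState, c: Command) -> bool:
        # Within 50 m of a side wall, the commanded turn must be away from that wall.
        if s.left_whisker < 50.0:
            return c.turn > 0
        if s.right_whisker < 50.0:
            return c.turn < 0
        return True

    def spec_4(s: UASState, c: Command) -> bool:
        # Centred in a straight corridor: maintain straight, level flight.
        return not s.centred_and_straight or c.turn == 0

Formal verification asks more than whether these checks pass on logged flights: it asks whether any reachable state and command could ever make one of them return False.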

Formal verification for correctness

To verify the ‘correctness’ of their GFT algorithms, the team used formal methods to check the system against the specifications. Using their four key specifications as a benchmark, they could check whether the algorithms’ outputs adhered to the stated specifications while the algorithms carried out their assigned tasks. Through this formal verification, Arnett and colleagues could be confident that their system would behave as intended, avoiding safety violations that could lead to disastrous consequences.
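To give a feel for what checking against every possible input means in practice, the sketch below uses the open-source Z3 SMT solver (one common formal-methods tool, not necessarily the one the team used) to prove that a toy linear control law satisfies a simplified version of the third specification for every sensor reading in range. The control law and the sensor range are invented for the example, and the snippet assumes the z3-solver Python package is installed.

    from z3 import Real, Solver, Implies, Not, sat

    # Toy controller under verification (not the team's GFT): the commanded turn
    # away from the left wall grows as that wall gets closer.
    left_whisker = Real('left_whisker')
    turn = (50 - left_whisker) / 10

    # Simplified specification: within 50 m of the left wall, turn away (turn > 0).
    spec = Implies(left_whisker < 50, turn > 0)

    s = Solver()
    s.add(left_whisker >= 0, left_whisker <= 500)  # physical range of the sensor (m)
    s.add(Not(spec))                               # search for any counterexample

    if s.check() == sat:
        print('counterexample found:', s.model())
    else:
        print('specification holds for ALL sensor readings in range')

Because the solver reasons symbolically, the second outcome covers every real-valued reading between 0 and 500 m, which is exactly the kind of guarantee that random simulation cannot provide.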


This formal verification showed that, for now, the GFT system still violates several behaviour-related specifications for the control of UASs, revealing issues that Arnett’s team will need to iron out in future research. All the same, it allowed them to establish which specifications were never violated – something that couldn’t be done through simulation-based testing alone.

The researchers now hope that similar methods could be extended to other intelligent systems to construct reliable, trustworthy, and explainable AI. While formal verification of these systems can be challenging, it delivers considerably more confidence that advanced AI systems in mission/safety-critical applications will behave as their developers intended. ‘As AI systems become more ubiquitous, especially in safety-critical systems, it’s vital to have higher levels of confidence in their correctness,’ Arnett comments.

A bright future for UASs

Advanced control methods are increasingly being developed using AI and machine learning, and often employ reinforcement learning techniques. Yet because of a lack of trust and proof of their correctness, many of them aren’t ready to be used in safety-critical systems. Through the formal verification methods they have developed, Arnett and colleagues hope that researchers will be able to develop robust new safety standards, bringing the widespread implementation of advanced AI control a step closer to reality.

Personal Response

What are some interesting specific scenarios where GFTs would be particularly useful for a UAS?

Our GFT architecture has universal approximation characteristics that enable it to be applied to a wide range of UAS, and other, problems. However, it would be especially useful in situations where explainability, verifiability, etc, are of importance while maintaining high performance. For example, a beyond-visual-range air-to-air combat GFT for computer simulations (named Alpha) was created in 2016 by our team through a combination of subject-matter expert knowledge and reinforcement learning. Alpha had superhuman performance and was able to defeat human operators, in fact 12 different pilots, in multiple-vehicle scenarios, as well as being able to act as a teammate or wingman to human operators. This capability is further enhanced by the transparent and explainable nature of fuzzy trees along with the formal verification of certain safety-critical aspects of the system.

