This blog posting represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.

If you have comments on this blog posting, please email me.

The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.

Click here to see the whole Opinion Blog.

To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").

Pilots Worry That They Will Be Replaced By AI.

Posted on 8th June 2023

Show only this post
Show all posts in this thread (Air Safety).

This article on Le Monde (in English) reports that airline pilots are concerned that they will be replaced on the flight deck by AI systems which can fly the plane.

Their worries are unfounded. Passengers will simply not accept being flown by AI systems.

What is likely to happen is that flight deck crews will be reduced in size (large long-distance flights traditionally operated with a crew of 3: a pilot, a copilot and a flight engineer, and the flight engineer's role has already been largely eliminated by cockpit automation).

There is a basic safety engineering problem preventing the complete, safe replacement of flight crew with computers. Computers have no common sense, and so cannot deal with situations for which they are not programmed (in the case of AI, programming means training). That means that every possible scenario has to be foreseen, and the systems must be trained (and tested) for each scenario. Human pilots are able to fill these gaps in training with common sense, by extrapolating from situations for which they were trained and by applying high-level principles; AI systems cannot do this. Although AI capabilities are improving in these areas, it will never be possible to have the 100% confidence in their abilities that would be needed for safety critical systems.

There is a method used in safety engineering, called failure modes analysis, which requires the designers of safety critical and mission critical systems to envisage what usage scenarios (which, in the case of flight systems, would include flight scenarios) could occur, and what system failures could occur, in order to create a design that can cope with such failures and usage scenarios (known as Use Cases). Failure modes analysis relies on engineers applying common sense and a degree of professional paranoia. It is, however, a far cry from designing a system that can automate something such as flying an aircraft. There have been many well-documented failures of failure modes analysis (e.g. the flight control system responsible for the Boeing 737 Max crashes). In fact, safety and reliability engineering is notorious for its failures, through lack of common sense, inadequate paranoia, flawed technical analysis, and faulty knowledge about how mechanical and electronic components can fail.
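To make the idea concrete, here is a minimal sketch of the bookkeeping behind a classic failure modes analysis (in the FMEA style): each foreseen failure mode is scored for severity, occurrence and detectability, and the product of the three scores (the Risk Priority Number) is used to rank which failures most need design attention. The component names and scores below are hypothetical illustrations, not data from any real aircraft programme.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    failure: str
    effect: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (very rare) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the traditional FMEA ranking metric
        return self.severity * self.occurrence * self.detection

# Hypothetical entries, for illustration only.
modes = [
    FailureMode("angle-of-attack sensor", "frozen output",
                "flight computer acts on stale data", 9, 3, 6),
    FailureMode("pitot tube", "icing blockage",
                "unreliable airspeed indication", 8, 4, 5),
]

# Review the highest-risk failure modes first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.component}: {m.failure} -> {m.effect} (RPN={m.rpn})")
```

The point of the exercise is not the arithmetic but the discipline: every row in such a table is a scenario that someone had to think of in advance, which is exactly where common sense (or the lack of it) enters.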

In the short term, it is likely that AI will be used to reduce the size of flight crews by advising the pilot on what actions to take during emergencies, removing the need to memorise flight manuals (cockpits currently carry these manuals on paper, but it often takes too long to look things up) and speeding up pilot responses. This is now considered tried and trusted technology (I worked on such a system in the 1980s!).
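Modern airliners already do something similar without AI: the Airbus ECAM and Boeing EICAS systems display the relevant checklist actions when a failure is detected. The toy sketch below shows the core of such an advisory lookup, mapping active warnings to checklist steps; the warning codes and steps are simplified illustrations, not real procedures.

```python
# Hypothetical warning codes and checklist steps, for illustration only.
CHECKLISTS = {
    "ENG_1_FIRE": [
        "Thrust lever 1 ... IDLE",
        "Engine 1 master switch ... OFF",
        "Engine 1 fire pushbutton ... PUSH",
    ],
    "CABIN_DEPRESSURISATION": [
        "Oxygen masks ... ON",
        "Emergency descent ... INITIATE",
        "Transponder ... 7700",
    ],
}

def advise(active_warnings: list[str]) -> list[str]:
    """Return the checklist steps for each active warning, in order."""
    steps = []
    for warning in active_warnings:
        steps.extend(CHECKLISTS.get(
            warning, [f"{warning}: no stored checklist, refer to the paper manual"]))
    return steps

if __name__ == "__main__":
    for step in advise(["ENG_1_FIRE"]):
        print(step)
```

An AI layer would extend this from fixed lookups to advice synthesised from the aircraft's actual state, which is where the training-gap problem described above comes back into play.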

The article on Le Monde does mention some AI-based flight control systems that can fly the plane, including landing and take-off, but, for the reasons that I have listed above, these are not safe in a comprehensive sense, and will not be certified for use without pilots; a pilot will always be needed to override an AI-based flight control system when it makes a mistake.

So yes, the jobs of some flight crew may be lost to AI, but AI is not going to completely replace humans in the cockpit, and most of the replacement will be achieved by natural wastage, not redundancies.