Sorry, Siri. Apologies, Alexa. You Cannot Be a Virtual Co-Pilot.


Image credit: Rick Adams / Midjourney AI

Four years after the European Union Aviation Safety Agency (EASA) published its initial roadmap for addressing artificial intelligence / machine learning (AI / ML) applications in aviation, and a year after EASA issued Artificial Intelligence Roadmap 2.0, the US Federal Aviation Administration (FAA) released its Roadmap for Artificial Intelligence Safety Assurance, Version 1.

With a “humble attitude,” the FAA’s Chief Scientific and Technical Advisor in AI/ML, Dr. Trung T. Pham, said, “Being second allows a learning opportunity from the first.”

The two major aviation regulatory agencies, together with SAE’s G-34 and EUROCAE’s WG-114 committees, are collaborating aggressively to reach a general consensus on how AI guidelines can be crafted to assure safety without introducing new risks into the airspace system.

It’s not simply a matter of using data to streamline airline bookings and operations. There are far-reaching implications, particularly for the emerging electric vertical take-off and landing (eVTOL) sector and others advocating for eventual autonomous commercial aircraft flights. The UK CAA’s ‘pre-roadmap’ – “Building Trust in AI” – similarly references “high degrees of autonomy.”

Irrespective of interagency cooperation, there are some key differences between the FAA and EASA:

  • The EASA roadmap makes numerous references to ethics and ‘ethical AI.’ By contrast, the third paragraph of the FAA Executive Summary states: “The treatment of the ethical use of AI is outside the scope of this roadmap.”
  • The European approach also wants to address ‘societal’ or ‘socio-economic’ values and ‘bias’ in datasets being used to train AI. The FAA roadmap “does not address societal aspects with the use of AI which are outside of the FAA’s authority. There may be some common considerations where the safety of an AI application can be impacted by biases in training data, such as a pilot-health monitoring system that works more effectively for some ethnicities than others. These issues are addressed within the scope of safety assurance, in that the designer of such a system must show that the system performs its function across the entire community of pilots without unfair advantages.”

With a primary focus on aircraft and air traffic control systems, neither roadmap has yet addressed aviation training in detail. Indeed, EASA 2.0 mentions training mostly in the context of training engineers to design AI and regulators to evaluate it. Nor does it make any reference to simulators.

There are two training-relevant paragraphs in the 30-page EASA document:

“To ensure safe operations, crew training is another essential consideration. The use of AI gives rise to adaptive training solutions, where ML could enhance the effectiveness of training activities by leveraging the large amount of data collected during training and operations.”

“…additional requirements on the users’ (e.g. engineers) and end users’ (e.g. pilots, ATCOs) training phases are anticipated through the requirements for aviation organizations contained in Section C.6 of the EASA AI Concept Paper Issue 02.”

The FAA’s primary training / simulator references are:

“Operations: AI-based functions can support flight operations, including dispatch, training and training simulators, and scenario prediction. AI applications can also aid in document generation such as training manuals and Safety Risk Management (SRM) support.”

“…decommissioned aircraft and components find use in AI-driven pilot training simulators and educational tools, contributing to the training of aviation professionals.”

“A roadmap is a technological landscape (of innovations) that provides a sketchy idea of where we are and where we want to be,” said Dr. Pham. “AI is an emerging technology, meaning that it is still evolving. Therefore, a practical roadmap must be a living document, ready to be modified when new revelation about the AI technology is available.”

“AVOID PERSONIFICATION”

One of the most intriguing passages in the new FAA roadmap is a strong argument to “avoid personification.”

Based on the principle, “Treat AI as an algorithm or computer, not as a human,” the FAA denigrates “simulated assistants” such as Apple’s Siri or Amazon’s Alexa as marketing tools, “not conducive to assuring safe operation of these complex systems in aviation.”

“With AI technologies, it is common for developers and the media to portray AI as machines that interact like humans… personifying the software that is responding to prompts,” the FAA explains.

“Anthropomorphizing AI, like referring to virtual assistants by human names, can obscure the clear delineation of responsibility necessary for safe aviation system design, which must align with aviation regulations and international standards,” said Avinash Pinto, Outcome Leader with The MITRE Corporation, at Technical Exchange Meetings with the FAA and industry in July 2024. 

“Pilots must understand AI as a distinct system with unique modes and malfunctions, not as a human-like entity, to prevent misjudgments in the operation and troubleshooting of automated systems.”

“The roadmap,” Pinto noted, “avoids human-centric terminology for AI, emphasizing that while AI can take on certain operational functions, accountability for its performance and safety assurance lies with its designers and maintainers, not the AI itself.”

“Personifying AI,” the FAA roadmap reads, “can erode safety by creating ambiguity on the assignment of responsibility for safe operation. As certain operations, traditionally accomplished by people, are instead accomplished by automation, responsibility shifts from the human operator to the system designer. The system designer must delineate the responsibilities that are assigned to human beings as compared to the requirements that are assigned to systems and tools and must do so in a manner consistent with applicable aviation regulatory requirements and international standards.”

“Personifying AI applications suggests that they have human-like capabilities and potentially unexpected behavior. This contributes to the false impression that the modes, operation, and malfunction would be that of a human, and that AI is an entity which can be responsible. While the normal operation may be intended to automate something that can be performed by a human, the modes and malfunctions are notably different. The safety of future operations depends on the pilot understanding that a system containing AI is just a system and not another human with whom they can reason or negotiate.”

For example, “AI cannot be a part of crew-resource management (CRM) but can affect crew responsibilities. AI cannot be a copilot but can perform autopilot functions and affect how a pilot performs their duties. AI may have a degree of control authority over specific flight functions but is not accountable for anything; the designer and maintainer of the AI are accountable unless that responsibility has been allocated elsewhere by applicable law.”

***


Excerpted from the forthcoming book, “The Robot in the Simulator: Artificial Intelligence in Aviation Training,” by Rick Adams, FRAeS. Available in September at http://aviationvoices.com.

FAA AI Roadmap, July 2024 –

https://www.faa.gov/aircraft/air_cert/step/roadmap_for_AI_safety_assurance

EASA AI Roadmap 2.0 –

https://www.easa.europa.eu/en/downloads/137919/en

UK CAA Strategy for AI, February 2024 – 

https://www.caa.co.uk/media/e0th5tdy/20240226-ai-principles-paper-v1-2-pdf.pdf
