“I’m Here to Learn.”

Image credit: Rick Adams

Dr. Trung T. Pham is the FAA’s Chief Scientific and Technical Advisor (CSTA) for Artificial Intelligence (AI) - Machine Learning (ML), supporting research related to how AI and ML may be used in aviation systems, and how to evaluate integration of components based on AI / ML with aircraft software.

Dr. Pham joined the FAA in June 2022 with more than 35 years of software and AI experience at the US Air Force Academy as an academic professor, as a technical specialist and staff engineer at NASA Johnson Space Center, and in industry.

Dr. Pham has produced more than 50 publications, two technical books, and many technical presentations. He is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE) and a Senior Member of the International Society of Automation (ISA). His Ph.D. (on a NASA Fellowship) is from the Department of Electrical and Computer Engineering at Rice University in Houston, Texas.

Rick Adams: You have an impressive career in academia and scientific industry. How did you come to this unique new role with the FAA?

Dr. Pham: Friends at the FAA told me of this opportunity. So I applied and they flew out to Colorado to recruit me.

I joined the FAA in June 2022 and had a six-month honeymoon period. Then around February 2023, I was called by REDAC [the Research, Engineering, and Development Advisory Committee – which provides advice and recommendations to the FAA Administrator on the needs, objectives, plans, approaches, content, and accomplishments of the aviation research portfolio]. The REDAC asked me what happened to the Roadmap that they recommended.

Editor’s Note - The FAA AI Roadmap states: “The Research, Engineering, and Development Advisory Committee (REDAC) in 2022 recommended that FAA establish a roadmap for AI to alleviate uncertainty in the industry with a clear direction of how this innovative technology can be used in airborne applications. The industry lacks a method for the safety assurance of AI. Safety assurance encompasses all the activities and artifacts to provide sufficient justification that the risks are acceptable.”

I presented the work plan for that Roadmap in February 2023 and got it approved. Then we organized a working team inside the FAA, and we had many conversations with industry.

So we have a Roadmap document [Roadmap for Artificial Intelligence Safety Assurance, Version 1, published mid-August 2024].

And every time we had a conversation, we evolved it. Finally we got to the point where we thought it could be the first release. So upper management signed the approval for releasing it in August this year.

It was a very challenging but enjoyable 18 months that we went through. For me to go from a developer to a regulator is challenging, but it's enjoyable. And we take the engineering knowledge from AI and transform it into usable information for regulation development.

Rick Adams: How did you select the participants for the technical interchange meetings?

Dr. Pham: The participants came from organizations interacting with the FAA. At one point we had a conversation with SAE [a global standards organization with activities in automotive, aerospace and other transport industries]. We sent the invitation to SAE and they distributed it to their membership. We sent an invitation to RTCA [formerly the Radio Technical Commission for Aeronautics, which develops technical guidance for use by government regulatory authorities and industry] and to other organizations like AIA [Aerospace Industries Association], AIAA [American Institute of Aeronautics and Astronautics], GAMA [General Aviation Manufacturers Association], and so on. And some reached out to me on a personal basis.

Rick Adams: Now that you have published the initial Roadmap, what happens next?

Dr. Pham: We are organizing additional technical documents that elaborate on some of the key points of the Roadmap's guiding principles. For example, we talk about an incremental approach where we start with specific projects before we generalize the understanding into something more visible like policy. We are going to publish some technical documents based on what we understand of the technology through the projects we work on with industry and through projects that we funded.

Rick Adams: Are you working with other regulatory agencies such as EASA or the UK CAA?

Dr. Pham: Oh, we work with everybody. EASA was very instrumental in helping me understand the problem right at the beginning, when I joined the FAA. But as soon as we had some clear understanding, I reached out to Transport Canada and the UK CAA. And we work with people from Asia Pacific, including some specific contacts we had recently with Korea. Sometimes we have contact from the Middle East. So we are sharing our understanding with every regulatory authority through our international office.

Rick Adams: To date, the focus of the Roadmap is on aircraft and aircraft operations. There's not a lot of specific mention of training – say, flight training or maintenance training – areas where people already want to use AI.

Dr. Pham: We mentioned in the research area that we begin with aviation software as a starting focus, to have a better understanding of AI technology in general. And then we hope that we can bring that understanding of AI technology to some other domains. And again, that is consistent with our incremental approach. We do one thing at a time.

Rick Adams: At what point might the training community start to see something that addresses FAA AI guidelines applied to training?

Dr. Pham: I think that is dependent on the community... that they are interested in having us work with them. It has to start with some initial contact and conversation. We are working with everybody who has a strong interest. Between now and that theoretical gate, I am willing to have a conversation sharing what we know now and how it can be applied to all the domains, in preparation for that specific gate.

Rick Adams: One of the references in the Roadmap that jumped out to some training people was the guidance to avoid ‘personification’ (see “Sorry, Siri. Apologies, Alexa. You Cannot Be a Virtual Co-Pilot,” Halldale Group). And as you know, a lot of AI training has leaned toward creating avatars – human-like characters and voices – to aid in the training. Could you explain where you and the group are coming from with regard to personification and apps like Siri and Alexa?

Dr. Pham: Well, look at a simple application that almost everybody is using in their car: GPS. There's a voice saying turn left at this intersection, turn right. But it's still us making the decision. And if there is an accident, it's us. The operator and owner of the car will be responsible for paying for the damage. That's where we are coming from.

We don't want people to say, oops, I see my GPS as my co-pilot and my GPS has to take some responsibility because it's an intelligent being. It is intelligent software, but it's not an intelligent being to which we can distribute responsibility.

For operating the aircraft, you will see many AI applications with voice like you described. And sometimes it's so real with Generative AI that, in the near future, you can probably see a holographic projection of a person sitting next to you as well. But it's still a piece of software, and it is still the pilot's responsibility to operate the aircraft. And if anything happens, we know where to assign the responsibility.

Rick Adams: There were also specific references to you having avoided addressing things like ethical use, or race and gender bias in some of the data, whereas EASA has specifically mentioned the ethics of AI. Why did the FAA choose not to go down that path?

Dr. Pham: Well, we respect the ethical issues mentioned in the Executive Order, which is similar to the European Union AI Act. (Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence – November 1, 2023)

Where we are different from EASA is on the ethical issue. Regulation of the use of AI was assigned to the National Institute of Standards and Technology (NIST), and only a small portion of that is channeled to the FAA; we only have authority to look at safety in aviation.

An example is the facial recognition problem, which has a clear mandate for ethical consideration when it is used for criminal applications like tracking or identifying suspects. And it's an ethical problem if we wrongly identify a suspect because the AI facial recognition is tuned to perform better on one group and worse on another.

But in aviation the same ‘facial recognition’ can be applied to recognizing an airport, for example, to prevent the pilot from landing at the wrong airport in the same city. You saw an incident report on that a few years ago. So here's the question: why do we care whether the recognition treats an airport like Hobby in Houston differently from George Bush Intercontinental?

From the engineering point of view, the issue is: does the minimum performance meet our expectation? Just like with any other engineering component, we look at the performance to see if it meets the expectation that can ensure safety. And if it meets expectation, we don't know if we have the authority to require the developer to ensure an equal rate of recognition. So it is an ongoing debate. But as long as we see it as an engineering component, we apply the principles of engineering to look for this minimum performance requirement.
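
Editor’s Note - Purely as an illustration of the engineering view Dr. Pham describes, the minimal Python sketch below checks that every class (hypothetical airports here) clears a minimum recognition rate, without requiring equal rates across classes. The airports, sample counts, and 98% threshold are this article's assumptions, not FAA values.

```python
# Minimal sketch (not FAA material): every class must clear a minimum
# recognition rate; parity between classes is deliberately not required.
from collections import defaultdict

MIN_RECOGNITION_RATE = 0.98  # hypothetical minimum performance requirement

def per_class_rates(records):
    """records: iterable of (true_label, predicted_label) pairs."""
    totals, correct = defaultdict(int), defaultdict(int)
    for truth, pred in records:
        totals[truth] += 1
        if pred == truth:
            correct[truth] += 1
    return {label: correct[label] / totals[label] for label in totals}

def meets_minimum(records, threshold=MIN_RECOGNITION_RATE):
    rates = per_class_rates(records)
    # Engineering check: every class clears the floor.
    failures = {k: v for k, v in rates.items() if v < threshold}
    return not failures, rates, failures

# Toy evaluation records: (actual airport, recognized airport).
records = ([("HOU", "HOU")] * 990 + [("HOU", "IAH")] * 10
           + [("IAH", "IAH")] * 995 + [("IAH", "HOU")] * 5)
ok, rates, failures = meets_minimum(records)
print("per-airport rates:", rates, "| meets minimum:", ok)
```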

Rick Adams: As AI continues to develop, one of the critical factors is the data that the AI is being trained on. Some of the models ingest everything from the internet, which could cause some problems because of misinformation.

In aviation, what sort of data will they be focusing on? And would it make sense for the flight training community to develop a common ‘data lake’ of best information, best practices that the various AI models could draw from? Different companies are developing their own data sources, but they're not necessarily sharing it with other people. So how does the FAA see data being included or excluded?

Dr. Pham: Well, we know that the data contributes to how a model or system we train performs. So right now we look at analyzing the data to get the basic engineering characteristics of that system. Something people kept saying is that there is a lack of ‘explainability’, and therefore they could not show the engineering characteristics, or at least give a functional description, of that component. That is a major problem with the lack of explainability in machine learning.

So we are looking at the data as a first step to get that information into the system, so that we can say we have enough information for this component, and that will be useful for the system integrator to put it into their system and into their aircraft. The understanding of what is important in that data set is what we are now putting into a document to share.

The data has to be complete, but nobody knows how to assure that completeness requirement. So we are putting together the basics for a specific application, and we hope that by having conversations we can expand that – for example – to the training domain. Right now we just have the basic understanding of what the data should be and some of the means of compliance – how one can demonstrate compliance with what we think the data should be. Then, by having conversations with people from different domains, we can expand that understanding outside aviation software.
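
Editor’s Note - As an illustration only (not an FAA means of compliance), the minimal Python sketch below flags declared operating-condition combinations that are under-represented in a training set. It cannot assure the ‘completeness’ Dr. Pham mentions – nobody yet knows how – and the condition names and counts are hypothetical.

```python
# Minimal sketch: does the training set cover every declared combination
# of operating conditions at least `min_samples` times? This only flags
# obvious gaps; it does not assure completeness.
from itertools import product

def coverage_gaps(samples, declared, min_samples=100):
    """samples: list of dicts tagging each record's operating conditions.
    declared: dict mapping condition name -> list of allowed values."""
    names = list(declared)
    counts = {combo: 0 for combo in product(*(declared[n] for n in names))}
    for s in samples:
        key = tuple(s[n] for n in names)
        if key in counts:
            counts[key] += 1
    return {combo: c for combo, c in counts.items() if c < min_samples}

declared = {"lighting": ["day", "night"], "visibility": ["VMC", "IMC"]}
samples = ([{"lighting": "day", "visibility": "VMC"}] * 500
           + [{"lighting": "night", "visibility": "VMC"}] * 120)
print("under-covered condition combinations:", coverage_gaps(samples, declared))
```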

Rick Adams: Does ‘explainability’ relate to the Learned (static) versus Learning (dynamic) distinction that you made in the Roadmap?

Dr. Pham: Explainability is related to both. The Learned AI is a lot easier to address at this point. We think we have some explanation for how such a system is developed.

We have to look at the training software to make sure that is how it is trained, even during operation, just like traditional adaptive control. Some people design a controller, then fix it and implement it. Others say their system is highly nonlinear, so the linearized model will not be applicable outside a certain range, and they put in adaptive control for that. But some adaptive controls can fluctuate so badly that they're not usable. So we have to look at the rule for how the variables in the controller are adapted or modified.

We do the same thing with the Learning AI. We are studying it, with no clear conclusion yet, but we are following the traditional way of regulating adaptive control, and we try to look at the learning algorithm. So it is still ongoing research to prepare for the future of AI.
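
Editor’s Note - The following minimal Python sketch is the editors' own illustration of the adaptive-control analogy Dr. Pham draws on: what matters to a reviewer is the adaptation rule itself, here a gradient-style gain update projected onto fixed bounds so the learned parameter cannot fluctuate without limit. The toy plant, rates, and bounds are hypothetical.

```python
# Minimal sketch (not from the Roadmap): one controller gain is updated
# online, then clamped to a fixed range - the bound a reviewer of the
# adaptation rule would want to see.
def adapt_gain(k, error, signal, rate=0.05, k_min=0.0, k_max=5.0):
    """One adaptation step: gradient-style update, then projection
    onto [k_min, k_max] so the gain stays bounded."""
    k_new = k - rate * error * signal
    return max(k_min, min(k_max, k_new))

k, reference, state = 1.0, 1.0, 0.0
for _ in range(100):
    error = state - reference                # tracking error
    control = k * -error                     # proportional control
    state += 0.1 * (control - 0.5 * state)   # toy first-order plant
    k = adapt_gain(k, error, -error)         # adapt, then clamp
print(f"final state={state:.3f}, adapted gain k={k:.3f}")
```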

Rick Adams: What is important for you in addressing AI/ML for aviation training applications?

Dr. Pham: I think the important thing is for us, or for me specifically, to understand the needs of aviation training so that we can plan how to work together. I would welcome conversations to help me prepare for the future and for the evolving technology, or for other domains that will be using AI. And I have many conversations like that with industry. It's important for me to see where each segment of the industry is moving so that we can prepare ourselves for the future.

I used to be in an academic environment where we had an open door – everybody could walk in and say, let's talk. I'm here to learn.

Dr. Pham can be contacted at trung.t.pham@faa.gov.
