Instructors need standards… and the ability to adapt their teaching methods – for pilots returning from furlough, for different learning styles, and for new instructional media. Regional airline training veteran Paul Preidecker shares his experienced opinions on expectations of flight instructors.
Last year, I completed a Learjet type rating course at a Part 142 training center after joining a Part 135 operator. As in any such training event, my classmates and I held evening study sessions to review the day’s material and to get a handle on upcoming subjects. One session included a discussion about whether an instructor was right in how he was teaching a procedure. The usual back-and-forth ensued, but one classmate said, “Just remember, guys, cooperate to graduate.” I’ll bet you’ve heard that one too. It seems to be a feature of every training environment.
It may be practical advice, but embedded in this cliché is a potential problem: standardization of training, and especially standardization of instructors. Is it an issue? When I look at the number of training events that occur in Part 121 during a typical year, the final product is exceptionally good. In fact, Part 121 training is often regarded as the “standard” for standardized training. We know that standardization is essential for safety because it creates an expectation of behavior and duty. Is there room to improve? Yes. Continuous improvement is a hallmark of standardized training.
Many pilots experience cooperate-to-graduate pressure at some point. You work with one instructor for a few days or weeks, but for some reason you change instructors. You discover that a “solid” concept or point has become squishy. Or, in the transition from one instructor to another, you lose time while the new instructor tries to pick up where the other left off. Rocky transitions can be a frustrating experience for everyone. At best, a trainee’s progress might be delayed a lesson or two until everyone is on the same page. At worst, safety concerns might arise if there are inaccurate assumptions about a trainee’s actual knowledge and skill.
When standardization exists through sound lesson plans, clear procedures, and accurate records, the transition from one instructor to another should be seamless. This kind of standardization creates and maintains situational awareness in training. Clearly, all stakeholders have an interest and a role in using standardization to streamline the training process.
So, what’s the issue? Perhaps we need to start by examining the various dimensions of standardization.
Regulatory authorities such as the FAA set standards through regulations, guidance, and policies. There is an unfortunate tendency to talk about FAA or other regulatory agency requirements as the “minimum standard,” which tends to imply something that is barely sufficient or even substandard. In fact, this is not the case. While it is good that most operators and training organizations strive to exceed the standard, the guidance set forth by the regulator is exactly what it claims to be: the acceptable standard of performance for safety.
The FAA’s interpretation of that standard – much like interpretation of standardized training procedures at a Part 121 or Part 142 facility – has sometimes been less “standard” than we would like it to be. During my air carrier career, it was not uncommon to hear training department colleagues at other airlines grouse about differences between Flight Standards District Offices (FSDOs) or Certificate Management Offices (CMOs) in terms of application and interpretation of policy, procedures, or regulations. The FAA is well aware of this issue. Indeed, greater consistency was a key goal for the 2017 structural reorganization of the Flight Standards Service, which eliminated regional boundaries in favor of four function-based offices (Office of Safety Standards, Air Carrier Safety Assurance, General Aviation Safety Assurance, and Foundational Business).
Next come the operators and training providers, and with them the dimension of procedures. It is typical for an operator or a training provider to write detailed procedures for everything a pilot is expected to learn and do. To a point, that is necessary and appropriate. Regulators expect it, and there can be no standardized performance without standardized procedures.
These, of course, vary widely. I spent 18 years teaching the CRJ200 for my airline. When we first acquired these aircraft, we relied on the manufacturer’s procedures. With time and experience, we adapted them to better suit our operation. We made changes in response to data coming from our portfolio of voluntary safety programs and line operations data. Naturally, we thought our standard operating procedures were the best. Still, I confess that when I occupied the jumpseat on other CRJ200 operators, I sometimes noted enhancements that found their way into our own procedures. I also observed procedures that, while perfectly safe and consistent with regulatory requirements, would never work in our operation.
The implication for instructor (and pilot) standardization arises from the constant churn of moving among airlines. We all know the Law of Primacy: we remember best how we were first taught. Switching to a new operator means that an instructor or the pilot being trained needs to perform a mental CTRL ALT DELETE in order to adapt to the new company’s version of standardized procedures. Habits are hard to break, but mental/emotional resistance is sometimes the bigger challenge. We’ll get to that shortly.
A final point on the procedures dimension: training providers and operators must take care to avoid overreach. With the constant stream of digital safety data, it’s tempting to respond to every finding with a checklist change or a new procedure. With too many such “enhancements” arriving too often, pilots – always independent-minded – will resist by sticking to what they know. Also, operators cannot write procedures for every eventuality. We may like the certainty of the zeros and ones of binary code, but real-world operations occur in the endless range of fractions between zero and one. Standardized procedures simply provide the guardrails for instructors and trainees.
Now let’s look at the dimension of grading, the most visible form of attempts to standardize instructor/evaluator assessment of trainee performance.
Regardless of how you train, the path to standardization is easier with an approved, well-understood grading system. Good grading scales evaluate a pilot’s ability against performance standards such as “±50 feet” on altitude or “±10 knots” of airspeed on approach. These are often called the hard skills. In addition, a good grading model accommodates the so-called soft skills such as judgment, decision making, use of Threat and Error Management (TEM), Crew Resource Management (CRM), and command ability. An essential element of TEM is the ability to anticipate and recognize threats and prevent them from becoming errors. Evaluating TEM skills along with CRM helps training organizations create realistic scenario-based training (SBT). Task-based training teaches us to fly; SBT teaches us to operate. A sound grading scale thus helps evaluate operational skills in addition to basic maneuver-based skills. An excerpt from one training provider’s grading scale, shown in Figure 1, illustrates these concepts.
Figure 1: Excerpt from a 5-point grading scale
GRADE | CRITERIA
4 – STANDARD (Trapped Errors) | Minor deviations from standards occur that are recognized and corrected in a timely manner. Most threats have been identified and mitigated. All errors have been trapped. Individual and crew performance meet expectations. TEM and CRM skills are effective. Maneuver standards: the pilot exceeds “zero-deviation” tolerances by no more than 50%.
Many airlines have shifted from traditional training (Part 121, Subparts N and O) to the Advanced Qualification Program (AQP). An integral part of AQP is the use of an approved grading scale. In the past, pilots were evaluated with some form of SAT/UNSAT check-box system: as long as the trainee had a page full of SATs, he or she was deemed qualified. That grading style can undermine standardization because it lacks definition. What constitutes a SAT or an UNSAT? Even if the training organization decides that SAT means demonstrating proficiency at the 80% level or better, how do you determine 80% proficiency on maneuvers and procedures?

A more nuanced grading scale has become the hallmark of training and qualification under AQP or any AQP-like system. There are two generally accepted approaches, each with advantages and disadvantages. A 5-point scale offers more granularity, which may be useful in subsequent data analysis; the downside is a tendency to grade toward the middle, which is considered average. A 4-point scale forces greater discrimination, but at the expense of data granularity.
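To make the maneuver-standards language in Figure 1 concrete, here is a minimal sketch (not drawn from any operator’s AQP manual) of how an observed altitude deviation might be mapped onto a 5-point scale. Only the grade-4 band (“exceeds zero-deviation by no more than 50%”) comes from the excerpt above, and the ±50-foot zero-deviation tolerance comes from the hard-skill example earlier; every other threshold is a hypothetical placeholder.

```python
# Minimal illustrative sketch (not from any operator's AQP manual).
# Only the grade-4 rule ("exceeds zero-deviation by no more than 50%")
# and the +/-50 ft tolerance come from the article; all other bands are
# hypothetical placeholders.

ZERO_DEVIATION_ALTITUDE_FT = 50  # the "+/-50 feet" hard-skill standard

def altitude_grade(observed_deviation_ft: float) -> int:
    """Map an observed altitude deviation to a 5-point maneuver grade."""
    dev = abs(observed_deviation_ft)
    if dev <= ZERO_DEVIATION_ALTITUDE_FT:        # within zero-deviation tolerance
        return 5
    if dev <= 1.5 * ZERO_DEVIATION_ALTITUDE_FT:  # grade 4: exceeds tolerance by <= 50%
        return 4
    if dev <= 2.0 * ZERO_DEVIATION_ALTITUDE_FT:  # hypothetical band
        return 3
    if dev <= 3.0 * ZERO_DEVIATION_ALTITUDE_FT:  # hypothetical band
        return 2
    return 1                                     # well outside tolerance

print(altitude_grade(40))   # 5
print(altitude_grade(70))   # 4
print(altitude_grade(180))  # 1
```

Notice that the hard-skill judgment reduces to an arithmetic check; the soft skills in the CRITERIA column are precisely the part that cannot be scored this mechanically, which is why instructor calibration matters.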
To ensure correct use of the chosen grading scale, instructors and evaluators require calibration. For simulator training, one popular calibration method uses recorded vignettes of pilots performing tasks and maneuvers. Before recording, the crew receives a script that specifies the desired level of performance. Scenarios are designed to show different levels of performance along the grading scale. For example, a well-executed engine-out procedure at takeoff should receive a mark corresponding to the highest grade. The same maneuver with a 30-degree loss of heading would be graded correspondingly lower. TEM concepts can also be introduced and evaluated. If the crew fails to set a missed approach altitude, it would be identified and graded as an untrapped error.
It takes around 15-20 scenarios to cover the most common training events. The grade assigned to each is determined by the person or group designated as the “gold” standard. Instructors-in-training view the videos and individually assign grades. We stress to instructors that they base grades on what they actually saw in the video, not on what the eventual outcome might have been. Standardization is reflected in the commonality of grading. Videos can be used for recurrent instructor/evaluator training as well as for initial sessions.
Calibration also involves determining inter-rater and intra-rater reliability. Inter-rater reliability measures how closely a group of raters grades the same event. Intra-rater reliability measures how consistently the same person grades the same event. These two measures help training organizations spot extremes in the grading pool (i.e., distinguish between the Santa Claus and the drill sergeant), and they help drive standardization. However, it is also necessary to understand not only what is being graded, but whom. For example, if one evaluator is always assigned to grade check airmen, he or she may record higher-than-average grades compared with evaluators grading the general pilot population.
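To show what these two measures look like in practice, here is a minimal sketch with entirely hypothetical grades, video counts, and rater names. It computes simple exact-agreement versions of inter-rater and intra-rater reliability from calibration-video scores; real programs may prefer more formal statistics such as Cohen’s kappa, but the percent-agreement form illustrates the idea.

```python
# Minimal illustrative sketch; all grades, videos, and rater names are hypothetical.
from itertools import combinations

# Grades (5-point scale) assigned by three instructors to the same five calibration videos.
grades = {
    "rater_a": [4, 3, 5, 2, 4],
    "rater_b": [4, 3, 4, 2, 4],
    "rater_c": [5, 3, 5, 2, 3],
}

def agreement(a, b):
    """Fraction of videos on which two sets of grades match exactly."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Inter-rater reliability: average exact agreement across all pairs of raters.
pairs = list(combinations(grades, 2))
inter = sum(agreement(grades[p], grades[q]) for p, q in pairs) / len(pairs)

# Intra-rater reliability: the same rater grades the same videos in a later
# session; how often do the two passes match?
rater_a_second_pass = [4, 4, 5, 2, 4]
intra = agreement(grades["rater_a"], rater_a_second_pass)

print(f"Inter-rater agreement: {inter:.2f}")  # 0.60 for these sample grades
print(f"Intra-rater agreement: {intra:.2f}")  # 0.80 for these sample grades
```

A Santa Claus or a drill sergeant shows up quickly in the pairwise numbers: their agreement with every other rater drops while the rest of the pool stays clustered.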
Another dimension of standardization – crucial but often overlooked – arises from the difference between procedures and techniques. See 'Procedures or Technique?' below, which offers examples of this dimension when it comes to operational outcomes. For instructional activities, however, I think the better terms are “outcomes” and “methods.” The desired outcome of instructor standardization is that everyone is trained to perform procedures the same way, each time, every time. The performance metrics incorporated in the grading scale are intended to measure the extent to which this goal is achieved.
However, the method that an instructor uses to teach individual trainees may be – in fact, must be – as varied as the individuals receiving instruction. While most individuals align within a general set of learning preferences (e.g., visual, auditory, kinesthetic), modern principles of adult learning recognize a rich tapestry of factors that contribute to, or detract from, effective learning. The concept of “constructivism”, for example, holds that each individual integrates new information into the context of their own unique construct of accumulated knowledge, skill, and experience. The same explanation simply doesn’t work for every individual. If I find myself saying the same thing more than a few times, I need to find another way to say (teach) it. Bottom line: a “standardized” cookie-cutter approach to training might be efficient for the instructor or the training facility. For trainees, however, a more individualized approach could be more efficient in producing the desired performance outcomes. It should go without saying, but teaching techniques that include shouting or any form of verbal abuse are both disrespectful and ineffective. Learning stops when yelling starts.
These concepts also apply in group instruction, whether in a “real” or virtual classroom environment. Generational differences are especially relevant here. In my experience, “older” learners prefer the traditional lecture/discussion environment. By contrast, “younger” trainees expect you to hand them an iPad with instructions to read specified material and come back in two weeks for the test. The challenge for training organizations is to make the presentation of critical material as varied as the learning preferences of the class. Today’s wide variety of media types and social media platforms offer endless opportunities to adapt to learner needs and preferences.
“The challenge for training organizations is to make the presentation of critical material as varied as the learning preferences of the class.” Image credit: ATP Flight School.
Post-pandemic, when long-furloughed pilots returning to flight operations may need an extra measure of training to refresh skills and confidence, instructors may be “rusty” as well. Some instructors have been involved only in recurrent training for pilots fortunate enough to retain employment, and because hiring came to an abrupt halt at many operators, there has been little need for new-hire instruction. Those instructors are out of practice. And when they do come back, they need to adjust how they teach based on how long a pilot was actually out of service: does the pilot simply need to re-establish currency, or does proficiency need to be rebuilt? In addition, many in-person recurrent (and new-hire) ground school classes have been cancelled in favor of virtual classes, and managing this new way of teaching brings its own challenges.
So what do these dimensions mean for instructor standardization, and for training instructors to “be” standardized?
First, I think it is important to stress that the goal of instructor training is not to teach instructors to fly (although that needs to be verified). Rather, it is to teach them to teach in terms of both outcomes, which must involve standardized performance of procedures, and methods, which must be adapted to achieve that performance from trainees.
Second, it follows that operators and training providers may need to adjust their concept of what constitutes efficiency in training and invest in ensuring that instructors can operate effectively on both the “hard” and the “soft” dimensions of providing instruction. The former is fairly straightforward. For any given maneuver, the success or failure of the trainee’s performance is visible: did the pilot maintain a heading within 10 degrees, or not?
It is somewhat more challenging to measure the instructor’s capability on the “soft” dimension. Still, guidance is emerging. In cooperation with IFALPA, IATA published a 2018 document, Guidance Material for Instructor and Evaluator Training, that provides excellent suggestions on instructor/evaluator training. In addition to pilot competencies, it identifies competencies such as managing the learning environment, instruction, interaction with trainees, and assessment and evaluation. Within each core competency are a number of observable behaviors that can be used to evaluate instructor performance. Examples include application of instructional methods, maintenance of operational relevance and realism, variation in the number of instructor inputs to ensure that training objectives are met, and many others. Finally, the Guide offers an excellent example of how a training matrix can help train and assess instructor performance.
Notwithstanding the challenges involved in this approach, I believe it is well worth the effort. As good as we are, we can always do better. Improvements to instructor and evaluator standardization cannot be solved by regulation. Rather, such goals would best be accomplished by a cooperative effort (e.g., task force) comprising stakeholders from industry, government, associations, and academia. The group could develop and suggest a path for improvement using accepted best practices, a common grading format, and a mechanism to provide feedback and continuous improvement. I welcome any expressions of interest in this idea.
Procedures or Technique?

Instructors must understand the difference between a procedure and a technique, as both play a role in training.
Procedures are typically spelled out in a pilot operating handbook or flight crew manual. They provide information on what to do, how to do it, and when to do it.
A technique is a method of accomplishing a task, and it often allows some options. For example, how to manage the autopilot when commanding a descent is an example of a technique. You can select a vertical rate mode and control speed with thrust, or you can descend at a constant speed and let thrust control rate of descent. Both techniques accomplish the same task.
The role of an instructor is to provide options and recommendations on which technique may be preferred based on operational considerations. Instructors should avoid any language that forces a technique to become a procedure, such as: “Well, I know what they taught you in class, but this is the only way to do it.”
Capt. Paul Preidecker is the Facilitator for the FAA Workgroup developing a Standardized Curriculum for training delivered by Part 142 training centers for Part 135 aircraft operators. In addition to creating standardized curriculum for the selected aircraft, the Workgroup is also tasked with creating standardized curriculum for instructors and evaluators.
Capt. Preidecker was recently elected President of the National Association of Flight Instructors (NAFI). Also, he was Chairman of the Flight Training Committee for the Regional Airline Association (RAA), was Co-chair of the Flight Operations Steering Committee for Bombardier and served on an FAA Aviation Rulemaking Committee. For many years, Paul has also chaired the Regional Pilot stream at the annual WATS conferences.
In his 20-year Air Wisconsin career, Paul was in the training department for 19 years. He qualified as a ground school, simulator, and aircraft instructor, a line check airman, a simulator check airman, and an FAA Aircrew Program Designee.
Before joining Air Wisconsin, he flew piston twins for a Part 135 operator and taught as a Part 61 instructor for nine years. After retiring from Air Wisconsin, he flew a Learjet for a Part 135 operator on medical transport missions.