AINsight: SOP Noncompliance a Slippery Slope

 - September 25, 2020, 12:22 PM

Aviation and other high-risk industries are full of policies and procedures, and for good reason these standard operating procedures (SOPs) should be followed. By design, each SOP provides a standardized method to complete a task, keeping us safe and preventing harm.

In aviation, most professional pilots strive to be compliant; these SOPs are the recipe to effectively manage a highly technical machine in an extraordinarily complex operating environment. SOPs set up a predictable workflow, so the operators—in this case, the pilot flying and pilot monitoring—can anticipate each other’s next move and share a common mental model.

Following SOPs is important; a crew on the “same page” devotes much less mental capacity to routine tasks and has more bandwidth to manage complex operational threats, whether environmental (weather or ATC) or aircraft-related (mechanical issues and anomalies).

But procedures are not always followed. When this happens, human factors experts use terms such as procedural drift, procedural intentional noncompliance (PINC), procedural unintentional noncompliance (PUNC), or normalization of deviance to categorize these errors.

Noncompliance with SOPs is a serious threat to safety. Over the past two decades, this issue has been highlighted as a top safety concern by organizations such as the NTSB, Flight Safety Foundation, and the NBAA Safety Committee. With all this focus on SOP noncompliance, it is important to differentiate between unintentional errors and riskier intentional acts. While some organizations treat them equally, they are not the same.

The proverbial phrase, “To err is human,” applies here. Unintentional noncompliance errors typically involve a slip, lapse, or some other mistake. The intent is to be compliant with the written SOP, but for some reason—workload, fatigue, or a distraction—the wrong word is spoken or the wrong action is taken. An example is a callout that is not spoken verbatim, as prescribed in a procedure.

Overly prescriptive SOPs are a “set up” for unintentional SOP errors. An example could be an altitude awareness callout that specifies precise phraseology, such as “FL210 for FL220,” when an alternative, such as “1,000 feet to go”—or any other variation—would suffice. In this case, the callout is made, the aircraft levels off at the assigned altitude, and the outcome of this error is classified as inconsequential.

More concerning are those intentional SOP noncompliance acts that involve an omission or violation. This is where, based on the level of risk, an operator should really take notice.

A common example of this risky behavior is the pilot who fails to go around from an unstable approach—a violation of the SOP. The outcome might lead to additional errors (landing short, long, or hard) and/or an undesired aircraft state such as a runway excursion. This is a big deal and should be addressed either in a debrief or, if discovered through a flight data monitoring program, via a crew contact by a gatekeeper.

A tragic example of an intentional SOP noncompliance act resulted in the loss of the Space Shuttle Challenger. At the time, NASA had a culture of “faster, better, and cheaper.” An underlying factor within the organization was a strong “mission completion pressure.”

During many of the preceding 24 launches, known leaks were identified in the seals—or O-rings—in the joints of the solid rocket boosters. Due to an absence of adverse outcomes, these shortcuts became the norm over time. This gradual process, in which unacceptable acts became acceptable, resulted in significant procedural drift. On the 25th launch, luck ran out and an O-ring failed completely—Challenger was lost.

The SOP noncompliance outlier is an intentional act that involves significant risk through gross negligence or criminal behavior. Rare cases meeting this threshold might involve falsifying maintenance or weight-and-balance documents, drug or alcohol violations, or other unthinkable acts. These acts push the limits of any “just culture” algorithm and must not be tolerated.

Back to unstable approaches. For more than two decades, industry best practices have recommended that operators adopt an SOP that defines stabilized approach criteria and incorporate a no-fault go-around policy.

On a global level, compliance with this SOP is poor. To quantify the issue: the rate of unstable approaches (using airline data) has decreased to less than 3 percent—that is the good news. Most troubling is the fact that, on average, the vast majority (roughly 97 percent) of those unstable approaches continue to a landing rather than a go-around, often without any adverse consequences.

Unfortunately, from a human factors perspective, each “successful” landing from an unstable approach reinforces the notion that it is “OK” because no adverse outcome occurred. As in the Challenger example, by continuing behaviors such as unstable approaches and normalizing this deviance from an SOP, you are loading the chamber in a game of Russian roulette, in which each landing or act increases the likelihood of an adverse outcome.

Pilot, safety expert, consultant, and aviation journalist Stuart “Kipp” Lau writes about flight safety and airmanship for AIN. He can be reached at