Op-Ed

There’s a reason why bad things happen to good airplanes | Opinion

President Donald Trump says the U.S. is issuing an emergency order grounding all Boeing 737 Max 8 and Max 9 aircraft in the wake of the crash of an Ethiopian Airlines flight that killed 157 people.

In the almost 10 years since Air France 447, a state-of-the-art Airbus, crashed into the South Atlantic, there have been 52 fatal airline crashes that have taken 2,599 lives. Now that the Boeing 737 Max has been grounded worldwide, we’d like to draw on lessons learned from the investigation of the French crash.

We believe the best response to the most recent tragedy, in Ethiopia, is a systems approach to both aircraft design and pilot training that does not rely on older methods created to manage systems built without computers. Unsolved industry problems include:

- Designs built around the concept that nothing need be done for scenarios deemed highly improbable.

- The assumption that if a low-probability event does occur, a pilot will be there to manage it.

- Pilots are not being trained on “improbable” emergencies that may result from the failure of flight automation.

- Most pilots lack the critical meteorology training needed to meet unexpected challenges.

- Aircraft designers similarly rely on assumptions about atmospheric phenomena that may not be valid.

- A lack of effective hands-on training. The co-pilot on the lost Ethiopian Airlines Flight 302 reportedly had just 200 hours of experience flying commercial jets.


- A failure to recognize that experienced pilots have long been preventing accidents, and that as these aviators retire, new pilots are given no opportunity to acquire the skills they are taking with them.

- Acceptance of the fact that by-the-book adherence to checklists is not a panacea. One of US Airways Captain Chesley Sullenberger’s first key decisions on Flight 1549 was to turn on an emergency generator well before his Airbus’s checklist called for it.

- Understanding that upgrading or patching software and hardware won’t guarantee a safe flight. Systems that appear to be working perfectly can fail.

- Realizing that accidents keep happening even though everyone involved is trying to do the right thing and is following all approved and accepted methods.

These are just a few of the reasons why bad things keep happening to good airplanes.

The myth is that engineers and designers can create a fail-safe plane capable of always flying itself. When built-in protections created to prevent a stall stop working, pilots can find themselves handling an airplane that is actually harder to fly than a vintage 707 or DC-8.

The truth is that pilots prevent accidents every day by adjusting to unexpected variables. We agree with MIT’s Dr. Nancy Leveson, who argues that aviation safety has for too long treated accidents as a failure problem when they are actually a control problem. Crashes are nearly always triggered by a lack of effective enforcement of safety constraints, from the ground up. Safety must be designed into the system from the outset and cannot depend on probabilities.

Designers assume, incorrectly, that in the event of a system failure such as the one on Air France 447, pilots will handle it. A related problem is that traditional methods require assessing failure probabilities, but software does not “fail” in that sense. Typically the software functions exactly as it was designed to, but the designer did not anticipate the situation it encountered. This means unexpected software interactions that compromise the system are inevitable. Decisions on whether to train pilots for potential risks are likewise often based on probability and statistical analysis. That may help prevent common events. Unfortunately, when low-probability events like Air France 447 happen, many pilots lack the training or experience needed to meet the challenge.

For example, the industry has focused on angle-of-attack sensors and software in the wake of the Lion Air accident. While fixing the angle-of-attack sensors and improving the software will almost certainly solve this particular problem with the 737 Max, these patches ignore a bigger industry-wide issue: a fault in a single sensor can cause a computer to do the wrong thing while working exactly as it was designed.

Computers only do what they are instructed to do by their software algorithms. They cannot manage tasks that were unanticipated in the design. Humans can and do adapt to unexpected outcomes flexibly in real time. The catch is that humans must understand what is happening.
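
To make this concrete, consider a deliberately simplified, hypothetical sketch in Python. The function name, threshold and trim values are invented for illustration; this is not Boeing’s actual flight-control code. It shows how a routine that trusts a single angle-of-attack sensor does exactly what it was programmed to do, and still commands the wrong action when that sensor feeds it bad data.

```python
# Hypothetical, deliberately simplified sketch -- not Boeing's actual
# flight-control code. Names and numbers are invented for illustration.

STALL_THRESHOLD_DEG = 15.0  # assumed angle beyond which a stall is feared

def stabilizer_command(angle_of_attack_deg):
    """Return a nose-down trim command (degrees) from a single AoA reading."""
    if angle_of_attack_deg > STALL_THRESHOLD_DEG:
        # The software does exactly what it was told to do: push the nose down.
        return -2.5
    return 0.0

print(stabilizer_command(5.0))   # normal flight: 0.0, no trim commanded
print(stabilizer_command(40.0))  # a stuck vane in level flight: -2.5
# Every line executed correctly, yet the airplane is being trimmed toward
# the ground. The flaw is the design assumption that one sensor can be
# trusted -- not a software "failure."
```

No line of that code is broken, and no component “failed” in the probabilistic sense. The hazard lives in the design assumption, which is exactly the kind of control problem Leveson describes.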

Until we change the way we design these systems, expect to see more accidents. As Leveson points out, we need to understand why the system controls in place failed. The best way to prevent future accidents is to design a new control system that actually works. Unfortunately, as we are learning, the outdated approach created for the pre-computer era does not guarantee the safety of your next flight.
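
What might designing the constraint into the system look like? Here is an equally hypothetical sketch, again with invented names and thresholds rather than any certified design, of the kind of cross-check a control-based view suggests: the software enforces the safety constraint (never trim nose-down on the word of one sensor) instead of assuming the sensor’s failure is too improbable to matter.

```python
# Hypothetical sketch -- invented names and values, not a certified design.
# The safety constraint: never command nose-down trim on the word of a
# single sensor. Disagreement inhibits the automation and alerts the crew.

STALL_THRESHOLD_DEG = 15.0
MAX_DISAGREEMENT_DEG = 5.0

def stabilizer_command_checked(aoa_left_deg, aoa_right_deg):
    """Return (trim_command_deg, alert) based on two independent AoA vanes."""
    if abs(aoa_left_deg - aoa_right_deg) > MAX_DISAGREEMENT_DEG:
        # Suspect data: enforce the constraint by commanding nothing
        # automatically and telling the pilots why.
        return 0.0, "AOA DISAGREE - automatic trim inhibited"
    if min(aoa_left_deg, aoa_right_deg) > STALL_THRESHOLD_DEG:
        return -2.5, None  # both sensors agree the nose is too high
    return 0.0, None

# The stuck vane from the earlier sketch (40 deg) now disagrees with its
# healthy neighbor (5 deg), so no nose-down trim is commanded.
print(stabilizer_command_checked(40.0, 5.0))
```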

Capt. Shem Malmquist is an international airline accident investigator, 777 captain and visiting professor at Florida Institute of Technology. Roger Rapoport is senior editor at Flight Safety Information. They are the coauthors of “Angle of Attack: Air France 447 and the Future of Aviation Safety.”
