MARI-TECH 2012 Conference presentation

Wendy Tadros, Chair
Transportation Safety Board of Canada
Ottawa, Ontario
10 April 2012

Check against delivery

Thank you for inviting me here today.

Today I want to take a look back into the past—about six years, to be exact. And I'd like to look at some events that are probably familiar to at least a few of you, because not only do they offer us some good lessons to be learned, but in many ways, they offer us a real concrete example of what we're trying to do here today, and every day, when we place the focus on safety.

On the evening of March 21, 2006, the ferry Queen of the North departed Prince Rupert, British Columbia, headed south for Port Hardy. It was to be an overnight voyage, an otherwise normal run, one the ship and crew had made many times before. Shortly after midnight, however—only a few hours into the trip—the vessel struck the side of Gil Island. Without propulsion and suffering extensive hull damage, she drifted for over an hour before sinking in 430 metres of water.

One hundred and one people were on board the Queen of the North that night. All but two made it off alive. The fate of those two—and of everyone else who did make it off, crew included—was almost at once the subject of intense speculation, from all quarters: industry, law enforcement, the public, the media. Even before the ink had dried on the next morning's headlines, the questions were multiplying: What happened? Why did it happen? Had people died? And who was to blame? What were the crew doing? Were they having sex? Speculation was running rampant.

It's not easy wading into a controversy. It's certainly not fun, and it's even more challenging when the public's perception of your role and your purpose is skewed. Because at the TSB, it is not our job to look at issues of blame. It's our job to investigate accidents, and to figure out what needs to be fixed. And in this investigation, it was our job to answer questions about why the ship ran aground. And while we did not shrink from that, we were limited by the evidence we had available. Specifically, we had no voice recording from the bridge.

What we did have—what we were able to recover from 1200 feet down and three months' immersion in salt water—was the ship's ECDIS: the electronic chart display and information system that told us about the vessel's route and speed and direction, and which showed that, for 14 critical minutes after missing a scheduled course change at Sainty Point, the vessel continued straight ahead until it struck Gil Island.

Let me stress that, without bridge audio, we had no moment-by-moment account of those 14 minutes. But from a safety point of view—and I confess this is a lot easier to say six years on—we had what proved to be even more important: we had, from the physical evidence and ECDIS, a pretty clear picture of what the crew were NOT doing. In fact, what they were NOT doing became, along with all the other deficiencies in the safety practices aboard that vessel, the very focus of our investigative work.

Today I'd like to talk about process, the methods by which we go about determining the answers to the questions that need answering—many of them the same questions people were asking in the days and months that followed the Queen of the North—indeed, the same ones they ask every time that phone rings and we're called to investigate. And once I've talked about that process, I want to talk about why the events of that one night just over six years ago still matter today.

To do that, though, I need to start with a few simple premises, three statements that guide us in our work every day.

The first statement is this:

  • No accident is ever the result of just one single factor.

It's true. Everything in life is connected to everything else. That's not some New-Agey sentiment; that's a basic fact of accident investigation. But even though most of us know it intuitively, there's something in our nature, something almost hard-wired, that makes us want to stop looking as soon as we come across the first reasonable explanation. Maybe it's all the deadlines in our busy world, or the rush to judgment, but blaming everything on one cause is... Well, let me give you an exaggerated example, but one that nonetheless proves my point.

Why did the boat sink? Well, because it hit an island. OK, "case closed."

Or, to look in a little more detail: why did it sink? Because it filled with water. And why did it fill with water? Because it had a hole in it. And how did it get a hole in it? Because they were steaming down the channel and missed the course change.

But if it really were that simple, the TSB would just make a recommendation: That crews should do their jobs. That they should navigate in accordance with accepted principles. And that would be the end of that.

But while that would provide a neat and tidy storyline, it would not have done justice to all of the factors at play, and it would not have looked at the risks in the system. And really, what would such a "motherhood" statement accomplish? What would change? At the TSB, we owed it to all Canadians, and to the families of the two people who died that night, to look at the system and how it failed, to find out how it was that that boat hit that island that night, and how two people could have died.

To help us look deeper, the TSB has developed a methodology, an analysis tool, that we call ISIM: the Integrated Safety Investigation Methodology. The nuclear industry uses a similar safety approach and, basically, it involves asking "why" —again, and again—until you get beneath the surface, and then lower still, and again lower, until you're at the bottom, until you've identified the contributing factors, the underlying causes, that set the stage and made the accident possible.

That doesn't mean you ignore the top-level factors. But when we moved beyond "it was poorly navigated, there was a hole in the hull, it filled with water, and then it sank," we unearthed more questions, ones about bridge team complement: how many people were supposed to be working that night? What were their roles, and were they qualified to do them? What was company policy on staffing? Were the crew trained on the navigation instruments? When had they last received training? Were they supposed to work in pairs, or in teams of three or more? Were the alarms even turned on? We also had questions about fatigue: After all, it was night-time. Had the crew members had enough sleep? Was it recent sleep? And then there was the abandonment: What role did each crew member play? Had they done this before? Had they practiced mustering or crowd control, and if so, how often? Emergencies are pretty tough to simulate, so how well had their training prepared them for what actually transpired that night?

All of these things, acting together, set the stage for what happened. And all of them, acting together, enabled events to unfold as they did.

And yet in the days and weeks and months following the tragedy, there was still that rush to judgment, and to over-simplify. Speculation and innuendo went viral. At one extreme was the idea that it was all the fault of two rogues—that's what the rumor mill said, that it was all due to a pair of crew members who had messed up, ignored SOPs and were conspiring after the fact to prevent the truth from coming out. The corollary to this, of course—the other extreme view—was that the events of that night had absolutely nothing to do with the operation of this vessel, the fleet or the company—and that there were no problems in the system.

Obviously, neither of those two extremes was true.

But on one level, like I said, I understand the inclination to want to wrap things up in neat packages or an easy-to-understand framework. Even if stopping there would be like blaming the island. Or the hole in the hull. It would be like saying this one thing or that one thing was the be-all and end-all, and that if it weren't for this one or that one, the ship never would have sunk.

But at the TSB we know that every accident is systemic. Which means every accident is proof of a system that failed. Not one part or one ingredient or one person, but a system failure. Which is why, when we released our final report, we identified the many causes and the underlying factors. Not just the missed course change at Sainty Point, but the fact that there weren't enough people on the bridge, and that the navigational equipment hadn't been set up to take full advantage of all safety features. And when we had listed those factors, our recommendations addressed systemic issues: voyage data recorders, accounting for passengers, crew preparations for abandonment. Because these problems—deep fundamental risks that existed beyond one vessel, one crew, or even one company in one province—remained a real danger.

That philosophy, that everything is systemic, leads me to my second guiding principle:

  • No two accidents are ever alike.

That's because—again—accidents involve people. And because we all behave differently, addressing the causes of just one specific accident... Well, it might prevent that exact accident from happening again in the exact same fashion... But it doesn't mean that something else, something similar, can't also take place. To use a driving analogy, it's like putting speed limits for one type of vehicle on one specific dangerous curve, and then hoping no one in a different car speeds around the very next corner.

But at the TSB, safety means always looking in two directions at once: inward, down into the details of an investigation, but also outward, upward, at the big picture... At the system. So during our investigation we noted that passenger ferries play a significant role in the Canadian transportation network, with millions of vehicles, and tens of millions of people, being transported each year. And we saw that there were a lot of vessels over a certain tonnage operating in Canada, and that many of them, not just the Queen of the North, were carrying hundreds of people. And we also observed that some ferries were wholly domestic, while others were international. And then we saw that there were differences in how they were governed.

And yet the risks were often common.

So again, once we felt that we had addressed the specific risks and hopefully could prevent another Queen of the North tragedy, we realized that the lessons to learn were broader, too: and so we took it as a basic principle that all large passenger vessels should adhere to the same safety standards, regardless of domestic or international operations, and that, in general, the rules set out in the various conventions should apply to all passenger vessels operating in Canada. Sometimes, you see, that invisible dividing line in the water doesn't make safety sense.

I'm going to do a quick aside here and point out that this kind of proactive outlook doesn't apply just to ferries or passenger vessels. We're always combing through our database, searching for clues, links between one occurrence and another, trying to spot patterns, or dangerous trends. And one thing we found out, which is coming to fruition soon, has to do with fishing vessels.

A few years ago, our investigators began reporting an ominous trend: someone dying nearly every month, at one point. In a variety of ways, yes, and in many locations, but there were still common elements. And yes, Canadians have always known that making your living from the sea carries risks, but... Well, we were concerned, so we launched what we call a Special Issues Investigation. It was several years in the making, and involved experts and studies and consultations from coast to coast to coast, but it was worth it. Later this spring we'll be releasing the results of that investigation, and hopefully we'll be able to say we saved a few lives—in advance, this time.

But back to the Queen of the North, and our third guiding principle:

  • No one ever sets out to have an accident.

On the surface, this one seems really obvious, and yet it's also one of those ideas that we seem to have an awful lot of trouble shaking free of. And the reason is that it comes down to how people view the concept of safety, and what needs to be done to stay or be safe.

For many people, as it turns out—and a lot of research has gone into this—preventing an accident is generally thought to be a matter of "being more careful" or else avoiding mistakes. We call that the "don't screw up" school of thought. People in this school will often toss out phrases like "just pay attention," "don't break any rules," "follow all the SOPs," and "make sure the equipment doesn't fail." And yes, those are all good things—don't get me wrong. But when that's the entirety of an organization's approach to safety, it misses the point that people are... Well, human. And that means, when things go wrong, human factors come into play.

When I say "human factors," I'm not just talking about the people on the ground, the ones who do or do not follow SOPs. I'm also talking about the organization as a living, breathing, interconnected thing. Budgets. Schedules. Staffing rotations. Human resources. Finance. Administration. (Seriously: finance and admin!) All of these play a role. For example, cutbacks in one area can leave an organization vulnerable in another. Supervisors, managers, senior staff: they take responsibility, too, because it's their policies and practices that can—even inadvertently—create the conditions and the underlying factors that can lead to an accident.

And so for a real, forward-looking organization, one that truly values safety as much as it says it does, the safety plan has to be more than just "don't screw up." The safety plan has to be proactive: it has to watch for these factors, looking out—in advance—to see how the decisions made in one department can affect the operations of another. And there need to be defences in place, so that errors can be caught and addressed while a problem is still just a problem, and not a full-fledged accident or crisis.

On the Queen of the North, for example, the working environment on the bridge was "less than formal." Music was playing despite its potential for distraction. And the accepted principles of navigation safety were not consistently or rigorously applied. In fact, even though the crew obviously wasn't trying to have an accident, our report found that unsafe navigation practices persisted which, in this occurrence, contributed to a loss of situational awareness by the bridge team.

Or take another example, one that may have, to some people, seemed like no big deal: the practice of regularly operating with some watertight doors open, despite the potential for flooding in event of an accident. Was this deliberate? You bet. But was it done with malice, to try to make the consequences of an accident more severe? Of course not.

But it's precisely these kinds of lessons—the ones that are revealed at the intersection between rules and the people who have to apply them, that are often the ones that are hardest to learn. We are all of us creatures of habit; we don't like to change if we don't feel we need to, and our behavior is often only modified when we come up against something that forces us to change. Something like a terrible accident. One involving the death of innocents.

Sure, this accident happened to someone else—and deep down, who isn't thankful it wasn't them on board that night, either as a passenger or a crew member, in what must have seemed like a chaotic nightmare come to life? But just because it happened to someone else doesn't let the rest of the marine industry off the hook. Because as awful as it is to say it, this could have been even worse. Passenger counts that night were inaccurate, or not taken at all. When the ferry went down, BC Ferries had no idea how many people were on board. If you have to figure out whether everyone got off, you need to know how many were on board to begin with.

As well, the lack of a completed evacuation plan, in addition to inadequate passenger safety training and drills, left some crew members under-prepared to handle the abandonment. And there were no industry-wide standards at the time for ensuring that officers had all the training appropriate to the equipment they used.

Since that night, an awful lot of effort has been expended to try and fix some of the very real problems that we found:

BC Ferries has updated its passenger reservation system for its northern fleet, to create a manifest containing all passengers' names that will help with counts. It has also taken steps to make sure officers are familiar with new equipment, and has voluntarily installed simplified voyage data recorders on a number of its vessels. On the legislative side, new Fire and Boat Drills Regulations now require realistic drills for crew. And the Marine Personnel Regulations have been amended such that crews on certain passenger vessels are required to have passenger-management training. From a safety perspective, it's been a real success story, so much so that several of our recommendations stemming from that accident have now been assessed as Fully Satisfactory.

Hopefully, all of this effort to fix the many causes of one specific accident will prevent that one specific accident from happening again. But it's the systemic issues, the concerns that are common to other vessels, to other routes, or to other companies throughout the industry, that give us the real opportunity to advance safety. Because by looking at the broader picture—the view from 30,000 feet, as they say—we can learn the lessons from that one night, and apply them to other nights—to today and tomorrow and beyond.

Speaking of learning lessons from other accidents... Just over 100 years ago, another marine tragedy took place. And this week, as we mark the centennial of the foundering of the RMS Titanic, it is an honour to be here today in front of you, the marine engineering fraternity. Because on that infamous April night, none of the Titanic's 30 marine engineers survived the sinking, and their loss deprived the world of truly knowing the bravery of their decisions and actions that fateful night.

This conference—Mari-Tech 2012—is placing a big emphasis on safety, and for that I am pleased. I'll be even more pleased if, over the next two days, we do more than just talk about it. Because when it comes to accident prevention, the TSB has a somewhat narrow mandate: we get the phone call after something has gone wrong. The real work is up to people like you, people who set the safety agenda for companies, people who can take action every day.

Please, seize that opportunity.