
Potential Risks from Outdated Technology

Embrace modern technologies to assure organisational and operational security.

Anyone who has been in the oil and gas sector for the last 20 years or so has seen vast improvements in the use of technology to advance operational efficiency and reliability. The improvements in assessing operational readiness of equipment and systems are one of the great stories of the 21st century. So why are we still using archaic risk assessment methodologies as attestation that we are ‘safe to start’ and ‘safe to operate’? These outdated methods include static, manual, and ‘just-in-time’ risk assessments and registers.

Here is a short self-assessment on operational risk management:

Is your organisation still dependent on spreadsheets, studies, dashboards, etc. that were done six months ago, or longer?
Has anything changed within your plant, facility or operations since these studies or reports were completed?
Are you dependent on risk management specialists, such as risk engineers, to evaluate and update your operational risk assessments to qualify barrier health?

If you answered ‘no’ to the questions above, congratulations! From our interviews with prospective clients over the past ten years, we can say that you are in the extreme minority of oil and gas organisations. While many have embraced new hardware and software solutions to improve both the quality and quantity of data used in predictive analytics for preventative and corrective maintenance and for operational efficiency, very few have invested in similar solutions to manage operational or process safety risks.

An Outdated, Inconsistent System
Enterprise software systems, such as those that are used to track training, competency and certification, are typically ‘stand-alone’ arrangements that are not integrated with the risk management process, and they certainly cannot transmit information back to a maintenance management system that says, “Hey, Joe just retired and the new guy has no experience with this equipment.”

Similarly, distributed control systems (DCS) typically provide data on the operational efficiency and health of the equipment they are part of. However, human factor data is not usually included in the feeds used by process safety risk and maintenance management systems, even though human error has long been recognised as one of the key contributory causes of process safety incidents. The systems we have created to manage operational efficiency, competency, maintenance and training are not being used effectively to predict and prevent process safety incidents. A minimal sketch of the missing integration follows.
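
To make the missing link concrete, here is a minimal sketch, in Python, of the kind of join the article argues for: a competency record from a stand-alone training system checked against live equipment status before start-up. All class names, fields, tags and thresholds are hypothetical illustrations, not any vendor's schema.

```python
# A sketch of joining a competency record from an enterprise HR/training
# system with equipment-health data from a DCS feed before a start-up
# risk review. All names and thresholds here are invented for illustration.
from dataclasses import dataclass


@dataclass
class EquipmentStatus:
    tag: str                 # equipment tag from the DCS
    health_score: float      # 0.0 (failed) .. 1.0 (healthy)


@dataclass
class OperatorRecord:
    name: str
    certified_for: set[str]  # equipment tags the operator is certified on
    years_on_unit: float


def startup_risk_flags(equipment: EquipmentStatus,
                       operator: OperatorRecord) -> list[str]:
    """Warnings a purely equipment-centric review would miss, e.g.
    'Joe just retired and the new guy has no experience here.'"""
    flags = []
    if equipment.health_score < 0.7:
        flags.append(f"{equipment.tag}: degraded equipment health "
                     f"({equipment.health_score:.2f})")
    if equipment.tag not in operator.certified_for:
        flags.append(f"{operator.name} is not certified on {equipment.tag}")
    if operator.years_on_unit < 1.0:
        flags.append(f"{operator.name} has under a year on this unit")
    return flags


# A healthy exchanger still raises flags because the human-factor data
# says the assigned operator is new and uncertified.
print(startup_risk_flags(
    EquipmentStatus(tag="E-101", health_score=0.92),
    OperatorRecord(name="New hire", certified_for=set(), years_on_unit=0.2),
))
```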

Risk is a cumulative assessment process, yet we continue to use stand-alone, outdated processes to determine whether it is safe to start or operate. Why is this?

Chemical Safety Board investigators inspect the aftermath of a refinery fire. Photo credit: US CSB.
Perhaps the drivers that influence updating our other systems focus more on optimising operational efficiency, i.e. our ability to maximise productivity, than on evaluating the risk (severity x probability) of catastrophic failure due to a process safety incident. Almost certainly one of the factors is the plethora of methodologies and acronyms used across the oil and gas industry sector – bowties, HAZIDS, HAZOPS, ENVIDS, risk registers, pre-start-up safety reviews (PSSRs), project HSE and security reviews (PHSSERs), human factors assessment tools (HFATs), process hazard analysis (PHA), to name just a few. The severity x probability calculation itself is simple, as the sketch below shows.
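
For readers unfamiliar with the formula, here is a minimal sketch of how a standard severity x probability matrix scores a scenario. The 5x5 scales and band boundaries are placeholders; every operator calibrates its own.

```python
# A minimal 5x5 risk matrix: risk = severity x probability.
# Band boundaries are illustrative, not a standard.
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3,
            "major": 4, "catastrophic": 5}
PROBABILITY = {"rare": 1, "unlikely": 2, "possible": 3,
               "likely": 4, "almost certain": 5}


def risk_rating(severity: str, probability: str) -> tuple[int, str]:
    score = SEVERITY[severity] * PROBABILITY[probability]
    if score >= 15:
        band = "high"
    elif score >= 6:
        band = "medium"
    else:
        band = "low"
    return score, band


# A catastrophic outcome stays 'high' even at a modest probability:
print(risk_rating("catastrophic", "possible"))  # (15, 'high')
print(risk_rating("minor", "likely"))           # (8, 'medium')
```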

Across the oil and gas sector we have seen a wide spectrum of processes in use, or not in use, as if chosen from an ‘à la carte’ breakfast bar. It is my belief that this contributes to the confusion and the lack of standardisation of risk management processes across the sector. The importance of standardisation is perhaps best understood by looking at two major process safety incidents, along with several key causal factors and recommendations identified by the United States Chemical Safety Board (US CSB).

Safety Incident #1 – Anacortes:

The Anacortes refinery in Washington state, where seven workers died in 2010 when a heat exchanger violently ruptured after a maintenance restart.
On 2 April 2010, the Tesoro Refining and Marketing Company, LLC petroleum refinery in Anacortes, Washington experienced a catastrophic rupture of a heat exchanger in the Catalytic Reformer/Naphtha Hydrotreater unit. The rupture fatally injured seven Tesoro employees who were working in the immediate vicinity of the heat exchanger at the time of the incident.

Causal factors identified by the US CSB included the following:

The rupture of the E heat exchanger resulted from the carbon steel heat exchanger being severely weakened by high temperature hydrogen attack (HTHA). HTHA occurs when carbon steel equipment is exposed to hydrogen at high temperatures and pressures; it degrades the mechanical properties of the steel, causing fissures and cracking.
Tesoro relied on American Petroleum Institute (API) Recommended Practice 941, which uses Nelson Curves to predict the occurrence of HTHA. These curves are plotted from self-reported data on previous equipment failures, and the reported process conditions were ill-defined and lacked consistency. (A simplified, illustrative version of this kind of curve check is sketched after this list.)
The start-up of the heat exchangers was hazardous, non-routine work. Leaks routinely developed, presenting hazards to workers conducting the start-up activities. Process hazard analysis at the refinery repeatedly failed to ensure that these hazards were controlled and that the number of workers exposed to them was minimised.
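
To illustrate the kind of check a Nelson Curve supports, here is a simplified sketch that compares an operating point (temperature and hydrogen partial pressure) against a material limit curve. The curve points and units below are invented for illustration and are not taken from API RP 941, whose published curves, as the CSB noted, rest on self-reported failure data.

```python
# Hypothetical carbon steel limit curve: (hydrogen partial pressure in
# psia, maximum allowable temperature in deg F), sorted by pressure.
# ILLUSTRATIVE VALUES ONLY; not the API RP 941 Nelson Curve.
CARBON_STEEL_LIMIT = [(100, 550), (300, 500), (600, 460), (1000, 430)]


def max_safe_temp(h2_psia: float) -> float:
    """Linear interpolation along the illustrative limit curve."""
    pts = CARBON_STEEL_LIMIT
    if h2_psia <= pts[0][0]:
        return pts[0][1]
    for (p1, t1), (p2, t2) in zip(pts, pts[1:]):
        if h2_psia <= p2:
            frac = (h2_psia - p1) / (p2 - p1)
            return t1 + frac * (t2 - t1)
    return pts[-1][1]


def htha_susceptible(temp_f: float, h2_psia: float) -> bool:
    """True if the operating point sits above the (illustrative) curve."""
    return temp_f > max_safe_temp(h2_psia)


print(htha_susceptible(temp_f=500, h2_psia=450))  # True with these points
```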

Safety Incident #2 – Texas City:

The aftermath of the Texas City refinery fire. Photo credit: US CSB.
At approximately 1.20 p.m. on 23 March 2005, a series of explosions occurred at the BP Texas City refinery during the restarting of a hydrocarbon isomerisation unit. Fifteen workers were killed and 180 others were injured. Many of the victims were in or around work trailers located near an atmospheric vent stack. The explosions occurred when a distillation tower flooded with hydrocarbons and was over-pressurised, causing a geyser-like release from the vent stack.

The recommendations that came out of the BP Texas City refinery investigation included the following:

Improve operator training and, at a minimum, require face-to-face training on recognising and handling abnormal situations.
Require knowledgeable supervisors or technically trained personnel to be present during hazardous operations phases such as unit start-up.
Ensure that process start-up procedures are updated to reflect actual process conditions.

At the 10-year anniversary of the incident, the US CSB made the following statement: “The gaps in existing federal and state regulations were cited in recent CSB investigations, including the April 2, 2010 explosion at the Tesoro refinery in Anacortes, Washington, and the August 6, 2012, fire at the Chevron refinery in Richmond, California. In both investigations, the CSB concluded warning signs regarding a potential accident were overlooked for years leading up to the catastrophic events. However, in both cases the CSB also found that federal and state process safety management regulations do not explicitly require the kinds of preventative measures that may have stopped the accidents from occurring.”

The safety message concludes with the following call from then-CSB Chairperson Rafael Moure-Eraso: “It’s been ten years since the terrible accident at the BP Texas City refinery. Industries and governments alike should increase their efforts to prevent process related disasters. Workers, the public and companies will benefit.”

Any Answers?
So, what are the answers to these dynamic and complex situations that are present in the industry?

First, the use of standardised tools to evaluate risk across the sector would be a good starting point. This includes the suite of tools recommended by API and the International Association of Oil and Gas Producers. Additionally, evaluation of process safety barriers using methodologies such as James Reason’s barrier model, frequently referred to as the ‘Swiss cheese’ model, provides a standardised framework against which direct and contributory factors can be assessed using software applications. A minimal sketch of such a barrier assessment follows.
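
Here is a minimal sketch in the spirit of Reason’s model: each barrier in the escalation path has a health state, and the concern is degraded barriers ‘lining up’. The barrier names, states and two-barrier alert rule are hypothetical illustrations, not a published standard.

```python
# Barrier-health assessment sketch: the 'holes' in the Swiss cheese are
# degraded barriers. Names, states and the alert rule are illustrative.
BARRIERS = {
    "design integrity": "healthy",
    "process alarms": "degraded",      # e.g. chronic alarm flooding
    "operator response": "degraded",   # e.g. competency gap on shift
    "relief systems": "healthy",
    "emergency response": "healthy",
}


def degraded_barriers(barriers: dict[str, str]) -> list[str]:
    """Barriers whose 'holes' are currently open."""
    return [name for name, state in barriers.items() if state != "healthy"]


weak = degraded_barriers(BARRIERS)
print(f"{len(weak)} of {len(BARRIERS)} barriers degraded: {weak}")
if len(weak) >= 2:
    print("Escalation risk: review before 'safe to operate' sign-off")
```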

Risk assessment matrix
Through technology integration software, operators and producers can use risk assessment software that accepts millions of inputs, including static registers and studies, DCS feeds, daily operational inputs, and maintenance and enterprise software systems. The volume of data is tremendous, but the output, in the form of dashboards and registers, is simplified so that operations personnel can manage risk at the local level instead of needing a risk engineer to interpret the data.
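
The roll-up described above can be pictured as a simple reduction from heterogeneous feeds to a traffic-light status. The feed names, normalised scores and thresholds in this sketch are hypothetical.

```python
# Many inputs in, one actionable status out. Feeds are assumed to be
# pre-normalised to 0.0..1.0 (higher = healthier); weights and cut-offs
# are invented for illustration.
from statistics import mean

inputs = {
    "static_register": 0.85,       # e.g. HAZOP action close-out rate
    "dcs_equipment_health": 0.90,  # normalised from live DCS tags
    "deferred_maintenance": 0.40,  # low = large overdue backlog
    "crew_competency": 0.75,       # from the training/certification system
}


def dashboard_status(scores: dict[str, float]) -> str:
    worst, overall = min(scores.values()), mean(scores.values())
    # Any single critically weak feed forces red, regardless of the average.
    if worst < 0.5:
        return "RED"
    return "GREEN" if overall >= 0.8 else "AMBER"


print(dashboard_status(inputs))  # RED: deferred maintenance drags it down
```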

The most sophisticated software applications are now cloud-based, so data can be accessed anywhere and at any time, as long as there is an internet connection. This enables front-line operations supervisors to consult objectively with senior management, because everyone can view the same data together in real time from the cloud. Because inputs are continuous and based on actual operating conditions in the facility, risks can be viewed in real time and decisions can be made on objective data rather than subjective opinions.

The RiskPoynt® safety software application is one such technology solution. While other software solutions exist, very few have been designed to accept inputs that evaluate human factors, safety critical equipment, deferred maintenance, operational inputs, DCS and enterprise software feeds, and static risk data from risk registers, HAZIDS, HAZOPS, ENVIDS, PHAs and the rest. Based on conversations with major software and hardware developers, there is a consensus that we are currently in an augmented information technology (IT) space, where humans make critical safety decisions based on the information provided through such systems.

But this will undoubtedly evolve into the next generation of information technology, cognitive IT, in which the software makes critical decisions, such as shutting down systems based on the data, without the need for human intervention (a deliberately simple illustration of the difference follows). Before this can occur, though, the industry will need to evaluate the current standards, hardware and software that can provide the data, as well as the man-machine interface upon which critical decisions, either augmented or cognitive, are made. The first steps are to embrace the technology and improve on existing processes and systems, utilising modern tools to present objective information so that competent people can make the right decision.
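
The difference between augmented and cognitive IT can be reduced to a few lines. The health-score semantics and threshold here are invented for illustration.

```python
# Augmented vs cognitive IT in its simplest form: same data, different
# actor. Score semantics and threshold are illustrative only.
SHUTDOWN_THRESHOLD = 0.35  # below this, the unit should not run


def evaluate(barrier_health_score: float, cognitive_mode: bool = False) -> str:
    """Higher scores mean healthier barriers (0.0 .. 1.0)."""
    if barrier_health_score >= SHUTDOWN_THRESHOLD:
        return "unit OK to operate"
    if cognitive_mode:
        # Cognitive IT: the system executes the trip itself.
        return "TRIP INITIATED: automatic shutdown"
    # Augmented IT: the system advises; a human decides.
    return "ALERT: shutdown recommended, awaiting operator confirmation"


print(evaluate(0.20))                       # augmented: human decides
print(evaluate(0.20, cognitive_mode=True))  # cognitive: system acts
```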
