
Pangea III: A Supercomputer for Oil and Gas Exploration

Total’s new supercomputer, Pangea III, was ranked the most powerful HPC in the industry and the 11th most powerful computer globally – but what does this mean for oil and gas exploration?

Alfred Wegener (1880–1930), the German meteorologist and geophysicist who formulated the first complete statement of the continental drift hypothesis, including the concept of the major land mass, Pangea.
Alfred Wegener is widely recognised as foremost among the scientists whose work promoted the development of modern geology and, by extension, the petroleum geosciences: his famous theory of continental drift still forms the backbone of oil and gas exploration today, as illustrated by the recent interest in the conjugate Brazilian and Angolan margins.

In 1915, Wegener proposed that the continents once formed a single, contiguous land mass, which he called ‘Pangea’, a word formed from the Ancient Greek pan (‘entire, whole’) and Gaia (‘Mother Earth, land’). Carrying the ideas of both uniqueness and entirety, the name was the perfect match for the High Performance Computer (HPC) that Total decided to install in Pau, France, back in 2013: Pangea was born.

Pangea has evolved dramatically in order to keep pace with the competition for processing capacity. Less than ten years after its conception, in mid-2019, the newest version of the supercomputer was brought online. Pangea III can deliver an impressive 25 petaflops of computing power, bringing the overall computational capability of the Total Group to 31.7 petaflops – the equivalent of 170,000 laptops combined. It also brings the cumulative storage capacity to 76 petabytes (the equivalent of about 50 million HD movies), and in June 2019 Pangea III was ranked the most powerful HPC in the industry and the 11th most powerful computer globally. While this new supercomputer multiplied the processing power of the previous version by almost five, IBM – which developed and installed Pangea III – managed to bring down its power consumption per petaflop by a factor of more than ten!
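As a quick, illustrative sanity check of that laptop comparison (the per-laptop performance implied below is an inference, not a figure stated by Total), dividing the Group’s total capacity by the quoted number of laptops gives a plausible per-machine number:

```python
# Back-of-the-envelope check of the '170,000 laptops' comparison (illustrative only).
total_capacity_flops = 31.7e15   # the Group's combined capacity: 31.7 petaflops
laptop_count = 170_000           # number of laptops quoted in the comparison

flops_per_laptop = total_capacity_flops / laptop_count
print(f"Implied per-laptop performance: {flops_per_laptop / 1e9:.0f} gigaflops")
# -> roughly 190 gigaflops, a believable peak figure for a modern laptop processor
```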

Overview of a few racks from Pangea III. Image credit: Laurent Pascal.

More Computer Power Equals More Oil!
Seismic reflection has been the foundation of oil and gas exploration for decades, and is widely used by the industry to image the subsurface and thereby increase the chance of drilling successful exploration and development wells. Seismic waves sent into the ground by a seismic source spread through the subsurface, reflect at geological interfaces and propagate back to the surface, where tens of thousands of sensors record them. The process is repeated thousands of times and, once the reflected waves have undergone a complicated processing sequence known as ‘seismic imaging’, allows us to image what lies beneath our feet down to 10 km or more. Complex equations mimicking the propagation of waves in the subsurface are used to relocate the recorded seismic events to their actual positions in a 3D image called a ‘cube’, a process geophysicists call ‘migration’.
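To give a flavour of the wavefield modelling that migration relies on, the sketch below propagates a source wavelet through a toy two-layer velocity model with a basic finite-difference scheme. It is a minimal illustration of the physics described above, not Total’s production code, and all grid, velocity and source parameters are arbitrary example values.

```python
import numpy as np

# Minimal 2D acoustic finite-difference propagation: the kind of wavefield
# modelling at the heart of migration. All parameters are illustrative.
nx, nz = 200, 200          # grid points in x and depth
dx = 10.0                  # grid spacing (m)
dt = 0.001                 # time step (s)
nt = 500                   # number of time steps

vel = np.full((nz, nx), 2000.0)   # 2,000 m/s background velocity
vel[120:, :] = 3000.0             # a faster layer at depth (a crude geological interface)

# Ricker wavelet source injected at a single grid point near the surface
f0 = 20.0                         # dominant frequency (Hz)
t = np.arange(nt) * dt
src = (1 - 2 * (np.pi * f0 * (t - 1 / f0))**2) * np.exp(-(np.pi * f0 * (t - 1 / f0))**2)
sx, sz = nx // 2, 2

prev = np.zeros((nz, nx))
curr = np.zeros((nz, nx))

for it in range(nt):
    # Second-order Laplacian of the current wavefield (interior points only)
    lap = np.zeros_like(curr)
    lap[1:-1, 1:-1] = (curr[2:, 1:-1] + curr[:-2, 1:-1] +
                       curr[1:-1, 2:] + curr[1:-1, :-2] -
                       4 * curr[1:-1, 1:-1]) / dx**2
    # Time-step the acoustic wave equation: u_tt = v^2 * laplacian(u)
    nxt = 2 * curr - prev + (vel * dt)**2 * lap
    nxt[sz, sx] += src[it]        # inject the source wavelet
    prev, curr = curr, nxt

# 'curr' now holds a snapshot of the wavefield after nt steps; reflections
# appear where the wave crossed the velocity contrast at depth.
```

Even this toy example repeats the update for every grid point at every time step; production imaging scales the same idea to enormous 3D grids and thousands of shots, which is where the petaflops go.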

Whilst the principles of migration have been understood since the very beginning of seismic exploration, migration algorithms have only come into widespread use over the past 20 years because they are so demanding of computing power. In fact, the first migration algorithms were based on simplified equations or assumptions in order to cope with the limited computational capabilities available at the time, which limited image quality, especially in complex areas. Nowadays, High Performance Computers allow us to process huge amounts of data using algorithms that model the physics of seismic wave propagation ever more accurately, which in turn generates more precise, higher-resolution images and therefore reduces the uncertainties in prospect evaluation, well positioning and reservoir monitoring.

So, having more petaflops leads to more oil, produced for less in a safer environment.

Supercomputers and Seismic Imaging in Oil and Gas
Pangea III is part of Total’s overall strategy to develop unique know-how in seismic imaging, so as to be able to tackle the ever more complex problems arising from the company’s wide-ranging portfolio. These requirements include subsalt imaging in Brazil, the Gulf of Mexico and deep offshore Angola; higher-resolution velocity models in order to better predict pore pressure and design well architecture accordingly; and precise time-lapse imaging for reservoir monitoring and the subsequent optimisation of development or infill wells.

Velocity model (A) and migrated seismic image (B) in a complex geological setting. Red areas correspond to high velocity salt bodies. Source: Total.

The ability to perform such imaging studies in-house is seen as a definite competitive advantage. Strong interaction between imaging geophysicists and geologists, such as specialists in structural geology, makes it possible to integrate several disciplines in velocity model building – a crucial step in the seismic imaging process. These models can then be used to test several scenarios in just a few days, thanks to the extra computing power available, allowing the geoscientists to easily refine the velocity model and obtain a better, more focused image.

This internal expertise also gives the company priceless flexibility and responsiveness when short-term problems requiring a quick but precise answer arise. For instance, as soon as a well has been drilled, the velocity model and associated migrated image can be updated with the new well information in a very short timeframe – typically just a couple of weeks. A decision on, for example, drilling or updating a sidetrack wellbore can then be made rapidly, based on a better local image.

Last, but not least, having an HPC available is also advantageous to Total’s internal R&D teams when they need to test and put into production the new imaging algorithms they deliver on a regular basis. Thanks to the close collaboration between the imaging and research teams, these more complex and/or more efficient algorithms can quickly meet the needs of operational geophysicists, allowing them to tackle specific problems in a short timeframe. R&D can be compared to the fuel needed to power the competition engine – the supercomputer – so as to win the race for oil and gas!

More Computing Power with a Smaller Environmental Footprint
Shifting from Pangea II, which had been installed in 2016, to Pangea III multiplied the processing capacity by a factor of about five, which required a major shift in the HPC architecture. It was quickly realised that GPUs (Graphics Processing Units) were the only technology capable of delivering the desired computing power at an acceptable energy consumption, and as a result the CPU (Central Processing Unit)-based architecture of Pangea II had to be completely revised.

The difference between a CPU (left) and GPU (right). Source: nvidia.

While CPUs and GPUs are both silicon-based processors, they process tasks in fundamentally different ways (see GEO ExPro Vol. 13, No. 1). A CPU is much more versatile and can work on a variety of tasks, thanks to an architecture adapted to sequential work and built around a few cores (up to 24). A GPU, by contrast, is designed to handle multiple tasks at the same time in a very effective manner, and is therefore made up of thousands of smaller, more efficient cores, an architecture aimed at parallel computing. Nowadays, GPUs deliver much higher computing power than CPUs, and can be up to 100 times faster for tasks requiring massive parallel computation, such as machine learning or the handling of huge datasets – which is exactly what we do in seismic imaging.
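The contrast can be sketched with a toy example: below, the same gain is applied to an array of samples first one element at a time (the serial pattern a single CPU core follows) and then as a single vectorised operation (the data-parallel pattern that maps naturally onto thousands of GPU cores). The array size is arbitrary, and the vectorised call here still runs on the CPU, so this is an analogy for the programming model rather than a GPU benchmark.

```python
import time
import numpy as np

# Illustrative contrast between element-by-element (serial) work and
# data-parallel (vectorised) work on the same task: scaling a large
# array of seismic samples. Array size and timings are arbitrary examples.
samples = np.random.rand(5_000_000)
gain = 1.5

# Serial: one sample at a time, as a single core working sequentially would
start = time.perf_counter()
scaled_serial = [s * gain for s in samples]
serial_time = time.perf_counter() - start

# Data-parallel: the whole array in one vectorised operation, the pattern
# that maps naturally onto thousands of GPU cores
start = time.perf_counter()
scaled_parallel = samples * gain
parallel_time = time.perf_counter() - start

print(f"serial loop:     {serial_time:.3f} s")
print(f"vectorised call: {parallel_time:.3f} s")
```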

Pangea III could therefore be described as a bespoke configuration that gives more power but with a smaller environmental footprint.

Changing the HPC architecture for Pangea III was a sensible choice, since it paves the way for new algorithms based on artificial intelligence, but it also meant a complete overhaul of the production algorithms used by Total in order to make them compatible with the new architecture. The geophysicists and IT engineers spent a full year adapting most of the codes to this new configuration, an extraordinary effort which eventually delivered unprecedented performance in the applications used for imaging.

Pangea III: A Versatile Tool in Oil and Gas Exploration
Although Pangea III is – as of today – mainly used for seismic imaging purposes, its versatility and computing power will also be beneficial for many disciplines across the Total Group. For example, the machine will allow reservoir engineers to run many more simulations, in order to better capture and manage uncertainties, or to quickly match actual production data to the reservoir model. New disruptive methods like enHM (ensemble History Matching) will produce better field-history matching and more reliable production forecasts, while reducing the research time required. In addition, other branches of the Total Group will benefit from the vast power of the HPC, since, for instance, chemical and molecular modelling and analysis will be made available to researchers from the Refining and Chemicals branch of the company.

In fact, the only stakeholders who complained when Pangea III was launched were the employees who cycle to work in Pau: the water supplying their changing rooms had been heated by Pangea II’s cooling system, but the new supercomputer is so much less energy intensive that its cooling system could no longer heat the water to an acceptable temperature. Fortunately, a new boiler was installed for them, making everyone happy – and Pangea III a completely shared success!

Further Reading on Supercomputers in Oil and Gas Exploration
Supercomputers for Beginners – Part I
Lasse Amundsen, Statoil, Martin Landrø and Børge Arntsen, NTNU Trondheim
Supercomputers analyse vast amounts of data, faster and more accurately. By 2020 they will be doing a quintillion calculations per second. This article provides a simple guide to supercomputers.
This article appeared in Vol. 12, No. 5 – 2015

Supercomputers for Beginners – Part II
Lasse Amundsen, Statoil, Martin Landrø and Børge Arntsen, NTNU Trondheim
Energy companies need ever larger computers. This article is the second of three articles giving an introduction to supercomputers. Here, we look at the design of parallel software.
This article appeared in Vol. 12, No. 6 – 2016

Supercomputers for Beginners – Part III. GPU-Accelerated Computing
Lasse Amundsen, Statoil, Martin Landrø and Børge Arntsen, NTNU Trondheim
Many TOP500 supercomputers today use both CPUs and GPUs to give the best of both worlds: GPU processing to perform mathematically intensive computations on very large data sets, and CPUs to run the operating system and perform traditional serial tasks. CPU-GPU collaboration is necessary to achieve high-performance computing.
This article appeared in Vol. 13, No. 1 – 2016

Supercomputers for Beginners – Part IV
Lasse Amundsen, Statoil, Martin Landrø and Børge Arntsen, NTNU Trondheim
Quantum Computers.
This article appeared in Vol. 13, No. 3 – 2016
