Vasilii and Eddy with a new graphics card in the Houston office. Photography: RFD.

Why preserving a detailed facies reservoir model has everything to do with your graphics card

Developments in computer processing power have shown two very different trends over the past decades, with a big impact on how reservoir modelling and simulation are done

“Imagine a bulldozer,” says Vasilii Shelkov, CEO of Rock Flow Dynamics, from his office in Houston. “That is how you need to envisage the power delivery of the Central Processing Unit of our computers, or CPU for short. In the early days of reservoir modelling, it was the bulldozers doing all the hard work. But as demands have grown rapidly when it comes to grid size, cell size, and the parameters people want to see modelled within the grid cells of their reservoir models, so has the pressure on software companies to increase their processing speed.”

This soon led to friction.

“The problem is,” continues Vasilii, “the development of bulldozers has not been able to keep pace with the rising demands from the software. Something else had to step in as the workhorse churning through all these calculations.” The answer is the GPU, or Graphics Processing Unit.

“Some geek at some point realised that graphics cards can be used to power workflows outside the gaming space,” Vasilii says. “It was a game changer. Not only because there was another vehicle available to take the load off the bulldozer (the CPU), but also because graphics card processing capability has advanced much more rapidly than that of the CPU.”

“For that reason, we started to use the GPU to perform our modelling workflows,” says Vasilii, who can still be found under his desk installing the latest graphics card, constantly experimenting with configurations to maximise parallel throughput and extract every ounce of performance the hardware can deliver.

But, there is always a but. The way calculations are made in CPU space is very different from the way they are made on the GPU. That has significant ramifications for how the software is written, which means changing the code, and that is quite a time-consuming process. That is where the challenge lies: how much do you invest in converting code to run on the GPU, versus leaving code to run on the CPU?
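
To give a rough sense of why that conversion is non-trivial, here is a minimal sketch of the same cell-by-cell update written once as a plain CPU loop and once as a CUDA kernel. This is not RFD code and not how tNavigator is implemented; the relaxation-style update and the variable names are invented purely for illustration.

```cuda
// Minimal sketch (not RFD code): the same explicit cell update written
// twice -- once as a plain CPU loop, once as a CUDA kernel.
// The "pressure relaxation" update itself is invented for illustration.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// CPU version: one thread walks every cell in sequence.
void relax_cpu(const float* p_old, float* p_new, int n, float omega) {
    for (int i = 1; i < n - 1; ++i)
        p_new[i] = (1.0f - omega) * p_old[i]
                 + 0.5f * omega * (p_old[i - 1] + p_old[i + 1]);
}

// GPU version: the loop disappears; each thread owns one cell,
// and the launch configuration replaces the loop bounds.
__global__ void relax_gpu(const float* p_old, float* p_new, int n, float omega) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        p_new[i] = (1.0f - omega) * p_old[i]
                 + 0.5f * omega * (p_old[i - 1] + p_old[i + 1]);
}

int main() {
    const int n = 1 << 20;          // one million cells
    const float omega = 0.8f;
    std::vector<float> p(n, 1.0f), out_cpu(n, 0.0f), out_gpu(n, 0.0f);

    // CPU path: just call the function.
    relax_cpu(p.data(), out_cpu.data(), n, omega);

    // GPU path: explicit allocation, transfers and a kernel launch --
    // the extra machinery that makes porting a real investment.
    float *d_old, *d_new;
    cudaMalloc(&d_old, n * sizeof(float));
    cudaMalloc(&d_new, n * sizeof(float));
    cudaMemcpy(d_old, p.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    relax_gpu<<<(n + 255) / 256, 256>>>(d_old, d_new, n, omega);
    cudaMemcpy(out_gpu.data(), d_new, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_old);
    cudaFree(d_new);

    printf("cell 1 after one sweep: CPU %f  GPU %f\n", out_cpu[1], out_gpu[1]);
    return 0;
}
```

Nothing in the physics changes between the two versions, but the GPU path drags in explicit memory management, data transfers and launch configuration. That is why moving an existing simulator to the GPU is a genuine engineering investment rather than a recompile.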

Regardless, what has been instrumental is being able to harness the capability offered by the GPU where the CPU has not been able to keep up with developments.

And that is important for the following reason: there is no longer any need to upscale, which means no loss of resolution in the translation from your static to your dynamic reservoir model. That brings us back to the title of this article: if you no longer need to upscale, the business case for investing in yet another seismic survey is much stronger, because the increased level of detail can now be preserved all the way through the modelling process. All thanks to your graphics card.
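
To make the "losing resolution" point concrete, here is a small, hypothetical illustration in plain host-side C++ (the permeability values are invented and not taken from any real model): ten fine-grid cells containing one thin high-permeability streak are collapsed into a single coarse cell using two common averaging rules.

```cpp
// Hypothetical illustration of what upscaling throws away: a 10-cell
// fine-grid column with one high-permeability streak is collapsed
// into a single coarse cell. Values are invented for this example.
#include <cstdio>

int main() {
    // Fine-grid permeabilities in millidarcy: nine tight cells, one streak.
    const int n = 10;
    float k[n] = {1, 1, 1, 1, 500, 1, 1, 1, 1, 1};

    // Arithmetic average (flow along the layers) and harmonic average
    // (flow across the layers) -- two common upscaling choices.
    float sum = 0.0f, inv_sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        sum += k[i];
        inv_sum += 1.0f / k[i];
    }
    float k_arithmetic = sum / n;       // ~50.9 mD: streak smeared over the cell
    float k_harmonic   = n / inv_sum;   // ~1.1 mD: streak effectively gone

    printf("fine-grid streak:        %.0f mD\n", k[4]);
    printf("coarse cell, arithmetic: %.1f mD\n", k_arithmetic);
    printf("coarse cell, harmonic:   %.1f mD\n", k_harmonic);
    return 0;
}
```

Whichever average is chosen, the streak that could dominate flow behaviour in the dynamic model is smeared out or lost. Keeping the fine grid, as GPU-powered simulation allows, sidesteps that choice entirely.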
