Figure: The workflow slides a window across the entire 3D target volume to yield the seismic facies classification result. The corresponding sedimentary facies are then derived from the predicted seismic facies, well log information, seismic data, and geological and geophysical expertise. The predicted sedimentary facies agree with the expert interpretation. Source: Hui Gao et al. (2024).

A seismic facies library

A new approach for generating a library of seismic facies patterns helps automate and speed up the interpretation process

Seismic facies classification can be a tedious process, especially in an exploration context where large areas need to be assessed. To speed up that process, a library of seismic facies patterns drawn from a benchmark dataset can help map the 3D distribution of seismic facies in an automated way.
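As a rough illustration of the sliding-window idea, the sketch below scans a 3D NumPy array with a cubic window and assigns each window a facies label. The classifier here is a hypothetical stand-in (a simple amplitude threshold), not the authors' model; window size and step are likewise illustrative.

```python
import numpy as np

def classify_volume(volume, win=8, step=4, classify=None):
    """Slide a cubic window over a 3D seismic volume and assign each
    window position a facies label via a user-supplied classifier.

    `classify` is a stand-in for any patch classifier; the default
    simply thresholds mean amplitude into two classes.
    """
    if classify is None:
        classify = lambda patch: int(patch.mean() > 0)
    ni, nj, nk = volume.shape
    labels = {}
    for i in range(0, ni - win + 1, step):
        for j in range(0, nj - win + 1, step):
            for k in range(0, nk - win + 1, step):
                patch = volume[i:i + win, j:j + win, k:k + win]
                labels[(i, j, k)] = classify(patch)
    return labels

# Toy volume: negative amplitudes in one half, positive in the other.
vol = np.concatenate([-np.ones((8, 16, 16)), np.ones((8, 16, 16))], axis=0)
labels = classify_volume(vol, win=8, step=8)
```

In practice the window labels would be mapped back onto the volume to produce the facies cube shown in the figure.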

Hui Gao, representing a group of researchers from the University of Science and Technology of China (Hefei), gave a talk at the recent IMAGE Conference in Houston, during which he illustrated how a large collection of benchmark seismic facies patterns was put together. Both the datasets and the code have been made available online for anyone interested in learning more.

How to build it?

But how can one make sure that a library of seismic facies patterns covers the diversity observed in the real world? To achieve this, the team adopted a three-tier strategy.

First, they built a portfolio of characteristic seismic facies from public-domain data. Because these datasets come from different seismic vintages and areas, they were standardized through a spectral analysis exercise.
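The talk did not detail the standardization method, but one common form of spectral balancing is amplitude-spectrum matching: reshape each trace's amplitude spectrum to a reference while preserving its phase. A minimal sketch, assuming FFT-based shaping (the authors' exact procedure may differ):

```python
import numpy as np

def match_spectrum(trace, reference):
    """Reshape `trace` so its amplitude spectrum matches that of
    `reference`, while keeping the trace's own phase. A simplified
    stand-in for standardizing data from different seismic vintages.
    """
    T = np.fft.rfft(trace)
    R = np.fft.rfft(reference)
    amp_t = np.abs(T)
    phase = T / np.maximum(amp_t, 1e-12)  # unit-magnitude phase term
    shaped = phase * np.abs(R)            # impose reference amplitudes
    return np.fft.irfft(shaped, n=len(trace))

t = np.linspace(0, 1, 256, endpoint=False)
trace = np.sin(2 * np.pi * 30 * t)        # toy "vintage A" trace
reference = 0.5 * np.sin(2 * np.pi * 30 * t)  # toy reference spectrum
out = match_spectrum(trace, reference)
```

After shaping, traces from different surveys share a common spectral character, so facies patterns become comparable across datasets.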

Because examples from public-domain sources lacked diversity, the authors, as a second step, also generated synthetic facies samples based on prior knowledge of seismic facies. Even though a noise factor was introduced in this process to arrive at a more realistic representation, this dataset still suffered from a lack of diversity and realism.
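A classic way to synthesize such samples, which may approximate what the authors did, is to convolve a sparse reflectivity series with a wavelet and add noise. The function names and parameters below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ricker(f=25.0, dt=0.002, length=0.128):
    """Ricker wavelet with peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def synthetic_trace(n=256, noise=0.1, rng=None):
    """Sparse reflectivity convolved with a Ricker wavelet, plus
    Gaussian noise -- one way to synthesize training traces."""
    rng = np.random.default_rng(rng)
    refl = np.zeros(n)
    spikes = rng.choice(n, size=8, replace=False)
    refl[spikes] = rng.uniform(-1, 1, size=8)
    clean = np.convolve(refl, ricker(), mode="same")
    return clean + noise * rng.standard_normal(n)

trace = synthetic_trace(rng=0)
```

Stacking many such traces with geologically plausible reflectivity geometries yields 2D facies images; the `noise` term is the "noise factor" mentioned above.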

To overcome this issue, as a final step, the team used the images generated in the first two stages as training data for a progressively growing GAN (generative adversarial network). A GAN consists of a generator model (G) and a discriminator model (D): G captures the data distribution and generates fake images that resemble the training dataset, while D assesses the probability that an image is real or fake. The progressive-growing strategy lets the network learn the features of the training dataset from large to small scales, resulting in faster training, higher stability, and better-quality images. After training, the generator (G) was used to automatically generate diverse samples.
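To make the adversarial mechanics concrete, here is a minimal sketch of one vanilla GAN training loop on 1D data, with a linear generator and discriminator and manual gradients. This is an illustration of the G/D objective only — progressive growing, convolutional networks, and everything else specific to the authors' model are omitted:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def d_loss(w, c, real, fake):
    """Discriminator loss: -[log D(real) + log(1 - D(fake))], averaged,
    where D(x) = sigmoid(w*x + c)."""
    s_r = sigmoid(w * real + c)
    s_f = sigmoid(w * fake + c)
    return -(np.log(s_r).mean() + np.log(1 - s_f).mean())

def d_grads(w, c, real, fake):
    """Analytic gradients of d_loss with respect to (w, c)."""
    s_r = sigmoid(w * real + c)
    s_f = sigmoid(w * fake + c)
    gw = -((1 - s_r) * real).mean() + (s_f * fake).mean()
    gc = -(1 - s_r).mean() + s_f.mean()
    return gw, gc

def g_grads(w, c, a, b, z):
    """Non-saturating generator gradients wrt (a, b), fake = a*z + b."""
    s_f = sigmoid(w * (a * z + b) + c)
    ga = -((1 - s_f) * w * z).mean()
    gb = -((1 - s_f) * w).mean()
    return ga, gb

rng = np.random.default_rng(0)
w, c, a, b = 0.1, 0.0, 1.0, 0.0
lr = 0.05
for _ in range(500):
    z = rng.standard_normal(64)
    real = 3.0 + rng.standard_normal(64)   # "real" data: N(3, 1)
    fake = a * z + b
    gw, gc = d_grads(w, c, real, fake)
    w, c = w - lr * gw, c - lr * gc        # discriminator step
    ga, gb = g_grads(w, c, a, b, rng.standard_normal(64))
    a, b = a - lr * ga, b - lr * gb        # generator step
```

D pushes its output toward 1 on real samples and 0 on fakes; G is updated to make D classify its output as real, which drags the generated distribution toward the data.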

The test

The authors subsequently showed how the model was applied to a 3D seismic dataset, using the Yuanba 3D as an example – shown here. The expert interpretation of a series of carbonate reefs and slope areas was correctly predicted by the model.

During the Q&A, someone from Chevron remarked that the benchmark dataset seemed to have been built using unfaulted seismic data, implying that faults may hinder the automatic interpretation process. The author replied that this would be part of future work.
