Much of the AI we see today is very domain-specific and, in my mind, one-dimensional. People continue to operate in their own realms, developing algorithms that produce results at great speed.
I attended a very good conference not long ago where many algorithms and workflows were presented that produced incredible results in a very short time.
However, by the end of the conference, the fundamental problem remained: ineffective collaboration and poor multi-disciplinary assimilation. The results produced still required time-consuming handover and manual incorporation.
AI gives us the means to iterate, fine-tune and customise at an unprecedented pace and scale. But when it comes to the way we work, is this innovation or just an evolution of doing the same thing faster?
Geoscientists are good iterators. By default, we model and tune a variety of scenarios to get the most realistic, economical or feasible idea on the table. Whether it is tweaking a reservoir model or reviewing many well designs, we are always focused on incremental improvements that, we hope, yield material benefits.
Rarely does an innovation come along that leads to a step change in that process.
For example, automated fault extraction and interpretation amounts to performing an existing workflow at great speed. At the same time, AI-assisted inversion, a process that rests on in-depth, multidisciplinary understanding, has seen far less uptake.
Here's the rub: fault extraction replaces a manual, time-consuming process that demands heavy iteration, whereas inversion is a specialised science. On paper, automating inversion should be the better business decision, and yet it hasn't happened.
The way I see it, innovation is often wrongly equated with novelty and perceived as ungrounded in reality, whereas the tools to iterate are already in workers' hands today. In my mind, when it comes to the way we work, true innovation arrives when systems meld together and become symbiotic, with information flowing easily as complex, multidimensional problems are solved.
But the point I am trying to make is that if we just keep iterating, then who is thinking about the innovation layer?
The true risk is not that we will fail to innovate, but that we will become so efficient at iterating that we mistake speed for progress. While automated fault extraction and rapid modelling provide significant operational wins, they remain refinements of existing paradigms. If we allow the 'Efficiency Dividend' – the time reclaimed by these tools – to be swallowed by further iteration, we forfeit our chance at the next step change. The next innovation has to come from someone who has, hopefully, used the time freed up by iterating with the last technology to find the next breakthrough.

