
SPEE Software Symposium Take-aways

October 30, 2024
Zack Warren

Over the last 10 months, I’ve had the privilege of serving on the Society of Petroleum Evaluation Engineers (SPEE) Software Symposium committee, with a focus on a “Bake-Off” for automated and assisted Decline Curve Analysis (DCA).

It’s been a great experience and (I hope) a very useful exercise for the industry, so here goes a brain dump of lessons learned, impressions, and predictions.

The “Bake-Off” consisted of giving participating vendors ~1,000 horizontal wells from the onshore U.S. and up to 12 hours to generate PDP (proved developed producing) forecasts.  We anonymized and truncated the production histories, so we knew what “truth” was, and we focused our analysis on the differences between the vendors’ forecasts and the actuals.
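To make the setup concrete, here’s a minimal sketch of the truncate-and-compare idea. The file names, column schema, and cutoff date are hypothetical placeholders, not the committee’s actual workflow:

```python
import pandas as pd

# Hypothetical schema -- not the committee's actual files or column names.
actuals = pd.read_csv("actuals.csv", parse_dates=["month"])      # api, month, phase, volume
forecasts = pd.read_csv("vendor_fcst.csv", parse_dates=["month"])  # api, month, phase, volume

# History after the cutoff was withheld from vendors, so it serves as "truth".
cutoff = pd.Timestamp("2022-01-01")  # placeholder date

truth = (actuals[actuals["month"] >= cutoff]
         .groupby(["api", "phase"])["volume"].sum())
fcst = (forecasts[forecasts["month"] >= cutoff]
        .groupby(["api", "phase"])["volume"].sum())

# Signed percentage error on holdout cumulatives; positive = over-forecast.
# Wells missing from either side align to NaN and drop out of the summary.
pct_error = (fcst - truth) / truth * 100
print(pct_error.describe())
```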

I want to be clear that everything in this article is my opinion and mine alone. I hope our final, unanimous conclusions are written up soon – in the meantime, as John Wright loves to say, Caveat Emptor!

Lessons Learned

First, WOW, there are a lot of tools on the market doing automated DCA. Our criterion was that the tools must be “commercially available” in North America – the committee thought 5 participants would be fine and 10 would be a huge win.  Instead we had FIFTEEN entrants!  Through our outreach, we learned about another ten vendors that have tools but couldn’t or wouldn’t participate.  Who knows how many more are out there?

Is this market really capable of supporting 25+ competitors for automated Decline Curve Analysis?  Especially when a lot of reservoir engineers use their own proprietary approaches?

Second big lesson: whew, this analysis was difficult to conduct quickly. We gave ourselves five weeks between when the vendors returned forecasts and the event, which wasn’t even close to enough time.  We had seven very capable engineers cranking on the analysis during that window and were nowhere close to “finished” as of the day of the symposium.  We also encountered a huge number of data quality issues – vendors are human, so this shouldn’t really be a surprise, but a lot of the responses had errors.  Mixed-up API numbers, European vs. US date conventions, rates instead of volumes – you name it, we probably saw it.  Cleaning those issues up ate a huge amount of analysis time.
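For what it’s worth, those particular failure modes are the kind of thing a submission check can flag automatically. This is a toy sketch assuming a hypothetical schema (api, month, volume) – not the committee’s actual QC process:

```python
import pandas as pd

def basic_submission_qc(df: pd.DataFrame, well_list: set[str]) -> list[str]:
    """Toy sanity checks on a vendor submission (hypothetical schema)."""
    issues = []

    # Mixed-up API numbers: every API should appear in the distributed well list.
    unknown = set(df["api"]) - well_list
    if unknown:
        issues.append(f"{len(unknown)} APIs not in the distributed well list")

    # European vs. US dates: strict MM/DD/YYYY parsing fails on DD/MM entries
    # whenever the day is greater than 12.
    parsed = pd.to_datetime(df["month"], format="%m/%d/%Y", errors="coerce")
    if parsed.isna().any():
        issues.append("unparseable dates -- possible DD/MM vs. MM/DD mix-up")

    # Rates instead of volumes: a crude heuristic -- monthly volumes that look
    # suspiciously small often turn out to be daily rates.
    if df["volume"].median() < 100:
        issues.append("volumes look like daily rates, not monthly volumes")

    return issues
```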

Finally, building useful and digestible visualizations of such a large and complex dataset is a really interesting problem. With 15 vendors, 1,022 wells, three phases (oil, gas, and water), and decades of forecasts, our main data table was 5.5 million rows long.  How do you turn that into a single PowerPoint slide with a legible font size?!  Our answer was to use over 200 slides in our presentations – this kind of dataset is deserving of multiple PhD dissertations.
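To give a flavor of the aggregation involved: a table that size has to be collapsed into per-vendor summary statistics before it fits on a slide. This sketch assumes a hypothetical errors table (vendor, api, phase, pct_error) built from a comparison like the one above – again, not the committee’s actual workflow:

```python
import pandas as pd

# Hypothetical: one row per vendor/well/phase with a signed % error.
errors = pd.read_csv("all_errors.csv")  # columns: vendor, api, phase, pct_error

# Median absolute % error per vendor and phase: 15 rows x 3 columns,
# which actually fits on one slide in a legible font.
summary = (errors.assign(abs_err=errors["pct_error"].abs())
                 .groupby(["vendor", "phase"])["abs_err"]
                 .median()
                 .unstack("phase"))
print(summary.round(1))
```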

Impressions

I’ll defer to the full committee to report any data, but my overwhelming takeaway is “cautious optimism”.

The case for optimism: We saw a lot of perfectly reasonable forecasts coming from highly- or fully-automated approaches.  Especially when noise is low and history is long, forecasts generated by these tools do a good job of passing my personal “sniff test” for credibility.  In situations where the forecasts didn’t look good, there’s often an underlying cause that a reservoir engineer could explain in plain English, such as frac hits or surface constraints.  That gives hope that a good programmer could turn those underlying causes into more sophisticated algorithms that are even more reasonable.

The case for caution: We also saw a lot of forecasts that had both poor statistical performance and failed the eyeball test.  Certain types of wells clearly caused a lot of algorithms serious problems, often because of underlying data quality or completeness issues.  Many of those wells are also very difficult for highly trained and experienced humans to forecast accurately, so why should we expect machines to do better?

Predictions

My first prediction is that the role of a reservoir engineer is going to keep evolving, and fast. John Wright gave a great (and hilarious) presentation on the history of DCA featuring hand-plotting on semi-log graph paper and fitting with ship’s curves, which hammered home how radically different the job of today’s engineers already is from those of decades ago.

The evolution that I see coming is that engineers are likely to move away from “artful fitting of curves” (David Fulford’s apt description) towards the artful management of assumptions, Bayesian priors, and aggregation rules.  That shift offers the opportunity for higher productivity, freeing up time for more complex, critical thinking.
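To make that concrete, here’s a toy sketch of what “managing Bayesian priors” might look like: a hyperbolic Arps fit where the engineer supplies a prior on the b-factor instead of dragging curve handles. Everything here – the function names, the prior values, the noise model – is illustrative, not any vendor’s method:

```python
import numpy as np
from scipy.optimize import minimize

def arps_rate(t, qi, di, b):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)**(1/b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def map_fit(t, q, b_mean=1.0, b_sd=0.3):
    """Maximum-a-posteriori Arps fit with a Gaussian prior on the b-factor.
    Prior values are placeholders an engineer would set per play."""
    def neg_log_posterior(params):
        qi, di, b = params
        resid = np.log(q) - np.log(arps_rate(t, qi, di, b))  # log-normal noise
        return 0.5 * np.sum(resid**2) + 0.5 * ((b - b_mean) / b_sd) ** 2
    x0 = np.array([q[0], 0.1, b_mean])
    bounds = [(1e-3, None), (1e-4, 5.0), (0.01, 2.0)]
    return minimize(neg_log_posterior, x0, bounds=bounds).x
```

Tighten b_sd and the fit defers to the engineer’s assumption; loosen it and the data dominates.  The “artful management” lives in choices like that, made once per play rather than once per well.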

My second big prediction is that the technology is going to outpace real-world adoption for high-risk applications like financial reporting and deal underwriting. I’m convinced these methods will approach the overall error metrics of human forecasts, but the perceived “embarrassment factor” of making a mistake in a high-stakes application like an annual SEC report or a multi-billion-dollar acquisition will keep a tight rein on adoption.  Despite how expensive reservoir engineers are, it’s just not a big cost to have someone manually forecast wells.  What executive is going to be willing to wear that egg on their face when a few dozen hours of manual work could have prevented the embarrassment?

Conclusion

I’m a big believer that events like this give us a great opportunity to move the profession ahead in a thoughtful, useful way.  Huge thanks go out to our financial sponsors, the Bake-Off participants, and my fellow committee members for the Symposium.  I learned a lot and had a ton of fun – let’s do it again in 2026!  I promise to find another gong!

