Are You Chasing the Right AI Projects? How to Benchmark What Matters

Author: Simon Daigneault, Product Marketing Engineer, Monolith


Cut through the noise of endless AI ideas: learn which frameworks will help you avoid operational blindness, benchmark your lab, and focus on the AI projects that actually deliver value.

Every engineering leader is hearing the same thing: “AI should be here, AI should be there.” The ideas are endless, but the real challenge is deciding which use cases will actually work in your lab. 

 

The truth is there isn’t a single “best” AI use case overall. The best choice depends on the data you have, the problems causing the most cost, and the readiness of your team and processes. 

 

In this blog, we’ll walk through three practical ways to benchmark your AI readiness and decide where to start: 

 

  • AI Use Case Matrix – helps you compare use cases side by side and see which ones will pay off fastest. 
  • AI Maturity Matrix – shows how prepared your organisation is to execute on AI effectively. 
  • Use Case Identification Framework – from our whitepaper, which helps confirm whether a problem is truly suited for AI in the first place. 

 These frameworks don’t give you a single answer — they give you the evidence to decide where AI will deliver the most value given what you have today. 

 


 

The Wrong Approach: Chasing Only Small Fish  

 

In many organisations, AI adoption starts with “small fish” projects — proofs of concept chosen because they seem easy, not because they address the most critical problems. Examples include: 

 

  • Automating a minor reporting dashboard. 
  • Running AI on a small, isolated dataset from a single test stand. 
  • Building a one-off model to classify test images with no link back to validation outcomes. 

 

These pilots can deliver some value, and in fact we sometimes use them ourselves to prove capability and build trust. After all, you don’t wake up one day and hand over your entire engineering division to machine learning. Small steps help leaders see what’s possible and gain confidence in the technology. 

 

The mistake is stopping there. Small fish produce small ROI, and executives quickly conclude that AI is a nice-to-have, not a driver of transformation. 

 

The real benefits come from tackling the “big whales” — the large, process-level problems that drive cost and delay across entire programmes. These are harder to solve, but they are also where AI delivers measurable, organisation-wide impact. 

 

Two examples stand out from our work with OEMs: 

 

AI Application | What It Solves | Business Value
Anomaly detection | Identifies errors and irregularities in test data as they occur, instead of months later at the end of campaigns. | Prevents costly retests, reduces wasted equipment time, and improves confidence in validation results.
Test plan optimisation | Analyses past and ongoing tests to highlight redundant cycles and recommend the minimum needed to validate. | Cuts programme timelines, frees up test capacity, and accelerates speed-to-market.
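
To make the anomaly detection row concrete, here is a minimal sketch of flagging irregularities in a single test channel as the data arrives. The rolling z-score approach, the window size, and the threshold below are illustrative assumptions for this post, not a description of how the Monolith platform detects anomalies.

```python
# Minimal, generic sketch: flag samples that deviate sharply from the
# recent rolling statistics of a test channel (illustrative assumptions only).
import numpy as np

def rolling_zscore_anomalies(signal, window=50, threshold=4.0):
    """Return indices of samples that deviate strongly from the recent window."""
    signal = np.asarray(signal, dtype=float)
    anomalies = []
    for i in range(window, len(signal)):
        recent = signal[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(signal[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Synthetic example: a noisy channel (e.g. a cell voltage trace) with one injected spike.
rng = np.random.default_rng(0)
channel = rng.normal(loc=3.7, scale=0.01, size=1000)
channel[300] += 0.5
print(rolling_zscore_anomalies(channel))  # expected to flag index 300
```

In a real campaign the same idea runs continuously on live channels, so an irregularity is flagged during the test rather than discovered months later in a review.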

These are the kinds of projects that change how labs operate, reduce costs at scale, and accelerate innovation. Small fish have their place — especially when building confidence in a new approach — but the industry is shifting. 

 

Where a few years ago leaders were testing the waters with small AI pilots, today there is far greater awareness of the proven gains. Engineering organisations trust that machine learning can deliver reliable outcomes, and they are moving more quickly towards larger, transformational projects. 

 

 

Are You Benchmarking Appropriately?  

 

The biggest risk in applying AI to engineering isn’t a lack of ideas — it’s investing in the wrong ones. Some teams chase easy proofs of concept that don’t deliver real ROI. Others go after ambitious projects that look attractive on paper but aren’t feasible with the data or infrastructure they currently have. Both approaches waste time and create scepticism. 

 

Benchmarking solves this problem. By using structured frameworks, leaders can evaluate AI opportunities against consistent criteria and decide: 

 

  • Which projects are worth tackling now. 
  • Which ones should wait until the organisation is ready. 
  • Which ideas aren’t a good fit for AI at all. 

 

At Monolith, we use several complementary frameworks with engineering leaders to guide these decisions: the AI Use Case Matrix, the AI Maturity Matrix, and our Use Case Identification Framework from the whitepaper. Each looks at readiness from a different angle — and together, they provide a clear view of where AI will deliver measurable value. 

 


 

The 3 Frameworks You Should Be Using   

 

There’s no shortage of frameworks for evaluating AI readiness. Each has its own merits, but no single one gives the full picture. 

 

At Monolith, we’ve shaped our approach through work with leading OEMs across Europe and North America. These collaborations have shown us where AI consistently delivers value in engineering, and where teams run into hidden costs. 

 

The three frameworks below are the ones we recommend most often to leaders in R&D and testing. Each looks at AI readiness from a different angle: 

 

  • Which projects to prioritise now. 
  • How prepared your organisation is overall. 
  • Whether a problem is truly suited for AI in the first place. 

 Together, they provide a structured way to decide where AI will have the biggest impact in your lab. 

 

Framework 1 — The AI Use Case Matrix  

 

The AI Use Case Matrix is a tool we use with engineering leaders to rank and prioritise potential AI applications in their labs. Each use case is evaluated on four criteria: 

 

  • Data availability – how much usable data exists. 
  • Feasibility – how easily the use case can be implemented. 
  • Business value – the impact of solving the problem. 
  • Time to implement – how quickly results can be achieved. 

 

Each criterion is given a weight, and the combined score makes it easy to compare opportunities side by side and decide which ones will deliver the fastest ROI. 
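
As a rough illustration of how the matrix turns into a ranking, the sketch below scores a few example use cases with assumed weights and 1–5 ratings; both the weights and the ratings are placeholders, not Monolith’s actual scoring.

```python
# A minimal sketch of an AI use case scoring matrix.
# The criteria follow the list above; the weights and 1-5 ratings are hypothetical.
criteria_weights = {
    "data_availability": 0.30,
    "feasibility": 0.20,
    "business_value": 0.35,
    "time_to_implement": 0.15,
}

use_cases = {
    "Anomaly detection in test data": {
        "data_availability": 4, "feasibility": 4, "business_value": 5, "time_to_implement": 3},
    "Test plan optimisation": {
        "data_availability": 4, "feasibility": 3, "business_value": 5, "time_to_implement": 2},
    "One-off reporting dashboard": {
        "data_availability": 5, "feasibility": 5, "business_value": 1, "time_to_implement": 5},
}

def weighted_score(ratings):
    """Combine the 1-5 ratings into a single weighted score."""
    return sum(criteria_weights[c] * r for c, r in ratings.items())

# Compare opportunities side by side, highest weighted score first.
for name, ratings in sorted(use_cases.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{weighted_score(ratings):.2f}  {name}")
```

The output lists the candidates from highest to lowest weighted score, which is exactly the side-by-side comparison the matrix is meant to enable.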

 


 

Framework 2 — The AI Maturity Matrix   

While the Use Case Matrix helps you pick specific projects, the AI Maturity Matrix looks at the bigger picture: how ready is your organisation to execute on AI effectively? 

 

It benchmarks areas like: 

  • Data infrastructure and accessibility. 
  • Team skills and culture. 
  • Integration of AI into engineering workflows. 

 

This isn’t about ranking individual use cases — it’s about spotting gaps that could slow down or derail any AI project, no matter how good the idea is. 
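
As a toy illustration of that gap-spotting idea (the dimensions and 1–5 scores below are assumptions, not the actual matrix), the check that matters is the weakest dimension rather than the average:

```python
# Toy maturity check: the weakest dimension, not the average,
# is usually what slows an AI project down. Scores (1-5) are hypothetical.
maturity = {
    "data_infrastructure_and_accessibility": 4,
    "team_skills_and_culture": 3,
    "integration_into_engineering_workflows": 2,
}

bottleneck = min(maturity, key=maturity.get)
average = sum(maturity.values()) / len(maturity)

print(f"Average maturity: {average:.1f}/5")
print(f"Gap to address first: {bottleneck} ({maturity[bottleneck]}/5)")
```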

 


 

👉 To explore the maturity framework in detail, read our blog: The AI Maturity Matrix 

 

Framework 3 — Identifying the Right Use Cases    

Even with the right priorities and maturity level, not every problem is suitable for AI. That’s where our Use Case Identification Framework comes in, outlined in our whitepaper 3 Ways to Identify Good AI Use Cases in Engineering. 

 

It provides three principles for spotting true AI “sweet spots”: 

  1. Find the gaps in physics-based methods – where simulations or models are too slow or inaccurate. 
  2. Understand what AI can and can’t do – to avoid wasted effort on problems it isn’t designed to solve. 
  3. Know your data – quality, quantity, and relevance determine whether AI will deliver reliable outcomes. 


This framework helps leaders avoid projects that look exciting but don’t have the right conditions to succeed. 

 

👉 Download the full whitepaper here: 3 Ways to Identify Good AI Use Cases in Engineering 

 

Different Tools for Different Contexts: Avoiding Operational Blindness 

 

One of the biggest risks in engineering labs is what we call operational blindness — the belief that your processes are running smoothly, when in reality the gaps are hidden in plain sight. Leaders often assume their data or workflows are “good enough,” but benchmarking consistently shows the opposite: the blind spots are usually where the costs add up fastest. 

 

  • Battery labs often trust their reference performance tests (RPTs) as reliable. But when thousands of cycles run over months, even a single undetected anomaly can invalidate an entire campaign. AI-driven anomaly detection reveals errors that manual reviews consistently miss. 
  • Full vehicle validation programmes may think their datasets are large enough for prediction. In practice, fragmentation across regions and suppliers makes consistency the true bottleneck. Forecasting only pays off once the data foundation is fixed. 
  • Component and system testing teams often equate running more tests with gaining more confidence. In reality, optimisation shows that many tests are redundant, and cutting them achieves the same validation outcomes at a fraction of the cost. 

 

The takeaway: operational blindness stops leaders from seeing where the biggest waste lives. That’s why applying multiple frameworks is so important — they force a clear-eyed view of where AI can actually deliver value, not just where you assume it will. 

 

Conclusion: Overcoming Blind Spots 

 

Engineering leaders don’t lack ideas for AI. What they lack is clarity on which ideas will deliver measurable impact. The wrong approach is chasing small, easy projects that never scale — or assuming you already know where your biggest opportunities are. That’s operational blindness, and it costs more than most teams realise. 

 

The solution is structured benchmarking. Frameworks like the AI Use Case Matrix, the AI Maturity Matrix, and our Use Case Identification Framework give you the perspective needed to see past assumptions and focus on the problems worth solving. 

 

Not every framework works in every context, and that’s the point. Used together, they expose blind spots, prioritise the right use cases, and provide evidence for where AI should be applied in your lab today. 

 

The question isn’t whether AI belongs in your testing strategy. It’s whether you have the right frameworks to separate the small fish from the whales — and to see clearly where operational blindness is slowing you down. 

 

Ready to Level Up Your Data Strategy? 

 

Most engineering teams have a data strategy. Few have a good one. 

 

In our recent workshop What Engineering Leaders Must Know About AI Data Strategy, we shared: 

  • How to calculate the hidden cost of bad data. 
  • Case studies on anomaly detection and test plan optimisation. 
  • A practical framework to turn test data into a strategic asset. 

 

Watch the on-demand webinar and strategy workshop for R&D and tech executives. 

 

If you’re responsible for R&D, validation, or digital transformation, this is your opportunity to see how leading OEMs are modernising their data strategies — and where your lab stands today. 

 


About the author

An experienced Product Marketing Engineer translating advances in AI into practical insights for battery development. At Monolith, I work across product, engineering, and commercial teams to ensure innovations in our platform deliver real-world value for OEMs. My background includes an MEng in Mechanical Engineering from Imperial College London, with a specialisation in battery testing, and hands-on experience at a battery energy storage startup in pack design, testing, and system integration. 
