Author: Aadam Khan, Engineering Outreach Rep, Monolith
Read Time: 6 mins
Imagine this: you load your latest batch of test data into the platform before leaving the office for the day.
While you sleep, an AI agent gets to work analysing the data, training models, flagging anomalies, and compiling notes for review. By the time you return, the insights are available for you to assess and act on immediately.
This is the new reality of agentic workflows: systems that don’t just assist engineers, but act on their behalf. Tools that don’t need to be micromanaged, because they understand your intent and execute accordingly.
It marks a shift from tools you use to tools you instruct.
And importantly, this isn’t science fiction. These workflows are already being built and demonstrated inside platforms like Monolith: early versions are already proving their engineering value, and full capabilities are on the horizon.
At the centre of this transformation is MCP (Model Context Protocol), enabling LLMs to plug into tools, run full workflows, and return results faster.
Engineering teams are under growing pressure to move faster while solving increasingly complex problems. Product testing now involves more data, more variables, and tighter development timelines than ever before.
While traditional ML tools offer support, they still rely heavily on manual effort. Engineers are responsible for preparing datasets, scripting workflows, tuning models, and interpreting results, with each step requiring time, attention, and domain expertise.
Consider a typical scenario: an engineer runs a series of simulations in STAR-CCM+ or performs an FEA study. Afterwards, they manually export result files, open Excel, clean and label the data, and then begin plotting or comparing it to test benchmarks. Each step may be straightforward, but in combination they demand hours of focused effort, and the process is repeated again and again.
Most current tools act as assistants, not operators. They wait for human input. They don’t initiate, adapt, or run workflows end-to-end. What we often refer to as “automation” still depends on rigid scripts, fixed templates, and constant supervision.
As a result, engineering workflows cannot scale. Progress remains linear, engineers spend time on tasks that AI could be executing, and time becomes the bottleneck.
Which leads to the key question: what if your tools could run themselves, guided by engineering expertise and intent?
Model Context Protocol (MCP) is a new standard that lets large language models (LLMs) operate engineering software directly instead of only explaining what to do.
Developed by Anthropic in 2024, MCP gives LLMs structured access to external tools. Before MCP, even advanced models could only reason about workflows. They could not run a simulation or modify a model.
MCP defines how tools describe their inputs, outputs, and available actions. When platforms such as SimScale, STAR-CCM+, ANSYS Fluent, COMSOL, MATLAB, or Monolith provide an MCP server, an LLM can call those functions directly through natural language.
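To make that concrete, here is a minimal sketch of what such an MCP server could look like, written with the official MCP Python SDK. The `run_cfd_analysis` tool, its parameters, and its dummy result are illustrative assumptions for this post, not the real API of any platform named above.

```python
# Minimal MCP server sketch using the official MCP Python SDK
# (pip install "mcp[cli]"). The tool below is a hypothetical example,
# not any simulation vendor's real API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cfd-tools")  # server name advertised to MCP clients

@mcp.tool()
def run_cfd_analysis(geometry_id: str, speed_kmh: float) -> dict:
    """Run a CFD analysis for a geometry at a given speed and return results."""
    # Placeholder: a real server would call the simulation platform's API here.
    return {"geometry": geometry_id, "speed_kmh": speed_kmh,
            "max_temp_c": 84.2}  # dummy value for illustration

if __name__ == "__main__":
    mcp.run(transport="stdio")  # serve the tool over stdio to an MCP client
```

The SDK derives the tool’s input schema from the type hints and its description from the docstring, which is exactly the self-description of inputs, outputs, and actions that MCP standardises.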
This allows an engineer to write a simple instruction like:
“Run a CFD analysis for this geometry at 120 km/h and compare temperature results.”
The LLM interprets the request, uses MCP to execute the task in the connected tool, and returns the results.
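On the other side of the protocol, the LLM host connects to the server, discovers its tools, and invokes them on the model’s behalf. Here is a hedged client-side sketch using the same SDK; the server script name and the arguments are assumptions tied to the example above.

```python
# Client-side sketch: connecting to the server above and calling its tool.
# The script name "cfd_server.py" and the arguments are illustrative.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(command="python", args=["cfd_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()   # tool schemas the LLM can see
            print([t.name for t in tools.tools])
            result = await session.call_tool(
                "run_cfd_analysis",
                {"geometry_id": "bracket-v2", "speed_kmh": 120.0},
            )
            print(result.content)                # result handed back to the LLM

asyncio.run(main())
```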
To test the capabilities of MCP, we built an MCP server that allows LLMs to connect to and interact with the Monolith platform. This “Notebook Handler” processes a prompt in natural language and then performs activities within the Monolith platform: navigating data, training models, evaluating results, and summarising findings in a clean, structured notebook.
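The tools such a server exposes map directly onto those steps. The sketch below is illustrative only; the tool names, signatures, and dummy outputs are our assumptions for this post, not Monolith’s actual MCP server.

```python
# Illustrative Notebook Handler tools; names, signatures, and dummy
# outputs are assumptions for this post, not Monolith's actual server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notebook-handler")

@mcp.tool()
def explore_dataset(dataset_id: str) -> dict:
    """Return column names and basic statistics for a dataset."""
    return {"columns": ["temp", "pressure", "flow"], "rows": 1200}  # dummy

@mcp.tool()
def train_model(dataset_id: str, target: str, features: list[str]) -> dict:
    """Train a model on the chosen features and return its ID and metrics."""
    return {"model_id": "m-001", "r2": 0.94}  # dummy metrics

@mcp.tool()
def summarise_results(model_id: str) -> str:
    """Write a structured notebook summarising actions, metrics, next steps."""
    return f"Notebook generated for {model_id}"  # dummy confirmation
```

Given a single prompt, the LLM decides the order in which to chain these calls, which is what makes the workflow agentic rather than scripted.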
In this demonstration, we gave both a human and the Notebook Handler the same set of tasks: explore a dataset, select features, train a model, and evaluate the results.
This is a routine workflow many engineers do daily, and a perfect candidate for comparison.
On the left side of the screen, a human engineer performs the task manually: clicking through the platform, selecting features, adjusting parameters, running models, and recording outcomes.
On the right side, the MCP-powered Notebook Handler receives a single instruction and completes the full workflow autonomously.
Everything is automated: from navigating and understanding the dataset to selecting the right features based on its knowledge of the platform. It knows which parameters to use, executes each step efficiently, and adapts the workflow in real time. Once complete, a clean, structured notebook is generated with a full summary of its actions, model metrics, and suggested next steps for review.
Both the human and the Notebook Handler use the same platform tools and produce equivalent outputs. The difference lies in speed, repeatability, and effort: while a human must repeat the workflow manually each time, the agent’s run is fully automated and reusable.
The impact of this autonomy becomes even more evident in complex scenarios, such as training and comparing multiple models, where the time and effort saved multiply with every run.
Now, imagine that same workflow running not just once, but hundreds of times in parallel.
If a single agent can complete a full modelling pipeline in just a few minutes, scaling that across a fleet of AI assistants becomes transformative. Work that once took hours can be done in minutes, with full consistency and no manual overhead.
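As a rough sketch of that fan-out, an orchestrator can launch many independent agent runs concurrently. Here `run_pipeline` is a hypothetical stand-in for one complete agentic modelling workflow, not a Monolith API.

```python
# Sketch of fan-out: many agent runs in parallel. run_pipeline is a
# hypothetical stand-in for one complete agentic modelling workflow.
import asyncio

async def run_pipeline(hypothesis: str) -> str:
    await asyncio.sleep(0.1)  # stands in for a full explore/train/evaluate run
    return f"notebook for: {hypothesis}"

async def main():
    hypotheses = [f"feature set {i}" for i in range(100)]
    # One agent per hypothesis, all running side by side.
    notebooks = await asyncio.gather(*(run_pipeline(h) for h in hypotheses))
    print(len(notebooks), "notebooks ready for review")

asyncio.run(main())
```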
“This is where it gets exciting. You’re not limited to one agent; you can have a hundred running side by side, each testing a different hypothesis or exploring a new direction.”
Dan Mount – Senior Product Owner, Dr Joël Henry – Lead Principal Engineer
Access the full on-demand webinar here
What we’ve shown so far is only the start. The full potential of agentic engineering is just beginning to unfold.
MCP-enabled agents will soon power our key use cases across the Monolith platform, starting with anomaly detection.
We’ll be releasing a dedicated blog soon that explores how MCP transforms anomaly detection workflows and why this is key for engineering teams facing increasingly complex testing.
As we continue to expand beyond internal use, we’re looking for engineering teams to help us test and shape what comes next.
If you’re working through complex test, validation, or modelling challenges, or facing bottlenecks in data-heavy workflows, our product team would love to connect.
Join our Agentic AI for Engineering content series, where we will cover these developments as they unfold.
Book a call with the Monolith team to explore how MCP might support your specific use case and find out if you’re a fit for our upcoming beta group.
About the author
I’m a Chemical Engineering student from Imperial College London, working at Monolith to help engineers use AI to test less and learn more. I’m passionate about using self-learning models to optimise validation and drive innovation in systems like batteries, ECUs, and fuel cells.