Author: Dan Mount, Senior Product Manager, Monolith
Read Time: 5 mins
At Monolith AI, we don’t just talk about Large Language Models (LLMs)—we actively use them to power internal workflows, accelerate engineering processes, and prototype tools that could evolve into client-facing solutions.
In this employee spotlight, we sat down with Dan Mount to learn how LLMs are shaping productivity and innovation across the company.
How are we using LLMs internally to improve engineering and business workflows?
We’ve seen multiple impactful use cases emerge across the company:
Contract Review: LLMs speed up routine reviews by identifying risky clauses, summarising key terms, and streamlining communication with legal teams. As Dan explains:
“We’ve got documentation everywhere—SharePoint, Confluence, HubSpot, even Slack threads. LLMs help us distil all of that information and surface it to the right people driving initiatives. That’s been a really powerful use of large language models.”
Jira Ticket Creation: Our teams generate Jira tickets directly from Slack threads, capturing context, technical nuance, and user intent without losing detail. Dan highlights how this shifts focus:
“Before, 80% of the time was spent on manual ticket creation and only 20% on adding value. With tools like the MCP server, that’s flipped—20% manual, 80% contextual thinking. It’s less about saving time and more about increasing the value of the work.”
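As an illustration of the pattern Dan describes, here is a minimal sketch of drafting a Jira ticket from a Slack thread with an LLM. This is not our internal tooling: the model name, project key, prompts, and credentials are placeholders, and it leans on the OpenAI Python SDK and Jira's standard REST issue-creation endpoint purely as stand-ins.

```python
# Hypothetical sketch: draft a Jira ticket from a Slack thread with an LLM.
# Model name, project key, and credentials are placeholders, not Monolith's setup.
import json
import os

import requests
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def draft_ticket(slack_thread: str) -> dict:
    """Ask the model for a structured Jira ticket drawn from the thread."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model with JSON output works
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise this Slack thread as a Jira ticket. "
                    "Return JSON with keys: summary, description, labels."
                ),
            },
            {"role": "user", "content": slack_thread},
        ],
    )
    return json.loads(response.choices[0].message.content)


def create_issue(ticket: dict) -> str:
    """Create the issue via Jira's REST create-issue endpoint (v2)."""
    payload = {
        "fields": {
            "project": {"key": "ENG"},  # placeholder project key
            "issuetype": {"name": "Task"},
            "summary": ticket["summary"],
            "description": ticket["description"],
            "labels": ticket.get("labels", []),
        }
    }
    resp = requests.post(
        f"{os.environ['JIRA_BASE_URL']}/rest/api/2/issue",
        json=payload,
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"]),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "ENG-123"
```

The specific APIs matter less than the shift Dan describes: the ticket's context comes straight out of the thread instead of being retyped by hand.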
Empowered Product Managers: Product managers use LLM-powered tools like Cursor, Bolt, and V0 to bridge the business-engineering gap.
“It allows product managers to step into the arena of coding. You can prototype an idea in two or three hours instead of waiting weeks for engineering bandwidth. It’s cheap to build, easy to throw away, and accelerates experimentation.”
These applications have significantly reduced manual overhead while maintaining high standards of accuracy and quality.
What internal tools or automations using LLMs could evolve into client-facing products?
Several internal innovations have the potential to scale externally:
- LLM-Generated Feature Code: For example, when creating a “duplicate step” workflow, product managers used LLMs to prototype code directly. This generated code was reviewed and validated by engineers, dramatically increasing iteration speed without compromising integrity.
- MCP Server (Model Context Protocol): Our internal MCP server orchestrates complex ML workflows, helping product managers generate, contextualise, and execute steps more efficiently. Early results are promising, and the MCP server could become a client-facing tool for organisations looking to streamline ML pipelines (a simplified sketch of the pattern follows this list).
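To make the idea concrete, here is a minimal sketch of an MCP tool server built with the official Model Context Protocol Python SDK (its FastMCP interface). The tool names and stubbed logic below are illustrative assumptions, not our internal server.

```python
# Illustrative MCP server sketch using the official Python SDK (mcp package).
# The tools below are placeholders, not Monolith's internal MCP server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("workflow-helper")


@mcp.tool()
def list_workflow_steps(workflow_id: str) -> list[str]:
    """Return the named steps of an ML workflow (stubbed for illustration)."""
    return ["import_data", "train_model", "evaluate", "duplicate_step"]


@mcp.tool()
def describe_step(step_name: str) -> str:
    """Give an LLM client enough context to reason about one step."""
    # In a real server this would pull pipeline metadata or documentation.
    return f"Step '{step_name}': inputs, outputs, and parameters would be described here."


if __name__ == "__main__":
    # Runs over stdio so an MCP-aware client (e.g. an IDE assistant) can call the tools.
    mcp.run()
```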
These internal successes demonstrate how LLM-driven tools can accelerate both development and business processes.
What hasn’t worked when applying LLMs internally—and what did we learn?
LLMs aren’t magic; they work best with structure, adequate context, and clear instructions.
“If you’re vague or asking it to do too much, it’s going to fail. The key is bounding tasks clearly. We’re always pushing the limits of complexity, but you still need people who know what they’re doing to guide the process.”
One example came from how we handled long Confluence pages. At first, we fed entire documents into an LLM and asked it to summarise the research or adapt it for different use cases. The output often fell short of expectations, especially compared to examples we had seen elsewhere. The turning point came when we introduced workflows that split large inputs into smaller, more manageable sections. By chunking the input and generating outputs section by section, the model produced results that were consistently clearer, more accurate, and closer to the standard we were aiming for.
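For readers who want to try this, here is a minimal sketch of that chunk-then-summarise pattern, assuming a generic chat-completion client; the splitting heuristic, prompts, and model name are illustrative rather than our exact workflow.

```python
# Illustrative chunk-then-summarise pattern for long documents.
# The chunk size, prompts, and model name are assumptions, not our exact workflow.
from openai import OpenAI

client = OpenAI()


def split_into_sections(text: str, max_chars: int = 6000) -> list[str]:
    """Naively split on blank lines, packing paragraphs up to max_chars per chunk."""
    sections, current = [], ""
    for paragraph in text.split("\n\n"):
        if len(current) + len(paragraph) > max_chars and current:
            sections.append(current)
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        sections.append(current)
    return sections


def summarise_document(text: str) -> str:
    """Summarise each section separately, then combine the partial summaries."""
    partials = []
    for section in split_into_sections(text):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Summarise this section in 3-5 bullet points."},
                {"role": "user", "content": section},
            ],
        )
        partials.append(response.choices[0].message.content)

    # A final pass stitches the per-section summaries into one coherent summary.
    final = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Combine these section summaries into one summary."},
            {"role": "user", "content": "\n\n".join(partials)},
        ],
    )
    return final.choices[0].message.content
```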
LLMs rarely deliver perfect results out of the box, but with the right structure, context, and prompting, they can reach a level that feels remarkably close.
How do we ensure LLM-powered client solutions comply with data regulations like GDPR or export controls?
“We can run models inside a customer’s infrastructure—we’ve done this with some of our clients' private clouds. We can also tap into their existing deployed models, whether Anthropic or OpenAI. Everything is underpinned by our evaluation framework and ISO 27001 certification, so compliance and security aren’t afterthoughts; they’re accounted for from the start.”
- Flexible Deployment Models: We support running models within a client’s infrastructure or hosting them ourselves where permitted.
- Security-First Design: Implementations are tailored to regulatory and industry requirements, protecting sensitive engineering data while remaining fully compliant.
By prioritising security and control, we ensure our LLM solutions meet the strictest compliance standards.
Where are the biggest opportunities to integrate LLMs into our product roadmap without compromising accuracy or explainability?
We see strong potential in three areas:
- Self-Service Experimentation:
“We position ourselves as a tool for engineers without deep data science experience. With an LLM-powered assistant, users can learn and experiment much faster compared to tutorials. It’s like having a data science expert on hand from day one.”
- Human-in-the-Loop Systems:
“Instead of manually reviewing 100 notebooks, an agent can evaluate them, surface the top three, and give you links. You still make the final call, but LLMs help distil the information dramatically.”
- Workflow Automation: Streamlining repetitive tasks across the ML lifecycle to free up engineers for higher-value work.
Explainability and auditability remain top priorities to maintain engineering rigour while scaling LLM capabilities.
How do we stay current with rapid developments in LLMs?
The AI landscape evolves daily, and staying informed requires a decentralised approach.
“AI impacts everything—DevOps, security, product. Each person is accountable for keeping up. Internally, we crowdsource through Slack channels, sharing updates and even live-streaming big launches like GPT-5. Externally, we track developments on X, YouTube, Reddit, and Discord. It’s fast-moving, but the collective approach keeps us agile.”
How are we fostering an AI-literate culture internally?
Six months ago, most people were using LLMs for everyday tasks while early adopters were building full workflows. Now LLMs have become essential: if you are not using them, you risk falling behind.
“I was an early adopter and pushed developers to start using tools like Claude Code. That created a knock-on effect—we’ve reached an inflexion point where even late adopters are hands-on with LLMs. The key is planting seeds with champions so others come along for the ride.”
Curiosity and safe experimentation are rewarded, creating a culture of meaningful innovation.
Conclusion
At Monolith, we have made experimentation part of our everyday work. When a new model or tool appears, the team shares it, discusses it, and often sits together to see what it can do. Very quickly, the focus shifts from watching to testing. We try things out in our own workflows, look at where they make sense, and learn by doing.
What stands out is how open people are to giving new ideas a go. Productivity gains are important, and we are already seeing them, but the real achievement is the culture that has formed around this. Engineers, product managers, and researchers alike are willing to adapt, to challenge how things are done, and to put new approaches into practice.
LLMs evolve at a rapid pace, but by experimenting and sharing openly, we make sure their value is realised in the work we do every day.
About the author
I'm an experienced Product Manager delivering Monolith AI's innovative solution across multiple engineering sectors. I work closely with data scientists, developers, customer success, and marketing teams to ensure we continue to deliver valuable and exciting new features for our clients!