Understanding Tyre Degradation Using Self-Learning Models
Jota Sport has regularly achieved podium finishes throughout its illustrious history, making it one of the most experienced and successful sports car teams racing today. While part of its winning record depends on the drivers, the rest relies on the team's ability to squeeze out another tenth of a second with each configuration iteration.
Until recently, this meant evaluating masses of data from test track runs and coupling this with a healthy portion of intuition to select the right tuning choices using a traditional physics-based approach.
In this article, we will explore how Jota Sport combined engineering ingenuity with Monolith’s self-learning models to finish first in class (LMP2) at this year’s 24 Hours of Le Mans. By leveraging its engineering test and sensor data, Jota was able to build an easy-to-use AI application that solves an intractable, non-linear tyre degradation problem in a data-driven way.
How it is Done Now: The Pacejka Magic Formula
Classically, tyre degradation is judged intuitively, via visual inspection or distance travelled, whilst the state-of-the-art approach is to calculate the sliding energies. The traditional approach to modelling tyre behaviour is the so-called Pacejka model (also known as the Magic Formula): an empirical model that predicts the forces on the tyre by fitting coefficients to experimental test data.
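For reference, the longitudinal form of the Magic Formula can be sketched in a few lines. The coefficient values below (B, C, D, E) are illustrative placeholders, not real tyre data; in practice they are the quantities fitted to bench-test measurements:

```python
import math

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'Magic Formula' for tyre force as a function of slip.

    B: stiffness factor, C: shape factor, D: peak force, E: curvature
    factor. The values here are placeholders; real coefficients come
    from fitting the curve to experimental tyre-test data.
    """
    Bx = B * slip
    return D * math.sin(C * math.atan(Bx - E * (Bx - math.atan(Bx))))
```

Zero slip gives zero force, and the force rises towards the peak value D as slip grows, which is the characteristic S-shaped tyre curve the coefficients are tuned to reproduce.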
Figure 1: Monolith allows engineers to test less, learn more and make better data-driven decisions using their underutilised test data. This helps not only to calibrate systems more accurately with less data and to predict product performance instantly, but also to use test facilities optimally and to focus engineering effort where it matters.
However, this model has some major drawbacks. Firstly, it is numerically unstable as velocity approaches zero: if the vehicle is stopped, the model simply does not produce realistic results. Secondly, it can occasionally be hard to obtain a good fit, as tyres do not always behave consistently even when made of the same material. The model, which rests on complex assumptions, fails to capture not only differences in manufacturing processes but also differences at the molecular level, which are nearly impossible to capture in any model. Lastly, this method is extremely expensive.
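The zero-velocity instability is easy to see from the standard longitudinal slip-ratio definition that tyre-force models take as input (a generic illustration, not any team's specific implementation):

```python
def slip_ratio(wheel_speed, vehicle_speed):
    """Longitudinal slip ratio, the usual input to tyre-force models.

    The division by vehicle speed means the value, and any model fed
    by it, diverges as the car approaches a standstill.
    """
    return (wheel_speed - vehicle_speed) / vehicle_speed

print(slip_ratio(30.0, 29.0))   # modest slip at racing speed
print(slip_ratio(0.1, 0.001))   # near-stationary: enormous, unphysical slip
```

Any formula evaluated on such an input inherits the blow-up, which is why the Pacejka model cannot be trusted near zero speed.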
"...there is a cost of about £10,000 per tyre, and, when combined with the volume of tests needed, it becomes an incredibly expensive process that is simply not feasible as teams already fight exhaustion and hardware failure to get the race car to the winner’s podium."
Jota Performance Engineer, Joao Ginete
Sensor Output Modelling Using Self-Learning Models
One approach to gathering data is simply to run sensors at all times to measure the forces and velocities of interest. However, sensors are bulky and expensive, which makes this approach uneconomical to run continuously, especially in a race environment where the likelihood of damage is high. Certain sensors may also be restricted by racing regulations. Naturally, the question arises: can the data already being collected while the drivers train be fed back into an AI model?
Figure 2: Top: The traditional workflow for a complex problem, based on known empirical equations and physical models. Bottom: The new workflow for an intractable (non-linear) physics problem that cannot be solved easily using the classical physics-based approach, modelled and calibrated using Monolith’s self-learning models.
Monolith allows the user to fit models to the data acquired during tests. Those models can then be used during race weekends to provide information that helps the drivers keep their tyres in their optimal performance window. The previous workflow required taking tyres to a test bench, fitting the empirical Pacejka model, running it as a simulation (iterating if necessary), and then feeding track data back through the vehicle model to estimate the tyre energy (Figure 2). This is extremely difficult to replicate during a race weekend.
The workflow becomes far simpler and more streamlined on the data-driven path. Using test data, Monolith creates a model that gives an accurate estimate of the output of these sensors almost in real time: virtual testing. Leveraging real-life sensor data lets you test less and learn more, with more scope to plan experiments, analyse results and achieve optimal outcomes.
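As a hypothetical sketch of this data-driven path, imagine fitting a surrogate on bench data and then using it trackside as a virtual sensor. Monolith uses self-learning (neural network) models; a one-variable least-squares fit and the made-up numbers below merely stand in to keep the example self-contained:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Bench data (made up): sliding speed vs. measured tyre force.
bench_slip  = [0.5, 1.0, 1.5, 2.0, 2.5]
bench_force = [1.1, 2.0, 3.1, 3.9, 5.1]

a, b = fit_line(bench_slip, bench_force)

def predict_force(slip):
    """Virtual sensor: estimate tyre force from the bench-data fit."""
    return a * slip + b
```

Once trained, `predict_force` can be evaluated on channels that are always logged during a race, standing in for the bulky physical sensor.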
Monolith empowers your engineering experts to learn more, as they can compute the tyre energy and give live feedback to the drivers during a race. Being able to do this in a live racing environment allows drivers to better understand if they are under or over-utilising the tyres in certain parts of the race track, allowing them to drive better and keep the tyres in the optimal performance window for the entire stint.
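Once forces can be predicted live, the tyre energy that engineers feed back to the drivers can be accumulated over a stint as force times sliding speed. A minimal sketch, assuming uniformly sampled channels (the names and the rectangle-rule integration are illustrative, not Jota's actual pipeline):

```python
def sliding_energy(forces, slip_speeds, dt):
    """Approximate E = integral of F * v_slip dt with a rectangle-rule sum.

    forces and slip_speeds are uniformly sampled channels over a stint
    segment; dt is the sample period in seconds.
    """
    return sum(f * v * dt for f, v in zip(forces, slip_speeds))
```

A rising energy total in one sector of the track would flag that a driver is over-utilising the tyre there, which is exactly the kind of curated feedback described above.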
The 8 Hours of Bahrain
At the 8 Hours of Bahrain race in 2021, Jota Sport had two drivers close out the race. Anthony Davidson completed the second-to-last double stint. He was extremely quick in his first stint, and whilst this was impressive, the Jota Sport engineers were concerned that the first stint would hurt the second, as the tyre performance would surely begin to drop off. Yet this was not the case. Instead, Davidson’s second stint was only four to five tenths of a second slower than his first, effectively making it the quickest double stint of the weekend. When Davidson jumped out of the car, the Jota Sport team asked how he had achieved this. His response was simply ‘Oh, you can push harder than you think.’ This information was passed to Antonio Felix da Costa, who was about to start his stint.
However, da Costa’s first stint saw much higher tyre energies than Davidson’s stints. Here the Jota Sport team realised the value of the Monolith models: they made it possible to give the drivers curated feedback on how to adapt their driving styles to get the most from their tyres. As a result of the higher energies in da Costa’s first stint, his degradation in the second stint was more than 1 second per lap (compared with 0.5 s for Davidson).
The data-driven approach with Monolith enables Jota Sport to take just a few data sets from test days and use them to build AI models that replicate any missing data. Tyre degradation is an intractable physics problem, yet understanding it is vital for Jota Sport’s engineers to know how the car behaves and how the drivers interact with it in order to produce the required stint averages. Monolith lets Jota Sport’s engineers skip the whole physical modelling process. The ease of use of the Monolith platform and its powerful capabilities mean Jota Sport can help their drivers optimise their performance, ultimately helping them win races such as the 2022 24 Hours of Le Mans (LMP2).
The power of AI doesn’t stop at tyre degradation modelling. Monolith has already solved a plethora of problems: reducing wind-tunnel test times by 80%, modelling suspension behaviour, incorporating different weather conditions, driver behaviours and tracks, and optimising the use of the Le Mans winner’s test facilities by using less data to calibrate systems more accurately across the full range of operating conditions.
Using Monolith, Jota Sport has been able to move beyond the tedious, time-intensive and costly engineering approach it previously used during testing. It did so with self-learning models that learn from real-world performance data taken from track or wind-tunnel testing.