Over the last 20 years there has been a rapid expansion of systems that use wireless technology, from the growth of the mobile phone market to smart home devices and the continuing move towards autonomous vehicles with their smart sensing technology. Key enabling components within all of these devices are the numerous antennas, which are often hidden from human view.
I have been fortunate to spend my career working through this technological revolution, in particular focussing on the development of the antenna and its associated systems. Recently an interesting question was posed to me: “how do you optimise the antenna?” It got me thinking about the design process and the various tools I have used over the years, in particular optimisers and optimisation.
As in many areas of engineering, computer simulation is now the mainstay of antenna design (an example of which is shown in Figure 1), with many excellent simulators that allow you to analyse behaviour in full three-dimensional space. Nearly all of these offer a plethora of optimisers which automatically vary key dimensions and parameters of the antenna to meet the required performance. However, in practice I still find that I undertake the majority of this task manually, carrying out small design parameter variations to understand the impacts and trends in performance. So why not use the optimisers?
A lot of this comes down to the behaviour of these optimisation algorithms and the basis on which they work. Many use mathematical functions that search for maxima, global or local. Whilst these do often provide the desired results, realising the resulting design can be difficult: physical manufacturing processes and tolerances are often not well captured by conventional optimisers. Consider for instance Figure 2 below. If we are optimising the length of a structure to give the best antenna gain, the optimiser will give you the “best” result based on finding which length gives maximum gain. Unfortunately, too often that “best” value sits at a sharp point on the curve, so any slight variation of length soon leaves you with poor performance (see point 1). It is often better not to be in the “best” place, but in an area with slightly poorer yet still acceptable performance, where, for example, your value of gain varies much more slowly with length (see point 2).
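The difference between the two points can be sketched numerically. Below is a minimal illustration using an entirely made-up gain-versus-length curve (not real antenna data): a sharp resonant peak and a broad plateau. A conventional optimiser picks the sharp peak; a tolerance-aware search, which scores each length by its worst-case gain over an assumed manufacturing tolerance band, picks the plateau instead.

```python
import numpy as np

# Hypothetical gain-vs-length curve (illustrative only, not real antenna
# data): a sharp peak near L = 0.30 m and a broad plateau near L = 0.50 m.
def gain_dB(length):
    sharp_peak = 12.0 * np.exp(-((length - 0.30) / 0.005) ** 2)
    plateau = 10.0 * np.exp(-((length - 0.50) / 0.08) ** 2)
    return np.maximum(sharp_peak, plateau)

lengths = np.linspace(0.2, 0.6, 4001)
tol = 0.01  # assumed +/-10 mm manufacturing tolerance

# Conventional optimiser: pick the length with maximum nominal gain.
nominal_best = lengths[np.argmax(gain_dB(lengths))]

# Tolerance-aware choice: maximise the *worst-case* gain over the band.
worst_case = np.array([gain_dB(np.linspace(L - tol, L + tol, 21)).min()
                       for L in lengths])
robust_best = lengths[np.argmax(worst_case)]

print(f"nominal best length: {nominal_best:.3f} m")
print(f"robust best length:  {robust_best:.3f} m, "
      f"worst-case gain there: {worst_case.max():.1f} dB")
```

With these made-up curves the nominal optimum lands on the sharp peak (point 1), where a 10 mm error collapses the gain to almost nothing, while the worst-case criterion settles on the plateau (point 2), where the same error costs only a fraction of a decibel.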
Another challenge is that you are generally trying to optimise for multiple performance factors: not just, say, the antenna gain, but also how well power is transferred to and from the electronic circuits the antenna is connected to. These differing performance factors often impose conflicting requirements on the antenna structure, so it becomes a matter of balance and finding the best compromise. In my experience this has been best achieved with a more manual approach, which gives a more in-depth understanding of the design space. Coupled with an understanding of the manufacturing processes, this generally produces a viable design, which some very targeted use of optimisers can then further enhance.
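One common way to express such a compromise is to scalarise the competing objectives with a weight. The sketch below uses two hypothetical curves (again, illustrative shapes only): gain that peaks at one value of a design parameter, and impedance match that is best at another. The weighted score lands between the two individual optima, and the weight itself is an engineering judgement, not a universal constant.

```python
import numpy as np

# Hypothetical performance curves versus one design parameter x:
# gain peaks at x = 0.6; the best match (most negative reflection
# coefficient) occurs at x = 0.4. Shapes are made up for illustration.
def gain_dB(x):
    return 10.0 - 40.0 * (x - 0.6) ** 2

def reflection_dB(x):          # more negative = better match
    return -25.0 + 80.0 * (x - 0.4) ** 2

x = np.linspace(0.0, 1.0, 1001)

# Scalarised compromise: reward gain, reward a good (negative) match.
w = 0.5  # relative weight on match vs gain -- an engineering judgement
score = gain_dB(x) + w * (-reflection_dB(x))

best = x[np.argmax(score)]
print(f"compromise design point: x = {best:.2f}")
print(f"  gain: {gain_dB(best):.1f} dB, reflection: {reflection_dB(best):.1f} dB")
```

Here the compromise settles at x = 0.5, halfway between the gain optimum and the match optimum, giving up a little of each rather than all of one.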
What does this mean for computer optimisation? Well, like the technologies we create, engineering design processes are continually evolving, and Artificial Intelligence has entered this sphere. Over the last year I have been fortunate enough to experiment with this through Monolith AI.
Monolith AI is cloud-based software that enables engineering companies with historical data to explore that data easily and interactively and to train machine learning models. These models are then used to accelerate product development cycles by reducing the number of simulations and tests needed to reach the final design of a product.
My experiments here started with creating a design dataset for a horn antenna. In my case this was built theoretically in Excel, but in practice the design set would comprise previous design data from simulations, tests, and possibly some theoretical data. The beauty of this approach is that a lot of the realistic effects that need to be captured are inherent in previous design data; for instance, measured results contain all of the impacts of manufacturing imperfections and tolerances.
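To give a flavour of what such a theoretical dataset might look like, here is a minimal sketch using the standard aperture-gain approximation G = η·4πA/λ², with an assumed 10 GHz design frequency, an assumed aperture efficiency of about 0.5, and an arbitrary sweep of aperture dimensions. This is not the article's actual Excel model, just an illustration of the idea of sweeping design parameters into a spreadsheet-style table.

```python
import math, csv, io

# Assumed constants for this sketch (not from the original design set).
C = 299_792_458.0          # speed of light, m/s
FREQ = 10e9                # assumed design frequency: 10 GHz
LAM = C / FREQ             # wavelength, m
ETA_AP = 0.5               # assumed aperture efficiency

rows = []
for a_mm in range(30, 121, 10):       # aperture width sweep
    for b_mm in range(30, 121, 10):   # aperture height sweep
        area = (a_mm / 1000.0) * (b_mm / 1000.0)
        # Standard aperture-gain approximation, in dBi.
        gain = 10 * math.log10(ETA_AP * 4 * math.pi * area / LAM**2)
        rows.append({"a_mm": a_mm, "b_mm": b_mm, "gain_dBi": round(gain, 2)})

# Serialise like a spreadsheet export, ready for upload to a training tool.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["a_mm", "b_mm", "gain_dBi"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])   # header row
print(len(rows), "design points")
```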
The Monolith AI software uses this data to train neural networks, which form the basis for growing future designs. Using my theoretical data set, I was interested in designing a horn antenna suitable for feeding a parabolic reflector. These reflector antennas are seen widely in many applications, from home satellite dishes to large radio telescopes. For best performance, the power from the feed horn should illuminate as much of the dish as possible whilst not spilling over the edges. Visualisation of the data set within Monolith allowed a snapshot of the overall design space to be developed and provided rapid assessment of the impacts of the different design parameters. This showed that the radial length (R) of the horn was key (see Figure 3).
Once this was fixed at the optimum level, it was easy to see the relationship with the horn aperture, and to tweak this to maximise the performance (see Figure 4).
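The general idea of fixing one parameter and then reading the best value of another off a trained model can be sketched generically. The code below is not Monolith AI's internals (which are not public); it simply fits a quadratic surrogate to a few made-up design points of gain versus aperture width and reads off the predicted optimum.

```python
import numpy as np

# Generic surrogate-model sketch with made-up sample data: gain versus
# aperture width at a fixed radial length, fitted with a quadratic.
aperture_mm = np.array([40, 60, 80, 100, 120], dtype=float)
gain_dBi    = np.array([14.1, 16.8, 17.9, 17.4, 15.6])   # hypothetical points

coeffs = np.polyfit(aperture_mm, gain_dBi, deg=2)        # quadratic fit
fine = np.linspace(40, 120, 801)                         # dense sweep
pred = np.polyval(coeffs, fine)
best_aperture = fine[np.argmax(pred)]
print(f"surrogate suggests aperture ~ {best_aperture:.0f} mm, "
      f"predicted gain ~ {pred.max():.1f} dBi")
```

The surrogate lets you sweep the remaining parameter densely and cheaply, which is the essence of tweaking the aperture once the radial length was fixed.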
It seems that AI is allowing engineering optimisation to enter a new era, one that enhances many of the processes that engineers would use manually during technology design. The ability to visualise and understand multiple design parameters, their trends, and their relationships is powerful, providing the engineer with an immediate understanding of where they are in the design space. Optimisation based on the whole design space, rather than one or two key parameters, gets to a realisable design quickly, saving both time and money. It looks like the type of optimisation tools engineers really require may finally be arriving.