Much of the world around us is affected by fluid dynamics in some way. The aerodynamics of the cars we drive. The hydrodynamics of a tidal power turbine in a nearby estuary. The aeroacoustics of wind turbines up on a hill. Even the way the wind blows around our city streets.
In an ideal world, the forces and flows in all of these examples would be measurable in the field in an accurate and repeatable way. However, the reality is that this is not possible. This led to the development of wind and water tunnels as well as wave tanks in an attempt to replicate fluid dynamic behaviour and make the necessary measurements.
For a long time, these test facilities were the only way to carry out research and development work. The first full-scale aircraft wind tunnel opened in Langley in 1931, sixty years after the first model-scale tunnels. After 150 years of development, wind tunnels have improved in speed, size and scale as well as instrumentation and accuracy. So if wind tunnels were suitable for developing aircraft like the Bell X-1 or Concorde, why is there a need for a computational approach?
As with all physical testing, there is a huge cost and manpower overhead associated with designing and manufacturing physical test facilities and models. Testing at scale can also cause problems due to Reynolds number and blockage effects. Furthermore, there is a limit to what can be measured and visualised with a physical test solution.
This NASA image shows an attempt to visualise a wake vortex in a real-life scenario. The problem is that tests like this are expensive, unrepeatable, short-lived and difficult to quantify, which limits their practical value. Trying to carry out similar work on something like a wind turbine is just as hard.
In the mechanical world, these types of problems are solved with multibody modelling and stress analysis simulations using Finite Element Analysis (FEA). These software tools allow engineers to obtain information that is all but impossible to capture physically.
In the world of fluid dynamics, the simulation solution is Computational Fluid Dynamics (CFD). Modelling the flows and pressures around test items offers more than just force measurements of drag and lift. A calculated flow field allows engineers to study very complex motion in fine detail and investigate the effects of small changes.
CFD can also simulate things that cannot be achieved in wind tunnels, such as curved flow cases that are a key area of development in Formula 1. Internal flows can also be modelled, helping engineers to understand the flow of fluids through refrigerators and jet engines. These benefits make CFD a powerful tool that is in demand across many industries.
There are many advantages to performing experiments in the virtual world. But how do we go about taking on such a complex set of calculations? We have understood the basic governing aerodynamic equations since the mid-19th century. Solving them, however, has not been so straightforward, and doing so commercially took the best part of a century.
There are three main stages involved in modelling the flow around an object: modelling, discretisation and linearisation.
At the heart of the mathematical models used in CFD are the Navier-Stokes equations. These are a set of partial differential equations which give flow velocity in three dimensions and can be used to calculate other parameters such as pressure and temperature.
There are various ways of formulating, simplifying and writing them down, but the two equations always appear as suitably complicated partial differential equations (PDEs).
In terms of their meaning, the basis of the equations is a little more familiar. The first equation simply represents the conservation of mass in all three dimensions. The second equation is a fluid dynamic form of F=ma, Newton’s second law.
Between them, they describe the fundamentals of the flow field. The fluid can’t appear or disappear (conservation of mass), and the acceleration of a mass or volume of fluid is caused by pressure gradients, viscous forces and external forces such as gravity or electromagnetism.
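For reference, in one common incompressible form (assuming constant density and viscosity) the two equations can be written as:

```latex
% Conservation of mass (continuity):
\nabla \cdot \mathbf{u} = 0

% Conservation of momentum (Newton's second law per unit volume):
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\mathbf{u} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{u} + \mathbf{f}
```

Here $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the density, $\mu$ the dynamic viscosity and $\mathbf{f}$ the external body forces per unit volume.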
Depending on the application, other equations may be required. These could be thermodynamic if there are significant temperature gradients or heat sources from an engine exhaust.
The above equations are known, so surely it's just a case of solving them in our simulations? It is possible to solve some simple problems, like laminar flow in a pipe. However, anything of sufficient complexity to be of use in aerodynamics isn’t as straightforward.
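The laminar pipe flow case mentioned above is one of the few with a clean analytical answer: the Hagen-Poiseuille solution, which gives a parabolic velocity profile directly from the pressure gradient and viscosity. A minimal sketch (the pipe dimensions and pressure gradient are purely illustrative):

```python
import math

def poiseuille_velocity(r, radius, dp_dx, mu):
    """Axial velocity at radial position r for fully developed laminar
    pipe flow: u(r) = (-dp/dx) * (R^2 - r^2) / (4 * mu)."""
    return (-dp_dx) * (radius**2 - r**2) / (4.0 * mu)

def poiseuille_flow_rate(radius, dp_dx, mu):
    """Volumetric flow rate: Q = pi * (-dp/dx) * R^4 / (8 * mu)."""
    return math.pi * (-dp_dx) * radius**4 / (8.0 * mu)

# Illustrative values: 10 mm radius pipe, water-like viscosity,
# 100 Pa/m pressure drop along the pipe.
R, dpdx, mu = 0.01, -100.0, 1.0e-3
u_centre = poiseuille_velocity(0.0, R, dpdx, mu)  # maximum velocity, on the axis
u_wall = poiseuille_velocity(R, R, dpdx, mu)      # no-slip condition: zero at the wall
Q = poiseuille_flow_rate(R, dpdx, mu)
```

Note that the mean velocity is exactly half the centreline velocity, so the flow rate also equals `0.5 * u_centre` times the pipe's cross-sectional area.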
Analytical solutions (ones that can be calculated directly over an infinite number of points) don’t exist for most things of interest. In fact, solving the Navier-Stokes equations is such a hard problem that it forms one of the million-dollar Millennium Prize problems for mathematicians.
This would seem to limit the use of the equations, but this is where engineers and mathematicians take divergent paths. The mathematicians are interested in fully understanding the equations and why they don’t always have solutions. The engineers are interested in applying them to real-world problems and have computational tools to approximate solutions to partial differential equations. Done well, these approximations are good enough for most applications.
The modelling stage introduces the first sources of potential error in CFD, which come in two forms. The first is the set of equations and simplifications chosen for the model. The full governing equations can include many terms that may or may not be relevant.
For example, if you are carrying out a CFD simulation of tides in the ocean, you have to include the gravitational force of the moon, among other things. That is not a necessity for the internal fluid flows in an automotive cooling system. Removing terms makes the equations easier to solve, but the choices have to be carefully managed to maintain accuracy.
The second source is the accuracy of the 3D model of the test subject itself, be it an aircraft, a boat or a vehicle. Typically, this will come from a CAD model which may not include every detailed feature of the physical item. Things like shut lines on car bodywork or rivet heads on an aircraft skin can completely change the fluid motion. In addition, each 'identical' vehicle in a wind tunnel test will be subtly different to the next.
The first of the engineering tools to solve these equations is discretisation. This is the process of breaking down a continuous solid (a wing or turbine blade for example) into discrete points or nodes. In CFD, this is known as meshing and is an area which is very similar to Finite Element Analysis (FEA).
Discretisation breaks down a highly non-linear problem into a very large number of smaller, simpler sections which are considered to be linear. The combination of individual solutions for the points makes up an approximation of the overall solution. Computers are very good at solving a large number of linear problems of this nature, which means we can approximate our non-linear system relatively quickly.
A simple, non-CFD example of this technique is the approximation of π using polygons. Knowing the value of π allows us to easily calculate the area and circumference of a circle, the equivalent of being able to analytically solve Navier-Stokes. Before the value of π was known (and before the invention of trigonometry), however, it was approximated. Archimedes used polygons to discretise a circle and estimate the circumference and area.
The example here shows how an accurate estimate of π can quickly be generated and demonstrates the concept of discretisation error. This is an important measure in CFD which can be used to check how well the item has been meshed.
It would be easy to use a high number of nodes in a mesh to generate a very accurate solution, but this comes at the expense of time spent creating the mesh and computing time to solve for all of the points. Instead, there is a balancing act between accuracy and time. In the π example, a polygon “mesh” of a million sides gets an estimate of π accurate to more than ten decimal places. In reality, the 96-sided polygon is close enough for most applications.
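The polygon approach can be sketched in a few lines of Python, doubling the side count as Archimedes did. Starting from a hexagon inscribed in a unit circle (side length exactly 1), a side-length recurrence refines the "mesh" and the discretisation error shrinks with each doubling:

```python
import math

def pi_by_polygons(doublings):
    """Estimate pi from the half-perimeter of a regular polygon inscribed
    in a unit circle, starting from a hexagon (side length 1) and doubling
    the side count each step via s_2n = sqrt(2 - sqrt(4 - s_n^2))."""
    n, s = 6, 1.0
    for _ in range(doublings):
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        n *= 2
    return n * s / 2.0  # half-perimeter approximates pi

print(pi_by_polygons(0))  # hexagon: 3.0
print(pi_by_polygons(4))  # 96-gon, as used by Archimedes: ~3.14103
```

Each doubling is the equivalent of refining the mesh: the same underlying problem, approximated by more, smaller linear pieces.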
One final point about meshing is the effect the CFD user can have. Different mesh densities need to be assigned across the model, using experience to put a denser mesh in important areas. However, this needs to be done in the most computationally efficient way possible to ensure suitable simulation run times.
Once the model has been formulated and the mesh created, the equations are solved by a process of iteration. It is possible to come to a very accurate solution using Gaussian elimination or LU decomposition, but, given the other sources of error, it is generally not worth the computational time.
In iterative solutions, the user will set an array of starting conditions and complex solver algorithms will work on each point on the mesh. Each node is subject to the forces due to the surrounding nodes (usually differences in pressure or viscosity due to velocity differences). Each of those surrounding nodes has forces due to other nodes acting on it and so on.
The iteration procedure works around all of the nodes until equilibrium is reached for each time step. As we have seen in other areas of CFD, there is a compromise required here. The easiest way to reach an accurate solution is simply to keep iterating for longer, much as adding more sides improved the polygon estimate of π.
However, usually that time doesn’t exist, for cost or efficiency reasons. Instead, the user has to decide when the iteration error is acceptable. In the graph above, there are various points in the iteration count where the solution is relatively close or far away from the ultimate solution. Identifying this iteration error is not easy and depends on the purpose of each simulation.
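The idea of iterating towards equilibrium can be shown on a much simpler problem than Navier-Stokes. In this sketch, a Jacobi iteration solves the 1D Laplace equation: each node is repeatedly set from its neighbours, and the largest update per sweep (the residual) is the quantity a user would watch to decide when the iteration error is acceptable. The tolerance and node count here are illustrative:

```python
def jacobi_laplace_1d(n_interior, tol=1e-6, max_iters=10_000):
    """Solve u'' = 0 on [0, 1] with u(0) = 0 and u(1) = 1 by Jacobi
    iteration: each interior node is set to the average of its two
    neighbours until the largest update falls below the tolerance."""
    u = [0.0] * (n_interior + 2)
    u[-1] = 1.0  # fixed boundary conditions at each end
    for iteration in range(1, max_iters + 1):
        new_u = u[:]
        residual = 0.0
        for i in range(1, n_interior + 1):
            new_u[i] = 0.5 * (u[i - 1] + u[i + 1])
            residual = max(residual, abs(new_u[i] - u[i]))
        u = new_u
        if residual < tol:
            return u, iteration
    return u, max_iters

u, iters = jacobi_laplace_1d(9)
# The exact solution is the straight line u(x) = x, i.e. u[i] = i / 10.
```

Stopping earlier (a looser tolerance) trades accuracy for run time, which is exactly the iteration-error compromise described above.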
Once we have a solution for the case, there is a huge amount of data available for processing. At each point in the mesh, we know the flow velocity and pressure which can be post-processed to generate other, more useful values.
For example, to calculate the lift of an aircraft wing, the surface pressure at each node can be multiplied by the surface area that the node represents, which results in a force. Summing these forces over both the upper and lower surfaces of the wing determines the wing's lift.
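That pressure-summation step can be sketched as follows; the three-panel wing section and its pressures are invented purely for illustration:

```python
def lift_from_panels(panels):
    """Sum (pressure difference x projected area) over surface panels.
    Each panel is (lower_surface_pressure_Pa, upper_surface_pressure_Pa,
    projected_area_m2). Lower pressure above the wing than below it
    gives a positive (upward) contribution."""
    return sum((p_lower - p_upper) * area for p_lower, p_upper, area in panels)

# Hypothetical three-panel wing section, pressures in pascals
panels = [
    (101_500.0, 100_800.0, 0.5),  # leading-edge region: strong suction above
    (101_300.0, 101_000.0, 0.5),
    (101_200.0, 101_100.0, 0.5),  # trailing-edge region: pressures recovering
]
lift = lift_from_panels(panels)  # net force in newtons
```

A real post-processor would also resolve each panel force along the lift and drag directions, but the core operation is this same sum over the mesh.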
On a 2021 Formula 1 car, the aerodynamics are heavily dependent on managing vortex structures to generate downforce. Knowledge of the flow velocities u, v and w means these vortices can be easily visualised at x, y and z slices along the car. This is almost impossible to do in any other way.
F1 cars are a very niche example, but the premise works equally well for real-world problems like drag on a heavy goods vehicle (HGV). Managing the surface friction and wake of a vehicle like this can save fuel and money and reduce carbon emissions.
It is also possible to trace the path of an individual air particle to create a streamline, studying how it moves along the vehicle. This allows the user to see where the air impacting a surface is coming from. Returning to the F1 example, this means that it is possible to establish what upstream components will affect flow over a problematic rear wing.
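Tracing a streamline amounts to stepping a particle position along the local velocity. As a minimal sketch, this forward-Euler trace follows a particle through an analytic solid-body-rotation field, which stands in for sampling (u, v) from an interpolated CFD flow field:

```python
import math

def velocity(x, y):
    """Analytic 2D solid-body rotation about the origin: a stand-in
    for interpolating the velocity from a CFD solution at (x, y)."""
    return -y, x

def trace_streamline(x, y, dt=1e-3, steps=1000):
    """Forward-Euler particle trace: repeatedly step the position
    along the local velocity vector and record the path."""
    path = [(x, y)]
    for _ in range(steps):
        u, v = velocity(x, y)
        x, y = x + u * dt, y + v * dt
        path.append((x, y))
    return path

path = trace_streamline(1.0, 0.0)
# In this rotational field the particle should stay close to radius 1.0,
# circling the "vortex core" at the origin.
```

Production post-processors use higher-order integrators (e.g. Runge-Kutta) for accuracy, but the principle of following the flow field point by point is the same.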
At this point, it's worth noting that wind tunnels become very important for correlating CFD results. Techniques like smoke visualisation, particle image velocimetry (PIV) and Kiel probe rakes are used to create flow visualisations to compare against and validate CFD simulations.
Taking comparable slices through the flow field obtained using these physical test methods allows direct comparisons to be made to CFD cases. Any discrepancies can be identified and addressed. This may require changing solver settings, meshing or turbulence models until correlation improves. The confidence gained from the correlation procedure allows the CFD results to be used more widely.
CFD also offers the advantage of automation: multiple cases can run overnight without human intervention or maintenance, and standardised reports can present a wide range of information without the time needed to compile them manually.
CFD users are already working with vast supercomputers with thousands of cores to solve their cases. Despite this, it is still very difficult to use CFD in a similar way to wind tunnels. An F1 team using continuous motion in a wind tunnel can generate hundreds of downforce and drag numbers in a matter of hours. These runs cover a wide range of ride height, steer, roll and yaw cases. To do the same in CFD, where each change in configuration is a new case, would take too long to solve, which is why wind tunnels are still used.
This is likely where the main advances in CFD will be made. Increases in computing power with cloud services, solver design and meshing software will make simulations faster and more accurate.
If the mathematicians make progress with solving the Navier-Stokes equations for the Millennium Prize, this could transfer across into commercial CFD solvers. This is important, because iteration-based CFD is still quite a user-dependent activity. Things like how a user chooses to mesh, the solvers that CFD suppliers implement, starting conditions, iteration counts and turbulence models can all influence the results. Automating these settings in an intelligent and consistent way, such as with AirShaper, can help resolve some of the uncertainty and improve the reliability and accuracy of CFD simulations. These differences can determine whether CFD predicts a vortex is stable or bursts, a change which can have very large real-world consequences.
It is easy to see, however, that CFD will eventually do most of the heavy lifting in the fluid dynamic world. F1 teams have largely backed a proposal to ban wind tunnel use from 2030. This would end over 40 years of dependence on wind tunnels, so it represents a significant step in the rapid progress of CFD.