Pressure is among the most important quantities in fluid mechanics, and one of the hardest to measure. Engineers can track velocity in a flow and follow tracer particles with lasers. But the pressure field, which ultimately determines the forces on wings, turbines, and swimming animals, remains largely invisible. Engineers designing small drones that mimic insect flight, and biologists trying to understand how a dragonfly generates lift through each wing stroke, need that data. Most of the time they have to guess it, model it, or go without it.
A few years ago, a class of artificial intelligence models called physics-informed neural networks, or PINNs, offered a different approach. Rather than fitting a curve to data, PINNs embed the governing equations of fluid mechanics directly into the learning process. Feed the model velocity measurements, encode the laws of motion, and the pressure field emerges as a by-product, inferred rather than measured. The approach sits at the heart of what researchers now call AI for Science, a broader movement that includes digital twins of physical systems, where AI learns from known governing laws rather than from data alone. Its appeal in engineering is direct: instead of running expensive computational simulations of fluid dynamics, researchers can recover hidden quantities directly from measured data.
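The composite loss at the heart of a PINN can be sketched in a few lines. The snippet below is an illustration only, not the team's code: a two-parameter model stands in for the neural network, a toy oscillator equation stands in for the flow equations, and finite differences stand in for automatic differentiation. What it preserves is the essential structure — a data-mismatch term plus a physics-residual term in one loss.

```python
import numpy as np

# Toy physics-informed loss: fit u(t) to sparse "measurements" while also
# satisfying the oscillator equation u'' + u = 0 (a stand-in for the
# Navier-Stokes equations used in the actual method).

def model(params, t):
    # Two-parameter model in place of a neural network.
    a, b = params
    return a * np.sin(t) + b * np.cos(t)

def loss(params, t_data, u_data, t_colloc, w_phys=1.0):
    # Data term: mismatch with sparse measurements (velocity, in the paper).
    data_term = np.mean((model(params, t_data) - u_data) ** 2)
    # Physics term: residual of u'' + u = 0 at collocation points,
    # estimated with central finite differences (autodiff in a real PINN).
    h = 1e-3
    u = model(params, t_colloc)
    u_pp = (model(params, t_colloc + h) - 2 * u + model(params, t_colloc - h)) / h**2
    phys_term = np.mean((u_pp + u) ** 2)
    return data_term + w_phys * phys_term

t_data = np.array([0.0, 1.0, 2.0])
u_data = np.sin(t_data)                    # synthetic sparse measurements
t_colloc = np.linspace(0, 2 * np.pi, 50)   # points where physics is enforced

good = loss((1.0, 0.0), t_data, u_data, t_colloc)  # fits data and physics
bad = loss((1.0, 0.5), t_data, u_data, t_colloc)   # violates the data
print(good < bad)  # True: the composite loss prefers the physical fit
```

Minimising such a loss is what lets the pressure field emerge: any quantity coupled to the measured one through the governing equations is pinned down by the physics term.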
The practical reality, however, was messier. PINNs turned out to be temperamental. They worked well over short time windows and simple flows, but asked to track a system through many cycles of motion, say a flapping wing beating through twenty strokes, they deteriorated badly. Errors accumulated. Frequencies were missed. The physics got lost somewhere in the mathematics of training. The instinctive fix of throwing more computational power at the problem did not work: increasing the network size five-fold over long time domains produced no meaningful improvement. For the kind of complex, long-duration flows that matter most in biology and engineering, standard PINNs were falling short.
Systematic solution
A research team from IIT-Madras and the LISN-CNRS laboratory in France has now published a systematic solution to this problem. The researchers identified three distinct reasons why PINNs struggle with time: the data can be too sparse; the time window too long; or the flow too spectrally complex, containing multiple interacting frequencies that no one told the model to look for.
The test-bed was a flapping elliptic aerofoil operating in conditions typical of insect wings and small unmanned aerial vehicles. The researchers ran two scenarios: periodic flow, repeating with each stroke; and quasi-periodic flow, which is seemingly regular but contains subtle, clashing frequencies caused by the way air swirls off the wing’s leading and trailing edges at slightly different rhythms. The quasi-periodic flow is associated with enhanced lift generation.
The core proposal was to stop treating time as a single, undivided domain. Rather than training one large neural network over the entire time history, they divided the temporal domain into segments of two or three flapping cycles each, and trained a smaller network on each segment in sequence. At the start of each new segment, the network was initialised not from scratch but from the weights of the previously trained network. This is transfer learning: the model carries forward what it has already learned about the physics and flow structure of the previous interval.
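The segment-wise, warm-started loop can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the "network" is a two-parameter model, the optimiser is plain gradient descent, and the segment length and iteration counts are illustrative (the paper's segments span two to three flapping cycles).

```python
import numpy as np

# Sketch of segment-wise training with transfer learning: rather than one
# model over the whole time history, train on short windows in sequence,
# warm-starting each window from the previous window's weights.

def fit_segment(t, u, w0, n_iter=50, lr=0.05):
    # Gradient descent on mean-squared error for u ~ w[0]*sin(t) + w[1]*cos(t).
    # A real PINN would minimise the data + physics loss here instead.
    w = np.array(w0, dtype=float)
    X = np.column_stack([np.sin(t), np.cos(t)])
    for _ in range(n_iter):
        grad = 2 * X.T @ (X @ w - u) / len(t)
        w -= lr * grad
    return w

true_w = np.array([1.3, -0.7])   # "ground truth" the segments share
segments = [np.linspace(k * 2 * np.pi, (k + 1) * 2 * np.pi, 40) for k in range(5)]

w = np.zeros(2)                  # the first segment starts from scratch
for t in segments:
    u = true_w[0] * np.sin(t) + true_w[1] * np.cos(t)
    w = fit_segment(t, u, w)     # each later segment inherits the weights

print(np.round(w, 3))
```

Because each segment inherits an already-good starting point, later segments converge with the same modest iteration budget instead of re-learning the flow structure from zero — the mechanism behind the leaner variant described below.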
The improvement was substantial: pressure reconstruction errors fell from 36 per cent to around 7 per cent. For quasi-periodic flows, the model successfully reconstructed the complex frequency spectrum, including multiple interacting peaks in the drag signal, which the standard model missed entirely.
The researchers also identified a leaner variant that trains each subsequent segment with fewer iterations and a lower learning rate. It matched the accuracy of the full approach while cutting training effort by roughly a third — useful for longer time histories or more complex geometries.
The team also introduced a practical data strategy they call ‘preferential spatio-temporal sampling’. The region immediately around the moving wing is physically complex, with strong vortices and sharp gradients; the wake further downstream is smoother and more predictable. The method concentrates its sampling budget on the chaotic air-wing interface, leading to fewer data points, lower computational overhead, and improved accuracy — a meaningful reduction in GPU time and cloud computing costs.
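In code, the idea reduces to an uneven sampling budget. The sketch below is an assumption-laden illustration, not the paper's scheme: the 80/20 split and the rectangular region bounds (in chord units) are invented for the example; only the principle — concentrate points where the flow is hardest — comes from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of preferential spatio-temporal sampling: spend most of the
# budget in the complex region around the wing, and only a small share
# in the smoother wake. Split and region bounds are illustrative.

def preferential_sample(n_points, near_frac=0.8):
    n_near = int(n_points * near_frac)
    n_wake = n_points - n_near
    # Near-wing region: a tight box around the aerofoil (x, y in chord units).
    near = rng.uniform([-1.0, -1.0], [1.0, 1.0], size=(n_near, 2))
    # Wake region: a longer box downstream, sampled far more coarsely.
    wake = rng.uniform([1.0, -2.0], [8.0, 2.0], size=(n_wake, 2))
    return np.vstack([near, wake])

pts = preferential_sample(1000)
near_density = 800 / 4.0    # points per unit area in the near-wing box
wake_density = 200 / 28.0   # points per unit area in the wake box
print(round(near_density / wake_density))  # the near-wing box is ~28x denser
```

A uniform grid covering the same domain would need far more points to resolve the near-wing gradients equally well, which is where the savings in data and GPU time come from.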
The immediate application is in experimental fluid mechanics. Take velocity data from a wind tunnel or water tunnel, run it through a trained PINN, and recover the pressure field and aerodynamic loads without any additional instrumentation. For bio-inspired flight research, where attaching pressure sensors to a dragonfly is not a realistic option, this is a significant step. For engineers working on micro-aerial vehicles, small surveillance drones, and search-and-rescue platforms, the ability to model quasi-periodic flapping accurately over long flight strokes is directly relevant to understanding how wing geometry and stroke patterns generate lift.
Limitations
There are limits. Strongly aperiodic or chaotic flows remain out of reach: where the frequency content is wild and the system is sensitive to initial conditions, neural networks lack the representational capacity to keep up. The paper also flags a subtler constraint: because the training data and the pressure benchmarks were produced by two different computational solvers, a small slice of the reported error reflects disagreement between tools rather than any weakness in the method itself. And the study was conducted in two dimensions; extending it to realistic three-dimensional wing geometries will require further work on sampling and computational cost.
Published on March 9, 2026