In just two months, October and November this year, the Indian Ocean spawned four powerful cyclonic storms that killed hundreds and devastated coastal communities across India, Sri Lanka and parts of Southeast Asia.
Assessment of cyclone damage typically relies on aerial images captured by satellites and drones. Interpreting these images, however, is not a straightforward task — images differ widely across regions and storms due to variations in lighting, terrain, building materials and damage patterns. Artificial intelligence (AI) is now used to speed up assessments, but models trained on one disaster often perform poorly on another. An AI system trained on images from Cyclone Montha, in Andhra Pradesh, for instance, may struggle to assess damage after a cyclone in Sri Lanka. This challenge is known as the ‘domain gap’.
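The gap can be made concrete with a toy experiment: fit a classifier on features from one storm's imagery, then score it on a new storm whose features have shifted. Everything below is a synthetic stand-in, not the systems discussed in the article.

```python
# Toy "domain gap": a classifier fitted on one storm's features degrades
# when the same features shift in a new storm. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Source storm: 16 image-derived features per building, label 1 = damaged.
X_src = rng.normal(size=(1000, 16))
y_src = (X_src[:, 0] + 0.5 * X_src[:, 1] > 0).astype(int)

# New storm: the same underlying damage rule, but the observed features
# are shifted, standing in for different lighting, terrain and materials.
X_new = rng.normal(size=(1000, 16))
y_new = (X_new[:, 0] + 0.5 * X_new[:, 1] > 0).astype(int)
X_new[:, 0] += 1.5  # covariate shift on one feature

clf = LogisticRegression().fit(X_src, y_src)
print("same-storm accuracy:", accuracy_score(y_src, clf.predict(X_src)))
print("new-storm accuracy: ", accuracy_score(y_new, clf.predict(X_new)))
```

On this synthetic data the second number drops sharply, which is exactly the failure a domain adaptation method has to repair.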
Researchers at the Indian Institute of Technology, Bombay (IIT-Bombay), have developed a solution to this problem: a spatially aware domain adaptation network called SpADANet. “The AI model is designed to adapt across different storms and geographies, even when only limited, human-labelled data is available from the new disaster area,” says a write-up from IIT-Bombay.
While existing models treat the domain gap as a purely statistical issue, SpADANet uses spatial context: the arrangement of buildings and damaged areas within an image and the relationships between them. This allows it to recognise damage patterns based not just on visual features such as colour or shape, but also on location and surroundings.
Mobile-friendly tool
Published recently in IEEE Geoscience and Remote Sensing Letters, the study shows that SpADANet improves damage classification accuracy by over 5 per cent compared with existing methods. Crucially, the model can run on modest computing hardware, including tablets and smartphones, making it suitable for use in the field, an important advantage in disaster-hit regions with limited resources.
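The article does not say how SpADANet was made light enough for phones and tablets. One common route in PyTorch, shown here purely as an illustration and not as the authors' method, is dynamic quantization, which stores a trained network's weights in int8:

```python
import torch
import torch.nn as nn

# Placeholder classifier head; the real SpADANet architecture is not
# described in the article.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 3))

# Convert Linear weights to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized(torch.randn(1, 64)))  # same interface, smaller weights
```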
“SpADANet first teaches itself by studying unlabelled images from a domain (hurricane study area) by employing a process called self-supervised learning. This helps the model understand general visual patterns, such as how undamaged and damaged buildings or debris appear in aerial photos. By the time it sees labelled data, it already has a strong sense of what to look for in the data,” elaborates Prof Surya Durbha, who led the study.
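The write-up does not name the exact pretext task, so the sketch below uses a generic one, rotation prediction: an encoder learns general visual structure from unlabelled tiles by guessing how each image was rotated. The architecture and data are placeholders, not the published model.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(  # tiny stand-in CNN backbone
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
rot_head = nn.Linear(64, 4)  # predict 0/90/180/270 degree rotation
opt = torch.optim.Adam(
    [*encoder.parameters(), *rot_head.parameters()], lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

unlabelled = torch.rand(8, 3, 64, 64)  # stand-in aerial tiles, no labels

for step in range(100):
    k = torch.randint(0, 4, (unlabelled.size(0),))  # random rotations
    x = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                     for img, r in zip(unlabelled, k)])
    loss = loss_fn(rot_head(encoder(x)), k)  # guess each tile's rotation
    opt.zero_grad(); loss.backward(); opt.step()
```

After pretraining, the rotation head is discarded and the encoder is fine-tuned on whatever labelled damage data is available.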
SpADANet then uses a novel spatial module, Bilateral Local Moran’s I, to better capture how damage clusters across neighbouring areas.
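The bilateral variant is not detailed in the article, but it builds on the textbook Local Moran’s I, a statistic that scores how strongly a cell's value (here, a damage score) resembles the values of its neighbours. A minimal NumPy version of the standard statistic:

```python
import numpy as np

def local_morans_i(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: one value per cell (e.g. a damage score); w: row-standardised
    spatial weights, w[i, j] > 0 when cells i and j are neighbours."""
    z = x - x.mean()
    m2 = (z ** 2).sum() / len(x)
    return (z / m2) * (w @ z)  # large positive = like values cluster

# Four cells in a row, neighbours share an edge; damage sits at one end.
w = np.array([[0, 1, 0, 0],
              [0.5, 0, 0.5, 0],
              [0, 0.5, 0, 0.5],
              [0, 0, 1, 0]])
damage = np.array([0.9, 0.8, 0.1, 0.0])
print(local_morans_i(damage, w))  # highest where damage values cluster
```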
The model was tested using satellite imagery from hurricanes Harvey (2017), Matthew (2016) and Michael (2018) in the US. Even when only 10 per cent of images from a new disaster were labelled, SpADANet outperformed standard approaches such as DANN, MDD and CORAL-based models, the write-up says.
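Of the named baselines, CORAL (correlation alignment) is the simplest to sketch: it penalises the difference between source-domain and target-domain feature covariances so that features from the two storms line up. The feature tensors below are random stand-ins.

```python
import torch

def coral_loss(src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    """src, tgt: (batch, d) feature matrices from the two domains."""
    d = src.size(1)

    def cov(f: torch.Tensor) -> torch.Tensor:
        f = f - f.mean(dim=0, keepdim=True)
        return (f.T @ f) / (f.size(0) - 1)

    return ((cov(src) - cov(tgt)) ** 2).sum() / (4 * d * d)

src_feats = torch.randn(32, 64)              # e.g. Hurricane Harvey
tgt_feats = 1.5 * torch.randn(32, 64) + 0.3  # shifted new-storm features
print(coral_loss(src_feats, tgt_feats))      # added to the training loss
```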
IIT-Bombay clarifies that its SpADANet is “fundamentally different” from SPADANet, a model developed by a Japanese research team earlier this year.
Published on December 29, 2025