NISAR: An all-seeing eye on Earth



The NISAR satellite being assembled at ISRO, Ahmedabad

From the ‘clean rooms’ and high-security halls of ISRO, in Bengaluru, comes a satellite unlike any built before. The NASA-ISRO Synthetic Aperture Radar (NISAR) mission not only marks the most ambitious collaboration between India and the US in space, but also sets a new gold standard for Earth observation.

What makes NISAR a trailblazer? Its core is a unique marriage of two radar systems, each with its own “superpower”.

Dual-frequency innovation is the heart of NISAR. L-band radar (provided by NASA), with a wavelength of 24 cm, effortlessly penetrates dense forests, sees the skeleton of landscapes, and monitors changes in vegetation and topography — even through thick cloud cover. S-band radar (engineered by ISRO), at 10 cm wavelength, excels at recording subtle changes in soil, wetland, and ice, even in challenging equatorial and polar environments.

This dual-frequency setup is a first not just for India but for any free-flying space observatory worldwide, enabling simultaneous imaging in both bands to reveal what single-frequency satellites cannot.

Bold engineering

ISRO’s imprint on NISAR is unmistakable. Indian engineers took on the formidable challenge of designing, fabricating, and testing the S-band SAR unit and the satellite’s core bus.


Their achievements include the S-band SAR payload, which captures the high-resolution images critical for tracking natural disasters such as floods, landslides, and coastline changes.

The ‘chassis’ supporting the payloads, also built by ISRO, integrates sophisticated power, control, and thermal systems to keep NISAR’s sensitive electronics safe in the harsh conditions of space.

NISAR will launch from Sriharikota aboard ISRO’s trusted Geosynchronous Satellite Launch Vehicle (GSLV), a testament to India’s prowess in deploying heavy satellites.

Indian teams at the UR Rao Satellite Centre, in Bengaluru, conducted precision integration, testing the massive radar payload and reflector at ISRO’s state-of-the-art antenna facilities, simulating harsh space conditions and verifying interoperability.

Central to NISAR’s uniqueness is its SweepSAR digital beamforming architecture.

Unlike conventional radars, which must trade swath width for resolution, SweepSAR’s ‘scan-on-receive’ technique covers an astonishing swath — over 240 km wide — while maintaining fine resolution.

The 12-m deployable mesh reflector, among the world’s largest, unfolds in orbit to catch the faint returning radar echoes with surgical precision, even as the reflector rotates and the spacecraft whizzes overhead. Each of the hundreds of feed elements in the antenna can be directed and processed independently, making NISAR an agile and dynamic observer, not a passive scanner.
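For readers curious about the principle, the sketch below illustrates the basic arithmetic of receive-side digital beamforming: each feed element’s signal is multiplied by a steering phase weight and summed, so echoes from the chosen direction add up coherently. It is a simplified illustration, not NISAR’s actual onboard processing; the frequency, element count and spacing are assumed purely for demonstration.

```python
import numpy as np

# Simplified sketch of receive-side digital beamforming (the 'scan-on-receive' idea).
# All numbers here are illustrative assumptions, not NISAR specifications.
c = 3e8                       # speed of light, m/s
f = 1.25e9                    # an illustrative L-band frequency, Hz
wavelength = c / f            # ~0.24 m, i.e. roughly the 24 cm quoted above
n_elements = 8                # hypothetical number of feed elements
spacing = wavelength / 2      # element spacing, m
steer_angle = np.deg2rad(20)  # direction the receive beam should 'listen' to

# Steering weights compensate for the extra path length seen by each element.
k = 2 * np.pi / wavelength
positions = np.arange(n_elements) * spacing
weights = np.exp(-1j * k * positions * np.sin(steer_angle))

# An echo arriving from that same direction: after weighting, the signals
# from all elements add coherently, giving maximum gain toward the target.
echo = np.exp(1j * k * positions * np.sin(steer_angle))
gain = np.abs(np.sum(weights * echo)) / n_elements
print(f"Normalised gain toward the steered direction: {gain:.2f}")  # ~1.00
```

Changing the steering angle electronically redirects the beam without moving the antenna, which is what lets a scan-on-receive system sweep a wide swath as the echoes return.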

Giant leap

NISAR is the result of a decade of technical exchanges between ISRO and NASA, involving hardware, know-how, and joint mission operations.

NISAR assembly and antenna testing at ISRO, Bengaluru


Mapping the entire globe every 12 days, NISAR will measure everything from tree heights and crop yields to glacier cracks and earthquake-triggered landslides, providing regular insights into a warming, shifting Earth. Freely available data will help planners predict floods, monitor water resources, and react to disasters, potentially saving lives in India and across the globe. All data will be accessible to researchers worldwide within days, and even faster in emergencies.

The project embodies a commitment to global science and transparency.

A feather in ISRO’s cap, NISAR has deep roots in Indian soil. From the assembly lines in Bengaluru to outreach events in Gujarat Science City, Indian scientists and engineers are at the core of this international effort. The satellite will not just survey distant regions but also provide critical data for India’s monsoon management, forest conservation, and river basin planning — delivering local as well as global impact.

As NISAR prepares for launch, ISRO’s contribution — and India’s growing technological might — have never been more visible or celebrated.

This is not just a mission. It is a symbol of collaboration, innovation, and the power of vision to make the invisible visible.

(The writer is a former associate project director of NISAR at ISRO)

Published on July 28, 2025

Fungus shield for pineapple



Transgenic pineapple plants

Indian researchers have identified a gene in pineapple that promises to lead to a powerful, homegrown line of defence against devastating fungal attacks.

The pineapple (Ananas comosus) is the most economically significant fruit of the Bromeliaceae family, valued for its nutritional benefits and its delicious, juicy flavour.

One of the biggest threats to pineapple farming is a disease called Fusariosis, caused by the aggressive fungus Fusarium moniliforme. It warps the plant’s stem, blackens the leaves and rots the fruit from the inside out. Traditional breeding techniques have struggled to keep up with the fast-evolving onslaught of such fungal foes.

Researchers at Bose Institute, Kolkata, have identified the gene encoding the somatic embryogenesis receptor kinase (SERK), which can activate host defences against plant diseases.

Focusing on the AcSERK3 gene, part of the pineapple’s genetic code known for helping plants reproduce and survive stress, Prof Gaurab Gangopadhyay and his PhD student Dr Soumili Pal enhanced — or ‘overexpressed’ — the gene in pineapple plants. This charged up the plant’s natural defences to fight the Fusarium fungus far more effectively.

“The AcSERK3-overexpressed pineapple lines were more resilient to Fusarium infection than susceptible wild pineapple variety, due to increased stress-associated metabolites and scavenging enzyme activity. In controlled tests, these transgenic plants stood tall and green, while regular pineapples wilted under fungal siege,” says a press release.

A long-term field study could lead to a new multi-fungal-tolerant pineapple variety, enabling growers to plant crops that withstand multiple fungal threats.

Point-of-care sepsis diagnosis

Portable endotoxin detection device


A group of scientists from the National Institute of Technology, Calicut, has developed a highly sensitive, low-cost point-of-care device with an electrochemical biosensor for early diagnosis of sepsis. The portable device incorporates eight distinct sensor architectures for detecting endotoxins rapidly and accurately, says a press release.

It detects endotoxin in blood serum using a standard addition method, providing results within 10 minutes.
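The standard addition method itself is a routine analytical technique: known amounts of the analyte are spiked into the sample, the sensor response is fitted to a straight line, and the line is extrapolated back to zero signal to read off the original concentration. The sketch below illustrates the calculation with invented numbers; they are not from the NIT Calicut study.

```python
import numpy as np

# Standard addition: spike known endotoxin amounts into the serum sample,
# record the sensor response, fit a line, and extrapolate to zero signal.
# All values below are invented for illustration only.
added_conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # spiked endotoxin, EU/mL
response   = np.array([0.42, 0.61, 0.80, 1.01, 1.19])  # sensor signal, arbitrary units

# Fit response = slope * added_conc + intercept.
slope, intercept = np.polyfit(added_conc, response, 1)

# The unknown concentration is the magnitude of the x-intercept (intercept / slope).
sample_conc = intercept / slope
print(f"Estimated endotoxin in the unspiked sample: {sample_conc:.2f} EU/mL")
```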

Sepsis is a serious medical condition caused by an infection and can lead to multiple organ failure, shock and even death. Early and accurate diagnosis is crucial for timely therapeutic intervention and improving patient outcomes. Early diagnosis is possible with the precise and sensitive detection of specific biomarkers. Endotoxin, a toxic component of the outer membrane of gram-negative bacteria, acts as a key biomarker, signalling the presence of an infection that could lead to sepsis.

In all the sensors, sensitivity was enhanced with appropriately modified nanomaterials, such as gold atomic clusters and nanoparticles, cupric oxide, copper nanoclusters, molybdenum disulphide, reduced graphene oxide, and carbon nanotubes.

The team has demonstrated a highly sensitive electrochemical sensor chip for the detection of lipopolysaccharide, which is compatible with a portable analyser for on-site detection.

The sensor is fabricated using functionalised carbon nanotubes and copper (I) oxide nanoparticles.

Two of these electrochemical platforms demonstrated versatility by enabling the sensitive detection of gram-negative bacteria, specifically E. coli, in water samples. This highlights their potential for efficient water quality monitoring.

Published on July 28, 2025

Frozen timestamps



A century and a half ago, when trains were plying but electricity was still not widely available, intrepid entrepreneurs cut huge chunks of ice from the frozen Great Lakes and transported them to California and Texas to cool drinks in summer.

Decades later, scientists developed a method of collecting ‘ice cores’, cylindrical chunks of ice, from different depths below the surface to study what lay trapped in them and, in turn, decipher the conditions that prevailed during that period. Air bubbles trapped in ice are really books of history.

Now researchers of the British Antarctic Survey are on a project to study ice cores drilled from up to 3 km below the surface of the Antarctic plateau, to determine the state of the continent 1.5 million years ago. This takes the research further back in history, building on earlier work that examined the continent’s climate record going back 800,000 years.

The drilling site, Little Dome C, is about 40 km from the French-operated Concordia Station.

Dr Liz Thomas, Head of the Ice Cores team at the British Antarctic Survey, seeks to unlock why, around a million years ago, the interval between glacial cycles lengthened from 41,000 years to 100,000 years.

With the data collected from the cores, scientists will reconstruct how the environment was back then — temperatures, wind patterns, extent of sea ice, and so on.

“This unprecedented ice core dataset will provide vital insights into the link between atmospheric CO₂ levels and climate during a previously uncharted period in Earth’s history, offering valuable context for predicting future climate change,” Dr Thomas says in a statement.

Published on July 28, 2025

PARAM-1 learns local ways



In the rapidly expanding world of large language models (LLMs), English continues to dominate, leaving other languages in its shadow. This imbalance is particularly stark in India, where more than 20 official languages and hundreds of dialects are spoken daily. PARAM-1, a newly released bilingual foundation model, rises out of India’s own linguistic and cultural landscape.

The model is detailed in a paper published on arXiv (July 2025) by the BharatGen team, which includes Kundeshwar Pundalik, Piyush Sawarkar, Nihar Sahoo, and Abhishek Shinde. The authors describe PARAM-1 as a 2.9-billion parameter foundation model trained from the ground up to reflect Indian realities.

Beyond translation

The name PARAM has a legacy in Indian high-performance computing, but the new model signals a different ambition. PARAM-1 is not a simple upgrade of past systems; it is designed to create artificial intelligence that understands India as more than just another market.

Unlike most global models that treat Indian languages as peripheral, PARAM-1 dedicates 25 per cent of its training data to Hindi. This includes government translations, literary works, educational material and community-generated content. The rest of the dataset consists of English sources carefully curated for their factual depth and range.

A tokeniser is the first step in how a language model processes text. It breaks sentences into smaller units, or tokens, which the model can interpret.

Standard tokenisers, built for English, perform poorly on Indian scripts, splitting words into too many fragments. PARAM-1 addresses this with a script-aware tokeniser that recognises Hindi and other Indic scripts, creating fewer and more meaningful tokens. This improves both accuracy and efficiency.
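A toy example makes the difference concrete. The snippet below is purely illustrative and is not PARAM-1’s tokeniser; it compares how badly a byte-level fallback fragments a short Hindi word against a vocabulary that at least knows individual Devanagari characters.

```python
# Illustrative only: not PARAM-1's actual tokeniser.
word = "नमस्ते"  # "namaste" written in Devanagari

# An English-centric tokeniser that falls back to raw bytes produces
# 18 tokens for this one short word (each Devanagari character is 3 bytes in UTF-8).
byte_tokens = list(word.encode("utf-8"))
print(len(byte_tokens))   # 18

# A vocabulary that contains Devanagari characters needs only 6 tokens here;
# learned subword merges for common Hindi sequences would reduce this further,
# often to a single token.
char_tokens = list(word)
print(len(char_tokens))   # 6
```

Fewer tokens per word means cheaper training and inference, and more of the model’s context window left for actual content.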

Although PARAM-1 currently supports only English and Hindi, its tokeniser has been designed for broader Indian linguistic diversity. It can handle scripts such as Tamil, Telugu, Marathi and Bengali, laying the groundwork for future multilingual expansion.

Design, not retrofit

PARAM-1 is the result of a training strategy that prioritised inclusion from the start. It was trained in three phases, beginning with general language learning, followed by a focus on factual consistency, and, finally, long-context understanding. This structure allowed the model to gradually develop fluency, retain factual information more effectively, and improve performance on tasks that require reading and reasoning over longer texts.

The model was tested not just on widely used English-language benchmarks such as MMLU and ARC Challenge, but also on India-specific datasets. These included MILU, which draws on Indian competitive examinations, and SANSKRITI, a benchmark that covers cultural knowledge ranging from festivals to geography. The results were encouraging: PARAM-1 performed competitively on global benchmarks and outperformed several open models on Indian tasks, especially in Hindi.

More languages

Although PARAM-1 is presented as a model designed for India, its bilingual focus means that other Indian languages are still excluded. This raises questions over the model’s inclusivity, especially in a country where linguistic identity often intersects with regional politics and access to services.

The team behind PARAM-1 appears to be aware of this limitation. The tokeniser was specifically engineered to handle the morphological patterns found in Indian languages beyond Hindi. While this does not compensate for the lack of direct training in those languages, it does provide a foundation for expanding the model’s linguistic reach in future iterations.

Equitable AI

PARAM-1 is not a frontier-scale model, nor does it claim to be the most powerful LLM available. Its significance lies in a different direction. It shows what can happen when the design of an AI model reflects the needs and complexities of the people who are meant to use it.

The development of PARAM-1 offers a blueprint for equitable AI design. It highlights the importance of investing early in diverse data, language-aware infrastructure, and public benchmarks that reflect regional and cultural realities. The model also invites broader participation from government agencies, universities, and private firms, especially if it is to grow into a truly multilingual and domain-specialised platform.

The authors of the model offer a clear message in their conclusion: Fairness in AI cannot be treated as an afterthought. It must be addressed in the earliest stages of design. PARAM-1 currently supports just two languages, but leaves the door open for many more. It serves as a reminder that if artificial intelligence is to serve all of humanity, it must begin by learning to listen to more of it.

Published on July 28, 2025

‘Without a mature digital core, firms cannot monetise AI’



AUTOMATED ANSWERS: Ramprakash Ramamoorthy, Director of AI Research, Zoho

As an intern at software major Zoho in 2011, Ramprakash Ramamoorthy had worked with teams that were still figuring out whether machine learning could be integrated into the firm’s suite of products. Sentiment analysis, anomaly detection, recommendation engines — these were his first few brushes with the working of algorithms, before he surfed the Alexa and Siri waves of 2018 and the ChatGPT wave of 2022.

Recently, Zoho launched three large language models and a speech recognition model (in English and Hindi, with 15 more regional languages to follow). On the sidelines of the launch of Zoho’s platform for building ‘agents’ — autonomous software systems — Ramamoorthy, now Director of AI Research, told businessline that the focus remains firmly on offering value to customers and securing their privacy.

Edited excerpts from the chat:

As AI director, what is your mandate?

We have a hub-and-spoke model for AI development. I am a part of a group called Zoho Labs, where we take care of the foundational technology; then there are 55-plus product teams, which build on top of the foundation we provide.

What does Zoho Labs do?

We’re about 200 people in the team, distributed across Nagpur, Tenkasi, Chennai and other locations. We also have a five-member team in Mexico for our Spanish initiatives.

We have teams that work on databases. Then there’s a hardware acceleration team. Last year, we announced our partnership with Nvidia… their work goes towards the technical foundation. We have a hardware team that works with AMD, Intel and others. We think AI hardware is super-important to ride the AI/ML wave.

Where does Zoho stand in the current agentic AI wave?

We have 25-plus pre-built agents, some India-specific agents for Aadhaar verification and so on. In agent studio, you can prompt and build your own agents. Our MCP [model context protocol] server connects models like GPT or Claude to Zoho’s APIs, data models, and actions.

How will AI change enterprise software?

A lot of it will become prompt-driven, with users not needing to learn to use it. Enterprises are overwhelmed… none have got an ROI from their existing AI stack. So, the first thing is to get your digital maturity right.

Do all firms need agentic AI?

Wherever there’s repetitive workflow, agents can add value. With our pre-built agents we saw 10-30 per cent productivity gains, but they cannot replace my support team. People talk about 10X, but we haven’t seen that. Companies must first find out what can be automated and remove the data silos for smooth flow of data. Yes, agents will be important, but they cannot replace humans… a strong digital foundation should be at the core of agentic AI usage.

Published on July 28, 2025

Antarctic Ocean’s briny puzzle



Something strange is happening in the Antarctic Ocean, which has scientists baffled. They have some theories but have been unable to nail down the cause of the problem.

The Antarctic Ocean’s surface waters have been turning saltier since 2015.

Normally, ice melts in summer. The meltwater forms a layer on the surface, floating over denser saltwater below — a phenomenon known as ‘stratification’. The floating freshwater acts like a lid, preventing warmer saltwater from rising to the surface.

In winter, the freshwater would freeze again — but less and less due to global warming.

Since 2015, for reasons not well understood, the stratification has been weakening, allowing more subsurface saltwater to mix with the freshwater and turning the surface water saltier.

This affects ice formation in winter, contributing to loss of the cryosphere.

A group of researchers from the University of Southampton, UK, used satellite images to study ‘salinity signatures’.

They note that the rapid changes observed over the past decade contradict the conventional wisdom that global warming drives up the volumes of surface freshwater.

“This suggests that current understanding and observations may be insufficient to accurately predict future changes,” they say in a paper published in PNAS, suggesting closer monitoring.

Caroline Holmes, a polar researcher at British Antarctic Survey, pointed out to Livescience.com that the Southern Ocean below the surface is “chronically underobserved.”

Published on July 14, 2025
