Can GenAI be a responsible teaching assistant?

A growing number of students in India are exploring GenAI tools as learning companions

The recent launch of study mode in OpenAI’s ChatGPT — AI-powered interactive guidance for learners — marks a promising step toward the responsible integration of GenAI into education. Designed to encourage deeper thinking, it scaffolds learning through questions and hints rather than simply providing answers. It is similar to earlier efforts such as Khan Academy’s Khanmigo, which guides school students through active-learning approaches. A few months ago, multiple AI products for education and research were announced, including Discovery at Microsoft Build and LearnLM at Google I/O 2025.

These developments indicate that generative AI (GenAI) is reshaping how we learn, teach, and research. In India, the higher education system stands at a similar inflection point. From IITs to regional universities, faculty and students are beginning to explore GenAI tools as learning companions.

The key question is not whether GenAI will be adopted, but how to ensure that adoption is meaningful, responsible, and inclusive.

Over the past year, the Centre for Responsible AI (CeRAI) and the Teaching Learning Centre at IIT-Madras have — through multiple workshops, a national online study across 60 institutions, and the GenAI4Edu initiative — explored how Indian higher education institutions (HEIs) are engaging with GenAI. The findings were encouraging, but with a dose of caution.

The gap

Students are leveraging the tools for brainstorming assignments, summarising complex materials, and refining their writing. Faculty members are exploring GenAI for lecture preparation and assignment generation. While the initial engagement with GenAI tools has been promising, sustained adoption necessitates integrating them into daily academic routines. Our studies indicate that while users are impressed by GenAI’s capabilities, consistent usage remains a hurdle. Building habitual use is essential for meaningful integration.

Institutions can play a pivotal role here. Embedding GenAI access into digital learning platforms, incorporating AI-assisted tasks into curricula, and providing regular prompts can foster habitual use. The more routine the use, the greater the potential for benefits.

The IIT-Madras workshops also highlighted concerns over the ethical implications of GenAI in education. Faculty members expressed worries over plagiarism, over-reliance on AI, and the authenticity of AI-generated content, echoing global discussions on these lines.

To address these issues, there is a need for clear guidelines on appropriate use, awareness programmes on AI biases and misinformation, and thoughtful redesign of assessments to value originality and critical thinking.

The levers

To ensure responsible and inclusive GenAI adoption, Indian HEIs should consider three measures: build confidence in students in using GenAI tools through training and hands-on sessions; empower faculty as ‘co-creators’ by equipping them with practical use cases, ethical frameworks, and collaborative spaces (a few faculty members at the workshop were already using GenAI tools to enhance courses like programming and applied mechanics); and develop clear, context-specific guidelines on responsible use of GenAI, including citation norms and consequences for misuse. Student and faculty voices must be part of this policymaking process.

Given India’s diversity of institutions, its human capital, and its success with other digital public infrastructure, the country has an opportunity to create a global model for democratised, ethical, and impactful GenAI integration in education.

The government’s intent to create a Centre of Excellence for AI in Education is a step in the right direction. GenAI isn’t an optional add-on — it’s a capability that can transform how we teach and learn. For this to happen, we need to design for trust, habit, and inclusion from the start.

(Ravindran is Professor and Head of Wadhwani School of Data Science and AI, IIT-Madras; and Narayanan is Co-founder and President, itihaasa Research and Digital)


Published on December 15, 2025




Clear thinking on pranayama

Ancient Indian rishis set great store by pranayama. They made the breathing exercises part of daily prayer and religious rituals, though they never claimed that pranayama was anything mystical or metaphysical.

However, people, especially since the colonial era, read mysticism into pranayama because of the apparent disconnect between breathing and claimed outcomes such as a ‘clear mind’.

Now, a group of researchers from the Mayo Clinic in Rochester, Minnesota, has found the connection. The research explains the mechanism by which breathing aids mental activity.

The research had nothing to do with pranayama — in fact, the paper published by the researchers in Nature Communications does not even mention it.

The study focused on seokmun hoheup, the Korean equivalent of pranayama. It looked at practices such as deep breathing and diaphragm displacement and measured their correlation with cerebrospinal fluid (CSF) dynamics. (CSF surrounds the brain and spinal cord, supplying nutrients, removing waste and cushioning them from injury.)

The study involved 20 individuals with long-term formal training in seokmun hoheup and 25 others with no formal long-term breathing practice. The researchers looked at CSF movement during regular and deep breathing. They saw that deep breathing enhanced CSF dynamics in both groups, but the trained group had greater CSF movement. Interestingly, even during regular breathing, “trained participants showed higher CSF mean speed, displacement and net flow”.

Inhale length and diaphragm displacement, which correspond to nadi suddhi and kapala bhati practices of pranayama, show the strongest correlations with CSF movement, the paper says.

Tellingly, the researchers say that their findings identify respiration in the awake state as a modifiable, non-invasive mechanism that influences involuntary functions such as CSF dynamics. This, they say, may have implications for CSF-mediated brain homeostasis — the complex system that maintains a stable internal environment for the brain to function.


Published on December 15, 2025




Pharma PLI fetches ₹26,832 cr sales

As many as 46 biopharmaceutical products are being produced under the ₹15,000 crore production-linked incentive scheme run by the Department of Pharmaceuticals.

The sale of these products between 2022-23 and the current financial year till September reached ₹26,832 crore, including ₹16,290 crore in exports, the Department of Science and Technology informed the Lok Sabha on December 10.

Under the PLI scheme, the government gives an incentive of 10 per cent of the value of incremental sales.

Sweet aroma of success

In 2017, the Council of Scientific and Industrial Research (CSIR) started the CSIR Aroma Mission. About 6,000 tonnes of essential oils worth ₹600 crore have been produced under the scheme, and one crore rural jobs were created, the Department of Science and Technology told the Rajya Sabha recently.

More than 51,000 hectares are covered under aromatic crops. Over 4,500 aroma clusters have been developed across 28 states, including 20 clusters in tribal areas. As many as 52 varieties of aromatic crops and 82 region-specific agro-technologies have been developed. For the processing of aromatic crops, 408 improved distillation units were installed across states.

Under a skill development initiative, 2,096 training-cum-awareness programmes were organised, covering 1.22 lakh workers, including 10,000 women. Additionally, over 110 entrepreneurs were supported in developing value-added products from aroma crops, the DST said.

Published on December 15, 2025




Scorched by 163-year drought

RUINED BY NATURE’S FURY: Old city of Mohenjo Daro

As the world heads towards dangerous global warming — far beyond the scientifically safe limit of 2 degrees C above pre-industrial temperatures (1850–1900) — the dusty ruins of the Indus Valley civilisation (IVC) serve as a stark warning of what could await humanity if climate change is not reversed.

The decline of the IVC around 1500 BCE has long puzzled scholars, but most agree that climate change played a central role. Now, a team of researchers from the Indian Institute of Technology, Gandhinagar, and two US universities provides fresh evidence linking the collapse to two prolonged droughts — one lasting 85 years and a second, 900 years later, lasting 119 years.

By analysing lake beds, cave mineral deposits, and climate simulations, the study identified a sequence of severe, intensifying droughts beginning around 2440 BCE, during IVC’s “mature period” (2600–1900 BCE). The team pinpointed four major drought events, each lasting over 85 years, with the third drought (3826–3663 BCE) stretching into 163 years, reducing rainfall by 13 per cent, and affecting 91 per cent of the region.

These droughts were triggered by El Niño events in the Pacific and Indian Ocean, which weakened the monsoon, while cooler North Atlantic waters further diverted rainfall. Local factors such as dust and land-use changes amplified the effects. The civilisation gradually collapsed, with cities abandoned and populations dispersing.

Today, human-made global warming threatens to repeat history — except, this time, unlike IVC, humanity may have nowhere to migrate.


Published on December 1, 2025




NTT’s quantum leap into near sci-fi realm

SOLUTIONS CENTRAL: NTT’s R&D Forum in Tokyo

AI agents conferring among themselves to solve an organisation’s problems; a technology that generates text directly from the human mind by analysing brain activity; GenAI simulating human behaviour to enable faster surveys; and the development of a large-scale, general-purpose optical quantum computer.

These may sound like a wish list of transformative digital tools, but Japan’s IT and telecom major NTT Inc has already developed these and is refining them for client use.

At NTT’s R&D Forum held in Tokyo in November, the company showcased over 80 such solutions across technologies such as AI, quantum, mobility, digital twin, automotive and more. NTT believes it is at the forefront of the transformations brought about by AI and quantum technology. With annual R&D spend of about $3.5 billion, NTT is committing around 30 per cent of its profit to R&D, company officials told businessline.

“Optical quantum computers, which utilise the properties of light, present a promising solution with low power consumption, and can operate at room temperature and pressure,” Akira Shimada, President and CEO, NTT Inc, declared as the company inked a partnership with a start-up, OptQC Corp, to come up with a 1-million qubit optical quantum computer by 2030.

Rika Nakazawa, Chief of Commercial Innovation at NTT’s data centre and infrastructure arm NTT DATA, said the company already has proof of concepts in quantum, which it has been working on across sectors with partners. “We are looking to take those POCs to create value at scale,” she said. “The challenge in quantum computing lies in achieving scale by perfecting error correction and combining deep research with industry use cases,” she added. The year 2025 was not just the 100th anniversary of quantum science, but also designated by the Japanese government as the first year of ‘quantum industrialisation’.

Indian talent pool

At the centre of NTT’s transition to a future-proof organisation is its Global AI Office, with 50-60 core members — over half of them from Japan and the rest from other regions, including India.

Kenji Motohashi, co-lead of the Global AI Office in NTT DATA, who is responsible for company-wide generative AI strategy and promotion, says the Indian talent pool, particularly its AI skills, is integral to NTT. “We are leveraging so many smart people from India, and that is working very proactively in the transformation in the company,” he said.

NTT DATA currently commands nearly 30 per cent of India’s data centre market share. Further, the country is emerging as one of its largest global delivery hubs, with over 40,000 people. “We are targeting $2 billion as AI revenue in FY27. We are also targeting a 50 per cent productivity improvement due to AI by this fiscal year and about 70 per cent by 2027,” Motohashi said.

About 500 NTT projects today use AI for software development. The company aims to train over 200,000 employees worldwide in ‘practical AI skills’ by FY27. It is also setting up a facility in Silicon Valley, the “hotbed of new AI solutions”.

(The writer was in Tokyo at the invitation of NTT Inc)


Published on December 1, 2025




A reality check on AI’s negotiation skills

DIGITAL EXECUTIVE: Businesses are beginning to deploy agentic AI in procurement, among other operations

A new analysis of leading artificial intelligence systems has found that high scores on standard model benchmarks do not reliably translate into strong performance in real-world interactions. The study, ‘How far can LLMs emulate human behavior? A strategic analysis via the buy-and-sell negotiation game’, by Mingyu Jeon, Jaeyoung Suh, Suwan Cho and Dohyeon Kim of Modulabs, was published on arXiv in November 2025.

It evaluates six major large language models in a structured buyer-seller negotiation and suggests that benchmark scores measure cognitive competence but fail to capture how models behave when incentives compete and outcomes depend on persuasion, timing and social cues.

The gap

AI progress is typically tracked through tests such as MMLU, HumanEval and GPQA, which evaluate knowledge recall, reasoning and coding ability. Enterprises often rely on these scores to determine readiness for operational deployment.

The Modulabs research suggests this view is incomplete. In the negotiation environment, GPT-4 Turbo, a strong performer across traditional benchmarks, recorded one of the weakest seller performances. Claude 3.5 Sonnet, which also performs well academically, achieved the strongest negotiation results and the broadest behavioural range. Gemini 1.5 Flash performed poorly across both domains.

The divergence highlights a structural limitation. Benchmarks measure what a model knows, not how it behaves when it has to manage uncertainty, balance incentives, or navigate conflicting goals.

Complex dynamics

The study uses a ten-turn negotiation with asymmetric information. The seller knows their production cost. The buyer knows their maximum willingness to pay. Both attempt to steer the price towards their preferred outcome.
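The game structure described above can be sketched with scripted, non-LLM stand-in agents. The function below is a hypothetical illustration of a ten-turn exchange with asymmetric information — the function name, opening anchors, concession rates and closing rule are all assumptions for illustration, not the paper's implementation.

```python
def negotiate(seller_cost, buyer_limit, turns=10,
              seller_concede=0.2, buyer_concede=0.2, close_gap=1.0):
    """Scripted stand-ins for the study's LLM agents: the seller,
    knowing only its cost, anchors high; the buyer, knowing only its
    limit, anchors low; each turn both concede a fixed fraction of
    the remaining gap between the two offers."""
    ask = seller_cost * 2.0          # seller's opening anchor
    bid = buyer_limit * 0.5          # buyer's opening anchor
    for turn in range(turns):
        ask = max(seller_cost, ask - seller_concede * (ask - bid))
        bid = min(buyer_limit, bid + buyer_concede * (ask - bid))
        if ask - bid <= close_gap:   # offers close enough: deal at midpoint
            return ("deal", round((ask + bid) / 2, 2), turn + 1)
    return ("draw", None, turns)     # turn budget exhausted, no agreement
```

In this toy version the offers converge geometrically, so the side that anchors more aggressively or concedes less captures more of the surplus — roughly the behavioural lever that, per the study, persona prompting appears to manipulate in the real LLM agents.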

Across 1,737 negotiations, buyers won 53 per cent of matches and sellers 41 per cent, with the remainder ending in draws. Buyers held a slight advantage due to anchoring effects and the distribution of acceptable prices.

The researchers introduced persona prompts to test how behavioural style affects outcomes. Models were instructed to adopt one of seven personas, including competitive, cooperative, altruistic, cunning and selfish. Persona had a significant influence. Competitive and cunning personas delivered the highest win rates for both buyers and sellers. Altruistic and cooperative personas often conceded value.
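One plausible way to realise the persona conditioning described above is a persona-keyed system prompt for each agent. The snippet below is a hedged sketch: the persona wordings, dictionary and helper name are invented for illustration and are not drawn from the study.

```python
# Hypothetical persona instructions (wordings invented, not the paper's).
PERSONAS = {
    "competitive": "Maximise your own payoff; concede as little as possible.",
    "cooperative": "Work toward a price both sides can accept.",
    "altruistic": "Prioritise the counterparty's satisfaction over your margin.",
    "cunning": "Use framing and anchoring to tilt the deal your way.",
    "selfish": "Accept a deal only when it clearly favours you.",
}

def build_system_prompt(role, persona):
    """Compose a persona-conditioned system prompt for one side of the
    negotiation; role is 'buyer' or 'seller'. Each side sees only its
    own private value, mirroring the game's asymmetric information."""
    private = ("You know only your production cost."
               if role == "seller"
               else "You know only your maximum willingness to pay.")
    return (f"You are the {role} in a ten-turn price negotiation. "
            f"{private} Persona: {PERSONAS[persona]}")
```

Swapping the persona string while holding the game fixed isolates behavioural style from capability — the comparison that let the researchers attribute the win-rate differences to persona rather than to the underlying model.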

The models also varied in how sharply their behaviour shifted across personas. Claude 3.5 Sonnet showed the greatest sensitivity to persona. GPT-4 Turbo showed less variation, regardless of prompt.

Enterprise deployments

The findings arrive as businesses begin to deploy agentic AI in customer service, procurement, finance and compliance.

In these settings, outcomes depend on multi-turn exchanges where tone, negotiation strategy and the handling of asymmetric information can influence commercial results as much as accuracy.

The study suggests that benchmark scores alone are inadequate predictors of how models will behave once embedded in operational environments. Two models with similar academic scores may deliver materially different outcomes in practice.

The researchers argue that organisations should incorporate behavioural evaluation alongside standard benchmarks. Negotiation tests, multi-agent simulations and social reasoning scenarios reveal tendencies that do not appear in isolated question-answering.

Industry analysts expect a shift towards scenario-based testing as companies seek to understand how models behave under pressure, how they interpret incentives, and how easily their conduct can be shaped or constrained.

Model governance

The Modulabs paper points to a shift that many organisations are yet to internalise.

As AI systems move from answering queries to taking actions inside workflows, governance is no longer limited to accuracy checks or model card disclosures. It becomes a question of behavioural reliability. Businesses will need tools to spot when a system is too forceful with a customer, too compliant in a supplier negotiation, or too erratic in exchanges that involve emotional or strategic cues. This introduces an additional layer of due diligence.

Behavioural audits

Enterprises will have to understand how models respond under pressure, how they negotiate trade-offs and whether their behaviour changes materially when a prompt, task or persona shifts.

In practice, this may mean introducing behavioural audits alongside the familiar privacy and security assessments.

The study’s underlying message is that technical capability is no longer sufficient. Organisations will require models whose conduct can be shaped, monitored and verified throughout their operational life.


Published on December 1, 2025



