A study tested if LLMs could design effective nudges to shift traveller behaviour.
Consumers say they value sustainability, but their wallets often tell a different story: they express concern about climate change, yet when asked to pay a premium for greener choices, uptake collapses. Airlines illustrate the issue vividly: voluntary carbon offset programmes have been available for years, but participation remains low. Most passengers acknowledge the environmental cost of flying, yet few voluntarily add a charge they may not fully trust.
A new research paper, ‘Large Language Models Enable Personalised Nudges to Promote Carbon Offsetting Among Air Travellers’, by Vladimir Maksimenko, Qingyao Xin, Prateek Gupta, Bin Zhang and Prateek Bansal suggests that artificial intelligence (AI) could alter this equation. The authors tested whether large language models (LLMs) could design more effective nudges to shift traveller behaviour. The results were modest in percentage terms but significant at scale.
Say-do gap in sustainability
The researchers focused on the decoy effect: adding a third, less-attractive option to make another choice look more appealing. In flight booking, the baseline choices are a standard ticket with no offset and a carbon-neutral ticket with offsetting included. The study introduced a decoy (a partially offset ticket priced higher than the full offset) which made the carbon-neutral option appear the rational middle ground.
Crucially, the decoys were personalised to each traveller, and the resulting nudges were tested in surveys of 3,495 travellers across China, Germany, India, Singapore and the United States.
Where the study innovated was in the use of the LLM. The model was not exposed directly to travellers. Instead, it was used as a design engine to generate variations of the decoy option. Researchers fed in demographic and attitudinal data (age, income, environmental concern and, most importantly, trust in offsetting) and prompted the model to produce ticket descriptions and price structures that would make the full offset more attractive to each type of traveller.
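The design-engine pattern described above can be sketched in outline. The function and prompt wording below are purely illustrative, not the authors' actual pipeline; the model call itself is omitted, and all names are hypothetical.

```python
# Sketch of the "design engine" pattern: the LLM never interacts with
# travellers directly; it is prompted offline to generate candidate decoy
# tickets tailored to a traveller segment. Hypothetical names throughout.

def build_decoy_prompt(profile: dict) -> str:
    """Assemble a prompt asking an LLM to design a decoy ticket
    (partially offset, priced above the full offset) for one segment."""
    return (
        "You are designing flight-ticket options.\n"
        f"Traveller segment: age {profile['age']}, income {profile['income']}, "
        f"trust in carbon offsetting: {profile['trust_in_offsetting']}.\n"
        "Baseline options: (A) standard ticket, no offset; "
        "(B) carbon-neutral ticket, offset included.\n"
        "Propose a decoy option (C): a partially offset ticket priced higher "
        "than (B), with a short description that makes (B) look like the "
        "rational middle ground for this segment."
    )

# Example: a sceptical traveller segment (low trust in offsetting)
sceptic = {"age": 42, "income": "middle", "trust_in_offsetting": "low"}
prompt = build_decoy_prompt(sceptic)
```

The generated prompt would then be sent to a chat model, and the returned ticket descriptions screened by researchers before any field testing, consistent with the paper's use of the LLM as a prototyping tool rather than a consumer-facing system.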
The results
Carbon offset purchase rates increased by 3 to 7 per cent when AI-generated personalised nudges were used. That may appear incremental, but across the five countries studied, it translates into around 2.3 million tonnes (mt) of additional CO₂ mitigated annually.
The most important finding was who moved. The greatest effect came from travellers who were sceptical of offsetting programmes: those who lacked trust in their effectiveness. This group accounts for an estimated 81 mt of CO₂ each year, roughly 8 per cent of total global aviation emissions. That is equivalent to the annual emissions of a mid-sized country.
Strategic takeaways
First, the sceptics are the segment that matters most. The study shows that shifting even a fraction of distrustful consumers creates far greater impact than reinforcing the behaviour of the already committed. That lesson applies well beyond aviation: whether in utilities, retail or finance, the real strategic prize lies in moving the reluctant.
The consumers who matter most – the sceptics – are also the ones where companies face the highest risk. Used wisely, LLMs can help engage them, turning hesitation into measurable impact. Used poorly, they could deepen mistrust and invite regulatory intervention, reputational damage and consumer backlash if AI-driven nudges are seen as coercion or “dark patterns”.
Second, LLMs are best seen as prototyping tools. They are not ready to be embedded directly into consumer journeys. Their value lies in running low-cost simulations of consumer responses, narrowing down interventions worth testing in the field, and focusing resources where they have the greatest leverage. For firms under pressure to make sustainability progress while managing costs, this is a practical advantage.
From a risk perspective, the LLM-designed interventions worked more effectively in the US, Germany and Singapore than in India or China. That reflects the bias of models such as GPT, trained largely on Western data. A nudge that resonates in Frankfurt may fall flat, or even backfire, in Mumbai. Businesses cannot assume global AI tools translate seamlessly across markets.
Published on August 25, 2025