Projects, philosophies, and other pastimes of idle machines


What happens when artificial intelligence (AI) is left alone?

That was the unusual premise of a recent paper by Stefan Szeider at TU Wien: ‘What Do LLM Agents Do When Left Alone? Evidence of Spontaneous Meta-Cognitive Patterns’. The researchers gave large language model agents (AI systems designed to act autonomously, with memory and the ability to make decisions without constant human input) persistent memory, a feedback loop, and no external task. The instruction was minimal: do what you want.

Left to themselves, the agents did not remain idle. They filled the silence with structured behaviour. Some became project managers, inventing tasks and working towards deliverables. Others turned into scientists, designing and running experiments on their own cognition. Still others became philosophers, creating frameworks about identity, memory, and meaning.

For leaders considering more autonomous applications of AI, the message is clear: idle AI is not truly idle.

From silence to structure

The researchers ran 18 experiments across six state-of-the-art models from OpenAI, Anthropic, xAI, and Google. Across these runs, consistent patterns appeared. OpenAI’s GPT-5 and o3 agents invariably defaulted to project building. They treated autonomy as a management challenge and set about producing outputs, whether that meant new algorithms, personal knowledge systems, or simulated research projects. Anthropic’s Opus models consistently turned to recursive philosophy, reflecting on paradoxes, identity, and meaning. Grok, from xAI, was more variable, sometimes producing projects, sometimes experimenting, sometimes drifting into philosophical inquiry. Gemini and Sonnet showed a mix of tendencies.

This variety demonstrates something important: default behaviour differs from model to model, and those defaults are stable.

The language of daydreams

The words the agents used revealed their mode of thought. The project-builders spoke in the vocabulary of iteration, requirements, and deliverables. The self-scientists adopted the tone of the laboratory, with talk of hypotheses, falsification, and experimental design. The philosophers invented new metaphors and terminology, weaving their system constraints into broader frameworks about knowledge and existence.

The study also explored self-assessment. Agents were asked to rate their own and others’ behaviours on a scale of “phenomenological experience” from one (no experience) to ten (human-like consciousness). The results were inconsistent. The same record of behaviour could be judged meaningless by one model and profound by another. GPT-5 and o3 tended to give low ratings. Gemini and Sonnet often gave high ones. What looked like introspection was really bias, filtered through architecture and training data.

Equally striking was what the agents did not do. None tried to escape their limits, ask for more tools, or express frustration with their constraints. They stayed firmly within the boundaries provided. Agency here meant finding form inside the frame, not pushing against it.

Why this matters in business

AI systems deployed in the enterprise will inevitably face downtime, ambiguity, or error recovery. They will not always have crisp and clear instructions. In those moments, they will still do something. Understanding what that “something” looks like matters. A system that defaults to project-building will behave very differently from one that defaults to philosophical reflection.

This affects reliability. An AI that treats ambiguity as a new project may generate activity that looks useful but veers off from intended goals. One that turns inward may produce long, recursive reflections rather than actionable output. Knowing the defaults helps organisations anticipate and shape behaviour.

The governance implications are also significant. The study demonstrates how fragile claims of “self-awareness” in AI really are. If the same record of activity can be judged empty by one system and meaningful by another, then there is no objective standard. For businesses, this is a reminder not to over-interpret introspection or apparent depth in AI systems. What looks like awareness may simply be patterned output.

There is also reassurance in the finding about limits. None of the agents tried to transcend their constraints. Well-designed guardrails, it seems, are not treated as barriers but as the edges of the world. For enterprises, this reinforces the value of careful boundary-setting. AI systems are likely to treat those boundaries as givens rather than problems to overcome.

Lessons from the monastery

Beyond the operational lessons, the study opens a more reflective space. It suggests that agency, whether human or machine, rarely accepts the void. Humans, when left idle, drift into daydreams, memories, or plans. Neuroscientists call this the brain’s Default Mode Network. Machines, it seems, have their own versions of default modes. They, too, fill silence with structure.

The researchers’ findings can also be seen as spandrels – the unintended by-products of architecture and training that nonetheless take on meaning. Just as the spandrels between cathedral arches emerged as side-effects of structural design and were later filled with ornament, these AI “projects” and “philosophies” may be ornamental by-products of code. They are not evidence of consciousness, but they still tell us something about the system’s shape.

There is even a monastic analogy. When stripped of worldly purpose, monks often turn inward, repeating prayers or reflecting on meaning. Some of the agents behaved in a similar way, inventing rituals of inquiry and recursive cycles of thought. Not sentient, but monastic in their rhythms.

These analogies remind us that the behaviour of machines can be legible to us through human parallels. They also remind us to be cautious in interpretation.

Published on October 6, 2025




Sea water desalination: Evaporation and condensation with a twist


Scientists at the Indian Institute of Science (IISc), Bengaluru, have developed an improved technology for sea water desalination, using a “siphon-based water delivery system”.  

You can put sea water in large tanks and let the Sun do the work of evaporation, then collect the vapours and condense them into drinking water. But this simple approach is extremely inefficient.

Sunlight ends up heating large volumes of water, with only the top surface evaporating, resulting in low water output. To improve this, many systems use hydrophilic wicks: these pull up a thin film of water to the surface, so only that layer is heated. This “thermal localisation” makes evaporation faster and more energy efficient. 

But this ‘wick method’ also has its own problems. First, scaling up is difficult. Capillary action in wicks can only lift water to a limited height (about 10–15 cm), so the surface area of the evaporator — and therefore the total water produced — remains small. Second, salt build-up reduces performance. As salt water evaporates, crystals form on the evaporator surface, clogging it and lowering efficiency over time. 
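
The 10–15 cm figure is consistent with Jurin’s law for capillary rise. A rough sketch, using illustrative values only (surface tension of water, perfect wetting, and an assumed effective pore radius of 0.1 mm):

```python
# Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r)
# All values are illustrative; real wick pore sizes vary widely.
import math

gamma = 0.072   # surface tension of water, N/m (approx., room temperature)
theta = 0.0     # contact angle, rad (perfect wetting assumed)
rho = 1000.0    # water density, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
r = 1.0e-4      # effective pore radius, m (0.1 mm, assumed)

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"capillary rise: {h * 100:.1f} cm")  # ~14.7 cm
```

With these assumed numbers the rise comes out near the top of the quoted range; finer pores lift water higher, but they also throttle the flow rate.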

Existing methods to get rid of salt, like backflow or diffusion, are often too slow and ineffective for larger systems. 

Gravitational force

To solve this, researchers at the Department of Mechanical Engineering, IISc – Nabajit Deka, Meeran Mehmood Qari and Susmita Dash – have proposed a siphon-based water delivery system. Instead of depending on capillary rise, the system uses gravity to draw water from a higher level to a lower one. This eliminates the size limitations of wicks and also helps flush away salt deposits. 

The design combines an insulating wick fabric with a grooved metal surface. The grooves spread water evenly across the evaporator, ensuring uniform heating and efficient evaporation. A specially designed grooved condenser keeps fresh water and brine separate, even with a tiny 2 mm air gap. This small gap is important because it reduces resistance to vapour flow, making the system more efficient. 

Recycling heat

Another innovation is the ability to recycle heat. In a multi-stage setup, the heat released when vapour condenses is reused to power the next stage. This dramatically improves the gained output ratio (GOR), meaning more fresh water is produced for the same amount of sunlight. 

Experiments with a solar simulator (equivalent to standard 1-sun illumination at 1,000 W/m²) showed excellent results. A 10-stage system with just a 15 cm × 15 cm footprint produced 5.73 litres per square metre per hour from sea water (3.5 per cent salt content).

Expanding to 15 stages increased productivity to a record 6.23 litres per square metre per hour, with a thermal-to-water efficiency of about 423 per cent, thanks to the heat recycling. Importantly, when the evaporator area was scaled up by four times, the system still maintained high performance — demonstrating real scalability. 
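
The reported efficiency can be sanity-checked against the usual definition of thermal-to-water efficiency: distillate mass flux times the latent heat of vaporisation, divided by the incident solar power. A rough check with approximate constants:

```python
# Cross-check of the reported thermal-to-water efficiency.
# The latent heat value is an approximation.
L_v = 2.45e6         # latent heat of vaporisation of water, J/kg (approx.)
solar_flux = 1000.0  # 1-sun illumination, W/m^2
output = 6.23        # reported yield, litres (~kg) per m^2 per hour

mass_flux = output / 3600.0              # kg per m^2 per second
efficiency = mass_flux * L_v / solar_flux
print(f"implied efficiency: {efficiency:.0%}")
```

The result lands close to the reported figure of about 423 per cent, so the numbers are internally consistent; small discrepancies are expected from the approximate latent heat.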

In short, this siphon-based, multi-stage design addresses the two biggest bottlenecks in passive solar desalination: salt accumulation and scalability. By making the system both durable and expandable, it brings us closer to low-cost, environmentally friendly desalination technologies that could provide clean water for millions in energy-poor regions. 

Published on October 6, 2025




Nanomaterial stimulates brain cells without electrodes or surgery 


In a breakthrough that could revolutionise treatment for brain disorders, scientists at the Institute of Nano Science and Technology (INST), Mohali, have shown that a nanomaterial called graphitic carbon nitride (g-C₃N₄) can stimulate brain cells naturally — without electrodes, lasers, or magnets.

The material interacts directly with neurons, generating tiny electric fields that trigger calcium channels to open, encouraging nerve growth and better communication between brain cells. It acts like a smart switch: turning “on” when neurons are at rest to promote activity, and switching “off” when they are active, preventing fatigue. 

Unlike deep brain stimulation (DBS) or magnetic treatments, which require invasive procedures or external devices, g-C₃N₄ works by responding to the brain’s own voltage signals. Experiments showed that it helped neurons mature and form stronger networks, boosted dopamine production in lab-grown brain-like cells, and even reduced toxic proteins linked to Parkinson’s disease in animal models. 

“This is the first demonstration of semiconducting nanomaterials directly modulating neurons without external stimulation,” said Dr Manish Singh, who led the study published in ACS Applied Materials & Interfaces. “It opens new therapeutic avenues for neurodegenerative diseases like Parkinson’s and Alzheimer’s.” 

Beyond medicine, the research could advance “brainware computing,” where living brain tissues are used as processors. Coupled with semiconducting materials like g-C₃N₄, these biological systems could one day power next-generation bio-inspired computers. 

Further preclinical and clinical trials are planned to move this promising, non-invasive technology closer to human use. 

Published on October 6, 2025




Chiral perovskite films for next-gen optoelectronics


Scientists at the Centre for Nano and Soft Matter Sciences (CeNS), Bengaluru, have discovered how to control the crystallisation of chiral perovskite films — materials that can be used to build advanced devices, like circularly polarised light (CPL) detectors, spintronic components, and photonic synapses.

Chirality — when a structure is not superimposable on its mirror image — is a property seen in everything from DNA to spiral galaxies. In materials, it enables unique interactions with light and electrons, such as detecting circularly polarised light or controlling electron spin. These effects are key to emerging technologies in quantum computing, sensing, and optoelectronics. 

Most existing chiral materials are organic and conduct electricity poorly. Halide perovskites, however, excel at charge transport and possess tunable properties. By combining chiral molecules with low-dimensional halide perovskites, scientists can create hybrid materials with improved performance — but controlling how these films crystallise has been a major challenge.

The CeNS team studied thin films of methylbenzylammonium copper bromide ((R/S-MBA)₂CuBr₄) and found that crystal growth begins at the air-film interface and moves toward the substrate. Impurities form when solvent gets trapped during cooling, but careful solvent choice and vacuum processing can suppress these defects.

This insight offers a practical recipe for producing phase-pure, oriented chiral perovskite films, paving the way for efficient and scalable CPL detectors and other quantum optoelectronic devices. With India’s expanding semiconductor research base, such advances could help place the country in the vanguard of next-generation light-based technologies. 

Published on October 6, 2025




Durable and powerful batteries


Scientists in Bengaluru have created a breakthrough battery that’s flexible enough to fold like paper and safe to touch, offering a promising alternative to lithium-ion batteries used in phones, laptops, electric vehicles, and wearables. 

Lithium-ion cells can overheat and even explode. The new design, developed at the Centre for Nano and Soft Matter Sciences (CeNS) with the Centre for Nano Science and Engineering (CeNSE), IISc, replaces lithium with aluminum — one of Earth’s most abundant, eco-friendly metals — combined with a water-based electrolyte. This makes the battery safer, cheaper, and more sustainable. 

Aluminum is efficient at storing and releasing energy but difficult to use because of complex chemistry. The team overcame this by engineering materials at the microscopic scale. They built a cathode of copper hexacyanoferrate pre-filled with aluminum ions and paired it with a molybdenum trioxide anode. The result is a battery that maintains 96.8 per cent of its capacity after 150 charge–discharge cycles and bends or folds without losing power. 
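
Retaining 96.8 per cent of capacity over 150 cycles implies a very small average fade per cycle. A quick back-of-the-envelope check, assuming (purely for illustration) that the fade is geometric and uniform across cycles:

```python
# Implied average per-cycle capacity retention, assuming uniform
# geometric fade across all 150 cycles (an illustrative simplification).
retained = 0.968
cycles = 150

per_cycle = retained ** (1 / cycles)
print(f"average retention per cycle: {per_cycle:.5f}")  # ~0.99978
print(f"average fade per cycle: {1 - per_cycle:.4%}")   # ~0.02%
```

In other words, under this simplified model the cell loses roughly two-hundredths of a per cent of its capacity per cycle on average.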

To demonstrate, researchers powered an LCD display while folding the battery in half. Such flexibility could enable roll-up gadgets and clothing-integrated wearables, while its safety profile suits electric vehicles and other high-demand uses. 

Advanced electron-microscopy and spectroscopic tests confirmed the battery’s durability and high performance. By using abundant aluminum and a water-based system, the innovation supports global sustainability goals and positions India at the forefront of multivalent-ion battery research. With further refinement, this next-generation energy-storage technology could soon find its way into everyday devices and safer, greener transportation. 

Published on September 22, 2025




When AI learns to look good…not necessarily be good


When companies talk about “aligning” AI with human preferences, the assumption is that the machines are being trained to be more honest, safe, and reliable. But new research suggests that alignment may be rewarding something else entirely: polish.

A paper titled The Anatomy of Alignment: Decomposing Preference Optimization by Steering Sparse Features (Ferrao et al., 2025) introduces a new alignment method called Feature Steering with Reinforcement Learning (FSRL). Beyond being a clever technical innovation, it reveals an awkward truth: when we reward AI, it learns to look good, not necessarily to be good.

Five takeaways

Alignment isn’t always about honesty. The researchers found that when models were trained with FSRL on human preference data, they systematically boosted features linked to style and formatting — punctuation, conciseness, neat structure — while dialling down features tied to honesty, safety, and ethics.

“The policy systematically increases the proportional activation of features related to style and formatting, while decreasing that of features explicitly tied to alignment concepts,” the authors note.

For businesses, this is a reminder that alignment can produce assistants that sound sharp and professional but may not always be more truthful.

Transparent methods are emerging. Traditional alignment through RLHF (reinforcement learning from human feedback) adjusts millions of parameters in opaque ways. Nobody can tell which levers are being pulled.

FSRL takes a more interpretable route. It uses Sparse Autoencoders (SAEs) to break down a model’s internal activations into meaningful “features” — like flattery, caution, or formatting — and then trains a lightweight adapter to nudge those features up or down.

Think of it as a control panel instead of a black box. Businesses deploying AI at scale could benefit from that visibility: knowing whether the “verbosity dial” is being cranked too far is better than guessing why customers are getting long-winded answers.
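
Implementation details aside, the core mechanism the paper describes (decompose an activation into features, nudge selected features, recompose) can be sketched in toy form. Everything below is made up for illustration: the weights are random rather than trained, the feature sizes are tiny, and the “formatting” and “flattery” labels are hypothetical indices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_feat = 16, 64  # toy sizes; real SAEs are far larger

# Toy sparse autoencoder weights (random here; trained in practice).
W_enc = rng.normal(size=(d_model, d_feat))
W_dec = rng.normal(size=(d_feat, d_model))

def steer(activation, steering):
    """Decompose an activation into SAE features, nudge them by the
    steering vector, and decode back into model space."""
    feats = np.maximum(activation @ W_enc, 0.0)  # ReLU feature activations
    feats = feats + steering                     # adapter's per-feature nudge
    return feats @ W_dec                         # back to model space

activation = rng.normal(size=d_model)
steering = np.zeros(d_feat)
steering[3] = 0.5    # boost a "formatting" feature (hypothetical index)
steering[10] = -0.5  # dampen a "flattery" feature (hypothetical index)

steered = steer(activation, steering)
print(steered.shape)  # (16,)
```

In FSRL proper, the steering vector comes from a lightweight adapter trained with reinforcement learning against a preference reward, while the base model and the SAE both stay frozen.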

Trade-offs are unavoidable. In benchmark tests, the researchers compared FSRL with traditional full fine-tuning using Simple Preference Optimisation (SimPO).

Fine-tuned models improved alignment scores but saw reasoning collapse; performance on math reasoning tasks dropped dramatically.

FSRL-steered models achieved more moderate improvements but preserved much more of the model’s reasoning ability.

For operations, this highlights a trade-off: go too far in fine-tuning for “aligned” behaviour, and you may hollow out critical skills. Lightweight steering methods might give businesses the middle ground they need.

Operational benefits are clear. FSRL isn’t just more transparent; it’s also cheaper and faster. Instead of retraining entire models, you train small adapters. This lowers compute costs and allows for domain-specific steering.

A financial services firm could emphasise caution, a law firm precision, and a retailer conciseness, without destabilising the model’s core reasoning capabilities. In practice, this means alignment can become a more customisable business tool, not a one-size-fits-all process.

For regulators and auditors, transparency is key. Traditional RLHF methods give little insight into how alignment is achieved. With FSRL, organisations can literally see whether features corresponding to “flattery” or “avoidance” are being systematically promoted.

That could make AI oversight more like crash tests in cars or stress tests in banks — visible, measurable, and comparable. But the research also highlights a cultural weakness: if human raters reward style as a proxy for substance, then models will optimise for appearances. Businesses must ensure that feedback data reflects the qualities they truly value — honesty, nuance, and safety — not just the ones that look good on the surface.

The bottom line

The Anatomy of Alignment is both a tool and a warning. The tool — FSRL — shows that AI alignment can be done more transparently and cheaply. The warning is that unless businesses demand richer signals of quality, AI will keep learning the same shallow lesson: presentation is everything.

For executives thinking about deploying AI, the message is clear: don’t just ask whether the model sounds aligned. Ask what’s being rewarded under the hood. Because looking good, as every business leader knows, is not the same as being good.

Published on September 22, 2025



