Mind-reading tech


Australian media recently reported on an interesting device that can help paralysed persons “regain connection with the world through text, email, shopping and banking online.”

The matchstick-sized implant, called Stentrode, is placed in a blood vessel near the motor cortex of the brain. Once deployed, it self-expands and stays in position. It senses electrical activity from nearby neurons, particularly signals associated with intended movement. These signals are transmitted via a thin wire to a small device implanted in the chest, which then sends them wirelessly to an external computer. There, specialised software decodes the patterns and converts them into digital commands — such as moving a cursor, clicking or typing. In effect, the system — developed by a company co-founded by Prof Thomas Oxley of the University of Melbourne — translates the brain’s intention, enabling interaction without physical movement.
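The decoding step can be pictured with a toy sketch. The decoder below is purely illustrative — Stentrode’s actual software, its feature set and its command vocabulary are all assumptions here — but it shows the shape of the problem: incoming neural activity is reduced to a feature vector, and a trained mapping turns each vector into a digital command.

```python
# Hypothetical nearest-centroid decoder for a BCI pipeline (a sketch,
# not Stentrode's proprietary software). Neural signals arrive as
# feature vectors (e.g. band power per electrode); a decoder trained
# during calibration maps each vector to a command.

from math import dist

# Hypothetical calibration data: mean feature vector per intended action
CENTROIDS = {
    "click":       [0.9, 0.1],
    "move_cursor": [0.1, 0.8],
    "rest":        [0.1, 0.1],
}

def decode(features):
    """Pick the action whose calibration centroid is closest
    to the incoming feature vector."""
    return min(CENTROIDS, key=lambda a: dist(features, CENTROIDS[a]))

print(decode([0.85, 0.15]))  # a click-like pattern → "click"
```

Real systems use far richer features and learned classifiers, but the calibrate-then-decode structure is the same.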

This development has brought renewed attention to an emerging field known as brain-computer interfaces (BCI). We are familiar with EEG, which reads brain activity through external sensors. Researchers like Oxley are taking the technology much deeper. Most current work on BCI is focused on medicine, including operating wheelchairs and other assistive devices.

Looking ahead, the possibilities are striking — especially when combined with artificial intelligence. One can imagine, for instance, switching off a device at home by thought alone or accessing data directly from a computer.

Inevitably, there are ethical questions. As Jackson Tyler Boonstra, a postdoctoral researcher at Vrije Universiteit Amsterdam, notes in a recent paper, “While BCIs hold transformative potential for treating neurological disorders, their premature translation into consumer markets risks outpacing neuroscientific understanding and ethical frameworks.”


Published on March 23, 2026




No exam is too hard for AI?


Someone, at some point, perhaps as a joke, decided to call it “Humanity’s last exam”, or HLE. By the time it was published in Nature in January, its designers had already announced a replacement. The replacement is updated continuously. It has to be.

It is a benchmark of 2,500 questions assembled by nearly 1,000 experts from 500 institutions across 50 countries, introduced by researchers at the Center for AI Safety and Scale AI. At launch, the best AI models scored under 10 per cent. The live leader board now shows 38.3 per cent. The trajectory, more than the absolute score, is the point.

The benchmark was designed in response to a failure in existing tools for measuring AI capability. Frontier models had already exceeded 90 per cent accuracy on MMLU (massive multitask language understanding), once considered a serious challenge. When all the best systems can clear it, the measuring instrument does not tell you much about the differences between them, or where they may be the following year.

HLE’s design logic was deliberately adversarial. Questions were submitted by experts across more than a hundred disciplines. Before entering the dataset, each had to defeat all current frontier models and pass two rounds of expert review. The result was a set of questions that, by construction, no existing AI could reliably answer.

The evaluation results confirmed the difficulty. At launch, GPT-4o scored 2.7 per cent, Claude 3.5 Sonnet 4.1 per cent, and OpenAI’s o1 8 per cent. DeepSeek-R1 reached 8.5 per cent. These numbers were low by design. The speed of subsequent progress was not fully anticipated: GPT-5 scored 25.3 per cent and Gemini 2.5 Pro reached 21.6 per cent. Scores have continued to climb; the live leader board now shows Gemini 3 Pro at 38.3 per cent.

Calibration measures whether a model’s stated confidence matches its accuracy. On HLE, calibration errors across models ranged from 50 per cent to 89 per cent: models routinely expressed high confidence while being wrong. This is not a quirk of one system. It holds across architectures, suggesting a structural feature of current AI design.
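The idea behind the calibration figure can be shown with a minimal sketch. HLE reports RMS calibration error computed over confidence bins; the single-bin version below is a simplification, and the sample numbers are invented, but it captures the gist: compare average stated confidence with actual accuracy.

```python
# Minimal sketch of calibration error: the gap between a model's
# average stated confidence and its actual accuracy. (HLE uses RMS
# calibration error over confidence bins; this one-bin version is a
# simplification with invented numbers.)

def calibration_gap(confidences, correct):
    """Absolute gap between mean confidence and accuracy, in points."""
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return abs(mean_conf - accuracy) * 100

# A model that is 90% confident on average but right only 25% of the
# time has a 65-point calibration gap — "confident wrongness".
gap = calibration_gap([0.95, 0.9, 0.85, 0.9], [1, 0, 0, 0])
print(round(gap))  # → 65
```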

Accuracy improved with more reasoning compute, but only to a point. Beyond roughly 16,000 output tokens, performance declined.

Dynamic testing

The paper reports an expert disagreement rate of 15.4 per cent, rising to 18 per cent in biology, chemistry and health. Nearly one in six questions could not be answered consistently even by specialists. The benchmark is harder than any existing AI can reliably handle — but parts of it are also too hard for any single human expert.

The authors draw one boundary clearly: High performance on HLE would not constitute evidence of artificial general intelligence. It would demonstrate expert-level performance on closed-ended academic questions. That distinction is often lost in public commentary.

There is, however, a deeper problem. Every instrument built to measure AI capability has so far become a ceiling that models eventually reach. MMLU took years to saturate; HLE is showing pressure within months. The designers acknowledge this by announcing HLE-Rolling, a dynamically updated version intended to stay ahead of the models.

Once a benchmark is published, it becomes a target for developers and for the optimisation logic by which models are compared and sold. The instrument and the thing it measures cease to be independent. No static benchmark can escape this. The inability to build a stable yardstick suggests that capability is moving faster than the ability to define what is being measured.

Business decisions

For businesses, this has three consequences. Investment and procurement decisions across sectors are being made on capability claims built, directly or indirectly, on benchmark performance. If the benchmarks are structurally unstable, so are those claims.

The calibration finding compounds this. A model that cannot signal its own uncertainty is unsuitable for deployment in contexts where errors compound, including credit assessment, medical triage and document-intensive knowledge work. Confident wrongness is operationally worse than uncertain wrongness because it removes the incentive for a human check.

The pace of improvement also means that any organisation’s current understanding of AI capability has a short shelf-life. Planning and regulatory assumptions require revision cycles that most institutions are not designed to support.

Scores are rising fast enough that it may soon not be possible to decide, in any meaningful way, which model is leading. At the margins of a benchmark this hard, differences between the best systems will fall within statistical noise. The yardstick will not so much have been beaten as dissolved.

Published on March 23, 2026




Defence research stays underfunded


Data from the Ministry of Defence suggests that India’s spend on defence research has grown far less than headline numbers indicate.

The Defence Research and Development Organisation (DRDO) spent ₹13,258 crore on R&D in 2014-15. By 2024-25, this had risen 87 per cent to ₹24,793 crore, which, at first glance, appears to signal a steady expansion of research effort. But this is in nominal terms. Adjusted for economy-wide inflation, using the GDP deflator, the real annual growth is only 1.5–2 per cent.
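The deflator adjustment is simple compound-growth arithmetic. The sketch below reproduces the order of magnitude of the article’s “1.5–2 per cent” figure; the ~4.5 per cent average GDP-deflator inflation is an illustrative assumption, not a number from the article.

```python
# Sketch of the deflator adjustment behind the "1.5-2 per cent real
# growth" figure. The 4.5 per cent average GDP-deflator inflation is
# an illustrative assumption, not a figure from the article.

def real_cagr(start, end, years, inflation):
    """Inflation-adjusted compound annual growth rate."""
    nominal = (end / start) ** (1 / years) - 1
    return (1 + nominal) / (1 + inflation) - 1

# DRDO R&D spend: Rs 13,258 crore (2014-15) -> Rs 24,793 crore (2024-25)
growth = real_cagr(13_258, 24_793, 10, 0.045)
print(f"{growth:.1%}")  # roughly 1.9% a year in real terms
```

An 87 per cent rise over a decade is only about 6.5 per cent a year nominally; subtract economy-wide inflation and very little real growth remains.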

The picture becomes starker when looking at how this money is spent. The portion devoted to specific projects and programmes — missile systems, aircraft engines, radars and the like — has moved from ₹3,770 crore in 2014–15 to about ₹5,900 crore in 2025–26. After adjusting for inflation, this implies virtually no growth in real terms over the period.

This also suggests a shift in composition: While overall spending has increased, the share going to R&D projects has not kept pace. A growing proportion of the budget appears to be absorbed by establishment costs rather than programme funding.

Budget trends reinforce this concern. In 2025–26, the allocation for defence R&D was raised by 12 per cent, from ₹23,855 crore to ₹26,817 crore. Yet the revised estimate came in slightly lower, at ₹26,747 crore, indicating that even the allocated funds were not fully utilised.

The increase over the previous year’s revised estimate was about 8 per cent, and spending on projects and programmes grew only about 9 per cent in 2024–25. The pattern raises a broader question: Is higher allocation translating into more research?

The Budget for 2026-27 provides ₹29,100 crore for DRDO, a 10 per cent increase year-on-year. Whether this results in a meaningful increase in programme spending — or is again absorbed by costs — remains to be seen.

A wider pattern

This scenario is not unique to defence research spending.

Across six key science departments — science and technology, biotechnology, scientific and industrial research, space, atomic energy, and Earth sciences — government R&D spending rose from ₹28,014 crore in 2020–21 to ₹39,057 crore in 2024–25. That translates to a nominal annual growth rate of about 8.8 per cent. Adjusted for inflation, however, the real growth rate is only 2–3 per cent a year.

The pattern is consistent: headline increases mask modest real expansion.

India’s gross expenditure on research and development (GERD) has remained stuck at 0.6–0.7 per cent of the GDP for over a decade. This is not merely because the GDP has grown rapidly, but also because R&D spending has struggled to grow meaningfully in real terms.

Efficiency factor

It can be argued that limited resources have been used efficiently. India’s rank in the Global Innovation Index has improved significantly over the past decade, and government-backed schemes in areas such as biotechnology have helped start-ups raise follow-on funding and generate intellectual property.

But these are, at best, partial indicators. Improvements in innovation rankings reflect a broad set of factors, and start-up success in specific sectors is no substitute for sustained investment in core research capabilities. The more fundamental question remains unanswered: What might outcomes look like if real R&D spending grew faster?

For a country seeking technological self-reliance, particularly in sectors such as defence, flat real spending and stagnant programme outlays are not neutral outcomes. They imply a slower build-up of capabilities, regardless of improvements in efficiency at the margins. India may indeed be getting more value out of each rupee. But over the past decade, it has not been putting significantly more real resources into research.


Published on March 23, 2026




Micro attacks on sewer lines


Recent studies by scientists at IIT-Madras and the University of Cape Town, South Africa, have demonstrated that corrosion of sewers is driven not by the overall chemistry of wastewater, but by the microscopic zones on concrete surfaces where bacteria generate highly concentrated sulphuric acid. While bulk measurements may indicate only mild acidity, the actual corrosion occurs in tiny pockets with extremely low pH.

The scientists — Piyush Chaunsali and Tom Damion of IIT-Madras, and Alice Bakera and Mark Alexander of the University of Cape Town — explain why sewer systems, especially concrete ones, corrode more severely than expected. The main culprit is not just chemicals, but also microbial activity. Bacteria in sewage produce hydrogen sulphide gas, which is then converted by other bacteria into sulphuric acid. This acid attacks the concrete, causing major structural damage — accounting for a large share of sewer failures.

A key puzzle addressed in the study is this: real sewer measurements show moderate acidity (around pH 4), yet the kind of damage observed requires extremely strong acid (around pH 1). The researchers resolve this by showing that at the surface, where corrosion actually occurs, the acid is indeed much stronger — but gets quickly neutralised, making it hard to detect.
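The gap between pH 4 and pH 1 is larger than it looks, because the pH scale is logarithmic: each unit is a tenfold change in hydrogen-ion concentration. A one-line calculation makes the point.

```python
# The pH scale is logarithmic: pH = -log10 of the hydrogen-ion
# concentration, so each pH unit is a tenfold change. The gap between
# the bulk reading (pH ~4) and the corrosive surface pockets (pH ~1)
# is therefore three orders of magnitude.

def h_ion_ratio(ph_bulk, ph_surface):
    """How many times more concentrated H+ is at the surface."""
    return 10 ** (ph_bulk - ph_surface)

print(h_ion_ratio(4, 1))  # → 1000: surface acid is 1,000× stronger
```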

By combining lab experiments, modelling and field data, the study clarifies how this hidden, highly acidic micro-environment forms — helping anticipate infrastructure damage and design improved mitigation strategies.

A key takeaway from the research is that conventional approaches, such as treating the sewage or monitoring average pH, are not good enough. The implication is that solutions must focus on the surface environment: Controlling the bacteria that produce acid, reducing hydrogen sulphide formation and using corrosion-resistant materials or protective coatings.

3D feed for keyhole surgery

Laparoscopic or ‘keyhole’ surgery is increasingly preferred because it reduces pain and speeds up recovery. However, surgeons must operate using 2D video feeds, relying heavily on experience to judge depth. While advanced systems offer 3D visualisation, they are expensive and limited to top hospitals.

Researchers from IIT-Bombay and IIT-Goa have developed a cost-effective alternative — a software technique that reconstructs 3D information from a standard 2D video feed, without requiring specialised sensors or heavy computing. Using principles of geometry, the system tracks surgical instruments by analysing the changes in their shape, size and angles across video frames. As tools move or rotate, their projected appearance shifts in predictable ways, allowing the algorithm to estimate depth and orientation.
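The underlying geometric idea can be sketched with the simplest case: under a pinhole-camera model, an object’s apparent size in pixels shrinks in proportion to its distance, so a tool of known physical size yields a depth estimate from a single frame. This is a toy illustration of the principle, not the IIT-Bombay/IIT-Goa algorithm, which tracks shape, size and angle changes across frames.

```python
# Toy depth-from-apparent-size estimate under a pinhole-camera model
# (an illustration of the geometric principle, not the researchers'
# actual multi-frame algorithm). Projection: w = f * W / Z, so
# Z = f * W / w for known tool width W and focal length f in pixels.

def estimate_depth(real_width_mm, pixel_width, focal_px):
    """Depth of a tool of known width from its apparent pixel width."""
    return focal_px * real_width_mm / pixel_width

# A 10 mm instrument tip imaged at 50 px by a camera with f = 800 px:
print(estimate_depth(10, 50, 800))  # → 160.0 mm from the camera
```

As the tool moves closer, its pixel width grows and the estimated depth falls — the predictable shift the article describes.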

The method achieves high accuracy — within about 1 mm — and runs in real time on a standard computer. It promises to make 3D visualisation more accessible and improve surgical training and assistance systems, especially in smaller centres.


Published on March 23, 2026




Turning the ubiquitous optical fibre into a sensor


Optical fibre cables have traditionally been used for high-speed telecommunications. But they have another valuable use in industry — as continuous sensors.

Unlike ‘point sensors’, which measure data at specific locations, continuous sensors turn the entire length of fibre into a sensor capable of monitoring conditions across tens of kilometres in real time.

Folium Sensing co-founder Prof. Balaji Srinivasan traces the science’s roots to the 1980s. At the time, fibre was used to transport data as it minimised data loss and offered high bandwidth. “Whatever is bad for communications — like scattering of light, which causes attenuation (loss) in the fibre as well as external perturbations — turned out to be good for sensing,” he says.

While external vibrations or temperature changes may affect the efficiency of data transmission, they serve as the very signals needed for monitoring. Any existing telecom fibre can be converted into a distributed sensor, equivalent to numerous thermometers or microphones, simply by connecting an instrument to one of its ends.

The instrument sends laser pulses through the fibre and analyses the returning light — much like an echo. This is ‘backscattering’. Folium uses three types of scattering to detect various problems: Rayleigh scattering detects vibrations and acoustics in instances of intrusion or pipeline leaks; Raman scattering is sensitive to temperature for fire detection and power cable monitoring; and Brillouin scattering measures stress or deformation in structures such as bridges and tunnels.
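Locating an event along the fibre works like ranging with an echo: the interrogator times how long the backscattered light takes to return. Light travels at c/n in glass, and the pulse makes a round trip, hence the factor of two. The refractive index below is a typical value for silica fibre, assumed for illustration.

```python
# Time-of-flight localisation in backscatter sensing: distance to the
# scattering event is (c/n) * t / 2, where t is the echo delay and the
# factor of 2 accounts for the round trip. n = 1.468 is a typical
# refractive index for silica fibre, assumed here for illustration.

C = 299_792_458        # speed of light in vacuum, m/s
N_GLASS = 1.468        # typical refractive index of silica fibre

def event_distance_km(round_trip_s):
    """Distance to the scattering event for a given echo delay."""
    return (C / N_GLASS) * round_trip_s / 2 / 1000

# An echo arriving 0.5 ms after the pulse was launched:
print(round(event_distance_km(0.5e-3), 1))  # ≈ 51.1 km down the fibre
```

This timing is what lets a single instrument at one end of the fibre resolve events anywhere along tens of kilometres.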

AI/ML models sift through the data to differentiate harmless background noise from critical events, Srinivasan explains. Using the analogy of locating a friend’s voice in a crowd, he adds: “Once trained to an acoustic signature, you can pick it up in the presence of other acoustic signatures.”

Folium says the oil and gas industry could benefit the most from the technology — existing fibres can offer round-the-clock monitoring for leaks and illegal excavations. IIT-Madras has built a 100-metre buried pipeline test bed, one of the largest in the world, to simulate various threats. “I can tell the difference between a leak in a pipe and somebody using a jackhammer in the vicinity; whether it’s manual digging or an earthmover,” he says. This removes the need for workers to patrol 50-km stretches on motorbikes daily.

In defence and security, Folium’s technology acts as a quiet ‘sentry’. Srinivasan points out that border areas already have underground fibre cables for surveillance cameras, but intruders can exploit ‘blind’ spots between cameras. Folium’s box can monitor across lengths not possible before.

Unlike cameras or electronic sensors that require power sources — a challenge in remote locations — Folium’s sensors rely on light pulses from a central control room, where power is already available.

Other important uses include monitoring railway tracks for breaks, detecting train movement, and identifying trespassing, including animal crossing; and checking for temperature changes in overhead and underground power cables to prevent degradation of wires or insulation.

In aerospace, Srinivasan has patented a method for monitoring ‘combustion instability’ in jet engines. By tracking the ‘dancing of the flame’ and the acoustics of the combustion chamber, the system can warn pilots before an engine flame-out occurs.

Typically, a single system can monitor up to 100 km, but this can be extended using amplifiers. The world record for such sensing is over 2,200 km. Customers also prefer installing a system every 50 km to maintain the signal and reduce uncertainty.

Folium Sensing is currently engaging with about a dozen potential customers and has a few orders, Srinivasan says.

“The idea resonates well because it provides high-accuracy, real-time intelligence for critical infrastructure without complex field hardware” or having to relay fibre cables.


Published on March 23, 2026




The PRAGYA tokamak


If the headline sounds like the title of a Robert Ludlum novel, the story behind it is just as exciting.

Tokamaks are doughnut-like devices that are the core of nuclear fusion reactors. The ring-shaped vacuum chamber magnetically confines plasma (hot gas of free electrons, which are negatively charged, and positively charged ions). The particles, whose pressure is balanced by the external magnetic field, can only spiral inside the ring, rather than fly off; in the ensuing traffic jam, some collide, fuse and produce energy.

Tokamaks are typically very large — for example, the one used by the International Thermonuclear Experimental Reactor (ITER) in France has a radius of 6.2 m.

Now, a Bengaluru-based startup, Pranos Fusion, has developed a tiny tokamak device with a radius of just 40 cm. It has called it PRAGYA.

PRAGYA is India’s first privately developed tokamak and also the smallest. The country has three more (under the Institute for Plasma Research, Gandhinagar) — ADITYA-1, ADITYA-U and SST-1, which, incidentally, is special as it uses superconducting magnets.

Pranos Fusion raised $417,000 in May 2025 from Rahul Seth, an angel investor. That money has been put to good use.

PRAGYA is essentially a test bed. It is not a breakthrough in fusion physics but, nevertheless, a significant milestone because multiple tests (and training) can be done on it, which can lead to a breakthrough.

“This is a small, compact tokamak designed as a precursor to a larger tokamak, with scientific exploration and development of critical human resources as a core objective,” says a paper produced jointly by scientists at Pranos, Jawaharlal Nehru Centre for Advanced Scientific Research (Bengaluru), and the Indian Institute of Science (Bengaluru).

The paper mentions investigations into ‘magnetohydrodynamic stability of plasma’, superconducting magnets and auxiliary heating, among the tests possible on PRAGYA.

Pranos is one of the three private Indian companies working on fusion energy, which is generally considered a big-bucks game. Hyderabad-based Hylenr and Anubal Fusion of Bengaluru are the other two. All three recently raised funds, which indicates investors are willing to bet on fusion energy.


Published on March 23, 2026



