Adding new India-specific dimensions to obesity diagnosis

For decades, body mass index (BMI) has been the primary tool for diagnosing obesity worldwide. BMI is calculated by dividing a person’s weight (in kilograms) by the square of their height (in metres).

BMI is often used as a quick screening tool to categorise individuals as underweight, healthy weight, overweight, or obese.
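
The arithmetic is simple enough to check by hand. Below is a minimal Python sketch of the calculation; the example weight and height are illustrative values, not figures from the article.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

# Illustrative example: a person weighing 70 kg and standing 1.70 m tall
# has a BMI of about 24.2 kg/m².
print(round(bmi(70, 1.70), 1))
```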

However, research shows that Indians develop type-2 diabetes and other obesity-related conditions at lower BMI levels than Western populations.

Now, some Indian doctors have suggested new guidelines to classify and treat obesity. They have added important new dimensions to diagnosis, including waist size, ratio of waist to height, daily physical limitations, and related health problems. This approach better addresses the unique health risks that Indians face even at lower weights, compared with Western populations.

“All patients with BMI above 23 kg/m² are at risk for type-2 diabetes due to excess body fat,” says Dr Anoop Misra, Chairman, Fortis-CDOC Center of Excellence for Diabetes, Metabolic Diseases and Endocrinology. “But BMI alone doesn’t always reflect true risk. Conditions beyond BMI (abdominal obesity, obesity-related mechanical dysfunction and related diseases) are important to classify obesity in stages,” he adds.

In a recent paper, which examines the applicability of the globally accepted BMI definition to India, Dr Misra and colleagues have suggested new guidelines to make obesity management more rational.

The new classification system begins with ‘stage 1 obesity’, defined as BMI above 23 kg/m² without noticeable effects on organ functions or limitations in daily activities.

Patients at this stage have normal blood glucose, blood pressure, and lipid profiles with no cardiovascular issues. The focus for these patients is implementing dietary and exercise interventions aimed at weight reduction to prevent the onset of obesity-associated diseases.

Stage 2 obesity represents a more advanced condition, characterised by increased fat, both general and abdominal. At this stage, patients experience limitations in day-to-day activities and have co-morbid diseases.

According to the proposed guidelines, stage 2 obesity is a BMI exceeding 23 kg/m² along with at least one of the following criteria: excess waist circumference (equal to or more than 90 cm for men, 80 cm for women) or an excess waist-to-height ratio (more than 0.5).

Additionally, patients must have symptoms indicating limitations in daily activities, such as shortness of breath or joint pain, or obesity-related conditions such as type-2 diabetes.
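
Taken together, the proposed criteria amount to a simple decision rule. The sketch below is only an illustration of the article’s summary, with hypothetical function and parameter names; it is not a clinical tool or the authors’ own algorithm.

```python
def proposed_stage(bmi: float, waist_cm: float, height_cm: float,
                   sex: str, limitations_or_comorbidity: bool) -> str:
    """Illustrative staging per the criteria summarised above: stage 1 is a BMI
    above 23 kg/m² alone; stage 2 additionally requires excess waist size
    (>= 90 cm for men, >= 80 cm for women) or a waist-to-height ratio above 0.5,
    plus limitations in daily activities or an obesity-related disease."""
    if bmi <= 23:
        return "below the proposed obesity cut-off"
    abdominal = waist_cm >= (90 if sex == "male" else 80) or (waist_cm / height_cm) > 0.5
    if abdominal and limitations_or_comorbidity:
        return "stage 2 obesity"
    return "stage 1 obesity"

# Example: a man with BMI 26, a 94 cm waist, 172 cm height and joint pain
print(proposed_stage(26.0, 94, 172, "male", True))  # stage 2 obesity
```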

The importance of assessing waist circumference is underlined by Dr R Bhuvaneshwari, a primary care physician with over 40 years of experience. “Fat cells in the abdominal wall seem to cause insulin resistance,” she says. She has observed that abdominal fat correlates not only with type-2 diabetes but also with reproductive disorders.

Varying treatments

The suggested guidelines recommend a nuanced approach to treating obesity.

For instance, in stage 1, lifestyle interventions include nutrition therapy, physical activity, and behavioural changes. These are typically sufficient. Pharmacotherapy may be considered for those at risk of progressing to stage 2. This will include patients experiencing substantial weight gain despite lifestyle measures, or with a BMI equal to or more than 27.5 kg/m².

For stage 2 obesity, patients often need stricter lifestyle changes, such as tailored dietary plans and regular physical activity. They may also require medical interventions like weight-loss medications. Surgery is also considered for a smaller subset of patients with co-morbidities or uncontrolled diabetes.

By moving beyond simple BMI measurements, the new guidelines view obesity as a complex condition requiring individualised treatment approaches rather than a simple weight classification.

(S Yasaswini is a writer based in Guwahati)






Reviving the Modi script

Few outside Maharashtra may have heard of the Modi script, which was used in administrative and legal documents between the 13th and early-20th centuries. About 40 million handwritten documents are scattered across places, including government archives, the Institute of Oriental Studies, and the Bharat Itihas Sansodhan Mandal, Pune.

At first glance, the writing in them appears to be gibberish. But who knows what knowledge resides within them?

Three researchers from the College of Engineering, Pune — Harshal Kausadikar, Tanvi Kale and Onkar Susladkar — and one from IIT-Roorkee — Sparsh Mittal — decided to find a way to decipher them. They built a computer program, which they call MoScNet, a novel vision-language model (VLM) framework for transliterating Modi script images into Devanagari text. “MoScNet leverages ‘knowledge distillation’, where a ‘student model’ learns from a ‘teacher model’ to enhance transliteration performance,” the authors say in their yet-to-be-peer-reviewed paper.
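
The paper describes MoScNet only in outline, but the knowledge-distillation idea it builds on is standard. The following Python (PyTorch) sketch shows a generic distillation loss in which a student model is pulled towards a frozen teacher’s softened predictions while also fitting the ground-truth Devanagari tokens; it is a minimal illustration of the technique under those assumptions, not MoScNet’s actual loss or architecture.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      targets: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Generic knowledge-distillation objective (illustrative, not MoScNet's)."""
    # Softened teacher distribution and student log-distribution
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term pulls the student towards the teacher; scaled by T^2 as is conventional
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Cross-entropy against the ground-truth Devanagari token labels
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1 - alpha) * ce
```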

With this, the researchers have built a collection of 2,043 images of Modi script documents with their transliterations in Devanagari. “Our work is the first to perform direct transliteration from the handwritten Modi script to the Devanagari script,” their paper says.

This was not easy because the Modi script changed six times during its lifetime. There is an era-wise classification of the script, such as Aadyakalin, Shivakalin, and so on. Further, the script’s cursive nature, differing writing styles, and issues like angular strokes, broken lines and blurring make the task of deciphering them even more difficult. However, it would be interesting to learn what the documents say because the texts deal with medieval sciences, medicine, land records and Indian history. Now, we have a tool to bring to life the 40 million documents that speak to us from the distant past.






Game theory and the China-India water conflict

Over the past two decades, China and India have engaged in a cycle of limited cooperation and recurring disputes over the management of the Brahmaputra river. The paper ‘Game theoretical analysis of China-India interactions in the Brahmaputra river basin’ by Anamika Barua, Tanushree Baruah and Sumit Vij reveals a pattern of strategic calculations, whereby trust deficits and unilateral actions have maintained a fragile status quo rather than fostering cooperation.

Game theory, a mathematical framework used to analyse strategic interactions, helps explain why China and India have struggled to move beyond minimal cooperation.

Prisoner’s dilemma

The first step towards cooperation was taken in 2002, when both countries signed an MoU on hydrological data sharing. This agreement was triggered after the bursting of glacial lakes in Tibet led to flash floods in India’s northeastern states in 2000. India accused China of failing to provide advance warning and, in response, China agreed to share water level, discharge and rainfall data from three hydrological stations in Tibet.

In game theory, this agreement fits the ‘prisoner’s dilemma’ model, where both players could have benefited from sustained cooperation but had strong incentives to prioritise unilateral advantages. China, as the upstream country, had a dominant strategic position while India, as the downstream country, was vulnerable to flood risks and water flow variations.
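
A stylised payoff matrix makes the dilemma concrete. The numbers below are illustrative assumptions chosen only to reproduce the prisoner’s-dilemma structure; they are not taken from the paper.

```python
# Payoff tuples are (China, India); higher is better.
C, D = "cooperate", "defect"
PAYOFFS = {
    (C, C): (3, 3),   # sustained data sharing benefits both
    (C, D): (1, 4),   # one side free-rides on the other's cooperation
    (D, C): (4, 1),
    (D, D): (2, 2),   # unilateral actions, fragile status quo
}

def best_response(player: str, opponent_move: str) -> str:
    """Move that maximises the given player's payoff against a fixed opponent move."""
    idx = 0 if player == "china" else 1
    def payoff(move: str) -> int:
        pair = (move, opponent_move) if player == "china" else (opponent_move, move)
        return PAYOFFS[pair][idx]
    return max((C, D), key=payoff)

# Defection dominates for both players, even though mutual cooperation (3, 3)
# is better for both than mutual defection (2, 2).
for opp in (C, D):
    print("Best response when the opponent will", opp, "->", best_response("china", opp))
```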

In 2006, an expert-level mechanism (ELM) was introduced as a platform to facilitate discussions. This created an institutional setting for repeated games, where interactions between the two nations could build trust over time. However, for this approach to succeed, both players needed to perceive benefits from cooperation.

Shifting payoffs

When the MoU was renewed in 2008, China wanted India to start paying for the data, signalling that China was leveraging its upstream position to extract economic benefits. India had little choice but to accept the new terms.

By 2013, more concerns emerged. While China extended the data-sharing period, India questioned the relevance of the data since all three monitoring stations were in a rain-shadow region of Tibet, limiting their usefulness in flood forecasting. Also, border tensions escalated, particularly during the 2013 Depsang standoff, which further eroded trust between the two countries.

Game theory explains this phase as a shift from cooperation to a mixed-strategy equilibrium, where both nations engaged in partial cooperation while simultaneously hedging against each other’s actions in other strategic areas. The lack of trust and unresolved border disputes meant neither side was willing to expand cooperation beyond technical data-sharing agreements.

Defection

During the 73-day Doklam standoff in 2017, China suspended the sharing of hydrological data with India, citing “technical issues” at its monitoring stations, while continuing to provide data to Bangladesh. This fits into the tit-for-tat strategy in game theory, where one player retaliates against another’s defection with a countermove.

The temporary defection from the cooperative framework resulted in a reset of the game, restoring the power asymmetry that existed before 2002.

Chicken game

Following the 2020 Galwan Valley clash, China and India both adopted more aggressive policies regarding the Brahmaputra. In 2021, China announced plans to construct a massive 60 GW dam at the Great Bend of the Brahmaputra, which could potentially alter water flow to India. In response, India accelerated its own dam-building initiatives in Arunachal Pradesh.

This phase of interaction aligns with the ‘chicken game’ model, where both players escalate their actions, expecting the other to back down. If neither yields, the worst outcome — a major environmental or geopolitical crisis — becomes increasingly likely. The risk of uncoordinated dam-building in a seismically active region is high, with potential consequences for flooding, water shortages and long-term ecological damage. Yet, neither country is willing to compromise.
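
The chicken structure can be made explicit in the same way. Again, the payoff numbers below are illustrative assumptions rather than figures from the paper; the point is that the only pure-strategy equilibria are the lopsided outcomes where one side escalates and the other backs down, while mutual escalation is the worst result for both.

```python
# Payoff tuples are (China, India); higher is better.
E, Y = "escalate", "yield"
PAYOFFS = {
    (E, E): (-10, -10),  # uncoordinated mega-dams: crisis for both sides
    (E, Y): (3, -1),     # the escalating side gains leverage
    (Y, E): (-1, 3),
    (Y, Y): (1, 1),      # mutual restraint
}

def is_pure_nash(a: str, b: str) -> bool:
    """True if neither side can improve its payoff by unilaterally switching moves."""
    pay_a, pay_b = PAYOFFS[(a, b)]
    alt_a = PAYOFFS[(Y if a == E else E, b)][0]
    alt_b = PAYOFFS[(a, Y if b == E else E)][1]
    return pay_a >= alt_a and pay_b >= alt_b

for a in (E, Y):
    for b in (E, Y):
        label = "pure Nash equilibrium" if is_pure_nash(a, b) else ""
        print(f"({a}, {b}) -> {PAYOFFS[(a, b)]} {label}")
```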

Breaking deadlock

The expiry of the MoU in June 2023 left the future of China-India water cooperation uncertain. The game remains locked in a status quo equilibrium, where both nations maintain minimal cooperation but refuse to venture beyond past agreements. Without a fundamental shift in incentives, the Brahmaputra will continue to be managed through unilateral decisions rather than cooperative strategies.

Game theory suggests that breaking this deadlock would require a new equilibrium shift. Possible ways include establishing a binding river treaty that moves beyond data-sharing to include mechanisms for joint water management; using issue linkage, where cooperation on water is tied to economic or trade agreements to create mutual incentives; and implementing confidence-building measures, such as joint hydrological monitoring stations, to increase transparency.






Breakthrough promises to revolutionise ultrasound medical, industrial imaging

Researchers at IIT-Madras have announced a breakthrough in ultrasound imaging through the use of ‘metamaterials’. This development has the potential to both revolutionise medical imaging and improve non-destructive testing processes in industry.

X-rays produce sharp resolution; MRI scans are even sharper. (These days, doctors tend to rely more on MRI than X-ray because the latter is blind to some defects.) However, X-rays are not entirely safe, as they rely on ionising radiation, while MRI scans are expensive and not suitable for people with metal implants.

Ultrasound imaging, which uses ‘sound’ rather than ‘light’, is non-ionising, but its resolution is far poorer: in high-end applications, X-ray can achieve resolutions as fine as 0.5 micron, about 1,000 times sharper than ultrasound’s 0.5 mm.

Ultrasound’s resolution can be increased by using higher frequencies, but the imaging then becomes superficial rather than deep.

In a recent paper published in Nature, Prof Prabhu Rajagopal of the Centre for Non-destructive Evaluation, IIT-Madras, describes the problem thus: “Electromagnetic methods such as radiographic (X-ray) testing can achieve high resolution but with reduced penetration in solids; they typically involve ionising radiation while also being expensive, limiting wider field application. Ultrasound can be an effective alternative with better penetration of thicker samples while being cost-effective and non-ionising, thus allowing for the possibility of rapid and largescale online/in-situ material diagnostics. However, conventional (linear, bulk) ultrasound has limited applicability for imaging microscopic defect features due to the longer wavelengths involved. Techniques such as scanning acoustic microscopy (SAM) can offer better resolution for ultrasonics at elevated frequencies on the order of 100 MHz but are restricted to the sample surface. Thus, techniques for achieving very high-resolution imaging using low-frequency bulk linear ultrasonics could offer an elusive breakthrough for material diagnostics and imaging deeper inside solids.”

Special material

Now, Rajagopal and his team have come up with a new technique that improves the resolution of ultrasound by using metamaterials, which are specially engineered materials. The one Rajagopal uses for high-resolution ultrasound imaging is a meta lens — a tiny silicon block with hundreds of square channels drilled into it. This was done using a well-known technique called ‘deep reactive ion etching’, used in micro-fabrication mainly for micro electromechanical systems (MEMS). As ultrasound waves pass through these channels, they get amplified and are picked up by a laser-based receiver.

So, the architecture is simple: an ultrasonic transmitter, the sample, the meta lens and the laser-based receiver. The sample has to be kept in a water bath because ultrasound has a much smaller wavelength than audible sound; it is scattered by air particles and cannot propagate through air. (In medical applications, a gel is used in place of water.)

From the transmitter, the ultrasonic wave propagates through the water medium, passes through the object being imaged and emerges through the water-filled meta lens. It is picked up by the laser Doppler vibrometer.

At the heart of the setup is the meta lens, which is quite difficult to make.

Drilling perfectly square channels through the silicon material is itself daunting. Further, since the channels are so narrow, the capillary effect distorts the water level. To maintain an even water level, Rajagopal’s team made the insides of the tubes hydrophilic (water-attracting) through oxidation.

“It has been my quest for many years to bring ultrasound to the same range as X-ray using metamaterials,” Rajagopal, who was awarded the prestigious Shanti Swarup Bhatnagar award last year, told Quantum.

“This research by Dr Prabhu Rajagopal’s team showcases a breakthrough in ultrasonic inspection, achieving an unprecedented 50-micron resolution using commercially available low-frequency probes. Their innovative approach, which combines micro-fabricated metamaterial lenses with advanced signal processing, offers a powerful and cost-effective alternative to traditional radiation-based imaging techniques, with transformative potential across various industries,” says Dr David Fan, associate professor at the School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore.






ISRO completes docking-undocking mission 

When the Indian space agency, ISRO, made a ‘chaser’ spacecraft pursue and latch onto a ‘target’ spacecraft on January 16, achieving ‘docking’ for the first time in its history, its mission was still less than complete.

The completion was achieved today, when the chaser de-docked from the target.  

The de-docking was announced by the Union Science Minister, Jitendra Singh, in a post on X.

Complex process

One intuitively thinks of de-docking as a simple affair compared with the extremely complex docking, where the speeds and orientations of two spacecraft have to be matched to perfection. But experts note that de-docking, too, is not as easy as unhooking.

PV Venkitakrishnan, a former ISRO scientist who today teaches at IIT Madras, told businessline that de-docking is “controlled separation”. De-docking, too, is a highly complex process, he said.  

The process calls for a high level of precision in execution because, in the microgravity conditions of space, where there is no atmospheric drag, even minor forces can result in a collision. It involves precise, low-force separation, using springs or thrusters, to avoid unintended drift. Latches and hooks must be carefully and sequentially disengaged, remotely.

Also, there is typically a pressurised tunnel between the spacecraft, whose decompression must be carefully managed, though it is not clear whether ISRO’s spacecraft, SDX01 and SDX02, had this feature. ISRO has not released details of the de-docking. Finally, the two separated spacecraft must be manoeuvred into their separate, designated orbits.

ISRO has said that it intends to do more docking-undocking exercises, to gain mastery over the difficult task.  Learning docking is crucial for ISRO’s upcoming missions such as Chandrayaan-4, which is expected to bring back soil and rock samples, and Gaganyaan, the human space flight mission.  

More importantly, learning docking paves the way for refuelling in space, which extends a satellite’s life and, in turn, obviates the need for costly fresh launches. Venkitakrishnan observed that docking would serve many strategic and defence applications too.






Cheaper, safer LED

LEDs dominate the lighting industry today, but a number of emerging technologies promise further improvement.

For example, there is organic LED, which uses organic molecules to emit light, enabling thin, flexible and vibrant displays.

There is quantum dot LED, which uses tiny semiconductor particles called quantum dots for improved colour and brightness.

And then there is ‘micro LED’, which uses arrays of tiny LEDs to achieve higher brightness and better colour.

All of these, however, suffer from a drawback. They are expensive; moreover, QLEDs use toxic materials.

But there is one technology that combines the best of OLED and QLED, while remaining cost-effective — perovskite LED (PeLED).

However, perovskites are inherently unstable.

Now, researchers at the Centre for Nano and Soft Matter Sciences (CeNS), Bengaluru, have developed a method to improve the stability of PeLEDs by minimising anion migration, a key cause of colour instability and of heat and moisture sensitivity.

The team, led by Dr Pralay K Santra, has developed a method that uses cesium lead bromide nanocrystals to tackle the problem of instability.

Santra’s team used an ‘argon-oxygen plasma treatment’, a process that creates a protective barrier and prevents anion migration.

This breakthrough brings PeLEDs closer to real-world applications, paving the way for more efficient and durable optoelectronic devices, says a press release from the governmental Department of Science and Technology.

Fatigue-resistant alloy

Researchers have developed an innovative approach to designing fatigue-resistant multi-principal element alloys (MPEAs), opening new possibilities for their application and further exploration.

MPEAs are a novel class of materials, composed of multiple principal elements.

Traditionally, it has been believed that increasing strength through compositional modifications or the addition of brittle phases adversely affects fatigue life.

Challenging these notions, Dr Ankur Chauhan and his team from the Department of Materials Engineering, Indian Institute of Science (IISc), Bengaluru, systematically explored the role of two critical microstructural features in enhancing the low-cycle fatigue (LCF) performance of alloys in the ‘chromium, manganese, iron, cobalt, nickel’ system.

“By adjusting the Cr/Ni (chromium-nickel) ratio, they synthesised two single-phase face-centred cubic (FCC) MPEAs with distinct SFEs (stacking fault energy). The low-SFE alloy exhibited 10–20 per cent higher cyclic strength than the high-SFE alloy while maintaining a comparable fatigue life,” says a press release.

Additionally, the team developed a dual-phase alloy that demonstrated a 50–65 per cent increase in cyclic strength over the single-phase low-SFE alloy, while maintaining a similar fatigue life.

These findings provide a framework for designing both single-phase and dual-phase fatigue-resistant MPEAs, with implications for structural applications. By offering insights into deformation and damage mechanisms, this work advances the understanding of how SFE and secondary brittle phases influence the mechanical properties of MPEAs.





