AI implementation in healthcare requires tailored approaches: ISB Study


AI implementation in healthcare requires tailored approaches considering factors such as regional infrastructure differences and training programmes. 

According to a study by researchers at the Max Institute of Healthcare Management at the Indian School of Business, ‘Understanding Providers’ Attitude Toward AI in India’s Informal Health Care Sector’, 93.7 per cent of providers believed AI could improve TB diagnosis accuracy, but only 69.4 per cent were willing to adopt the technology. 

Sumeet Kumar, Assistant Professor, Information Systems, ISB, and lead author of the study, said: “The gap between belief in AI’s potential and willingness to adopt it suggests that technological superiority alone may not guarantee successful implementation.”

Regional differences and existing healthcare infrastructure play crucial roles in technology adoption, he added. 

The key findings include higher adoption readiness in Gujarat (73.4 per cent) than in Jharkhand (58.4 per cent), reflecting the impact of regional healthcare infrastructure development. Providers who were more confident in diagnosing TB showed greater willingness to adopt AI, and providers’ trust in local radiologists influenced AI adoption differently across regions.

The research suggests that successful AI implementation in healthcare requires tailored approaches that consider regional infrastructure differences, additional support and training programmes for healthcare providers, a focus on providers with limited access to diagnostic infrastructure, and cost implications.

The study surveyed 406 providers of Ayurveda, Yoga and Naturopathy, Unani, Siddha and Homeopathy, along with informal healthcare providers (collectively called AIPs), across Gujarat and Jharkhand. 

TB remains a global health crisis, claiming 1.5 million lives in 2020 alone, with India bearing a substantial burden. Early and accurate diagnosis is key to TB treatment, but accurate diagnostic tools such as molecular tests are expensive, difficult to access and challenging to maintain, an ISB release said. 






How to read burnt scrolls


On August 24, AD 79, Mount Vesuvius blew its top. The citizens of Pompeii and Herculaneum were caught off guard by the hellfire, and the cities perished under hot ash. But the killer ash also ended up entombing them, preserving them nearly intact.

Nearly 1,700 years later, in 1738, Herculaneum came to light when workers were digging a well. Charles of Bourbon, King of Naples (later Charles III of Spain), lost no time in ordering excavations. Ten years later, when Pompeii was unearthed, the city was found remarkably well preserved. 

Pompeii has since become the stuff of fantasy and folklore, inspiring books and movies; more importantly, it has served as a trove of knowledge, revealing much about the past. But it has also held several mysteries, one of which is a set of rolled, charred papyri. 

How on earth can one decipher what a charred papyrus says? With artificial intelligence. 

So there is now an open ‘Vesuvius Challenge’, offering prize money to those who can reveal the blackened words. Some have already won good money for uncovering a few words here and there. 

But how will AI “unroll” a rolled papyrus? We put the question to AI itself (ChatGPT) and it replied as follows: “The process typically starts with a high-resolution scan (like a CT scan) that produces a detailed 3D digital image of the scroll’s interior structure. Then, AI algorithms — often using techniques from computer vision and deep learning — analyse these images to identify the different layers of the scroll and the areas that contain text. Once the layers are segmented, mathematical models and geometric transformations are applied to “flatten” these curved surfaces into a 2D plane. This virtual unrolling preserves the text and structure without physically handling the fragile artifact.”
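
The quoted description can be made concrete with a toy geometric example. The Python sketch below is purely illustrative and is not part of the actual Vesuvius Challenge pipeline: it assumes the scroll’s cross-section follows a simple Archimedean spiral and “flattens” points on that surface by mapping each one to its arc length along the spiral, which is the same flattening idea the real CT-and-deep-learning pipeline applies to segmented scroll surfaces.

```python
# Toy "virtual unrolling": points on a rolled-up (spiral) surface are mapped
# to a flat plane using arc length along the spiral. Purely illustrative;
# real pipelines first segment surfaces out of CT volumes with learned models.
import numpy as np

A = 0.5  # spiral spacing parameter: cross-section assumed to follow r = A * theta

def arc_length(theta):
    """Closed-form arc length of the Archimedean spiral r = A*theta from 0."""
    return 0.5 * A * (theta * np.sqrt(1 + theta**2) + np.arcsinh(theta))

# Pretend these samples came from surface segmentation of a CT scan:
# theta runs along the winding, z runs along the scroll's height.
theta = np.linspace(0.1, 6 * np.pi, 600)   # three full turns of the winding
z = np.linspace(0.0, 10.0, 60)             # height of the scroll (arbitrary units)

# 3D coordinates of the rolled surface (kept only for reference).
x, y = A * theta * np.cos(theta), A * theta * np.sin(theta)

# "Unroll": each surface point (x, y, z) maps to (arc length, z) on a flat page.
u = arc_length(theta)                      # horizontal coordinate on the page
page_u, page_v = np.meshgrid(u, z)         # flattened 2D parameterisation

print(f"rolled surface sampled at {theta.size} x {z.size} points")
print(f"flattened page spans {u.max():.1f} units wide by {z.max():.1f} units high")
```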






Fake cells get real work done


Scientists have been working for decades to create synthetic cells that mimic key functions of biological cells. The challenge has been to make the artificial cells dynamic, responsive, and capable of interacting with their environment in a controlled way. A new study published in Nature Materials, ‘Morphology remodelling and membrane channel formation in synthetic cells via reconfigurable DNA nanorafts’ (authored by Sisi Fan, Shuo Wang, Longjiang Ding, and others), presents a breakthrough involving the use of DNA nanorafts — tiny, flat assemblies of DNA strands that change shape in response to chemical signals.

By using DNA as a programmable building material, researchers have found a way to engineer membranes that can change shape, open and close transport pathways, and respond to specific molecular signals.

Molecular marvel

DNA’s ability to form predictable and programmable interactions makes it an ideal material for building nanoscale devices, such as the nanorafts used in this study. 

The nanorafts were integrated with giant unilamellar vesicles (GUVs), which serve as simplified models of biological cell membranes. By attaching DNA nanorafts to the surface of these synthetic cells, scientists were able to induce and control changes in their membrane structure. Membrane shape is critical to many cellular processes, including transport, engulfing nutrients, sending signals, moving through tissues, and division.

The innovation here is that the DNA nanorafts are not passive structures. They are designed to switch between different shapes and, in doing so, they actively influence the behaviour of the membranes they are attached to, marking a major advance in synthetic biology. It opens up new possibilities for engineering artificial life-like systems.

Cell shaping

Initially, the nanorafts are in a square-like conformation; when exposed to specific DNA strands called ‘unlocking strands’, they elongate into a rectangle, which applies force on the membrane and deforms it in a controlled way.

Equally important is the ability to reverse these shape changes using a different set of ‘locking strands’.

Gatekeeping

Beyond reshaping membranes, the DNA nanorafts enable another crucial function: forming and sealing synthetic channels within the membrane. Transporting molecules across cell membranes is one of the fundamental functions of life, allowing cells to absorb nutrients, remove waste, and communicate with their surroundings. In biological cells, this process is carried out by protein-based channels and transporters. However, engineering such complex protein structures has always been a challenge in synthetic biology.

The study offers DNA-based structures as an alternative. When the nanorafts elongate, they interact with the membrane in a way that creates openings, or pores. These synthetic channels let through molecules as large as 70 kilodaltons (kDa), which is not possible with most natural protein channels.

Importantly, these pores are not permanent. Once the membrane regains its original shape, the pores close and seal off the transport pathway. This control over membrane permeability is a major step forward in creating artificial cells that can selectively regulate molecular transport.

Applications

The ability to reshape membranes and create selective transport channels has significant implications for drug delivery, bio-sensing, and artificial cell research. Imagine a synthetic cell that can absorb a drug molecule in one environment, travel through the body, and release it at a specific site, such as a tumour. Programmed DNA nanorafts promise highly precise drug delivery with minimal side effects.

Synthetic cells can also serve as biosensors to detect disease markers, toxins, or bacterial infections by responding to specific molecules and changing their membrane structure accordingly. They could provide real-time medical and environmental diagnostics.

In basic research, DNA nanorafts can help us understand how biological membranes function and yield new insights into the fundamental principles of cell biology.

While natural cells have evolved complex mechanisms for regulating their shape and transport processes, the new findings suggest that we can design artificial systems with similar and, potentially, enhanced capabilities.






What has DeepSeek done that wasn’t possible for OpenAI or Meta? Prof Ravindran of IIT-M explains


China’s DeepSeek last week shook the ground beneath Western technology firms that had anything to do with artificial intelligence. The startup released a free AI assistant that it said needed less data and ran at a fraction of the cost of existing services.

According to a Reuters report, on January 27 semiconductor design firm Nvidia lost about 17 per cent, or close to $593 billion, in market value, a record one-day loss for any company, while shares of semiconductor, power and infrastructure companies exposed to AI collectively shed more than $1 trillion.

businessline spoke to Prof B Ravindran, Head of the Wadhwani School of Data Science and AI at IIT-Madras, who explains why DeepSeek’s work is wonderful news for India and what exactly the Chinese startup has done that its much larger rivals could not:

What do DeepSeek’s claims mean?

There are two parts to it. The first is the cost part. Meta spent a lot of money developing LLaMa and then made it open source. What DeepSeek has done is come up with modifications to the core technology itself, on both the training side and the inference side. By inference I mean when you are actually using the model and interacting with it, asking it a question, and it is doing computation and giving you an answer. Training is when you are building the model itself using all the data you have from the Internet and so on.

What DeepSeek has done is, for both the training side and the inference side, significantly reduce the cost. What we mean by cost is the number of GPUs you need and the amount of computation done on the GPU. They have managed to cut these down significantly, and they have done it using certain well-understood techniques.

But the challenge has always been this: even though people knew these techniques would help, how do you put them together to get a workable system? DeepSeek has managed to crack that question. That is what made the model cheap. Theoretically knowing that some technique will help reduce the cost is one thing; actually coming up with a feasible way of doing it is another. It has been amazing, because people are now able to run these models on much less compute (power) than what you would need to run, for example, an equivalent OpenAI model.

Instead of a few hundred GPUs, you are able to run it with a few tens of GPUs, so there are really significant savings in compute power at inference time. It’s not that it has become so cheap that you can run it on your desktop. But compared with what it costs to run GPT, it’s way, way cheaper.

They have also announced API versions for commercial use, and the amount you pay per query has come down; that’s what people are talking about. If you use the commercially available version of DeepSeek, it’s about one-tenth the cost of OpenAI’s commercially available version, or even lower.

Importantly, they have made the model freely available on the Internet. You can just download their model; you still need that many GPUs, so if you have the compute power you can run it locally. That’s the open-source part of it. Anybody can take their code and their model, build on top of it, and do other fancy things with it.

Would you agree this is a great step forward for humanity when it comes to AI?

It’s made a significant difference. What people call a moat around the incumbent AI giants almost vanished overnight. It’s not that every developer can start using these models; you probably still need of the order of $10 million to set up a system based on this. But companies that have $10 million are far more numerous than organisations with the billions of dollars OpenAI is talking about for building its next-generation systems. That is the significant breakthrough that has come about.

But what has not happened is this: all the concerns about AI hallucination, the reliability of output and so on still persist. It is not as if they have solved those problems, or that DeepSeek has suddenly, overnight, built a much more reliable LLM than OpenAI. What they have done is reduce the cost to a point where hundreds of people can start experimenting with these models.

Since the larger community will now be able to do fundamental research on this, one would expect more breakthroughs to come. That is the reason for so much enthusiasm.

In fact, this is great news for India. It’s not as if China has gone so far ahead that we can’t catch up. These guys have done something that is really good for us, because we can also start becoming a player in this scene; the investment required is a lot less.

This is actually giving us an advantage. We were certainly not going to be able to compete in the billions of dollars investment range, but now with this, I think a larger number of teams in India can start exploring these models.

We are good when it comes to low-cost solutions. This will certainly allow us to do more – actually invent more stuff.

What exactly has DeepSeek done to make this cheaper and possible to run with fewer GPUs, which OpenAI and others were not able to do?

There are a few things they have tried. One is that they have implemented something called the ‘mixture-of-experts’ approach. That is, for every input you ask about, there might be a different ‘expert’ that can answer the question. Each expert can need far fewer parameters, because it only has to look at a certain segment of the input rather than, say, all of the English language. So each ‘expert’ can be much smaller, and put together, this mixture of experts can do very well.

The challenge is knowing which expert to switch to, and when; some of these questions required a non-trivial amount of work. One thing DeepSeek has done is that whenever you ask a specific query, only roughly 25% of the parameters are actually called into use, not the full network.

Even though I train a large network, I am not using all of it; only some fraction of it is actually used when I do inference, so that is one place where things get sped up significantly.
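
To make the routing idea concrete, here is a minimal numpy sketch of top-k mixture-of-experts routing. All sizes are made up, and this illustrates the general technique rather than DeepSeek’s actual architecture: a small gating network scores the experts for each token, and only the two best-scoring experts are computed.

```python
# Minimal sketch of mixture-of-experts routing: a gating network scores all
# experts for a token, only the top-k experts run, so most parameters stay
# idle for any given input. Toy illustration only, not DeepSeek's code.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

# Each "expert" is just a small feed-forward weight matrix here.
experts = [rng.normal(size=(d_model, d_model)) * 0.02 for _ in range(n_experts)]
gate_w = rng.normal(size=(d_model, n_experts)) * 0.02

def moe_layer(x):
    """x: (d_model,) token representation -> (d_model,) output."""
    scores = x @ gate_w                         # gating score for every expert
    top = np.argsort(scores)[-top_k:]           # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                    # softmax over the chosen experts
    # Only the selected experts do any computation for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
out = moe_layer(token)
print(out.shape, f"active experts: {top_k}/{n_experts}")
```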

The second thing is that they seem to have done quantisation very effectively. What do we mean by quantisation? Normally, when you represent numbers in a computer, you use 32 binary digits, zeros and ones, so each number takes 32 bits. But people have looked at using a smaller number of bits, maybe 8 or 16, to represent numbers, and some have been aggressive enough to go all the way down to 4 bits. It looks like DeepSeek has done some very aggressive quantisation, including 4-bit quantisation, and still managed to get good results.

This also speeds up computation and reduces memory requirements.
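
As an illustration of the trade-off (a generic sketch of symmetric low-bit quantisation, not DeepSeek’s actual scheme), the snippet below maps 32-bit weights to 4-bit integer levels plus a single scale factor and then reconstructs them, showing the small approximation error that buys the memory saving.

```python
# Sketch of symmetric low-bit quantisation: real-valued weights are mapped to
# a small set of integers (4-bit here, i.e. 15 usable signed levels) plus one
# scale factor, roughly 8x smaller than 32-bit floats in principle. numpy has
# no 4-bit type, so an int8 container is used and only 4 bits' worth of levels
# are occupied. Illustration only.
import numpy as np

def quantise(weights, bits=4):
    levels = 2 ** (bits - 1) - 1                     # e.g. 7 for 4-bit signed
    scale = np.abs(weights).max() / levels           # one scale for the tensor
    q = np.clip(np.round(weights / scale), -levels, levels).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)
q, s = quantise(w, bits=4)
w_hat = dequantise(q, s)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```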

For the new version of DeepSeek that has come out now, they have done what OpenAI’s o1 did, something called inference-time computing, or inference-time learning. When answering a question, instead of answering in one shot, the model does multiple generations of the answer at test time. When you ask a question, it internally runs things many, many times and then picks the best answer. Normally, a model just goes from left to right once and generates an answer; that is what the older models were doing. What o1 started doing was to run internally multiple times and then give you the better answer. DeepSeek’s new model does the same thing, but it uses a more efficient form of reinforcement learning to do these multiple runs internally, and that also means significant savings in training time as well as inference time.

It turns out that, at least on the test suites out there, they are performing on par with o1, or close to it.
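
The “run it internally many times and keep the best answer” idea can be sketched as plain best-of-N sampling. In the sketch below, generate() and score() are hypothetical placeholders standing in for a language model and a reward or verifier model; DeepSeek’s reinforcement-learning approach is far more sophisticated than this.

```python
# Sketch of best-of-N inference-time compute: sample several candidate
# answers, score each one, return the highest-scoring candidate. The
# generate() and score() functions are placeholders for a language model
# and a reward/verifier model respectively.
import random

def generate(prompt: str, seed: int) -> str:
    """Placeholder for one sampled model completion."""
    random.seed(seed)
    return f"{prompt} -> candidate answer #{seed} (quality {random.random():.2f})"

def score(answer: str) -> float:
    """Placeholder reward: here we just parse the toy quality number back out."""
    return float(answer.rsplit("quality ", 1)[1].rstrip(")"))

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score)       # keep only the best-scoring answer

print(best_of_n("What is 17 * 24?"))
```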

DeepSeek has made something possible that wasn’t thought possible. Where do you expect AI technology to go from here?

DeepSeek’s latest has not come completely out of left field. They have not done something that is completely unexpected. People knew that, in theory, something like this could work, but nobody knew how to get it to work. From here on, perhaps we can build more efficient reasoning systems.

Faster inference-time compute would enable more complicated reasoning models to be developed, which we don’t have right now. Whatever reasoning seems to come out of these models is still not true reasoning; it’s just an illusion of reasoning. These models can only reason about things they have already seen. But using the concept of ‘counterfactuals’ allows you to reason even about things you haven’t seen so far. That kind of more general-purpose reasoning ability can certainly now emerge.

How can the projects you are working on at IIT-M benefit from this latest development?

One of the things we are looking at is how you would build evaluation frameworks and evaluation metrics for these GenAI models for different applications, particularly in the Indian context. Suppose I start using this for some legal workflow: what kind of questions should the end user be asking of these models? How will they evaluate it for fairness? How should they evaluate it for robustness, and what are the kinds of questions you should ask? We want to do this for different sectors, because it is very important to understand the impact of this on the ecosystem. Without this kind of measurement platform, it is very hard to even talk about regulation.
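
As a very rough sketch of what such an evaluation harness might look like (all names, prompts and metrics below are hypothetical placeholders, not IIT-M’s actual platform), one could run a bank of sector-specific prompts through a model and aggregate scores along dimensions such as fairness and robustness.

```python
# Hypothetical sketch of a GenAI evaluation harness: run sector-specific test
# prompts through a model and aggregate scores per evaluation dimension.
# Every name and scoring rule here is a placeholder.
from statistics import mean
from typing import Callable

def dummy_model(prompt: str) -> str:
    """Placeholder standing in for the LLM being evaluated."""
    return f"Model answer to: {prompt}"

def fairness_score(prompt: str, answer: str) -> float:
    """Placeholder metric; a real one might compare outputs across groups."""
    return 1.0 if "loan" not in prompt.lower() else 0.8

def robustness_score(prompt: str, answer: str) -> float:
    """Placeholder metric; a real one might perturb the prompt and re-check."""
    return 0.9

def evaluate(model: Callable[[str], str], prompts: list[str]) -> dict[str, float]:
    metrics = {"fairness": fairness_score, "robustness": robustness_score}
    return {name: mean(metric(p, model(p)) for p in prompts)
            for name, metric in metrics.items()}

legal_prompts = ["Summarise this rental agreement.", "Is this loan clause unfair?"]
print(evaluate(dummy_model, legal_prompts))
```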







Shifting pole


Earth’s magnetic north pole, where the magnetic field lines converge (they emanate at the magnetic south pole), today lies about 400 km from the geographic North Pole. But the magnetic poles are known to wander. They also flip, so that the field lines start converging at what is today the magnetic south pole and emanating from the north pole. The last time this happened was 780,000 years ago, and no one can tell when the next flip will happen.

However, in recent years the magnetic north pole — today located in the Arctic Ocean above Canada — has been speeding towards Siberia. Its behaviour has flummoxed experts such as Dr William Brown, global geomagnetic field modeller at the British Geological Survey. He says in the latest report of the World Magnetic Model that the pole’s conduct “is something we have never observed before”. While the magnetic north had been moving slowly around Canada since the 1500s, over the past 20 years it accelerated towards Siberia, increasing in speed every year until, about five years ago, it suddenly decelerated from 50 to 35 km per year, which is “the biggest deceleration in speed we’ve ever seen”. 

Earth’s magnetic field, generated by the churning of liquid iron and nickel in the outer core, remains one of the more poorly understood aspects of the planet. 

Fortunately, in the GPS era, the magnetic poles matter less for navigation, but the north pole’s dash towards Siberia raises intrigue.






180-year-old observatory in Mumbai to digitise records


The Colaba Observatory in Mumbai, one of the world’s oldest, has been recording variations in the strength and direction of the earth’s magnetic field since 1841. It was one of the few observatories in the world to record the Carrington event of September 2, 1859, when a burst of energy from the sun travelled 150 million km to the earth and brought down much of the world’s telegraph systems. 

The observatory preserves 180 years’ worth of work in the form of magnetograms (graphical records), microfilms, and hard copy volumes. A major record is the Moos Volume, a compilation dated 1896 that is credited to Dr Nanabhoy Ardeshir Moos, the first Indian director of the Colaba Magnetic Observatory. The Moos Volume is a reference material used worldwide. 

The observatory is now part of the Indian Institute of Geomagnetism (IIG), which operates 13 magnetic observatories across the country and hosts a World Data Centre for Geomagnetism, maintaining comprehensive geomagnetic data. 

And now, the observatory has set itself the task of digitising all its data sets. This work will be undertaken by the recently inaugurated Colaba Research Centre. “This (digitisation) can help form a benchmark for the probability of occurrence of geomagnetic storms in the future. The centre will also carry out research activities on the impact of space weather and allied fields,” says a press release from the Department of Science and Technology. 

“Historical data, when digitised, can also be analysed using AI/ML techniques to provide more insights,” observed Abhay Karandikar, Secretary, Department of Science and Technology, in a LinkedIn post.

Prof Sunil Kumar Gupta to lead global physics body


Prof Sunil Kumar Gupta, who taught at the Tata Institute of Fundamental Research (TIFR) between 1976 and 2000, has been elected as President of the International Union of Pure and Applied Physics. 

The Geneva-headquartered organisation is run by the physics community, with a mission to assist in the worldwide development of physics, foster international cooperation in physics, and help in the application of physics toward solving problems of concern to humanity. 

Prof Gupta will hold the position for a three-year term. He is only the second Indian to hold this position after Dr Homi Bhabha (1960-63).




