Carbon capture and the controversy of ‘unabated fossil fuels’

One phrase that has crept into the climate lingo is ‘abated’ (or, conversely, ‘unabated’) fossil fuel use. There are calls for ‘phasing out’ or ‘phasing down’ unabated fossil fuel use, and much of the controversy revolves around the idea of abatement. The final text of the COP28 climate talks, currently underway in Dubai, is expected to focus heavily on unabated fossil fuels.

But what are abated or unabated fossil fuels? 

‘Abated’ fossil fuel use refers to burning fossil fuels while simultaneously capturing the resulting carbon dioxide and neutralising it, either by using it or by storing the gas securely underground—forever. ‘Unabated’ refers to burning fossil fuels and letting the emissions escape into the atmosphere.

Developed countries want coal to be phased out. In the concluding moments of the Glasgow Conference of 2021 (COP26), India—much to the chagrin of developed countries—forced the COP Presidency (the UK) to change the language from “phase out” to “phase down” unabated coal. ‘Phase down’ has since come to be accepted as the standard language, but much of the debate is over whether the phase-down should apply only to coal (as the developed countries want) or to all fossil fuels, including oil and gas (as India and other developing countries favour).

Secondly, the definition of ‘abatement’ itself is a key issue. What level of carbon capture should qualify as ‘abated’ use? If a thermal power plant captures and sequesters, say, 10 per cent of the carbon dioxide it emits, would it still be considered “unabated coal” burning? 

So, the present situation is that while ‘phase down’ has been accepted with mumbles and moans, the discussions now centre on whether only coal or all fossil fuels should be phased down, and on how abatement is defined.

What is abatement? 

But overshadowing all of this are questions about the very concept of abatement—many critics argue that ‘carbon capture and sequestration’ (CCS) itself amounts to very little. Among them are two well-known climate experts—Laurence Tubiana and Emmanuel Guérin. Tubiana is the CEO of the European Climate Foundation (ECF) and was France’s climate change ambassador and special representative for COP21, which resulted in the Paris Agreement. Guérin is the executive director for global policies at the ECF.

Writing in Climatechangenews.com, a respected online publication dedicated to climate change, Tubiana and Guérin tell readers: “Don’t be fooled: CCS is no solution to oil and gas emissions.”

CCS, at best, might be helpful “at the margins”, but “cannot possibly deliver reductions in greenhouse gas emissions on the scale needed to avert climate disaster,” they say, adding that CCS might deliver “less than a tenth” of the cumulative carbon dioxide emissions over the 2023-2050 period.

Climate activists fear that CCS might be used by fossil fuel companies as a smokescreen to continue producing and selling fossil fuels. Dr Al Khourdajie, a research fellow at Imperial College London, notes that the vague definition of “abated” fossil fuel gives a “false, if not dangerous, sense of security” that could lead to inadequate policy measures and investment decisions.

Experts, such as those at Carbon Brief, note that “CCS barely exists” and that relying on a major scale-up is considered “risky”.

CCS can be useful only if almost all the emissions are captured and sequestered. The Intergovernmental Panel on Climate Change (IPCC), a UN-mandated international body of scientists formed to assist policymakers by assessing the science of climate change, has said that “even if realized at its full announced potential, CCS will only account for about 2.4 per cent of the world’s carbon mitigation by 2030.”

The IPCC has indicated that fossil fuel use can escape the ‘unabated’ tag only if the CCS plants capture 90 per cent or more of the CO2 emissions. The clinching argument against CCS is that there is today no established technology that can capture 90 per cent of emissions. A recent study by the think tank Institute for Energy Economics and Financial Analysis reviewed the capacity and performance of 13 flagship projects and found that 10 of the 13 failed or underperformed against their designed capacities, mostly by large margins.
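To make the abatement threshold concrete, here is a minimal, purely illustrative Python sketch. The plant size and the lower capture rates are assumed numbers, not figures from the article; only the 90 per cent cut-off comes from the text above.

```python
# Purely illustrative: how much CO2 from a plant fitted with CCS still
# reaches the atmosphere at different capture rates. The plant size is an
# assumed number; only the 90 per cent threshold comes from the text above.

GROSS_EMISSIONS_MT = 10.0  # hypothetical plant output, million tonnes CO2 per year

def residual_emissions(gross_mt: float, capture_rate: float) -> float:
    """CO2 still emitted after a fraction `capture_rate` is captured and stored."""
    return gross_mt * (1.0 - capture_rate)

for rate in (0.10, 0.50, 0.90):
    residual = residual_emissions(GROSS_EMISSIONS_MT, rate)
    label = "meets the 90% bar" if rate >= 0.90 else "falls short of the 90% bar"
    print(f"capture {rate:.0%}: {residual:.1f} Mt CO2/yr still emitted ({label})")
```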


IN-SPACe launches seed fund for start-ups focused on urban development, disaster management

The Indian National Space Promotion and Authorisation Centre (IN-SPACe) has announced a seed fund scheme to offer financial assistance of up to Rs 1 crore each to start-ups focusing on urban development and disaster management.

It has launched the scheme in collaboration with ISRO’s National Remote Sensing Centre (NRSC) to provide a leg-up to companies that leverage space technology for societal benefit.  

The selected start-ups will receive seed funding to transform an original idea into a prototype using space technology; ISRO facility support, including earth observation (EO) data, for validating the concept; mentorship support; and access to a data algorithm as a transfer of technology from the Department of Space.

Dr Pawan Goenka, Chairman, IN-SPACe, said, “The role of the space sector is crucial to the overall development of the national economy. The latest seed fund scheme is a part of our efforts to provide a special thrust to enhance the space activity capabilities of the nation with the active participation of start-ups.” The last date of application for the scheme is December 20.

Further details are available at https://www.inspace.gov.in/. In addition to financial assistance and mentorship, the scheme also offers training and networking opportunities.

In urban development, opportunities are available for start-ups active in the domains of urban planning, infrastructure management, telecommunication, navigation, broadband connectivity, water resources management, energy efficiency, climate and weather monitoring, disaster risk reduction, public health, healthcare, and more. 

Similarly, under disaster management there are opportunities for start-ups specialising in Geographical Information System (GIS), early warning and monitoring systems, insurance and risk assessment, communication and navigation systems, climate change monitoring, search and rescue operations, and space-borne sensors and instruments, among others. 


Managing human-allied Artificial Intelligence 

Some of the recent developments have led people to believe that “singularity” — a hypothetical future point in time when AI becomes smarter than humans — is imminent. The reality is that the current AI systems still blindly follow patterns in the data and do not truly understand the world.

The need of the hour is the development of Human-Allied AI systems, where humans and AI work together to amplify their capabilities and mutually cover for their deficiencies. To enable such systems, one needs to adhere to the principles of responsible AI. Here are the key ideas behind these principles.

Explainable AI

In order to trust the outcomes of AI models, we must be able to understand the reasons for the decisions or recommendations made by the model. Success in many AI applications is achieved by the use of what are known as ‘black box models’ — where the process of computation is known, but the reasons for the outcomes are not fully understood.

Even models that are not black boxes can only be explained in terms of the statistical properties of the model. A “responsible AI system” will offer explanations so that anyone can understand the decisions made by it. This would require aligning the AI models to the accepted modality of explanations in the application domain.
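As a rough illustration of what explaining a model in terms of its statistical properties can look like in practice, here is a minimal sketch using permutation feature importance from scikit-learn. The data and model are synthetic stand-ins, not anything described in the article.

```python
# A minimal sketch of one standard statistical explanation technique:
# permutation feature importance. The data and model here are synthetic
# stand-ins, not anything described in the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic classification data with five input features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A "black box" model: accurate, but its internal reasoning is opaque.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```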

Data discipline is an aspect of responsible AI that has received the most attention. The European Union’s General Data Protection Regulation was the first-ever comprehensive framework created for controlling collection, access, and usage of data. The rights of the end-user, whose data is used for building AI systems, are made paramount in the framework. India’s Digital Personal Data Protection Act, 2023, also aims to do the same.

The fairness and ethical aspects of AI have received much attention in both research and popular media. There are frequent reports of AI systems that identify people of a certain race as more likely to be criminals, or that assume women are more likely to be nurses than doctors. AI chatbots have been known to become toxic in their language with suitable encouragement from the other participant. While much progress has been made in addressing such issues, techniques and policies must be developed for adapting them to the local social context. India has her own dimensions of discrimination, and one cannot blindly adopt West-centric views on it. Existing prejudices against certain castes or groups, or about people from certain regions, will be reflected in AI systems trained on such data. These biases must be identified and guarded against. For this, AI researchers and social scientists must work in close collaboration to understand the existing human biases and the ways in which they manifest.

Another aspect of responsible deployment of AI models is the ‘performance expectations’ of such systems. AI systems are not simple programs. They solve complex problems, and the outcomes of their calculations may not always be right. The end-users of such programs often do not fully understand the implications of this. So, when a designer says the system will be correct 93 times out of 100, does that mean an AI-enabled medical scan will fail to detect the illness in 7 patients? It could well be that the AI system says a patient has a disease just because the X-ray was damaged!
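A minimal, purely illustrative sketch of why a headline figure like “correct 93 times out of 100” can hide the numbers that matter in a medical screening setting. All counts below are assumed for the example, not taken from the article.

```python
# Purely illustrative: all counts below are assumed for the example and are
# not taken from the article. A model can be "correct 93 times out of 100"
# overall and still miss half of the patients who actually have the disease.

total = 1000        # patients screened
sick = 100          # patients who actually have the disease
missed_sick = 50    # sick patients the model fails to flag (false negatives)
false_alarms = 20   # healthy patients wrongly flagged (false positives)

correct = total - missed_sick - false_alarms
accuracy = correct / total                 # headline figure quoted by the designer
sensitivity = (sick - missed_sick) / sick  # share of sick patients actually caught

print(f"accuracy:    {accuracy:.0%}")      # 93%
print(f"sensitivity: {sensitivity:.0%}")   # 50% -- half the sick patients missed
```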

Hence, regulations mandating performance guarantees in each application domain are required. At the same time, one needs to understand that researchers cannot guard against every eventuality, so appropriate insurance models for AI systems will also be needed.

Teamwork

The capabilities of AI can be fully realised when humans and AI systems work together. Responsible deployment of AI systems will require one to understand how work will be disrupted by AI, and to develop new AI-in-the-loop protocols for solving problems. Companies will have to re-skill workers to operate effectively in such a hybrid environment.

The Centre for Responsible AI (CeRAI) has been set up at IIT Madras to study these issues under three themes — making AI understandable, AI and safety, and AI and society. The centre follows a multi-stakeholder consortium model, with participation from industry, government, legal experts, social scientists and industry bodies, apart from different academic institutions.

(Prof. Balaraman Ravindran is Head, Centre for Responsible AI, IIT Madras)


Using sunlight to clean up toxic water

Scientists at the Materials Research Centre (MRC) of the Indian Institute of Science, Bengaluru, have achieved a ground-breaking development in the field of wastewater treatment. They have unveiled a novel enzyme mimetic that effectively degrades toxic chemicals in industrial wastewater when exposed to sunlight. This breakthrough overcomes the inherent limitations of natural enzymes and presents a significant step forward in environmental protection and potential healthcare applications.

Toxic chemicals, when released into the environment, can have detrimental effects on ecosystems, water bodies and human health. Efficient degradation helps prevent or minimise these negative impacts. It involves several key principles and techniques:

Catalysis: Catalysts are substances that speed up chemical reactions without being consumed themselves. In wastewater treatment, catalysts are often used to facilitate the degradation of toxic chemicals.

Specificity: Efficient degradation processes are often highly specific to the particular chemicals being targeted. For example, certain enzymes or catalysts are designed to break down specific types of pollutants. This specificity ensures that only the harmful substances are targeted while leaving non-harmful compounds intact.

Speed: Efficiency in degradation also involves timely removal of toxic chemicals. Some catalysts, like nanozymes, can work rapidly, breaking down pollutants within a short time-frame.

Stability: The catalysts or enzymes used for degradation must be stable under the conditions in which they are applied. For example, they should remain active over a range of pH and temperature conditions. The stability of the catalyst or enzyme is crucial to ensure its long-term effectiveness.

Environmental impact: An efficient degradation process should have minimal negative environmental impacts. It should not produce harmful by-products or waste materials that can further contaminate the environment.

Cost-effectiveness: Efficient degradation methods should be cost-effective, making them practical for industrial and environmental applications. This cost-effectiveness can involve factors such as the ease of production and the availability of materials.

In response to these challenges, the team at MRC has developed a platinum-containing nanozyme called NanoPtA. This synthetic nanozyme mimics the function of natural oxidases and exhibits exceptional stability across a wide range of pH and temperature conditions. It acts as a catalyst to accelerate the breakdown of pollutants in the presence of sunlight.

Mass-producing natural enzymes, such as laccase, has been an expensive and time-consuming process, further exacerbated by their temperature-sensitive storage requirements.

Synthetic vs Natural

Natural enzymes are often extracted from living organisms, which can be a time-consuming and costly process. The availability of natural enzymes depends on factors like the growth of the source organism. In contrast, synthetic nanozymes can be manufactured in a laboratory, allowing for greater control over production, consistency and scalability. This makes them more accessible for industrial applications and research.

Many natural enzymes are sensitive to changes in temperature and pH, requiring special storage conditions and careful handling. Synthetic nanozymes can be designed to be more stable and robust, and can remain stable at room temperatures for extended periods, eliminating the need for specialised storage conditions.

Natural enzymes often face challenges in terms of recycling and reuse. In contrast, synthetic nanozymes can be designed with features that make them more amenable to recycling, reducing overall waste and cost.

Some natural enzyme extraction processes can have environmental implications, such as habitat disruption and resource consumption. Synthetic nanozymes can be produced with reduced environmental impact, especially if sustainable and eco-friendly materials are used.

How does it work?

When NanoPtA comes into contact with wastewater, it forms unique tape-like structures that emit light. This enables oxidation of pollutants present in wastewater when exposed to sunlight, thereby reducing the toxicity of the water.

The team’s research has demonstrated that NanoPtA can effectively degrade common water pollutants — including phenols and dyes, even in micromolar quantities — within just ten minutes of exposure to sunlight. Remarkably, the NanoPtA complex remains stable at room temperature for up to 75 days, making it a groundbreaking development in the field of enzyme mimetics.

Beyond its potential to address wastewater pollution, the nanozyme also holds promise in healthcare applications. The team has successfully tested NanoPtA’s ability to oxidise neurotransmitters like dopamine and adrenaline, which are associated with neurological and neurodegenerative diseases such as Parkinson’s and Alzheimer’s. The change in colour resulting from the oxidation of these molecules could offer a valuable diagnostic tool.

Looking ahead, the researchers are investigating more cost-effective metal alternatives to platinum for its synthesis, with the goal of scaling up production for industrial use.


Bacteria’s internal bombs: A novel weapon shows potential


How can mankind use bacteria’s internal mechanism of fighting a virus to protect human cells from a bacterial infection? Mahavir Singh, Associate Professor, Molecular Biophysics Unit at the Indian Institute of Science, and his team are seeking to answer the question.

There is a constant struggle for survival between phages (that is, viruses that attack bacteria) and bacteria, and both sides have evolved several mechanisms to defend themselves. One such strategy involves the use of toxin-antitoxin (TA) systems.

Every bacterium hosts inside itself a combination of a toxin (T), usually a protein, and an antitoxin (A), which can be a protein or an RNA molecule. In its free form, the toxin is poisonous to the bacterium that contains it. Therefore, the bacterium keeps it bound with the antitoxin in a complex form (toxin-antitoxin, or TA). But why would any bacterium host a toxin that could potentially kill it? The answer is that the TA complexes serve as a defence mechanism against invading phages, which is why they are called ‘internal bombs’.

When a phage attacks a bacterium, it takes over the host’s internal machinery to multiply its genetic material (DNA or RNA) inside, killing the bacterium in the process. In an act of altruism, as Prof Singh calls it, the bacterium breaks down the TA complexes to free the toxins, which then act to prevent the viral infection from spreading beyond its walls.

Singh and his team have set out to answer the question: how are the TA complexes assembled, and how can these systems be turned against the bacteria themselves? Of the eight types of TA systems identified so far, Singh’s lab is interested in the Type III TA system, where the antitoxin is not a protein but a stretch of ribonucleic acid (RNA). The toxin gene is transcribed into RNA, which is then translated into the toxin protein, whereas the antitoxin gene is only transcribed into RNA. In Type III TA systems, says Singh, the toxin protein is an enzyme that “cuts up its own antitoxin RNA into precise bits that in turn bind to the toxin to form the inactive TA complex”.

Free the toxin!

Why does it matter that the antitoxin is RNA? Being RNA, it is degraded faster than the protein toxin. Now, if there is a transcription shut-off (as happens in the case of a virus attack), no new antitoxin is produced. As a result, more and more free toxin accumulates in the cell. This free toxin cleaves the phage RNA (along with the bacterial RNAs), effectively preventing the phage from propagating and infecting other bacteria.
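The dynamics described above can be sketched with a toy calculation. The following is purely illustrative: the half-lives and the one-to-one neutralisation assumption are made up for the sketch, not values from Singh’s work.

```python
# Purely illustrative: the half-lives and the one-to-one neutralisation
# assumption are made up for this sketch; they are not values from the study.
import math

TOXIN_HALF_LIFE_MIN = 60.0     # assumed: the toxin protein is long-lived
ANTITOXIN_HALF_LIFE_MIN = 5.0  # assumed: the antitoxin RNA degrades quickly

def remaining(initial: float, half_life_min: float, t_min: float) -> float:
    """Amount left after t_min minutes of simple exponential decay."""
    return initial * math.exp(-math.log(2) * t_min / half_life_min)

# After transcription shuts off, no fresh antitoxin is made, so the pool of
# toxin that is no longer neutralised grows with time.
for t in (0, 5, 15, 30, 60):
    toxin = remaining(100.0, TOXIN_HALF_LIFE_MIN, t)
    antitoxin = remaining(100.0, ANTITOXIN_HALF_LIFE_MIN, t)
    free_toxin = max(0.0, toxin - antitoxin)
    print(f"t={t:3d} min  toxin={toxin:5.1f}  antitoxin={antitoxin:5.1f}  "
          f"free toxin={free_toxin:5.1f}")
```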

Singh’s team has shown that at least five types (clusters) of such complexes can exist in different strains of E. coli bacteria. Further, he says, the team has also published a detailed crystal structure of an E. coli Type III TA complex, depicting the tightly bound toxin-RNA complex in the bacterium.

Simply put, Singh’s team has helped establish the structure of the complex and the nature of the binding between the antitoxin RNA and the toxin protein. The next step, he says, is to find a molecule that can disrupt this arrangement and free the toxin. The free toxin will then destroy the bacterium, thus preventing bacterial infection.

“The world of science can potentially design some peptide or molecule in a way that frees the toxin. To do this, you have to understand the structure of these TA systems first, which is what we have currently achieved,” he says.

“The hetero-hexameric closed complex structure helps bind the two tightly. The peptide/molecule should be able to bind with the antitoxin to set the toxin free.”

Now, the other question is, what if newly designed peptides or drugs harm human cells by inhibiting human proteins? “So far, no TA systems have been identified in human cells. They seem to be bacteria-specific. So, it is hoped that a peptide or the drug effective against bacteria will not act against the human cells,” says Singh.

His team is currently working on a small-scale project “to see whether they can dislodge this complex and free the toxin.”

How difficult is it to design a successful inhibitor? Singh says there are several unknowns. For example, “you design inhibitors for one TA complex; but, there may be another, slightly different antitoxin in bacteria which can bind and neutralise the toxin. So, there is potential for ‘crosstalk’ that makes designing an inhibitor challenging.”

“The other challenge is that the interaction of the toxin and antitoxin is very extensive, and they’re very tightly bound. The TA complex formation involves extensive surface involvement from both the toxin and the antitoxin. Finding inhibitors for such large binding surfaces is further challenging.”

If you want to find an inhibitor of the complex, you first need to understand how it is assembled before you can design an effective inhibitor, Singh points out.

Since the team has identified Type III TA systems and characterised them for their function, assembly and structure, these can be “used as antibiotic targets for designing novel antibiotics; that’s where scientists see potential during this era of emerging antibiotic resistance in bacteria,” says Singh.
