Solar storms that caused pretty auroras can create havoc with technology — here’s how

At the weekend, millions of people around the world were treated to a mesmerising display of the aurora borealis and aurora australis, better known as the northern and southern lights. The lights, usually seen in crown-like regions surrounding the Earth’s poles, were pushed to mid-latitudes by heightened activity from the Sun.

The same geomagnetic storms that produce the auroras can wreak havoc with our planet’s human-made infrastructure. These storms, driven by high-energy particles from the Sun hitting our atmosphere, have the potential to knock out electrical grids and satellites. So what were the impacts of this recent burst of stormy space weather?

Around May 8, an active region of the Sun exploded, flinging a billion-tonne cloud of magnetised and electrically charged material, known as a coronal mass ejection (CME), towards the Earth. This turned out to be the first of several successive CMEs, which later merged to form a single, massive structure.

This crashed into our planet’s magnetosphere, the region of space near Earth that is dominated by the terrestrial magnetic field. As sub-atomic particles from the CME are funnelled downward, channels of electrical current flowing through the part of the atmosphere known as the ionosphere are intensified.

Apart from triggering the auroral displays, this can cause powerful magnetic fluctuations at the Earth’s surface. As a result, electrical currents can flow through power grids, pipelines and railway lines, potentially interfering with normal operations.

The sub-atomic particles from the CME can damage the solar panels and electronics of satellites. On Saturday, Elon Musk said that his company SpaceX’s Starlink internet satellites were “under a lot of pressure” because of the storm, “but holding up so far”.

The disturbances in the ionosphere were compounded by a series of bright eruptions on the Sun called “flares”, which poured high-energy radiation across the Earth’s sunlit face. Flare activity is associated with radio blackouts that can interfere with high-frequency radio communications, such as those required by aircraft on trans-oceanic flights. There are indications that the storm caused some disruption on transatlantic flights, but these reports are still being assessed.

Shawn Dahl, service coordinator for the Space Weather Prediction Center at the National Oceanic and Atmospheric Administration (NOAA) in Colorado, told US National Public Radio that power grid operators had been busy “working to keep proper, regulated current flowing without disruption”.

He also added that some GPS systems had struggled to lock on to locations and had offered incorrect positions. These GPS problems appear to have disrupted navigational systems in farming equipment in the US. Many tractors use GPS to plant precise rows in a field and to avoid gaps and overlaps. The problems happened during the height of planting season in the US Midwest and Canada.

Some of this may sound a bit like a Hollywood disaster movie. Yet, while the GPS problems caused significant disruption in agriculture, the impacts do not appear to have been widespread across the Earth. For most people, life seems to have carried on regardless. How come? Awareness and preparedness certainly helped.

What we just experienced was, without question, an unusually strong space weather event. It is early days, and scientists will be analysing the storm of May 2024 for years to come. However, early indications are that last weekend’s geomagnetic storm was the most powerful since the “Halloween storm” of October 2003. Beyond the beautiful lights in the sky, the negative impacts of the 2024 storm aren’t yet completely clear.

At this stage, it doesn’t look like there were any catastrophic failures, but infrastructure operators will be taking stock to understand if, and how, their systems were affected. Behind the scenes, national agencies such as NOAA and the Met Office in the UK were monitoring the activity, issuing forecasts and alerts to interested parties, and liaising with experts and governments. In response, infrastructure operators took steps to ensure the continuity of services and safeguard their equipment.

Even bigger storms

However, what we’ve just experienced wasn’t the biggest such event ever seen. That honour goes to the “Carrington Event” of September 1859, in which a massive CME (or, most likely, a pair of CMEs) triggered a huge geomagnetic storm that pushed the aurora borealis as far south as the Caribbean and induced such powerful currents in copper telegraph lines that at least one operator suffered a severe electric shock, though he lived.

By some metrics, the Carrington Event was two to three times more powerful than the storm we have just witnessed. Such massive events are rare, probably occurring once every couple of hundred years; the May 2024 storm, by contrast, was of a scale seen once every couple of decades.

Human technology is able to cope with relatively powerful space weather events, but modern technologies and infrastructure have never experienced anything like the Carrington event. This is why researchers strive to better understand space weather and work with agencies and government to predict and mitigate its impact on our society and develop better forecasting tools.

Jim Wild, Professor of Space Physics, Lancaster University






OpenAI co-founder Ilya Sutskever departs ChatGPT maker

OpenAI co-founder and chief scientist Ilya Sutskever is leaving the startup at the center of today’s artificial intelligence boom.

“OpenAI would not be what it is without him,” OpenAI CEO Sam Altman wrote in a message to the company, which OpenAI posted on its blog.

Microsoft-backed OpenAI makes the popular ChatGPT chatbot, which sparked a race among the world’s largest tech companies for dominance in the emerging generative AI field.

Jakub Pachocki will be OpenAI’s new chief scientist, the company said on its blog.

Pachocki has previously served as OpenAI’s director of research and led the development of GPT-4 and OpenAI Five.

“After almost a decade, I have made the decision to leave OpenAI,” Sutskever said in a post on X.

Sutskever posted that he is working on a new project “that is very personally meaningful to me about which I will share details in due time.”

Sutskever played a key role in Altman’s dramatic firing and rehiring in November last year. At the time, Sutskever was on the board of OpenAI and helped to orchestrate Altman’s firing.

Days later, he reversed course, signing onto an employee letter demanding Altman’s return and expressing regret for his “participation in the board’s actions.”

After Altman returned, Sutskever was removed from the board and his position at the company became unclear.

Sutskever’s exit comes a day after the company said at an event on Monday that it would release a new AI model called GPT-4o, capable of realistic voice conversation and able to interact across texts and images.

Shortly after launching in late 2022, ChatGPT was called the fastest application ever to reach 100 million monthly active users. However, worldwide traffic to ChatGPT’s website has been on a roller-coaster ride in the past year and is only now returning to its May 2023 peak, according to analytics firm Similarweb.

Sutskever has long been a prominent researcher in the AI field. Before founding OpenAI, he worked as a researcher at Google Brain, and was a postdoctoral researcher at Stanford, according to his personal website. He started his career working with Geoffrey Hinton, one of the so-called “godfathers of AI”.






Bio-taxis for cancer treatment

Lymph nodes in our bodies are sites where the cells that produce disease-fighting antibodies are activated. Antigens are molecules that trigger an antibody response, a sort of siren for soldiers to come out and fight the invaders. You can develop antigens that prod the immune system into producing antibodies. But how do you take the antigens to the lymph nodes?

Researchers at the Indian Institute of Science, Bengaluru, have developed an antigen that can hitch-hike on a natural protein called serum albumin in blood and ride all the way to the nearest lymph node.

This development opens up a new route to cancer vaccines, according to a write-up on the IISc website.

Cancer cells are very clever—they shut down the production of antibodies that target and eliminate them. Developing a cancer vaccine, therefore, “involves modifying or creating a mimic of an antigen found on the surface of cancer cells to turn up or turn on this antibody production,” says the article. In recent years, scientists have turned to carbohydrates found on cancer cell surfaces to develop these antigens.

“Carbohydrate-based antigens have enormous importance and relevance in cancer vaccine development,” explains N Jayaraman, Professor at the Department of Organic Chemistry and senior author of the study published in Advanced Healthcare Materials. “One major reason is that both normal and abnormal [cancer] cells have large amounts of carbohydrates coating their surfaces. But the abnormal cells carry carbohydrates that are very heavily truncated.”

Scientists have earlier tried ferrying such antigens into the body using an artificial protein or virus particle as the carrier. But these carriers can be bulky, lead to side-effects, and sometimes reduce antibody production against cancer cells. The IISc team, instead, decided to exploit the carrying ability of a natural protein called serum albumin, the most abundant protein in blood plasma.

To design the compound, Jayaraman and his PhD student, Keerthana TV, zeroed in on a truncated carbohydrate called Tn, found on the surface of a variety of cancer cells, and synthesised it in the lab. They then combined it with a long-chain, oil-loving chemical (unlike carbohydrates, which are water-loving) to form bubble-like micelles. They found that the combination binds strongly to human serum albumin.

“The moment it latches on to albumin, the micelle breaks, and all the individual [antigen] molecules bind to the available albumin,” Jayaraman explains. “This opens up the idea that one doesn’t necessarily need to search for a virus or a protein or other types of carriers. Serum albumin is sufficient to carry it forward.”

Powering the future of sodium-ion batteries

In a lithium-ion battery, a lithium compound forms the cathode (which accepts electrons during discharge) and graphite forms the anode (which donates them). Sodium-ion batteries are considered among the leading alternatives to lithium-ion ones, because sodium, unlike lithium, is available everywhere. But the sodium ion is bigger than the lithium ion, so it does not easily embed itself within the layered structure of graphite-based electrodes.

A group of Italian scientists have suggested that biomass-derived biochar (BC) might be a good alternative to graphite. BC consists of “highly disordered and microporous carbons, known as ‘hard carbons’”, which “are considered the anode material of choice for sodium-ion batteries,” they say in a paper published in Renewable and Sustainable Energy Reviews.






How to make ‘cold brew’ coffee using science

If you are a coffee junkie who loves to drink your cuppa to exacting specifications, you probably don’t like to drink it cold, even though the chilled version sliding down your throat and creating that soothing sensation in your gut is very welcome in these hot summers. Cold coffee isn’t really ‘coffee’, because it is the hot version made cold by adding ice cubes or chilling it in a refrigerator.

Truly cold coffee is made by ‘cold brew’ — steeping the grounds in water that is either at room temperature or lower, for about 24 hours. But few take the trouble of this immersion brewing.

Now scientists are experimenting with newer methods of brewing coffee. One, and this sounds truly crazy, is to pump ultrasound into the coffee basket of an espresso machine. Several groups of scientists have attempted this and published their results.

When high-intensity sound waves pass through a liquid (any liquid), they create regions of compression and rarefaction. In the rarefaction regions, the fall in pressure causes pockets of gas or vapour to form, creating bubbles. These bubbles grow and then collapse violently, creating a force that fractures the cell walls of coffee grounds and releases the intracellular content. You can have a nice ‘cold brew’ coffee in minutes.

The coffee that comes out of the process is, according to scientists, “tasty”.






Why animals still outrun robots

We have seen videos of a cheetah sprinting across the savannah, effortlessly manoeuvring around obstacles at high speed and marvelled at the combination of grace and speed. Now, picture a robot attempting the same feat. While advances in robotics have been significant, the robot’s performance is clunky and slow in comparison.

In a study published in Science Robotics, researchers Samuel A Burden, Thomas Libby, Kaushik Jayaram, Simon Sponberg, and J Maxwell Donelan, explore why animals still outpace the most advanced robots in terms of running speed, agility, and robustness.

The study digs deeper into the mechanics of movement, comparing the locomotive systems of animals and robots across five key areas: power, frame, actuation, sensing, and control.

Power: The fuel of movement

In the race between animals and robots, one of the critical areas where the gap is most evident is in the subsystem of power — how energy is stored and used to fuel movement. This is not just about the total amount of energy available but how efficiently it can be converted into action.

Animals have a highly efficient system for storing and using energy. They rely on fats and carbohydrates, which provide a dense and efficient fuel source. The energy density of biological fuels is remarkably high, allowing animals to operate over long distances without needing to refuel. For instance, the fat stores in an animal can provide more than twice the energy per unit mass compared to the best lithium-ion batteries.

Moreover, animals metabolise these fuels with an efficiency that engineers can only envy. The oxidative metabolism in mitochondria converts fats to usable energy (ATP) with efficiencies around 70 per cent. In contrast, the internal combustion engines used in some robots convert fuel to movement at about 25 per cent efficiency, and even the best batteries and electric motors do not match the energy density and conversion efficiencies of biological systems.
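These figures can be combined into a rough back-of-the-envelope comparison of usable energy per kilogram of stored "fuel". The raw energy densities below are commonly cited ballpark values assumed here for illustration; only the two efficiency percentages come from the passage above.

```python
# Back-of-the-envelope: usable energy per kilogram of stored fuel.
# Energy densities are assumed ballpark figures, not values from the
# Science Robotics paper; efficiencies are the ones quoted above.
FAT_MJ_PER_KG = 37.0      # metabolisable energy of fat (assumed)
PETROL_MJ_PER_KG = 46.0   # petrol/gasoline (assumed)
LI_ION_MJ_PER_KG = 0.9    # ~250 Wh/kg lithium-ion cell (assumed)

METABOLIC_EFF = 0.70      # fat -> ATP, figure quoted in the article
ENGINE_EFF = 0.25         # fuel -> movement, figure quoted in the article

usable_fat = FAT_MJ_PER_KG * METABOLIC_EFF
usable_petrol = PETROL_MJ_PER_KG * ENGINE_EFF
usable_battery = LI_ION_MJ_PER_KG  # before electric-motor losses

print(f"Fat (after metabolism):  {usable_fat:5.1f} MJ/kg")
print(f"Petrol (after engine):   {usable_petrol:5.1f} MJ/kg")
print(f"Li-ion cell (raw):       {usable_battery:5.1f} MJ/kg")
```

Even with generous assumptions for the battery, the per-kilogram advantage of biological fuel is stark, which is why animals can cover long distances without "refuelling".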

The frame: Skeleton vs structure

Animals have evolved skeletal structures that are highly optimised for their specific modes of movement. Vertebrates have bones made of collagen and hydroxyapatite, creating frames that are both strong and lightweight, allowing them to withstand stresses while being agile. Invertebrates have exoskeletons made of chitin and protein, providing a high strength-to-weight ratio and supporting flexible movement.

Additionally, animal frames can grow and adapt to stress, and their limbs often function as natural springs, storing and releasing energy efficiently during movement.

Robotic frames, constructed from materials like carbon fibre, aluminium, or steel, are designed with principles from mechanical and aerospace engineering. These materials are chosen for their strength and lightness, but they don’t match the adaptive nature of biological frames. For example, carbon fibre offers high stiffness and can be tailored for directional strength, but it lacks the multifunctional capabilities of biological tissues. Aluminium and steel provide durability but add significant weight.

Unlike biological frames, robotic frames are static and do not adapt or heal; they require manual repair, and their rigidity often leads to less efficient energy transfer and more mechanical loss.

Actuation: Muscles vs motors

Animals use muscles for actuation, which are remarkably efficient and versatile in generating force and movement. Muscles can adjust their stiffness and rapidly change their length, allowing animals to move with a fluidity and precision that robots struggle to match. These biological actuators have a high torque density and can generate more force relative to their weight than most robotic actuators.

Muscles can achieve impressive power densities due to their ability to contract and expand quickly, storing and releasing energy in the process. This dynamic ability contributes significantly to the agility and speed of animals, as they can propel themselves forward with bursts of power that robotic systems find hard to replicate.

Robotic actuation primarily relies on electric motors and piezoelectric actuators, each with unique characteristics suited to different tasks. Electric motors are favoured in many robots for their good balance between power density and efficiency. They can be precisely controlled and can produce a consistent output for a wide range of tasks. However, while high-end electric motors can match or exceed the power density of muscle, they often fall short in terms of torque density without the use of gearboxes or other transmission mechanisms, which can introduce inefficiencies and reduce response times.

Piezoelectric actuators, on the other hand, offer very fine control at small scales and can operate quickly, but they do not scale well to the larger forces and movements needed for faster, larger-scale locomotion. These actuators also typically provide less torque than muscles, making it difficult for robots to achieve the same level of dynamic movement as animals.

Sensory supremacy

Animals excel in this domain due to their sophisticated sensory systems, which provide comprehensive and nuanced feedback about their surroundings. These systems are highly developed, with a wide array of sensors distributed across the body, allowing for an exceptional level of situational awareness and body control. For instance, animals use photoreceptors in their eyes to detect light, enabling vision that guides movement and obstacle avoidance. These photoreceptors can detect minimal light changes, allowing some animals to see in near-darkness.

Moreover, animals have mechanoreceptors distributed throughout their bodies, particularly in their skin, joints, and muscles. These sensors detect pressure, stretch, and touch, providing critical feedback on the environment and the body’s position within it. This feedback is vital for adjusting gait, speed, and posture to maximise efficiency and stability.

Robots, by contrast, typically rely on a more limited set of sensors, often centralised rather than distributed, which can reduce their ability to adapt to new or complex environments. Common robotic sensors include cameras and LIDAR (Light Detection and Ranging), which serve as the robot’s eyes by mapping out the environment in high resolution. While these tools are powerful for navigation and object recognition, they do not fully replicate the depth and breadth of sensory input available to animals.

For example, robots often use inertial measurement units (IMUs) to get a sense of movement and orientation. However, these units cannot provide the same level of detailed, localised feedback that animals get from their proprioceptive systems. Robots also employ force sensors in their limbs to detect interaction with the ground or other objects, but these are typically less sensitive and less numerous than the mechanoreceptors found in animals. This gap in sensory capabilities can lead to less fluid and adaptive movement, as robots cannot adjust their actions as precisely in real time.

Control: Brain vs computer

Animals demonstrate an extraordinary level of control, facilitated by their complex nervous systems. The neural architecture in animals, particularly in their spinal cord and brain, allows for rapid processing of a vast array of sensory data and the generation of dynamic, context-specific responses. This neural control is highly distributed; for instance, much of the basic motor control in animals occurs via spinal reflexes and central pattern generators (CPGs), which automate repetitive motion patterns like walking or running without constant brain intervention. This setup allows animals to react almost instantaneously to changes in their environment, adjusting their gait, speed, and direction to optimise movement.

The control systems in animals are inherently adaptable, learning and improving from repeated experiences. This neuroplasticity enables animals to master complex locomotor tasks through practice, from the intricate footwork of a cat stalking its prey to the powerful galloping of a horse.

Robots, on the other hand, traditionally use more centralised and less adaptive control systems. These systems often rely on pre-programmed responses and have limited ability to learn from experience or adapt in real-time. Robot controllers typically process input from sensors like cameras and IMUs and then compute outputs to actuators based on algorithms or control models.

While advancements in computational power and algorithms, especially in the realm of machine learning and artificial intelligence, have significantly improved robotic control, these systems still generally lack the fluidity and adaptability of biological control systems. For example, most robots use a form of model-based control, where the movements are planned based on predictive models of how the robot should react to certain inputs.

While effective in stable and predictable environments, this approach can struggle with the unexpected variations and complexities found in natural terrains. Furthermore, robotic control often suffers from delays due to the time needed to process sensory information and compute the appropriate responses, whereas animals benefit from the parallel processing capabilities of their neural networks.

Bridging the performance gap

Animals excel in locomotion due to their integrated systems that combine sensory inputs, neural processing, and adaptive actuation in a seamless manner. Their ability to dynamically adjust their behaviour based on environmental feedback and internal states allows for efficient, agile, and robust movement. In contrast, robots often have disjointed systems where sensing, control, and actuation are not as well integrated, leading to slower, less adaptable, and more rigid movements.

One promising approach to bridge the gap is the development of bio-inspired designs and algorithms that replicate the natural integration seen in animal locomotion. These bio-inspired approaches, combined with advancements in materials science and computational modelling, hold the potential to significantly narrow the performance gap, leading to robots that can move with the grace, efficiency, and resilience of their biological counterparts.






On-chip energy storage set to revolutionise electronics

Electronic devices need a component to store the electricity that powers them. This is typically a battery or a capacitor. But these take up space, add cost, and lose energy as electricity is transmitted from the battery to the chips, where the processing work is done.

What if the energy could be stored right on the chip? The answer to this question opens up a field of technology called on-chip storage, which uses micro-capacitors. (Capacitors are storage devices that can absorb large amounts of energy quickly and release it on demand, unlike batteries, which charge and discharge slowly.)

Unlike batteries, which store energy through electrochemical reactions, capacitors store energy in an electric field established between two metallic plates separated by a dielectric material (a type of insulator). Also, capacitors do not degrade with repeated charge-discharge cycles, leading to much longer lifespans than batteries.

However, capacitors generally have much lower energy densities than batteries — they can store less energy per unit volume or weight. The problem only gets worse when you try to shrink them down to micro capacitor size, for on-chip energy storage.
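The squeeze comes from two textbook formulas: a parallel-plate capacitor has capacitance C = ε₀εᵣA/d and stores energy E = ½CV², so stored energy falls in direct proportion to plate area. A minimal sketch of the scaling, using assumed illustrative dimensions and permittivity (not values from the Nature paper):

```python
# Why shrinking a capacitor hurts: E = 0.5 * C * V^2, and for a
# parallel-plate capacitor C = eps0 * eps_r * A / d.
# All dimensions and the permittivity below are assumed, illustrative values.
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def plate_capacitance(eps_r, area_m2, gap_m):
    """Capacitance of an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

def stored_energy(cap_f, volts):
    """Energy in joules held by a capacitor charged to a given voltage."""
    return 0.5 * cap_f * volts**2

# A 1 mm^2 device vs a 10 um x 10 um micro-capacitor, same dielectric stack.
big = plate_capacitance(eps_r=25, area_m2=1e-6, gap_m=10e-9)
micro = plate_capacitance(eps_r=25, area_m2=1e-10, gap_m=10e-9)

print(f"1 mm^2 device:     C = {big:.3e} F, E = {stored_energy(big, 3):.3e} J at 3 V")
print(f"100 um^2 device:   C = {micro:.3e} F, E = {stored_energy(micro, 3):.3e} J at 3 V")
# Energy scales linearly with plate area: 10,000x less area -> 10,000x less energy.
```

This is why simply miniaturising a conventional capacitor is a losing game, and why the Berkeley team had to change the dielectric itself.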

So, scientists have been toiling for a long time to come up with better micro-capacitors. In this effort, a group of eleven scientists (including three of Indian origin and one from Bangladesh) at the Lawrence Berkeley National Laboratory in California have recently reported groundbreaking success. They have achieved record-high energy densities in micro-capacitors made with engineered thin films of hafnium oxide and zirconium oxide. The findings, published in the journal Nature, pave the way for advanced on-chip energy storage and power delivery in next-generation electronics.

Engineered thin films

“We’ve shown that it’s possible to store a lot of energy in micro-capacitors made from engineered thin films, much more than what is possible with ordinary dielectrics,” said Sayeef Salahuddin, senior scientist and UC Berkeley professor who led the project, in a press release. “We’re doing this with a material that can be processed directly on top of microprocessors.”

Effectively, the reduced size of micro-capacitors limits their capacity (or ‘capacitance’) to store electricity. To understand how Berkeley Lab countered this, it helps to know the concept of ‘negative capacitance’ materials. In an ordinary dielectric, the stored charge increases as the applied voltage increases; in a negative-capacitance material, it decreases.

Normally, if you layer one dielectric material over another, the overall capacitance falls. Berkeley Lab scientists figured out that if one of the layers is made of a negative-capacitance material, the overall capacitance increases instead. They engineered thin films of HfO2-ZrO2 to achieve the negative-capacitance effect, and this hybrid dielectric raised the overall capacitance of the micro-capacitor.
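The arithmetic behind that counter-intuitive result is the familiar series-capacitor formula, 1/C = 1/C₁ + 1/C₂. A minimal sketch with assumed, arbitrary capacitance values (not the paper’s numbers):

```python
def series_capacitance(c1, c2):
    """Combined capacitance of two dielectric layers stacked in series."""
    return (c1 * c2) / (c1 + c2)

c_normal = 1.0  # arbitrary units; all values here are assumed for illustration

# Two ordinary (positive) layers: the total is smaller than either alone.
both_positive = series_capacitance(c_normal, 2.0)
print(both_positive)  # 0.666... < 1.0

# Pair the same layer with a negative-capacitance layer instead.
c_negative = -2.0
hybrid = series_capacitance(c_normal, c_negative)
print(hybrid)  # 2.0 > 1.0: the stack stores more charge per volt
```

As long as the negative layer’s magnitude exceeds the positive layer’s, the denominator shrinks faster than the numerator, so the stacked total ends up larger than the ordinary layer by itself.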

These high capacitance micro-capacitors will find applications in edge computing systems, AI processors and IoT sensors.




