New prediction breakthrough delivers results shockingly close to reality https://ibiznewswire.com/new-prediction-breakthrough-delivers-results-shockingly-close-to-reality/ Fri, 14 Nov 2025 07:38:43 +0000 https://ibiznewswire.com/new-prediction-breakthrough-delivers-results-shockingly-close-to-reality/ An international group of mathematicians led by Lehigh University statistician Taeho Kim has developed a new way to generate predictions that line up more closely with real-world results. Their method is aimed at improving forecasting across many areas of science, particularly in health research, biology and the social sciences.

The researchers call their technique the Maximum Agreement Linear Predictor, or MALP. Its central goal is to improve how well predicted values match observed ones. MALP does this by maximizing the Concordance Correlation Coefficient, or CCC. This statistical measure evaluates how closely paired values fall along the 45-degree line in a scatter plot, reflecting both precision (how tightly the points cluster) and accuracy (how close they are to that line). Traditional approaches, including the widely used least-squares method, typically try to reduce average error. Although effective in many situations, these methods can miss the mark when the main objective is to ensure strong alignment between predictions and actual values, says Kim, assistant professor of mathematics.

“Sometimes, we don’t just want our predictions to be close — we want them to have the highest agreement with the real values,” Kim explains. “The issue is, how can we define the agreement of two objects in a scientifically meaningful way? One way we can conceptualize this is how close the points are aligned with a 45 degree line on a scatter plot between the predicted value and the actual values. So, if the scatter plot of these shows a strong alignment with this 45 degree line, then we could say there is a good level of agreement between these two.”

Why Agreement Matters More Than Simple Correlation

According to Kim, people often think first of Pearson’s correlation coefficient when they hear the word agreement, since it is introduced early in statistics education and remains a fundamental tool. Pearson’s method measures the strength of a linear relationship between two variables, but it does not specifically check whether the relationship aligns with the 45-degree line. For instance, it can detect strong correlations for lines that tilt at 50 degrees or 75 degrees, as long as the data points lie close to a straight line, Kim says.

“In our case, we are specifically interested in alignment with a 45-degree line. For that, we use a different measure: the concordance correlation coefficient, introduced by Lin in 1989. This metric focuses specifically on how well the data align with a 45-degree line. What we’ve developed is a predictor designed to maximize the concordance correlation between predicted values and actual values.”
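
The concordance correlation coefficient at the heart of MALP has a simple closed form (Lin, 1989). The short NumPy sketch below illustrates that formula; it is not code from the study, and the function name and toy numbers are invented. It shows how the CCC penalizes a systematically shifted predictor that Pearson's correlation would score as perfect.

```python
import numpy as np

def concordance_correlation(y_true, y_pred):
    """Lin's (1989) concordance correlation coefficient (CCC).

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)

    It equals 1 only when the points sit exactly on the 45-degree line, so it
    penalizes both scatter (precision) and systematic shift (accuracy).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    cov = np.mean((y_true - y_true.mean()) * (y_pred - y_pred.mean()))
    return 2 * cov / (y_true.var() + y_pred.var()
                      + (y_true.mean() - y_pred.mean()) ** 2)

# A predictor that is perfectly correlated with the truth but biased upward:
actual = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
predicted = actual + 2.0
print(np.corrcoef(actual, predicted)[0, 1])        # Pearson r = 1.0
print(concordance_correlation(actual, predicted))  # CCC = 0.5: the shift is penalized
```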

Testing MALP With Eye Scans and Body Measurements

To evaluate how well MALP performs, the team ran tests using both simulated data and real measurements, including eye scans and body fat assessments. One study applied MALP to data from an ophthalmology project comparing two types of optical coherence tomography (OCT) devices: the older Stratus OCT and the newer Cirrus OCT. As medical centers move to the Cirrus system, doctors need a dependable way to translate measurements so they can compare results over time. Using high-quality images from 26 left eyes and 30 right eyes, the researchers examined how accurately MALP could predict Stratus OCT readings from Cirrus OCT measurements and compared its performance with the least-squares method. MALP produced predictions that aligned more closely with the true Stratus values, while least squares slightly outperformed MALP in reducing average error, highlighting a tradeoff between agreement and error minimization.

The team also looked at a body fat data set from 252 adults that included weight, abdomen size and other body measurements. Direct measures of body fat percentage, such as underwater weighing, are reliable but expensive, so easier measurements are often substituted. MALP was used to estimate body fat percentage and was evaluated against the least-squares method. The results were similar to the eye scan study: MALP delivered predictions that more closely matched real values, while least squares again had slightly lower average errors. This repeated pattern underscored the ongoing balance between agreement and minimizing error.

Choosing the Right Tool for the Right Task

Kim and his colleagues observed that MALP frequently provided predictions that matched the actual data more effectively than standard techniques. Even so, they note that researchers should choose between MALP and more traditional methods based on their specific priorities. When reducing overall error is the primary goal, established methods still perform well. When the emphasis is on predictions that align as closely as possible with real outcomes, MALP is often the stronger option.

The potential impact of this work reaches into many scientific areas. Improved prediction tools could benefit medicine, public health, economics and engineering. For researchers who rely on forecasting, MALP offers a promising alternative, especially when achieving close agreement with real-world results matters more than simply narrowing the average gap between predicted and observed values.

“We need to investigate further,” Kim says. “Currently, our setting is within the class of linear predictors. This set is large enough to be practically used in various fields, but it is still restricted mathematically speaking. So, we wish to extend this to the general class so that our goal is to remove the linear part and so it becomes the Maximum Agreement Predictor.”

A revolutionary DNA search engine is speeding up genetic discovery https://ibiznewswire.com/a-revolutionary-dna-search-engine-is-speeding-up-genetic-discovery/ Tue, 28 Oct 2025 17:48:23 +0000 https://ibiznewswire.com/a-revolutionary-dna-search-engine-is-speeding-up-genetic-discovery/ Rare genetic diseases can now be detected in patients, and tumor-specific mutations identified — a milestone made possible by DNA sequencing, which transformed biomedical research decades ago. In recent years, the introduction of new sequencing technologies (next-generation sequencing) has driven a wave of breakthroughs. During 2020 and 2021, for instance, these methods enabled the rapid decoding and worldwide monitoring of the SARS-CoV-2 genome.

At the same time, an increasing number of researchers are making their sequencing results publicly accessible. This has led to an explosion of data, stored in major databases such as the American SRA (Sequence Read Archive) and the European ENA (European Nucleotide Archive). Together, these archives now hold about 100 petabytes of information — roughly equivalent to the total amount of text found across the entire internet, with a single petabyte equaling one million gigabytes.

Until now, biomedical scientists needed enormous computing resources to search through these vast genetic repositories and compare them with their own data, making comprehensive searches nearly impossible. Researchers at ETH Zurich have now developed a way to overcome that limitation.

Full-text search instead of downloading entire data sets

The team created a tool called MetaGraph, which dramatically streamlines and accelerates the process. Instead of downloading entire datasets, MetaGraph enables direct searches within the raw DNA or RNA data — much like using an internet search engine. Scientists simply enter a genetic sequence of interest into a search field and, within seconds or minutes depending on the query, can see where that sequence appears in global databases.
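
As a rough illustration of how a sequence can be looked up without re-reading the underlying datasets, the toy Python sketch below indexes short substrings (k-mers) and intersects their hit lists. This is only the general idea behind sequence search engines, not MetaGraph's actual data structure, which relies on compressed, annotated graph indexes; the sample names and sequences are invented.

```python
# Toy k-mer index: answer "which samples contain this sequence?" from an index
# built once, instead of re-scanning the raw reads for every query.
from collections import defaultdict

K = 5  # k-mer length; real tools typically use k around 31

def kmers(seq, k=K):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_index(samples):
    """Map each k-mer to the set of sample IDs in which it occurs."""
    index = defaultdict(set)
    for sample_id, seq in samples.items():
        for kmer in kmers(seq):
            index[kmer].add(sample_id)
    return index

def search(index, query):
    """Return the samples containing every k-mer of the query."""
    hits = [index.get(kmer, set()) for kmer in kmers(query)]
    return set.intersection(*hits) if hits else set()

samples = {"run_A": "ACGTACGTGGA", "run_B": "TTGACGTACGA"}  # invented reads
index = build_index(samples)
print(search(index, "ACGTACG"))  # both runs contain the query: {'run_A', 'run_B'}
```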

“It’s a kind of Google for DNA,” explains Professor Gunnar Rätsch, a data scientist in ETH Zurich’s Department of Computer Science. Previously, researchers could only search for descriptive metadata and then had to download the full datasets to access raw sequences. That approach was slow, incomplete, and expensive.

According to the study authors, MetaGraph is also remarkably cost-efficient. Representing all publicly available biological sequences would require only a few computer hard drives, and large queries would cost no more than about 0.74 dollars per megabase.

Because the new DNA search engine is both fast and accurate, it could significantly accelerate research — particularly in identifying emerging pathogens or analyzing genetic factors linked to antibiotic resistance. The system may even help locate beneficial viruses that destroy harmful bacteria (bacteriophages) hidden within these massive databases.

Compression by a factor of 300

In their study published on October 8 in Nature, the ETH team demonstrated how MetaGraph works. The tool organizes and compresses genetic data using advanced mathematical graphs that structure information more efficiently, similar to how spreadsheet software arranges values. “Mathematically speaking, it is a huge matrix with millions of columns and trillions of rows,” Rätsch explains.

Creating indexes to make large datasets searchable is a familiar concept in computer science, but the ETH approach stands out for how it connects raw data with metadata while achieving an extraordinary compression rate of about 300 times. This reduction works much like summarizing a book — it removes redundancies while preserving the essential narrative and relationships, retaining all relevant information in a much smaller form.

“We are pushing the limits of what is possible in order to keep the data sets as compact as possible without losing necessary information,” says Dr. André Kahles, who, like Rätsch, is a member of the Biomedical Informatics Group at ETH Zurich. In contrast to other DNA search tools currently being researched, the ETH researchers’ approach is scalable: the larger the amount of data queried, the less additional computing power the tool requires.

Half of the data is already available now

First introduced in 2020, MetaGraph has been steadily refined. The tool is now publicly accessible for searches (https://metagraph.ethz.ch/search) and already indexes millions of DNA, RNA, and protein sequences from viruses, bacteria, fungi, plants, animals, and humans. Currently, nearly half of all available global sequence datasets are included, with the remainder expected to follow by the end of the year. Since MetaGraph is open source, it could also attract interest from pharmaceutical companies managing large volumes of internal research data.

Kahles even believes it is possible that the DNA search engine will one day be used by private individuals: “In the early days, even Google didn’t know exactly what a search engine was good for. If the rapid development in DNA sequencing continues, it may become commonplace to identify your balcony plants more precisely.”

AI restores James Webb telescope’s crystal-clear vision https://ibiznewswire.com/ai-restores-james-webb-telescopes-crystal-clear-vision/ Mon, 27 Oct 2025 13:44:37 +0000 https://ibiznewswire.com/ai-restores-james-webb-telescopes-crystal-clear-vision/ Two PhD students from Sydney have helped restore the sharp vision of the world’s most powerful space observatory without ever leaving the ground. Louis Desdoigts, now a postdoctoral researcher at Leiden University in the Netherlands, and his colleague Max Charles celebrated their achievement with tattoos of the instrument they repaired inked on their arms — an enduring reminder of their contribution to space science.

A Groundbreaking Software Fix

Researchers at the University of Sydney developed an innovative software solution that corrected blurriness in images captured by NASA’s multi-billion-dollar James Webb Space Telescope (JWST). Their breakthrough restored the full precision of one of the telescope’s key instruments, achieving what would once have required a costly astronaut repair mission.

This success builds on the JWST’s only Australian-designed component, the Aperture Masking Interferometer (AMI). Created by Professor Peter Tuthill from the University of Sydney’s School of Physics and the Sydney Institute for Astronomy, the AMI allows astronomers to capture ultra-high-resolution images of stars and exoplanets. It works by combining light from different sections of the telescope’s main mirror, a process known as interferometry. When the JWST began its scientific operations, researchers noticed that AMI’s performance was being affected by faint electronic distortions in its infrared camera detector. These distortions caused subtle image fuzziness, reminiscent of the Hubble Space Telescope’s well-known early optical flaw that had to be corrected through astronaut spacewalks.

Solving a Space Problem from Earth

Instead of attempting a physical repair, PhD students Louis Desdoigts and Max Charles, working with Professor Tuthill and Associate Professor Ben Pope of Macquarie University, devised a purely software-based calibration technique to fix the distortion from Earth.

Their system, called AMIGO (Aperture Masking Interferometry Generative Observations), uses advanced simulations and neural networks to replicate how the telescope’s optics and electronics function in space. By pinpointing an issue where electric charge slightly spreads to neighboring pixels — a phenomenon called the brighter-fatter effect — the team designed algorithms that digitally corrected the images, fully restoring AMI’s performance.
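
The correction idea can be illustrated with a toy forward model: if the way charge bleeds into neighboring pixels can be simulated, the blur it causes can be removed numerically. The sketch below shows only that general idea with an invented bleed kernel and a simple fixed-point deconvolution; it is not the AMIGO pipeline, which fits a full optical and detector model with neural networks.

```python
# Toy correction of a charge-spread ("brighter-fatter"-style) blur. The bleed
# fraction and the correction scheme are illustrative assumptions, not the
# AMIGO model of the JWST detector.
import numpy as np
from scipy.signal import convolve2d

bleed = 0.02  # assumed fraction of charge leaking to each of the 4 neighbors
kernel = np.array([[0.0,   bleed,         0.0],
                   [bleed, 1 - 4 * bleed, bleed],
                   [0.0,   bleed,         0.0]])

def forward_model(true_image):
    """Simulate the detector: convolve the true scene with the bleed kernel."""
    return convolve2d(true_image, kernel, mode="same", boundary="symm")

def correct(observed, n_iter=50):
    """Fixed-point deconvolution: iteratively remove the modeled bleed."""
    estimate = observed.copy()
    for _ in range(n_iter):
        estimate = estimate + (observed - forward_model(estimate))
    return estimate

true_image = np.zeros((11, 11))
true_image[5, 5] = 1000.0               # a single bright point source
observed = forward_model(true_image)    # blurred by the simulated charge spread
recovered = correct(observed)
print(observed[5, 5], recovered[5, 5])  # 920.0 vs. a value close to 1000
```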

“Instead of sending astronauts to bolt on new parts, they managed to fix things with code,” Professor Tuthill said. “It’s a brilliant example of how Australian innovation can make a global impact in space science.”

Sharper Views of the Universe

The results have been striking. With AMIGO in use, the James Webb Space Telescope has delivered its clearest images yet, capturing faint celestial objects in unprecedented detail. This includes direct images of a dim exoplanet and a red-brown dwarf orbiting the nearby star HD 206893, about 133 light years from Earth.

A related study led by Max Charles further demonstrated AMI’s renewed precision. Using the improved calibration, the telescope produced sharp images of a black hole jet, the fiery surface of Jupiter’s moon Io, and the dust-filled stellar winds of WR 137 — showing that JWST can now probe deeper and clearer than before.

“This work brings JWST’s vision into even sharper focus,” Dr. Desdoigts said. “It’s incredibly rewarding to see a software solution extend the telescope’s scientific reach — and to know it was possible without ever leaving the lab.”

Dr. Desdoigts has now landed a prestigious postdoctoral research position at Leiden University in the Netherlands.

Both studies have been published on the preprint server arXiv. Dr. Desdoigts’ paper has been peer-reviewed and will shortly be published in the Publications of the Astronomical Society of Australia. We have published this release to coincide with the latest round of James Webb Space Telescope General Observer, Survey and Archival Research programs.

Associate Professor Benjamin Pope, who presented on these findings at SXSW Sydney, said the research team was keen to get the new code into the hands of researchers working on JWST as soon as possible.

The quantum internet just went live on Verizon’s network https://ibiznewswire.com/the-quantum-internet-just-went-live-on-verizons-network/ Fri, 26 Sep 2025 08:26:26 +0000 https://ibiznewswire.com/the-quantum-internet-just-went-live-on-verizons-network/ In a first-of-its-kind experiment, engineers at the University of Pennsylvania brought quantum networking out of the lab and onto commercial fiber-optic cables using the same Internet Protocol (IP) that powers today’s web. Reported in Science, the work shows that fragile quantum signals can run on the same infrastructure that carries everyday online traffic. The team tested their approach on Verizon’s campus fiber-optic network.

The Penn team’s tiny “Q-chip” coordinates quantum and classical data and, crucially, speaks the same language as the modern web. That approach could pave the way for a future “quantum internet,” which scientists believe may one day be as transformative as the dawn of the online era.

Quantum signals rely on pairs of “entangled” particles, so closely linked that changing one instantly affects the other. Harnessing that property could allow quantum computers to link up and pool their processing power, enabling advances like faster, more energy-efficient AI or designing new drugs and materials beyond the reach of today’s supercomputers.

Penn’s work shows, for the first time on live commercial fiber, that a chip can not only send quantum signals but also automatically correct for noise, bundle quantum and classical data into standard internet-style packets, and route them using the same addressing system and management tools that connect everyday devices online.

“By showing an integrated chip can manage quantum signals on a live commercial network like Verizon’s, and do so using the same protocols that run the classical internet, we’ve taken a key step toward larger-scale experiments and a practical quantum internet,” says Liang Feng, Professor in Materials Science and Engineering (MSE) and in Electrical and Systems Engineering (ESE), and the Science paper’s senior author.

The Challenges of Scaling the Quantum Internet

Erwin Schrödinger, who coined the term “quantum entanglement,” famously related the concept to a cat hidden in a box. If the lid is closed, and the box also contains radioactive material, the cat could be alive or dead. One way to interpret the situation is that the cat is both alive and dead. Only opening the box confirms the cat’s state.

That paradox is roughly analogous to the unique nature of quantum particles. Once measured, they lose their unusual properties, which makes scaling a quantum network extremely difficult.

“Normal networks measure data to guide it towards the ultimate destination,” says Robert Broberg, a doctoral student in ESE and coauthor of the paper. “With purely quantum networks, you can’t do that, because measuring the particles destroys the quantum state.”

Coordinating Classical and Quantum Signals

To get around this obstacle, the team developed the “Q-Chip” (short for “Quantum-Classical Hybrid Internet by Photonics”) to coordinate “classical” signals, made of regular streams of light, and quantum particles. “The classical signal travels just ahead of the quantum signal,” says Yichi Zhang, a doctoral student in MSE and the paper’s first author. “That allows us to measure the classical signal for routing, while leaving the quantum signal intact.”

In essence, the new system works like a railway, pairing regular light locomotives with quantum cargo. “The classical ‘header’ acts like the train’s engine, while the quantum information rides behind in sealed containers,” says Zhang. “You can’t open the containers without destroying what’s inside, but the engine ensures the whole train gets where it needs to go.”

Because the classical header can be measured, the entire system can follow the same “IP” or “Internet Protocol” that governs today’s internet traffic. “By embedding quantum information in the familiar IP framework, we showed that a quantum internet could literally speak the same language as the classical one,” says Zhang. “That compatibility is key to scaling using existing infrastructure.”
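
A minimal sketch of that header/payload separation, in plain Python: the router reads only the classical, IP-style header, while the payload object stands in for the quantum state that must never be measured in transit. This illustrates the concept described above, not the Q-chip's actual framing or any real quantum-networking API; all field names are invented.

```python
# Toy framing of a hybrid packet: a readable classical header plus a payload
# stand-in for quantum cargo that must not be measured along the way.
from dataclasses import dataclass, field

@dataclass
class QuantumPayload:
    """Stand-in for an entangled photon: reading it collapses the state."""
    state: str = "entangled"
    measured: bool = False

    def measure(self):
        self.measured = True      # measurement destroys the quantum state
        self.state = "collapsed"
        return self.state

@dataclass
class HybridPacket:
    header: dict                  # classical metadata: safe to read anywhere
    payload: QuantumPayload = field(default_factory=QuantumPayload)

def route(packet, routing_table):
    """Forward using only the classical header; the payload is never measured."""
    next_hop = routing_table[packet.header["dst"]]
    assert not packet.payload.measured   # the quantum cargo stays sealed
    return next_hop

packet = HybridPacket(header={"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "quantum"})
print(route(packet, {"10.0.0.2": "node-B"}))   # node-B
```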

Adapting Quantum Technology to the Real World

One of the greatest challenges to transmitting quantum particles on commercial infrastructure is the variability of real-world transmission lines. Unlike laboratory environments, which can maintain ideal conditions, commercial networks frequently encounter changes in temperature, thanks to weather, as well as vibrations from human activities like construction and transportation, not to mention seismic activity.

To counteract this, the researchers developed an error-correction method that takes advantage of the fact that interference to the classical header will affect the quantum signal in a similar fashion. “Because we can measure the classical signal without damaging the quantum one,” says Feng, “we can infer what corrections need to be made to the quantum signal without ever measuring it, preserving the quantum state.”
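
A numbers-only sketch of that inference step, under the simplifying assumption (ours, for illustration) that the disturbance appears as a common additive drift on both signals: the receiver measures the drift on the classical header and applies the opposite correction to its handling of the quantum channel, without ever measuring the quantum signal. This is purely illustrative and not the team's actual correction scheme.

```python
# Infer a correction from the classical header and apply it "blind" to the
# quantum channel. Toy model only: one shared additive drift, no real photonics.
import random

fiber_drift = random.uniform(-0.3, 0.3)   # unknown disturbance picked up in the fiber

# The classical header carries a known reference value, so the receiver can
# measure exactly how much the fiber shifted it.
header_sent = 0.0
header_received = header_sent + fiber_drift
measured_drift = header_received - header_sent          # safe to measure: classical light

# The quantum signal suffers the same drift. Its "true" value is tracked here
# only to verify the toy model; the receiver never reads it, it just applies
# the opposite of the measured drift in hardware.
quantum_value_true = 1.234
quantum_value_after_fiber = quantum_value_true + fiber_drift
compensated = quantum_value_after_fiber - measured_drift

print(abs(compensated - quantum_value_true) < 1e-12)    # True: drift removed blind
```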

In testing, the system maintained transmission fidelities above 97%, showing that it could overcome the noise and instability that usually destroy quantum signals outside the lab. And because the chip is made of silicon and fabricated using established techniques, it could be mass produced, making the new approach easy to scale.

“Our network has just one server and one node, connecting two buildings, with about a kilometer of fiber-optic cable installed by Verizon between them,” says Feng. “But all you need to do to expand the network is fabricate more chips and connect them to Philadelphia’s existing fiber-optic cables.”

The Future of the Quantum Internet

The main barrier to scaling quantum networks beyond a metro area is that quantum signals cannot yet be amplified without destroying their entanglement.

While some teams have shown that “quantum keys,” special codes for ultra-secure communication, can travel long distances over ordinary fiber, those systems use weak coherent light to generate random numbers that cannot be copied, a technique that is highly effective for security applications but not sufficient to link actual quantum processors.

Overcoming this challenge will require new devices, but the Penn study provides an important early step: showing how a chip can run quantum signals over existing commercial fiber using internet-style packet routing, dynamic switching and on-chip error mitigation that work with the same protocols that manage today’s networks.

“This feels like the early days of the classical internet in the 1990s, when universities first connected their networks,” says Broberg. “That opened the door to transformations no one could have predicted. A quantum internet has the same potential.”

This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and was supported by the Gordon and Betty Moore Foundation (GBMF12960 and DOI 10.37807), Office of Naval Research (N00014-23-1-2882), National Science Foundation (DMR-2323468), Olga and Alberico Pompa endowed professorship, and PSC-CUNY award (ENHC-54-93).

Additional co-authors include Alan Zhu, Gushi Li and Jonathan Smith of the University of Pennsylvania, and Li Ge of the City University of New York.

Scientists just made atoms talk to each other inside silicon chips https://ibiznewswire.com/scientists-just-made-atoms-talk-to-each-other-inside-silicon-chips/ Sun, 21 Sep 2025 07:16:39 +0000 https://ibiznewswire.com/scientists-just-made-atoms-talk-to-each-other-inside-silicon-chips/ UNSW engineers have made a significant advance in quantum computing: they created ‘quantum entangled states’ – where two separate particles become so deeply linked they no longer behave independently – using the spins of two atomic nuclei. Such states of entanglement are the key resource that gives quantum computers their edge over conventional ones.

The research was published on Sept. 18 in the journal Science, and is an important step towards building large-scale quantum computers – one of the most exciting scientific and technological challenges of the 21st century.

Lead author Dr Holly Stemp says the achievement unlocks the potential to build the future microchips needed for quantum computing using existing technology and manufacturing processes.

“We succeeded in making the cleanest, most isolated quantum objects talk to each other, at the scale at which standard silicon electronic devices are currently fabricated,” she says.

The challenge facing quantum computer engineers has been to balance two opposing needs: shielding the computing elements from external interference and noise, while still enabling them to interact to perform meaningful computations. This is why there are so many different types of hardware still in the race to be the first operating quantum computer: some are very good for performing fast operations, but suffer from noise; others are well shielded from noise, but difficult to operate and scale up.

The UNSW team has invested in a platform that – until now – could be placed in the second camp. They have used the nuclear spin of phosphorus atoms, implanted in a silicon chip, to encode quantum information.

“The spin of an atomic nucleus is the cleanest, most isolated quantum object one can find in the solid state,” says Scientia Professor Andrea Morello, UNSW School of Electrical Engineering & Telecommunications.

“Over the last 15 years, our group has pioneered all the breakthroughs that made this technology a real contender in the quantum computing race. We already demonstrated that we could hold quantum information for over 30 seconds – an eternity, in the quantum world – and perform quantum logic operations with less than 1% errors.

“We were the first in the world to achieve this in a silicon device, but it all came at a price: the same isolation that makes atomic nuclei so clean, makes it hard to connect them together in a large-scale quantum processor.”

Until now, the only way to operate multiple atomic nuclei was for them to be placed very close together inside a solid, and to be surrounded by one and the same electron.

“Most people think of an electron as the tiniest subatomic particle, but quantum physics tells us that it has the ability to ‘spread out’ in space, so that it can interact with multiple atomic nuclei,” says Dr Holly Stemp, who conducted this research at UNSW and is now a postdoctoral researcher at MIT in Boston.

“Even so, the range over which the electron can spread is quite limited. Moreover, adding more nuclei to the same electron makes it very challenging to control each nucleus individually.”

Making atomic nuclei talk through electronic ‘telephones’

“By way of metaphor one could say that, until now, nuclei were like people placed in a sound-proof room,” Dr Stemp says.

“They can talk to each other as long as they are all in the same room, and the conversations are really clear. But they can’t hear anything from the outside, and there’s only so many people who can fit inside the room. This mode of conversation doesn’t ‘scale’.

“With this breakthrough, it’s as if we gave people telephones to communicate to other rooms. All the rooms are still nice and quiet on the inside, but now we can have conversations between many more people, even if they are far away.”

The ‘telephones’ are, in fact, electrons. Mark van Blankenstein, another author on the paper, explains what’s really going on at the sub-atomic level.

“By their ability to spread out in space, two electrons can ‘touch’ each other at quite some distance. And if each electron is directly coupled to an atomic nucleus, the nuclei can communicate through that.”

So how far apart were the nuclei involved in the experiments?

“The distance between our nuclei was about 20 nanometers – one thousandth of the width of a human hair,” says Dr Stemp.

“That doesn’t sound like much, but consider this: if we scaled each nucleus to the size of a person, the distance between the nuclei would be about the same as that between Sydney and Boston!”

She adds that 20 nanometers is the scale at which modern silicon computer chips are routinely manufactured to work in personal computers and mobile phones.

“You have billions of silicon transistors in your pocket or in your bag right now, each one about 20 nanometers in size. This is our real technological breakthrough: getting our cleanest and most isolated quantum objects talking to each other at the same scale as existing electronic devices. This means we can adapt the manufacturing processes developed by the trillion-dollar semiconductor industry, to the construction of quantum computers based on the spins of atomic nuclei.”

A scalable way forward

Despite the exotic nature of the experiments, the researchers say these devices remain fundamentally compatible with the way all current computer chips are built. The phosphorus atoms were introduced in the chip by the team of Professor David Jamieson at the University of Melbourne, using an ultra-pure silicon slab supplied by Professor Kohei Itoh at Keio University in Japan.

By removing the need for the atomic nuclei to be attached to the same electron, the UNSW team has swept aside the biggest roadblock to the scale-up of silicon quantum computers based on atomic nuclei.

“Our method is remarkably robust and scalable. Here we just used two electrons, but in the future we can even add more electrons, and force them in an elongated shape, to spread out the nuclei even further,” Prof. Morello says.

“Electrons are easy to move around and to ‘massage’ into shape, which means the interactions can be switched on and off quickly and precisely. That’s exactly what is needed for a scalable quantum computer.”

A star torn apart by a black hole lit up the Universe twice https://ibiznewswire.com/a-star-torn-apart-by-a-black-hole-lit-up-the-universe-twice/ Fri, 22 Aug 2025 14:42:45 +0000 https://ibiznewswire.com/a-star-torn-apart-by-a-black-hole-lit-up-the-universe-twice/
  • Astronomers used a UC Santa Cruz-led AI system to detect a rare supernova, SN 2023zkd, within hours of its explosion, allowing rapid follow-up observations before the fleeting event faded.
  • Evidence suggests the blast was triggered by a massive star’s catastrophic encounter with a black hole companion, either partially swallowing the star or tearing it apart before it could explode on its own.
  • Researchers say the same real-time anomaly-detection AI used here could one day be applied to fields like medical diagnostics, national security, and financial-fraud prevention.

The explosion of a massive star locked in a deadly orbit with a black hole has been discovered with the help of artificial intelligence used by an astronomy collaboration led by the University of California, Santa Cruz, that hunts for stars shortly after they explode as supernovae.

The blast, named SN 2023zkd, was first discovered in July 2023 with the help of a new AI algorithm designed to scan for unusual explosions in real time. The early alert allowed astronomers to begin follow-up observations immediately — an essential step in capturing the full story of the explosion.

By the time the explosion was over, it had been observed by a large set of telescopes, both on the ground and from space. That included two telescopes at the Haleakalā Observatory in Hawaiʻi used by the Young Supernova Experiment (YSE) based at UC Santa Cruz.

“Something exactly like this supernova has not been seen before, so it might be very rare,” said Ryan Foley, associate professor of astronomy and astrophysics at UC Santa Cruz. “Humans are reasonably good at finding things that ‘aren’t like the others,’ but the algorithm can flag things earlier than a human may notice. This is critical for these time-sensitive observations.”

Time-bound astrophysics

Foley’s team runs YSE, which surveys an area of the sky equivalent to 6,000 times the full moon (4% of the night sky) every three days and has discovered thousands of new cosmic explosions and other astrophysical transients — dozens of them just days or hours after explosion.

The scientists behind the discovery of SN 2023zkd said the most likely interpretation is that a collision between the massive star and the black hole was inevitable. As energy was lost from the orbit, their separation decreased until the supernova was triggered by the star’s gravitational stress as it was partially swallowed by the black hole.

The discovery was published on August 13 in the Astrophysical Journal. “Our analysis shows that the blast was sparked by a catastrophic encounter with a black hole companion, and is the strongest evidence to date that such close interactions can actually detonate a star,” said lead author Alexander Gagliano, a fellow at the NSF Institute for Artificial Intelligence and Fundamental Interactions.

An alternative interpretation considered by the team is that the black hole completely tore the star apart before it could explode on its own. In that case, the black hole quickly pulled in the star’s debris and bright light was generated when the debris crashed into the gas surrounding it. In both cases, a single, heavier black hole is left behind.

An unusual, gradual glow up

Located about 730 million light-years from Earth, SN 2023zkd initially looked like a typical supernova, with a single burst of light. But as the scientists tracked its decline over several months, it did something unexpected: It brightened again. To understand this unusual behavior, the scientists analyzed archival data, which showed something even more unusual: The system had been slowly brightening for more than four years before the explosion. That kind of long-term activity before the explosion is rarely seen in supernovae.

Detailed analysis done in part at UC Santa Cruz revealed that the explosion’s light was shaped by material the star had shed in the years before it died. The early brightening came from the supernova’s blast wave hitting low-density gas. The second, delayed peak was caused by a slower but sustained collision with a thick, disk-like cloud. This structure — and the star’s erratic pre-explosion behavior — suggest that the dying star was under extreme gravitational stress, likely from a nearby, compact companion such as a black hole.

Foley said he and Gagliano had several conversations about the spectra, leading to the eventual interpretation of the binary system with a black hole. Gagliano led the charge in that area, while Foley played the role of “spectroscopy expert” and served as a sounding board — and often, skeptic.

At first, the idea that the black hole triggered the supernova almost sounded like science fiction, Foley recalled. So it was important to make sure all of the observations lined up with this explanation, and Foley said Gagliano methodically demonstrated that they did.

“Our team also built the software platform that we use to consolidate data and manage observations. The AI tools used for this study are integrated into this software ecosystem,” Foley said. “Similarly, our research collaboration brings together the variety of expertise necessary to make these discoveries.”

Co-author Enrico Ramirez-Ruiz, also a professor of astronomy and astrophysics, leads the theory team at UC Santa Cruz. Fellow co-author V. Ashley Villar, an assistant professor of astronomy in the Harvard Faculty of Arts and Sciences, provided AI expertise. The team behind this discovery was led by the Center for Astrophysics | Harvard & Smithsonian and the Massachusetts Institute of Technology as part of YSE.

This work was funded by the National Science Foundation, NASA, the Moore Foundation, and the Packard Foundation. Several students, including Gagliano, are or were NSF graduate research fellows, Foley said.

Societal costs of uncertainty

But currently, Foley said the funding situation and outlook for continued support is very uncertain, forcing the collaboration to take fewer risks, resulting in decreased science output overall. “The uncertainty means we are shrinking,” he said, “reducing the number of students who are admitted to our graduate program — many of them being forced out of the field or to take jobs outside the U.S.”

Although predicting the path this AI approach will take is difficult, Foley said this research is cutting edge. “You can easily imagine similar techniques being used to screen for diseases, focus attention for terrorist attacks, treat mental health issues early, and detect financial fraud,” he explained. “Anywhere real-time detection of anomalies could be useful, these techniques will likely eventually play a role.”

AI finds hidden safe zones inside a fusion reactor https://ibiznewswire.com/ai-finds-hidden-safe-zones-inside-a-fusion-reactor/ Thu, 14 Aug 2025 02:54:50 +0000 https://ibiznewswire.com/ai-finds-hidden-safe-zones-inside-a-fusion-reactor/ A public-private partnership between Commonwealth Fusion Systems (CFS), the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Oak Ridge National Laboratory has led to a new artificial intelligence (AI) approach that is faster at finding what’s known as “magnetic shadows” in a fusion vessel: safe havens protected from the intense heat of the plasma.

Known as HEAT-ML, the new AI could lay the foundation for software that significantly speeds up the design of future fusion systems. Such software could also enable good decision-making during fusion operations by adjusting the plasma so that potential problems are thwarted before they start.

“This research shows that you can take an existing code and create an AI surrogate that will speed up your ability to get useful answers, and it opens up interesting avenues in terms of control and scenario planning,” said Michael Churchill, co-author of a paper in Fusion Engineering and Design about HEAT-ML and head of digital engineering at PPPL.

Fusion, the reaction that fuels the sun and stars, could provide potentially limitless amounts of electricity on Earth. To harness it, researchers need to overcome key scientific and engineering challenges. One such challenge is handling the intense heat coming from the plasma, which reaches temperatures hotter than the sun’s core when confined using magnetic fields in a fusion vessel known as a tokamak. Speeding up the calculations that predict where this heat will hit and what parts of the tokamak will be safe in the shadows of other parts is key to bringing fusion power to the grid.

“The plasma-facing components of the tokamak might come in contact with the plasma, which is very hot and can melt or damage these elements,” said Doménica Corona Rivera, an associate research physicist at PPPL and first author on the paper on HEAT-ML. “The worst thing that can happen is that you would have to stop operations.”

PPPL amplifies its impact through public-private partnership

HEAT-ML was specifically made to simulate a small part of SPARC: a tokamak currently under construction by CFS. The Massachusetts company hopes to demonstrate net energy gain by 2027, meaning SPARC would generate more energy than it consumes.

Simulating how heat impacts SPARC’s interior is central to this goal and a big computing challenge. To break down the challenge into something manageable, the team focused on a section of SPARC where the most intense plasma heat exhaust intersects with the material wall. This particular part of the tokamak, representing 15 tiles near the bottom of the machine, is the part of the machine’s exhaust system that will be subjected to the most heat.

To create such a simulation, researchers generate what they call shadow masks. Shadow masks are 3D maps of magnetic shadows, which are specific areas on the surfaces of a fusion system’s internal components that are shielded from direct heat. The location of these shadows depends on the shape of the parts inside the tokamak and how they interact with the magnetic field lines that confine the plasma.

Creating simulations to optimize the way fusion systems operate

Originally, an open-source computer program called HEAT, or the Heat flux Engineering Analysis Toolkit, calculated these shadow masks. HEAT was created by CFS Manager Tom Looby during his doctoral work with Matt Reinke, now leader of the SPARC Diagnostic Team, and was first applied on the exhaust system for PPPL’s National Spherical Torus Experiment-Upgrade machine.

HEAT traces magnetic field lines from the surface of a component to see if the line intersects other internal parts of the tokamak. If it does, that region is marked as “shadowed.” However, tracing these lines and finding where they intersect the detailed 3D machine geometry was a significant bottleneck in the process. It could take around 30 minutes for a single simulation and even longer for some complex geometries.

HEAT-ML overcomes this bottleneck, accelerating the calculations to a few milliseconds. It uses a deep neural network: a type of AI that has hidden layers of mathematical operations and parameters that it applies to the data to learn how to do a specific task by looking for patterns. HEAT-ML’s deep neural network was trained using a database of approximately 1,000 SPARC simulations from HEAT to learn how to calculate shadow masks.
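
Stripped to its essentials, this is the surrogate-model pattern: train a network on precomputed input/output pairs from the slow code, then answer new cases almost instantly. The sketch below shows only that generic pattern, with an invented stand-in for the field-line trace and made-up parameters; it is not the published HEAT-ML architecture.

```python
# Generic surrogate: learn the output of a slow calculation from precomputed
# examples. The "slow" function is an invented stand-in for the field-line
# trace, not HEAT, and the network is not the HEAT-ML architecture.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_CELLS = 64     # surface cells whose shadowed/exposed state we want to predict
N_CASES = 1000   # stand-in for the ~1,000 precomputed HEAT simulations

def slow_shadow_calculation(params):
    """Invented rule: shadow the first `threshold` cells, with the threshold
    set by the first parameter (think of it as a crude field-line pitch)."""
    threshold = int(N_CELLS * (0.3 + 0.4 * params[0]))
    mask = np.zeros(N_CELLS, dtype=int)
    mask[:threshold] = 1
    return mask

X = rng.uniform(size=(N_CASES, 3))                     # plasma/geometry parameters
Y = np.array([slow_shadow_calculation(p) for p in X])  # precomputed shadow masks

surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X[:900], Y[:900])                        # train on most of the database
accuracy = (surrogate.predict(X[900:]) == Y[900:]).mean()
print(f"held-out per-cell accuracy: {accuracy:.3f}")   # new masks now cost milliseconds
```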

HEAT-ML is currently tied to the specific design of SPARC’s exhaust system; it only works for that small part of that particular tokamak and is an optional setting in the HEAT code. However, the research team hopes to expand its capabilities to generalize the calculation of shadow masks for exhaust systems of any shape and size, as well as the rest of the plasma-facing components inside a tokamak.

DOE supported this work under contracts DE-AC02-09CH11466 and DE-AC05-00OR22725, and it also received support from CFS.

Trapped by moon dust: The physics error that fooled NASA for years https://ibiznewswire.com/trapped-by-moon-dust-the-physics-error-that-fooled-nasa-for-years/ Mon, 28 Jul 2025 17:30:30 +0000 https://ibiznewswire.com/trapped-by-moon-dust-the-physics-error-that-fooled-nasa-for-years/ When a multimillion-dollar extraterrestrial vehicle gets stuck in soft sand or gravel — as did the Mars rover Spirit in 2009 — Earth-based engineers take over like a virtual tow truck, issuing a series of commands that move its wheels or reverse its course in a delicate, time-consuming effort to free it and continue its exploratory mission.

While Spirit remained permanently stuck, in the future, better terrain testing right here on terra firma could help avert these celestial crises.

Using computer simulations, University of Wisconsin-Madison mechanical engineers have uncovered a flaw in how rovers are tested on Earth. That error leads to overly optimistic conclusions about how rovers will behave once they’re deployed on extraterrestrial missions.

An important element in preparing for these missions is an accurate understanding of how a rover will traverse extraterrestrial surfaces in low gravity to prevent it from getting stuck in soft terrain or rocky areas.

On the moon, the gravitational pull is about one-sixth of what it is on Earth. For decades, researchers testing rovers have accounted for that difference in gravity by creating a prototype that is a sixth of the mass of the actual rover. They test these lightweight rovers in deserts, observing how they move across sand to gain insights into how the actual rover would perform on the moon.

It turns out, however, that this standard testing approach overlooked a seemingly inconsequential detail: the pull of Earth’s gravity on the desert sand.

Through simulation, Dan Negrut, a professor of mechanical engineering at UW-Madison, and his collaborators determined that Earth’s gravity pulls down on sand much more strongly than the gravity on Mars or the moon does. On Earth, sand is more rigid and supportive — reducing the likelihood it will shift under a vehicle’s wheels. But the moon’s surface is “fluffier” and therefore shifts more easily — meaning rovers have less traction, which can hinder their mobility.
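
The core of the argument can be put in back-of-the-envelope form. In the sketch below, soil strength is crudely taken to scale with local gravity (frictional granular strength grows with the weight of the grains confining each other), while wheel load scales with rover mass times gravity. The numbers and the scaling rule are illustrative simplifications of ours, not the paper's Project Chrono simulations.

```python
# Back-of-the-envelope version of the testing flaw. Assumption (ours, for
# illustration only): frictional soil strength is proportional to local gravity.
G_EARTH, G_MOON = 9.81, 1.62        # m/s^2

rover_mass = 450.0                  # kg, illustrative full-size rover
prototype_mass = rover_mass / 6.0   # the classic one-sixth-mass test article

def wheel_load(mass, g):
    return mass * g                 # N, weight pressing the wheels into the soil

def soil_strength_proxy(g):
    return g                        # crude proxy: strength proportional to gravity

for label, mass, g in [("Moon, full-mass rover", rover_mass, G_MOON),
                       ("Earth, one-sixth-mass prototype", prototype_mass, G_EARTH)]:
    ratio = wheel_load(mass, g) / soil_strength_proxy(g)
    print(f"{label}: load-to-strength ratio ~ {ratio:.0f}")
# The Earth prototype loads its (stiffer) desert sand six times less, relative
# to the soil's strength, than the real rover will load lunar regolith, so
# Earth tests come out overly optimistic.
```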

“In retrospect, the idea is simple: We need to consider not only the gravitational pull on the rover but also the effect of gravity on the sand to get a better picture of how the rover will perform on the moon,” Negrut says. “Our findings underscore the value of using physics-based simulation to analyze rover mobility on granular soil.”

The team recently detailed its findings in the Journal of Field Robotics.

The researchers’ discovery resulted from their work on a NASA-funded project to simulate the VIPER rover, which had been planned for a lunar mission. The team leveraged Project Chrono, an open-source physics simulation engine developed at UW-Madison in collaboration with scientists from Italy. This software allows researchers to quickly and accurately model complex mechanical systems — like full-size rovers operating on “squishy” sand or soil surfaces.

While simulating the VIPER rover, they noticed discrepancies between the Earth-based test results and their simulations of the rover’s mobility on the moon. Digging deeper with Chrono simulations revealed the testing flaw.

The benefits of this research also extend well beyond NASA and space travel. For applications on Earth, Chrono has been used by hundreds of organizations to better understand complex mechanical systems — from precision mechanical watches to U.S. Army trucks and tanks operating in off-road conditions.

“It’s rewarding that our research is highly relevant in helping to solve many real-world engineering challenges,” Negrut says. “I’m proud of what we’ve accomplished. It’s very difficult as a university lab to put out industrial-strength software that is used by NASA.”

Chrono is free and publicly available for unfettered use worldwide, but the UW-Madison team puts in significant ongoing work to develop and maintain the software and provide user support.

“It’s very unusual in academia to produce a software product at this level,” Negrut says. “There are certain types of applications relevant to NASA and planetary exploration where our simulator can solve problems that no other tool can solve, including simulators from huge tech companies, and that’s exciting.”

Since Chrono is open source, Negrut and his team are focused on continually innovating and enhancing the software to stay relevant.

“All our ideas are in the public domain and the competition can adopt them quickly, which drives us to keep moving forward,” he says. “We have been fortunate over the last decade to receive support from the National Science Foundation, U.S. Army Research Office and NASA. This funding has really made a difference, since we do not charge anyone for the use of our software.”

Co-authors on the paper include Wei Hu of Shanghai Jiao Tong University, Pei Li of UW-Madison, Arno Rogg and Alexander Schepelmann of NASA, Samuel Chandler of ProtoInnovations, LLC, and Ken Kamrin of MIT.

This work was supported by NASA STTR (80NSSC20C0252), the National Science Foundation (OAC2209791) and the U.S. Army Research Office (W911NF1910431 and W911NF1810476).

This AI-powered lab runs itself—and discovers new materials 10x faster https://ibiznewswire.com/this-ai-powered-lab-runs-itself-and-discovers-new-materials-10x-faster/ Mon, 14 Jul 2025 16:47:25 +0000 https://ibiznewswire.com/this-ai-powered-lab-runs-itself-and-discovers-new-materials-10x-faster/ Researchers have demonstrated a new technique that allows “self-driving laboratories” to collect at least 10 times more data than previous techniques at record speed. The advance – which is published in Nature Chemical Engineering – dramatically expedites materials discovery research, while slashing costs and environmental impact.

Self-driving laboratories are robotic platforms that combine machine learning and automation with chemical and materials sciences to discover materials more quickly. The automated process allows machine-learning algorithms to make use of data from each experiment when predicting which experiment to conduct next to achieve whatever goal was programmed into the system.

“Imagine if scientists could discover breakthrough materials for clean energy, new electronics, or sustainable chemicals in days instead of years, using just a fraction of the materials and generating far less waste than the status quo,” says Milad Abolhasani, corresponding author of a paper on the work and ALCOA Professor of Chemical and Biomolecular Engineering at North Carolina State University. “This work brings that future one step closer.”

Until now, self-driving labs utilizing continuous flow reactors have relied on steady-state flow experiments. In these experiments, different precursors are mixed together and react while flowing continuously through a microchannel. The resulting product is then characterized by a suite of sensors once the reaction is complete.

“This established approach to self-driving labs has had a dramatic impact on materials discovery,” Abolhasani says. “It allows us to identify promising material candidates for specific applications in a few months or weeks, rather than years, while reducing both costs and the environmental impact of the work. However, there was still room for improvement.”

Steady-state flow experiments require the self-driving lab to wait for the chemical reaction to take place before characterizing the resulting material. That means the system sits idle while the reactions take place, which can take up to an hour per experiment.

“We’ve now created a self-driving lab that makes use of dynamic flow experiments, where chemical mixtures are continuously varied through the system and are monitored in real time,” Abolhasani says. “In other words, rather than running separate samples through the system and testing them one at a time after reaching steady-state, we’ve created a system that essentially never stops running. The sample is moving continuously through the system and, because the system never stops characterizing the sample, we can capture data on what is taking place in the sample every half second.

“For example, instead of having one data point about what the experiment produces after 10 seconds of reaction time, we have 20 data points – one after 0.5 seconds of reaction time, one after 1 second of reaction time, and so on. It’s like switching from a single snapshot to a full movie of the reaction as it happens. Instead of waiting around for each experiment to finish, our system is always running, always learning.”

Collecting this much additional data has a big impact on the performance of the self-driving lab.

“The most important part of any self-driving lab is the machine-learning algorithm the system uses to predict which experiment it should conduct next,” Abolhasani says. “This streaming-data approach allows the self-driving lab’s machine-learning brain to make smarter, faster decisions, honing in on optimal materials and processes in a fraction of the time. That’s because the more high-quality experimental data the algorithm receives, the more accurate its predictions become, and the faster it can solve a problem. This has the added benefit of reducing the amount of chemicals needed to arrive at a solution.”
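
For readers unfamiliar with how such a decision loop works, here is a generic Bayesian-optimization-style sketch: fit a model to the experiments run so far, pick the next condition where the predicted outcome plus its uncertainty is highest, run it, and repeat. It is a toy stand-in, not the NC State team's algorithm, and the simulated "reactor" is an invented yield curve.

```python
# Generic closed-loop experiment selection (toy). The reactor, yield curve and
# acquisition rule are illustrative assumptions, not the published system.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def run_experiment(reaction_time):
    """Stand-in for the flow reactor: an unknown yield curve plus sensor noise."""
    return float(np.exp(-(reaction_time - 6.0) ** 2 / 8.0) + rng.normal(0, 0.02))

candidates = np.linspace(0.5, 10.0, 20).reshape(-1, 1)   # reaction times we could try
X, y = [[1.0]], [run_experiment(1.0)]                    # a single seed experiment

for _ in range(8):                                       # the autonomous loop
    model = GaussianProcessRegressor(alpha=1e-3)         # small noise term for stability
    model.fit(X, y)
    mean, std = model.predict(candidates, return_std=True)
    next_x = float(candidates[int(np.argmax(mean + 1.5 * std))][0])  # explore/exploit
    X.append([next_x])
    y.append(run_experiment(next_x))

best = X[int(np.argmax(y))][0]
print(f"best condition found: {best:.2f} (true optimum of the toy curve is 6.0)")
```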

In this work, the researchers found the self-driving lab that incorporated a dynamic flow system generated at least 10 times more data than self-driving labs that used steady-state flow experiments over the same period of time, and was able to identify the best material candidates on the very first try after training.

“This breakthrough isn’t just about speed,” Abolhasani says. “By reducing the number of experiments needed, the system dramatically cuts down on chemical use and waste, advancing more sustainable research practices.

“The future of materials discovery is not just about how fast we can go, it’s also about how responsibly we get there,” Abolhasani says. “Our approach means fewer chemicals, less waste, and faster solutions for society’s toughest challenges.”

The paper, “Flow-Driven Data Intensification to Accelerate Autonomous Materials Discovery,” will be published July 14 in the journal Nature Chemical Engineering. Co-lead authors of the paper are Fernando Delgado-Licona, a Ph.D. student at NC State; Abdulrahman Alsaiari, a master’s student at NC State; and Hannah Dickerson, a former undergraduate at NC State. The paper was co-authored by Philip Klem, an undergraduate at NC State; Arup Ghorai, a former postdoctoral researcher at NC State; Richard Canty and Jeffrey Bennett, current postdoctoral researchers at NC State; Pragyan Jha, Nikolai Mukhin, Junbin Li and Sina Sadeghi, Ph.D. students at NC State; Fazel Bateni, a former Ph.D. student at NC State; and Enrique A. López-Guajardo of Tecnologico de Monterrey.

This work was done with support from the National Science Foundation under grants 1940959, 2315996 and 2420490; and from the University of North Carolina Research Opportunities Initiative program.

Dementia risk prediction: Zero-minute assessment at less than a dollar cost https://ibiznewswire.com/dementia-risk-prediction-zero-minute-assessment-at-less-than-a-dollar-cost/ Thu, 19 Jun 2025 10:55:46 +0000 https://ibiznewswire.com/dementia-risk-prediction-zero-minute-assessment-at-less-than-a-dollar-cost/ A new study by researchers from Regenstrief Institute, Indiana University and Purdue University presents their low cost, scalable methodology for the early identification of individuals at risk of developing dementia. While the condition remains incurable, there are a number of common risk factors that, if targeted and addressed, can potentially reduce the odds of developing dementia or slow the pace of cognitive decline.

“Detection of dementia risk is important for appropriate care management and planning,” said study senior author Malaz Boustani, M.D., MPH., of Regenstrief Institute and IU School of Medicine. “We wanted to solve the problem of identifying individuals early on who are likely to develop dementia with a solution that is both scalable and cost effective for the healthcare system.

“To do this, we use existing information — passive data — already in the patient’s medical notes for what we call zero-minute assessment at less than a dollar cost. Decision-focused content selection methodology is used to develop an individualized dementia risk prediction or to demonstrate evidence of mild cognitive impairment.”

This technique utilizes machine learning to select a subset of phrases or sentences from the medical notes in a patient’s electronic health record (EHR) written by a doctor, a nurse, a social worker or other provider that are relevant to the target outcome over a defined observation period. Medical notes are narratives in an EHR that describe the health of the patient in free text format.

Information selected for extraction from the medical notes to predict dementia risk might include clinician comments, patient remarks, blood pressure or cholesterol values over time, observations of mental status by a family member or a medication history — including prescription and over-the-counter drugs as well as “natural” remedies and supplements.

Predicting dementia risk helps the patient, the family and healthcare providers access resources such as support groups and the Centers for Medicare and Medicaid Services GUIDE model program, which supports keeping individuals in their homes longer. It could also encourage clinicians to deprescribe medications commonly taken by older adults but known to negatively affect the brain, and to discuss with the patient over-the-counter drugs with similar characteristics. Knowing dementia risk might also prompt physicians to consider newly FDA-approved amyloid-lowering therapies, which alter the trajectory of Alzheimer’s disease.

“Our methodology combines both supervised and unsupervised machine learning in order to extract sentences which are relevant to dementia from the large amount of medical notes readily available for each patient,” said study co-author Zina Ben Miled, PhD, M.S., a Regenstrief Institute affiliate scientist and a former Purdue University in Indianapolis faculty member. “In addition to improving predictive accuracy, this allows the health provider to quickly confirm cognitive impairment by reviewing the specific text used to drive the risk assessment by our language model.”
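
As a toy illustration of supervised sentence selection of this kind, the sketch below scores each sentence of a completely invented clinical note for relevance using TF-IDF features and logistic regression, then ranks them so only the top-scoring text feeds a downstream risk model. It shows the generic pattern, not the Regenstrief/IU pipeline or its language model; every sentence and label here is made up.

```python
# Toy "content selection": score note sentences for relevance to the outcome,
# keep the best ones. All text and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented training set: 1 = sentence judged relevant to cognitive decline.
sentences = [
    "Patient's daughter reports increasing forgetfulness over the past year.",
    "Blood pressure 128/82, heart rate 74.",
    "Repeats questions during the visit; unsure of today's date.",
    "Influenza vaccine administered in left deltoid.",
    "Started over-the-counter diphenhydramine for sleep.",
    "No complaints about the surgical incision site.",
]
labels = [1, 0, 1, 0, 1, 0]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(sentences)
scorer = LogisticRegression().fit(features, labels)

# New (invented) note: score each sentence and rank by relevance.
note = [
    "Family reports increasing forgetfulness; patient got lost driving to a familiar store.",
    "Cholesterol panel within normal limits.",
]
scores = scorer.predict_proba(vectorizer.transform(note))[:, 1]
for sentence, score in sorted(zip(note, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {sentence}")   # top-scoring sentences go to the risk model
```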

“Regenstrief Institute and Indiana University investigators have been pioneers in demonstrating the utility of electronic health records since the early 1970s. Given the enormous amount of effort by both clinicians and patients that goes into capturing EHR data, the goal must be to seek maximal clinical value from these data even beyond their central role in medical care,” said study co-author Paul Dexter, M.D., of Regenstrief and IU School of Medicine. “By applying machine learning methods to identify patients at high risk of dementia in the future, this study provides an excellent and innovative example of the clinical value that is achievable from EHRs. The early identification of dementia will prove increasingly vital particularly as new treatments are developed.”

While the ultimate beneficiaries of the use of the new technique are patients and caregivers, providing zero-minute assessment at less than a dollar cost has a clear upside for primary care clinicians who are overburdened and often lack the time and training needed to administer specialized cognitive tests.

The study authors’ 5-year clinical trial of their risk prediction tool, being conducted in Indianapolis and Miami, is in its final year. Lessons learned from this trial will enable them to advance the utility of the dementia risk prediction framework in primary care practices. The researchers plan future work on the fusion of medical notes with other information contained in electronic health records as well as environmental data.
