Unlocking New Levels of Fusion Reactor Performance with Princeton's AI

06 June 2024

Researchers at Princeton University and the Princeton Plasma Physics Laboratory have used machine learning to control energy bursts at the plasma's edge in fusion reactors, boosting performance without damaging the device.

The team's approach uses machine learning to manage plasma edge surges in fusion reactors, improving performance without triggering instabilities and dramatically cutting the computation time needed for real-time adjustments to the system.

Maintaining a sustained fusion reaction is an intricate, precise balancing act. It requires juggling many factors to produce a high-performing plasma that is dense enough, hot enough, and confined long enough for fusion to take place.

While researchers push for optimal plasma performance, they must also contend with surges of energy at the plasma's edge. These uncontrolled energy bursts degrade overall performance and, over time, damage the reactor's plasma-facing components.

However, engineers at Princeton and the U.S. Department of Energy's Princeton Plasma Physics Laboratory have successfully applied machine learning methods to suppress these harmful edge instabilities without compromising plasma performance.

Using real-time optimization of the system's suppression response, the team achieved the highest fusion performance without edge bursts at two different fusion facilities. Their findings, published May 11 in Nature Communications, highlight the immense potential of machine learning and artificial intelligence to control plasma instabilities in real time.

According to research leader Egemen Kolemen, the approach maintained a high-performing plasma free of instabilities at two different facilities, demonstrating both its effectiveness and its flexibility.

Researchers have tried numerous ways of running fusion reactors to meet the conditions necessary for fusion. One of the most promising involves operating the reactor in high-confinement mode, a regime marked by a steep pressure gradient at the plasma edge and enhanced plasma confinement.

High-confinement mode, however, is also prone to instabilities at the plasma edge, and fusion researchers have had to devise innovative ways to manage them.

One such solution involves using a fusion reactor's magnetic coils to apply magnetic fields at the plasma edge, breaking up structures that might otherwise grow into a full-blown edge instability. However, these magnetic perturbations can also degrade overall performance.

Part of the performance loss stems from the difficulty of optimizing the shape and amplitude of these magnetic perturbations, as well as the computational intensity of current physics-based optimization methods. These methods involve complex equations and can take tens of seconds to optimize a single point in time, far from ideal when plasma behavior can change in milliseconds. As a result, fusion researchers have had to predetermine the shape and amplitude of the perturbations before each fusion run, giving up the ability to make real-time adjustments.
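
The mismatch is stark when you put numbers on it. A back-of-the-envelope comparison (the 20-second solve time below is an assumed round number within the reported "tens of seconds" range, not a measured figure):

```python
# Toy arithmetic: why the physics-based optimizer cannot run in the loop.
PHYSICS_SOLVE_S = 20.0       # assumed time to optimize one time point
PLASMA_TIMESCALE_S = 0.001   # plasma conditions can change every ~1 ms

updates_missed = PHYSICS_SOLVE_S / PLASMA_TIMESCALE_S
print(f"~{updates_missed:,.0f} plasma changes can occur during one solve")
# -> ~20,000, which is why perturbation settings had to be fixed in advance
```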

This limitation has hampered efforts to optimize the system, noted co-first author SangKyeun Kim, since the parameters cannot be adjusted in real time as the plasma conditions change.

The Princeton-led team’s machine learning approach slashes the computation time from tens of seconds to the millisecond scale, opening the door for real-time optimization. The machine learning model, which is a more efficient surrogate for existing physics-based models, can monitor the plasma’s status from one millisecond to the next and alter the amplitude and shape of the magnetic perturbations as needed. This allows the controller to strike a balance between edge burst suppression and high fusion performance, without sacrificing one for the other.
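
In outline, the controller becomes a fast inner loop that queries the learned surrogate instead of the slow physics code. The sketch below is a minimal illustration under assumed interfaces: the `SurrogateModel` class, its `predict` method, and the toy risk-versus-performance trade-off are inventions for clarity, not the team's published implementation.

```python
import numpy as np

class SurrogateModel:
    """Learned stand-in for the physics code: given the current plasma
    state and a candidate coil setting, it predicts edge-burst risk and
    fusion performance in ~1 ms rather than tens of seconds."""
    def predict(self, plasma_state, coil_setting):
        # Placeholder for a trained network; here, a toy trade-off where
        # stronger perturbations cut burst risk but also cut performance.
        suppression = float(np.clip(coil_setting @ plasma_state, 0.0, 1.0))
        burst_risk = 1.0 - suppression
        performance = 1.0 - 0.4 * suppression
        return burst_risk, performance

def choose_setting(model, plasma_state, candidates, risk_limit=0.1):
    """Each millisecond, pick the perturbation shape/amplitude that
    maximizes predicted performance while keeping burst risk in bounds."""
    best_setting, best_perf = None, -np.inf
    for setting in candidates:
        risk, perf = model.predict(plasma_state, setting)
        if risk <= risk_limit and perf > best_perf:
            best_setting, best_perf = setting, perf
    return best_setting  # None means no safe option; fall back conservatively

# Example with made-up state and three candidate coil settings.
model = SurrogateModel()
state = np.array([0.8, 0.6, 0.9, 0.7])
candidates = [np.full(4, a) for a in (0.1, 0.3, 0.5)]
print(choose_setting(model, state, candidates))  # picks the middle setting
```

Note the selection rule: the optimizer does not simply maximize suppression; it chooses the weakest perturbation that keeps predicted burst risk within bounds, which is the balance between suppression and performance described above.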

“With our machine learning surrogate model, we reduced the calculation time of a code that we wanted to use by orders of magnitude,” said co-first author Ricardo Shousha, a postdoctoral researcher at PPPL and former graduate student in Kolemen’s group.

Because their approach is ultimately grounded in physics, the researchers said it would be straightforward to apply to different fusion devices around the world. In their paper, for instance, they demonstrated the success of their approach at both the KSTAR tokamak in South Korea and the DIII-D tokamak in San Diego. At both facilities, which each have a unique set of magnetic coils, the method achieved strong confinement and high fusion performance without harmful plasma edge bursts.

“Some machine learning approaches have been critiqued for being solely data-driven, meaning that they’re only as good as the amount of quality data they’re trained on,” Shousha said. “But since our model is a surrogate of a physics code, and the principles of physics apply equally everywhere, it’s easier to extrapolate our work to other contexts.”

The team is already working to refine their model to be compatible with other fusion devices, including planned future reactors such as ITER, which is currently under construction.

One active area of work in Kolemen’s group involves enhancing their model’s predictive capabilities. For instance, the current model still relies on encountering several edge bursts over the course of the optimization process before working effectively, posing unwanted risks to future reactors. If instead the researchers can improve the model’s ability to recognize the precursors to these harmful instabilities, it could be possible to optimize the system without encountering a single edge burst.
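
As a sketch of what precursor recognition could look like, the toy classifier below flags likely precursors from a window of recent edge diagnostics so suppression can be strengthened before any burst occurs. Everything here is illustrative: the synthetic data, the eight-feature window, and the choice of logistic regression are assumptions for demonstration, not the group's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row is a short window of edge diagnostics
# (e.g. pressure-gradient and fluctuation features); the label marks
# whether an edge burst followed shortly after that window.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

clf = LogisticRegression().fit(X, y)

def precursor_alert(window, threshold=0.2):
    """Return True when a diagnostic window resembles a burst precursor,
    prompting the optimizer to act before the instability develops."""
    prob = clf.predict_proba(window.reshape(1, -1))[0, 1]
    return prob > threshold
```

A deliberately low alert threshold trades extra false alarms for the burst-free operation the article describes as the goal.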

Kolemen said the current work is yet another example of the potential for AI to overcome longstanding bottlenecks in developing fusion power as a clean energy resource. Previously, researchers led by Kolemen successfully deployed a separate AI controller to predict and avoid another type of plasma instability in real time at the DIII-D tokamak.

“For many of the challenges we have faced with fusion, we’ve gotten to the point where we know how to approach a solution but have been limited in our ability to implement those solutions by the computational complexity of our traditional tools,” said Kolemen. “These machine learning approaches have unlocked new ways of approaching these well-known fusion challenges.”

