How fusion is teaching AI new tricks at ITER
Artificial intelligence has advanced rapidly by learning from abundant digital information—text, images, code, and video gathered at internet scale. Fusion energy presents a very different challenge.
In fusion, data can be scarce, expensive to generate, spread across legacy systems, and tied to physical processes evolving in real time. Computational models must contend with nonlinear plasma behaviour and complex engineering systems to predict how machines will perform in regimes (operating conditions) never before achieved.

Those themes emerged at the ITER Public-Private Fusion Workshop, held on 28 and 29 April 2026, where four speakers (Simon McIntosh of ITER, Antonio Policicchio of NTT Data Italy, Tom Gibbs of NVIDIA, and Thomas Kopinski of Gaia Lab) outlined how artificial intelligence is beginning to support construction, maintenance, simulation, and future operations at fusion projects. Their broader message is that fusion is not simply another use case for AI. In some respects, fusion may be one of the forces pushing AI to evolve.

When data is difficult

Many modern AI systems benefit from the fact that digital data is plentiful. "For large language models and agents, the data is cheap, but the training is very expensive," Gibbs said in discussion during the workshop. "For physical AI (applications that learn from real-world systems rather than digital content), the data is expensive, but the training is not."

That distinction is significant. A plasma pulse on a tokamak is not equivalent to collecting another billion web pages. With fusion, experimental campaigns are costly, machine access is limited, and each shot may explore only a narrow operating window. Useful data must often be extracted carefully from relatively small (at least compared with internet scale) but highly valuable datasets.

According to McIntosh, fusion organizations also face another challenge: decades of historic data stored in formats never designed for modern AI workflows. "Essentially, we have to do data archaeology to use what was produced before," he says.

He cites archives from JET, which operated for more than 40 years and changed configuration many times over its lifetime. Valuable experimental records, software tools, and engineering context still exist, but not always in forms that are easy to search, compare, or reuse. Using AI agents, McIntosh says, teams have begun tracing files, identifying patterns, and reconstructing historical operational knowledge from past and currently operating tokamaks that might otherwise remain buried.

Recovering existing data is only one of the tasks; the others are standardizing data formats and verifying the data itself. According to McIntosh, the fusion community has spent close to 20 years developing common data formats so that software built for one machine can be tested and reused on another. "It's called the IMAS Data Dictionary," he says, describing a shared framework, developed at ITER, that allows information such as coil currents or diagnostic signals to be stored in consistent ways across facilities. Data verification is a parallel task that needs to be addressed with some urgency, as the scientists responsible for these extremely valuable datasets reach the end of their careers.

That may sound administrative, but it is strategically important. Standardized data enables machine learning models, control tools, and analysis software to move more easily across tokamaks, accelerating development across the sector.
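To illustrate why a common data dictionary matters, the toy Python sketch below defines a minimal, hypothetical Signal structure and a single analysis routine that runs unchanged on data from different machines. It is not the real IMAS Access Layer; the names, fields, and values are invented for demonstration.

```python
# Illustrative only: a toy stand-in for the kind of common schema the
# IMAS Data Dictionary provides -- not the real IMAS Access Layer API.
from dataclasses import dataclass

import numpy as np


@dataclass
class Signal:
    """A named time series with explicit units, e.g. a coil-current trace."""
    name: str        # e.g. "coil_current/PF1" (hypothetical identifier)
    units: str       # e.g. "A"
    time: np.ndarray
    data: np.ndarray


def peak_value(signal: Signal) -> float:
    """Analysis code written once against the shared schema."""
    return float(np.max(np.abs(signal.data)))


# Because both (synthetic) machines expose the same structure,
# the same analysis code runs on either without modification.
machine_a = Signal("coil_current/PF1", "A", np.linspace(0, 10, 500),
                   1.2e4 * np.sin(np.linspace(0, 10, 500)))
machine_b = Signal("coil_current/PF1", "A", np.linspace(0, 400, 2000),
                   4.5e4 * np.sin(np.linspace(0, 40, 2000)))

for shot in (machine_a, machine_b):
    print(shot.name, peak_value(shot), shot.units)
```

Scaled up to the full IMAS Data Dictionary, the same principle is what allows analysis and control software to move between tokamaks.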
But organized data still has to be accessible. In many industrial settings, the first AI challenge is not intelligence but access. McIntosh points to Lucy, an AI-enabled assistant connected to ITER's internal document systems. It is already helping staff retrieve engineering records, technical notes, and project documentation in seconds rather than through lengthy manual searches.

Perhaps the most ambitious concept discussed by McIntosh is ITER's effort to prepare for operations by training what he calls a fusion "world model." "We have a GPU cluster that we're planning to use to train what's known as a fusion world model," he says.

In practical terms, such a model would learn from data gathered across existing tokamaks and create a simplified but useful representation of machine behaviour. It could allow teams to simulate pulses before they are run, test procedures, anticipate faults, and improve readiness before ITER begins full experimental campaigns. McIntosh compares the concept to the intuitive model humans carry in their own minds: knowing what the next step on the stairs will feel like before the foot lands. For ITER, the goal would be similar: predict the next state of the machine before it happens.

Such a world model would transfer decades of knowledge gained on past and existing tokamaks to ITER, so that data collected on the first day of ITER operations immediately feeds improved predictions on the second. "This pattern of continual recalibration and retraining will ensure that the capabilities of our data-grounded machine-learning models will improve in lockstep with the knowledge frontier that ITER will push into yet unseen reactor-relevant conditions," says McIntosh.
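A heavily simplified sketch of that next-state idea is shown below: a small network learns to map the current machine state and an actuator command to the next state. The state and actuator dimensions, the network size, and the synthetic data standing in for archived pulse records are all invented for illustration; a real fusion world model would be vastly richer.

```python
# Illustrative sketch only: a tiny next-state predictor standing in for the
# far richer "fusion world model" described above. All dimensions and data
# here are synthetic placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

STATE_DIM = 8    # hypothetical: a handful of plasma/machine parameters
ACTION_DIM = 3   # hypothetical: actuator commands such as coil set points

# Synthetic stand-in for archived pulse data: (state, action) -> next state.
states = torch.randn(4096, STATE_DIM)
actions = torch.randn(4096, ACTION_DIM)
true_dynamics = 0.1 * torch.randn(STATE_DIM + ACTION_DIM, STATE_DIM)
next_states = torch.cat([states, actions], dim=1) @ true_dynamics + states

# Small network that predicts the next state from the current state + action.
model = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, STATE_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    pred = model(torch.cat([states, actions], dim=1))
    loss = nn.functional.mse_loss(pred, next_states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# "Predict the next state of the machine before it happens."
with torch.no_grad():
    print(model(torch.cat([states[:1], actions[:1]], dim=1)))
```

The same structure also supports the continual recalibration McIntosh describes: as new pulses arrive, they are appended to the training set and the model is retrained or fine-tuned.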
Simon McIntosh stands in front of the eight newly installed GPUs that arrived in March to support ITER's AI needs.
Adapting AI to plasma physics

Gibbs described another route to acceleration during the workshop at ITER: AI surrogate models trained on conventional plasma simulations. Traditional high-fidelity codes can consume hundreds or thousands of GPU hours. Once trained, surrogate models can approximate those results in milliseconds, creating possibilities for faster analysis and eventually real-time decision support. (A simplified sketch of this pattern appears near the end of this article.)

Asked how much faster such systems can be, Gibbs estimates gains ranging from 10³ to 10⁶ times. That speed could support digital twins: virtual representations of fusion devices that are continuously updated with sensor data and can help operators understand evolving plasma conditions.

According to Gibbs, digital twins already exist for several experimental machines, and there is no fundamental reason an ITER twin could not begin development before operations start. But if AI models become central to real-time fusion control, some believe faster models alone may not be enough. The underlying hardware, and the software that connects live data to the models, may also need to evolve.

While GPUs dominate today's AI landscape, Kopinski used his workshop presentation to argue that fusion will require alternative computing architectures. Plasma systems are highly dynamic and nonlinear, he says, and some control problems demand responses faster than conventional approaches can comfortably deliver.

"They were talking about milliseconds; we are talking about microseconds," Kopinski says, referring to comparisons with GPU-based systems. According to Kopinski, Gaia Lab is developing a fusion world model based on Q.ANT's photonic hardware, using its light-based processors to predict and control plasma behaviour "faster than real time." He says that prototypes from Q.ANT are deployed today, and a commercially viable world model may be available before the end of this decade.

Such timelines remain tentative, but the underlying point is broader: fusion may become a proving ground not only for new energy systems, but for new forms of computing.

Practical gains now

Even with these longer-term ambitions, speakers repeatedly emphasized that some of AI's greatest value may come much sooner. Because ITER is in the midst of assembly and commissioning, McIntosh says construction support is a near-term priority.

"Help with construction is something we can use today," he says. "If we can use AI to inform better decision making that is grounded in our data, we can possibly accelerate the construction schedule, de-risk items, and address problems more quickly."

Policicchio, who presented practical AI applications at the workshop, describes projects involving LiDAR-based progress tracking and predictive maintenance for equipment such as pumps and cooling systems. "I believe it will be one of the key enablers for reducing schedule slippage in this large project," he says.

For all the momentum around AI, ITER's purpose remains experimental science. It is intended to test operating regimes, pulse durations, and integrated performance levels never before demonstrated at this scale. Those results must be measured in reality, not inferred from software alone. While AI cannot replace an experiment of a scale that has never been attempted before, it can shorten the path to the resulting discoveries.
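The surrogate pattern Gibbs described can be sketched in a few lines: run an expensive simulation a limited number of times offline, fit a cheap model to the results, then query the cheap model whenever a fast answer is needed. The example below is purely illustrative; expensive_simulation is a placeholder, and the inputs, model size, and scikit-learn regressor are arbitrary choices rather than anything a fusion code actually uses.

```python
# Illustrative sketch only: a fast surrogate fitted to outputs of a stand-in
# "expensive" simulation. In practice the simulation would be a high-fidelity
# plasma code consuming many GPU hours per run.
import time

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)


def expensive_simulation(params: np.ndarray) -> float:
    """Placeholder for a costly code mapping input parameters to a scalar result."""
    time.sleep(0.01)  # pretend this is hours of compute
    return float(np.sin(params).sum() + 0.1 * (params ** 2).sum())


# Offline: run the expensive code a limited number of times to build a dataset.
X = rng.uniform(-1, 1, size=(300, 4))
y = np.array([expensive_simulation(x) for x in X])

# Train the surrogate once; afterwards each query costs roughly a millisecond.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, y)

query = rng.uniform(-1, 1, size=(1, 4))
t0 = time.perf_counter()
estimate = surrogate.predict(query)
print(f"surrogate answer {estimate[0]:.3f} in {time.perf_counter() - t0:.6f} s")
```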
ITER is not merely adopting off-the-shelf artificial intelligence tools. It is confronting some of the hardest questions in industrial AI: how to learn from scarce data, unify fragmented archives, predict complex physical systems, and act in real time when mistakes are costly. That may make fusion one of AI's toughest applications, but also one of the places where its future capabilities are being forged.