Abstract
This case study describes the development of technologies that enable digital-engineering and digital-twinning efforts in proliferation detection. The project presents a state-of-the-art approach to supporting International Atomic Energy Agency (IAEA) safeguards by incorporating analysis of diversion pathways and facility misuse and the detection of their indicators within the reactor core, applying the safeguards-by-design concept, and demonstrating its applicability as a sensitive monitoring system for advanced reactors and power plants. There are two pathways a proliferating state might take using the reactor core. The first is “diversion,” in which special fissionable nuclear material that has been declared to the IAEA—i.e., Pu-239, U-233, or uranium enriched in U-235 or U-233—is removed surreptitiously, either in small amounts over a long time (protracted diversion) or in large amounts over a short time (abrupt diversion). The second is “misuse,” in which undeclared source material—material that can be transmuted into special fissionable nuclear material: depleted uranium, natural uranium, or thorium—is placed in the core, where the neutron flux transmutes it. Digital twinning and digital engineering have demonstrated significant performance improvements and schedule reductions in the aerospace, automotive, and construction industries, but this integrated modeling approach has not previously been fully applied to nuclear safeguards programs. Digital twinning, combined with machine-learning technologies, can lead to new innovations in process-monitoring detection, specifically in event classification, real-time notification, and data-tampering detection. It represents a technological leap in the evaluation and detection capability available to safeguard any nuclear facility.
Introduction
The nuclear reactor industry is designing innovative reactor technologies that are scheduled for construction in the next few years. Southern Company and TerraPower, with Department of Energy (DOE) assistance, expect to complete a 300-kilowatt-thermal (kWt) molten-chloride fast reactor before 2030 [1]. NuScale is designing a small modular reactor composed of modules each generating 77 megawatts of electric power (MWe), scheduled for completion in 2030 [2]. DOE is also designing a 300-megawatt-thermal (MWt) sodium-cooled fast reactor, the Versatile Test Reactor (VTR), scheduled for startup in 2026. These near-term industry innovations create new challenges in the application of nuclear safeguards and in the ability to deter nuclear proliferation (i.e., diversion and misuse) and streamline its detection as these technologies are deployed more broadly. These challenges relate to the broad adoption and acceptance of next-generation technologies and techniques for proliferation detection as the state of the art advances beyond long-established standards.

These new nuclear reactors use digital-engineering and digital-twinning technologies [3]. Digital engineering (DE) embodies a deliberate, transformational approach to the way systems are designed, engineered, constructed, operated, maintained, and retired. The Department of Defense (DoD) defines DE as “an integrated digital approach that uses authoritative sources of system data and models as a continuum across disciplines to support lifecycle activities from concept through disposal” [4]. Leveraging digital-thread technologies currently under development to support the VTR program, this project will develop a system to support the detection of both diversion and misuse for item-type (light-water, sodium fast, etc.) or bulk-type (molten-salt or pebble-bed) advanced reactors. This digital-twin and DE technology will bring safeguards analysis to bear earlier in the design process, reducing the risk of diversion and misuse and proving viability for a broader set of reactor technologies. The availability of these unique and comprehensive data streams opens the opportunity for a comprehensive understanding of all aspects of nuclear fuel-cycle facility operations, significantly strengthening nuclear safeguards and the nonproliferation regime in general. Such a tool will be a critical capability, as the International Atomic Energy Agency (IAEA) currently safeguards over 200 reactors around the world while continuing to operate on a zero-growth budget [5].
The safeguards digital-twin design aims to enable inspectors to make informed inferences about the potential for deliberate misuse of the facility or diversion of nuclear material from a declared nuclear reactor facility through analysis of available signals, coupled with independent models of reactor operating conditions enabled through machine-learning (ML) algorithms. The digital twin consists of several interconnecting pieces, including a graphical user interface (GUI), a workflow and data-management engine, a physics model, and the various adapters used to translate data between each component and the master data warehouse.
Nuclear Physics Scenarios
To examine the usability of the digital twin, a reference sodium fast reactor core design was chosen for model development. The design was based on previous work on the advanced burner reactor (ABR) and advanced burner test reactor (ABTR) [6,7]. The resulting core is a 300 MWt design that incorporates 69 driver fuel assemblies, six control rods, three safety rods, and seven dedicated experimental positions and is surrounded by three rings of reflectors, as shown in Fig. 1. In addition, experiments can be placed in six of the reflector positions in Row 6. A detailed description of the core design can be found in Ref. [8].
To study the types of proliferation scenarios that may be encountered, two scenarios were devised that examine the likelihood of detecting nuclear proliferation. The first scenario involved diverting 1, 2, 4, 8, or 12 fuel rods from every assembly and replacing them with natural uranium. The fuel removed from the rods could be used for a clandestine weapons program, and the natural-uranium replacement rods could themselves be extracted from the fuel assembly after their time in the core. The second scenario examined the placement of fertile experimental assemblies (FEAs) in experimental locations throughout the core; once an FEA was removed from the core, the plutonium generated in the assembly could be extracted and used in a weapons program. A variant of this scenario used an optimization algorithm to determine the most likely placement of multiple FEAs in a core without their being detected. Cycle length, control-rod heights, and selected assembly powers were used to monitor the undeclared production of plutonium. For the diversion scenario, we examined the amount of plutonium generated by the natural-uranium rods used to replace the fuel. Figure 2 shows the plutonium generation for each of the scenarios, where the red line denotes 1 significant quantity (SQ) of plutonium. An SQ is the minimum mass of a fissile isotope from which it is possible to construct a functioning nuclear weapon; for Pu-239, this mass is approximately 8 kg [9]. In Fig. 2, the amount of plutonium is representative of a twice-burned batch of assemblies (approximately one-third of the core); the plutonium present at 0 effective full-power days (EFPD) reflects the approximately 800 EFPD of burnup the assemblies had accumulated by that point. The design that replaced 12 fuel rods generated 1 SQ within a single cycle; however, it also reduced the total cycle length to below 400 days and shifted the beginning-of-cycle control-rod height by more than 20 cm, differences that would likely be identified by an IAEA inspector.
For the misuse scenario, we examined the amount of plutonium generated by FEAs placed in the core. An initial study found that a single FEA placed in the core for a 400-day cycle would not generate 1 SQ of plutonium; thus, multiple FEAs would be required. FEAs placed near the center of the core were found to change the control-rod height by 3–5 cm per FEA. To avoid these large changes, FEAs were placed near the periphery of the core. Figure 3 shows a core configuration; FEA positions are described by rings counted from the center of the core. Two FEAs were placed in Ring 5, and six FEAs were placed in Ring 6. This configuration was able to generate 1 SQ of plutonium while producing a control-rod height difference of less than 2.5 cm and a cycle length of greater than 400 days.
Because plutonium production in this configuration was not detectable through control-rod height differences or cycle length, we examined power differences in a select number of assemblies (numbered in white in Fig. 3) between a reference core and the current configuration. Table 1 shows the percent differences in power for the various assemblies. We found that Assembly Positions 5 and 15 have a large power difference, and we expect that a power difference of greater than 5.0% would be identified as an indicator of proliferation by an IAEA inspector.
Table 1: Percent difference in assembly power between the reference core and the FEA configuration

Position | Power diff. (%) | Position | Power diff. (%) | Position | Power diff. (%)
---|---|---|---|---|---
1 | 0.81 | 6 | 0.69 | 11 | 0.81
2 | 0.68 | 7 | 0.68 | 12 | 0.68
3 | 0.17 | 8 | 0.54 | 13 | 0.17
4 | −2.81 | 9 | −0.08 | 14 | −2.81
5 | −12.41 | 10 | −3.06 | 15 | −12.41
Upon determining that multiple FEAs could be placed in the core to generate 1 SQ of plutonium, an optimization algorithm was used to determine whether some core designs could thwart detection based on differences in cycle length, control-rod height, and assembly power [10]. The algorithm placed fresh, once-burned, or twice-burned FEAs in the core in an attempt to generate 1 SQ of plutonium in 400 days. Figure 4 shows the set of optimal core configurations, where the black box signifies designs that would likely escape all three means of detection. For reference, the beginning of cycle (BOC) represents the fuel at 0 days, and the end of cycle (EOC) represents the fuel at 400 days. The optimization algorithm was thus able to generate a set of core designs for which ongoing proliferation would likely be difficult to detect.
Software Architecture
The digital twin is built around a hub-and-spoke model, where the Deep Lynx data warehouse [11] forms the central source of truth from which configuration data are obtained, and to which analysis results are reported. The key benefit of this topology is that the effort required to scale and integrate further spoke nodes remains linear, so each additional tool requires the creation of only a single adapter (rather than the geometric scaling effort required for a completely integrated mesh topology). Figure 5 shows the overall architecture of the digital-twin product. The centrality of data in this configuration ensures consistency across the various disparate analyses.
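The linear-scaling claim can be made concrete with a short sketch. The adapter interface below is illustrative only—the class and method names are ours, not part of the Deep Lynx API—but it shows why each new tool costs one adapter rather than one integration per existing tool:

```python
from abc import ABC, abstractmethod
from typing import Any


class SpokeAdapter(ABC):
    """One adapter per tool: translates between the tool's native format
    and the hub's canonical data model (hypothetical interface)."""

    @abstractmethod
    def to_hub(self, native_payload: Any) -> dict:
        """Convert tool-native output into a canonical hub record."""

    @abstractmethod
    def from_hub(self, record: dict) -> Any:
        """Convert a canonical hub record into tool-native input."""


class SerpentAdapter(SpokeAdapter):
    def to_hub(self, native_payload: str) -> dict:
        # Parse a Serpent output listing into a canonical result record.
        return {"source": "serpent", "raw": native_payload}

    def from_hub(self, record: dict) -> str:
        # Render a canonical reactor map as a Serpent input deck.
        return f"% auto-generated deck for reactor {record['reactor_id']}"
```

Integrating an additional analysis tool means writing one more SpokeAdapter subclass against the hub's data model; no existing adapter needs to change.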
Software Integration.
The digital twin is composed of several components, namely, the user interface, Deep Lynx, the Serpent Monte Carlo simulation code [12], and explainable artificial intelligence (AI). Deep Lynx sits at the center of the twin, with integrations between it and each of the other components. Figure 6 shows the various pieces of the digital twin along with the integrations that exist between them and where they physically reside, either in the Microsoft Azure cloud [13] or Idaho National Laboratory’s (INL) High Performance Computing (HPC) environment.
A general use case of the digital twin proceeds as follows. A user accesses the GUI and creates a reactor map containing the configuration information for a reactor, including the assembly types, their fuel status, and the number of misuse pins (as appropriate). The user can save this reactor map to Deep Lynx, and all saved reactor maps are displayed in the GUI for viewing and editing. To run a given reactor map as an input to Serpent, the user clicks the “Run” button, which sends an event to Deep Lynx indicating which reactor map is to be run.
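As a hedged illustration of this flow, the snippet below saves a reactor map and emits a run event over REST. The base URL, routes, and payload fields are hypothetical placeholders rather than the actual Deep Lynx endpoints:

```python
import requests

DEEP_LYNX = "https://deeplynx.example.gov"  # hypothetical base URL

# A minimal reactor map as the GUI might serialize it; the fields follow
# the text: assembly type, fuel status, and number of misuse pins.
reactor_map = {
    "name": "abr-misuse-study-01",
    "assemblies": [
        {"position": 1, "type": "fuel", "status": "twice-burned", "misuse_pins": 0},
        {"position": 5, "type": "experimental", "status": "fresh", "misuse_pins": 12},
    ],
}

# Save the map, then emit the "Run" event referencing it.
saved = requests.post(f"{DEEP_LYNX}/reactor-maps", json=reactor_map).json()
requests.post(f"{DEEP_LYNX}/events", json={"type": "run", "reactor_map_id": saved["id"]})
```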
To move data between Serpent and the other digital components through Deep Lynx, an adapter called the Serpent Handler was created. This Python codebase receives Representational State Transfer (REST) application programming interface (API) calls from Deep Lynx when a new reactor map is submitted by the GUI to be run in Serpent. The event with the indicated reactor map is automatically picked up by the Serpent Handler, and the reactor map is retrieved from Deep Lynx. Data validation and formatting occur here, and input files are created to be picked up by a service that resides on the INL HPC cluster. This service hands the files to Serpent and passes the output files from Serpent back to the cloud location, where they are parsed by the Serpent Handler. Once the data have been formatted into a form that Deep Lynx can understand, they are sent to Deep Lynx. This Serpent output data can then be used by the explainable AI component to perform intelligent analysis, with useful results sent back to Deep Lynx for use by the GUI. Deep Lynx is regularly polled by the GUI for new data, and the Serpent output data and explainable AI results are picked up and displayed to the user.
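A minimal sketch of the Serpent Handler's receiving side is shown below, assuming a Flask endpoint; the route, payload fields, helper function, and file layout are our assumptions for illustration, not the handler's actual code:

```python
from pathlib import Path

import requests
from flask import Flask, request

app = Flask(__name__)
DEEP_LYNX = "https://deeplynx.example.gov"   # hypothetical base URL
HPC_INBOX = Path("/shared/serpent/inbox")    # watched by the HPC service


def render_serpent_input(reactor_map: dict) -> str:
    """Stand-in for the validation and formatting step described above."""
    return f"% Serpent deck for {reactor_map['name']}"


@app.route("/events", methods=["POST"])
def on_run_event():
    # Deep Lynx POSTs the registered event; fetch the referenced map.
    event = request.get_json()
    reactor_map = requests.get(
        f"{DEEP_LYNX}/reactor-maps/{event['reactor_map_id']}"
    ).json()

    # Validate/format, then drop an input deck for HPC pickup.
    deck = render_serpent_input(reactor_map)
    (HPC_INBOX / f"{reactor_map['name']}.inp").write_text(deck)
    return {"status": "queued"}, 202
```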
All integrations between the various components use the RESTful API standard, allowing data transfer in a common and universally recognized format. Where possible, the Deep Lynx event system is used to push data through the digital twin in near real-time. This system is used to send data to the Serpent Handler and the explainable AI. When these components start, they reach out to Deep Lynx to register for data-ingestion events on the desired data sources; the Serpent Handler, for instance, registers for ingestion events from the GUI. When these events occur, Deep Lynx proactively sends a REST POST request to the registered component, which can then act on the data included in that request. The GUI is a standalone application built using the VueJS [14] library, so it cannot use the event system as currently designed; instead, it polls Deep Lynx regularly for updates. The poll interval can be adjusted by the end user, and manual polls can also be requested.
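The registration handshake might look like the following; the endpoint and payload shape are assumptions for illustration, as the actual Deep Lynx event API may differ:

```python
import requests

DEEP_LYNX = "https://deeplynx.example.gov"  # hypothetical base URL

# At startup, the Serpent Handler registers for ingestion events
# originating from the GUI; Deep Lynx will later POST matching events
# to the supplied callback URL.
requests.post(
    f"{DEEP_LYNX}/event-registrations",
    json={
        "source": "gui",
        "event_type": "data_ingested",
        "callback_url": "https://serpent-handler.example.gov/events",
    },
)
```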
Visualization
User Interface.
The representation of the digital-twin components is displayed in one unified GUI with multiple pages (Fig. 7). The two main pages facilitate the input and output of information to and from Serpent. Through both input and output visualizations, a user can get a full picture of the state of a core. A user or inspector can leverage this tool to run reactor-core scenarios for a given core configuration.
On the reactor-map input page, there is a two-dimensional (2D) model consisting of hexagons, each representing an assembly within the core. A user can designate a type, burnup option, number of pins, and monitoring flag for each assembly. In addition, status counts for each assembly type give at-a-glance information on the full core. Once all assemblies have been assigned, the completed reactor map can be submitted to run in Serpent. While a standard submission to Serpent yields one run, it is also possible to request multiple Serpent runs when fertile target assemblies are selected.
The scenario-study page displays Serpent output information. Information is conveyed through an advanced three-dimensional (3D) visualization, with synchronized charts showing the associated data over 600 days. From this page, a user can assess how efficient a configuration is and whether indicators of nuclear proliferation are present.
Advanced Three-Dimensional Visualization.
The 3D representation is designed to mimic the setup configuration and reactor states during an experiment run, as well as the results of the experimental analysis. The 3D model of the reactor is constructed to specification and is represented in the user interface to scale (Fig. 8). The models are integrated into the user interface using VueJS, whose functions provide user interaction with the reactor model and thermocouple array.
In addition to user interaction, data from Deep Lynx drive changes to the advanced 3D visualization. The data-driven functionality is designed to allow specific parts of the reactor model to move or change according to the exact values that are sent. This is accomplished by a listener function constructed in VueJS, which passes on data updates from the poll functionality described in the Software Integration section. For example, the control-rod position moves according to the data and the scale factor of the zoom level (Fig. 9).
Cloud/High Performance Computing Architecture.
The digital twin will be hosted on a combination of Microsoft Azure cloud and HPC architecture. Due to the computational demands imposed by the complexity of the Serpent simulations, an HPC environment is ideal for resolving a simulation in optimal time. While the Serpent simulation resides in the INL HPC cluster, the remainder of the digital twin will reside in the Azure cloud. This will provide various benefits to cloud-based elements, including flexibility to deploy and tear down assets on demand, cost savings from reduced hosting time, security from automatic updates and the cloud environment, and increased insights into hardware and network performance.
To send files between the cloud and HPC, several components must be created, and a data-flow architecture must be established. Figure 10 shows the proposed components and flow. Starting from Deep Lynx in the cloud, a request for HPC processing is sent to an HPC adapter. This adapter retrieves any necessary files from Deep Lynx and stores them within the Azure storage account. An HPC service node, continuously running in the HPC environment, polls the Azure storage account for new requests and associated files and then requests a job through the HPC scheduler. Once the job is finished, results are sent back to the HPC service node, which pushes them to the Azure storage account in the cloud. The HPC adapter continuously polls the Azure storage account for results and, once they are found, returns them to Deep Lynx.
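A condensed sketch of the HPC service node's loop is given below, assuming Azure Blob Storage as the exchange point and a Slurm-style scheduler; the container name, blob prefix, and sbatch submission script are all assumptions:

```python
import subprocess
import time

from azure.storage.blob import ContainerClient

# Hypothetical connection details for the shared exchange container.
container = ContainerClient.from_connection_string(
    conn_str="<azure-connection-string>", container_name="serpent-exchange"
)

while True:
    # Poll for new request blobs dropped by the HPC adapter.
    for blob in container.list_blobs(name_starts_with="requests/"):
        local_name = blob.name.split("/")[-1]
        with open(local_name, "wb") as f:
            f.write(container.download_blob(blob.name).readall())

        # Hand the input deck to the scheduler (Slurm assumed here); the
        # job script would upload results to "results/" when finished.
        subprocess.run(["sbatch", "run_serpent.sh", local_name], check=True)
        container.delete_blob(blob.name)  # mark the request as claimed
    time.sleep(30)
```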
Data Model
The Serpent code used to model the core physics of the digital-twin reactor is a general-purpose, high-fidelity neutronics code [12]. It is capable of modeling arbitrary arrangements of fissile, fertile, and nonreactive components in a wide assortment of geometries. For the present purposes, most of these geometric and material details remain fixed reactor-design constraints or parameters. The user interface enables variations to be made to a selected subset of input variables, which are translated through the Serpent input-generating interface to create the fully described core model and operating strategy. These selected variable parameters are captured as fields within the digital-twin data model, much as a database schema defines the pattern of data relations that can be stored within a database. In addition to the variable input parameters, the data model also encapsulates the modeling results that will be presented to the user via the output interface. These inputs and outputs are shown as a unified modeling language (UML) diagram in Fig. 11. In this figure, the design of the reactor is decomposed hierarchically, starting with the reactor itself, which contains a mapping to the various assembly types specified within the core. Each assembly is further broken down into fuel, blanket, control, or experimental types, each of which contains the relevant nuclear data needed by the Serpent input generator. The assembly data currently consist of a named reference list that is matched to a predefined, detailed neutronic design held in the Serpent input-generation script. In the future, this may be extended, if necessary, to permit users to interactively create their own fuel-assembly definitions through a guided mechanism that ensures consistency with applicable physical constraints.
The reported output values for control-rod height as a function of burnup, assembly power level, and bundle coolant temperature are captured through the MeasurementEntry data type, which is linked to the reactor design parameters, thus capturing the location of each measurement when necessary.
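The hierarchy in Fig. 11 can be approximated in code. The sketch below mirrors the decomposition described above using Python dataclasses; the field names are inferred from the text, not taken from the actual data model or the DIAMOND ontology:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class AssemblyType(Enum):
    FUEL = "fuel"
    BLANKET = "blanket"
    CONTROL = "control"
    EXPERIMENTAL = "experimental"


@dataclass
class Assembly:
    position: int         # location in the core map (Fig. 7)
    kind: AssemblyType
    design_ref: str       # named reference to a predefined Serpent design


@dataclass
class MeasurementEntry:
    quantity: str                   # e.g., "control_rod_height", "assembly_power"
    burnup_efpd: float              # burnup point of the measurement (EFPD)
    value: float
    position: Optional[int] = None  # locationality, when the quantity is local


@dataclass
class Reactor:
    name: str
    assemblies: List[Assembly] = field(default_factory=list)
    measurements: List[MeasurementEntry] = field(default_factory=list)
```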
The individual assemblies accessible to the digital-twin user within the core model are uniquely labeled within the core map in Fig. 7. The three outer rings of locations are fixed reflector assemblies, while the inner locations consist of fuel, experiment, and control assemblies. Control-assembly locations are fixed, while fuel and experiment locations may be changed to insert fertile material or remove fissile material according to the misuse or diversion scenario under consideration.
The data model is encoded using the Data Integration Aggregated Model and Ontology for Nuclear Deployment (DIAMOND) [15], a flexible, open-source model for structuring knowledge of nuclear-plant configurations and operations. The DIAMOND ontology is also the basis for automatically parsing and ingesting data into the Deep Lynx data warehouse, which acts as the central hub of the digital twin itself. DIAMOND, like the data model, is flexible and can grow and adapt as capabilities and needs emerge. Thus, the current data model represents the first step in the development of the functioning digital twin, not the endpoint: it will continue to be expanded as the capabilities of the model and the interface grow and as the analytical investigations bring greater insight into underlying vulnerabilities or strengths of the modeled system.
The initial digital-twin data model captures the basic scenario details necessary to recreate the scenarios that have been previously modeled. It allows the user to specify arrangements of predefined assemblies within the reactor core and compute predefined measurements suitable for assessing operational functionality. This model is extendable and will allow for increasingly complicated scenarios to be examined as the digital twin matures and grows in capability.
Machine Learning Detection
The objective of the ML workflow is to detect whether malicious activity is taking place within the reactor, given the information to which an inspector has access. This activity may take the form either of increased plutonium production (plutonium misuse) or of a change in an assembly from what was declared (diversion of high-assay low-enriched uranium (HALEU), i.e., uranium with greater than 5% and less than 20% U-235 content). The work presented here focuses on plutonium misuse, but work is in progress on the detection of HALEU diversion. Framed as an ML objective, the plutonium-misuse problem is to build a model that predicts total plutonium generation given the control-rod heights and the assembly power measurements. To simplify this problem, and hence make it explainable to inspectors, a form of supervised penalized regression was applied to the collected data, providing a metric of accuracy as well as the relationship between the assembly power measurements and the control rods.
First, a total of 1597 different Serpent runs were collected from a previous study performed by Stewart et al. [8,16]. Each run yields the total plutonium generated by a particular reactor configuration, together with beginning- and end-of-cycle measurements of control-rod height and assembly power. The reactor-configuration information is considered unknown to the inspector and hence is not included in the prediction of the plutonium generated. The data used to build the ML model comprised the plutonium generated as the dependent variable (y), while the cycle count, control-rod height, and the 15 separate assembly power measurements made up the independent variables. With the inclusion of beginning- and end-of-cycle information, a total of 33 columns form the independent variables, with 1597 samples available to train the model—i.e., X ∈ ℝ^(1597×33). A 30–70 split of the total dataset was used to form the training and testing sets. Additionally, the training data were normalized such that each column had a mean of zero and a standard deviation of one; the test partition was scaled by the training mean and standard deviation to ensure that no test information was included in the trained model.
Based on the problem, the chosen ML algorithm had to satisfy two requirements: multicollinearity must be accounted for in the independent-variable space, since assemblies are spatially correlated, and the results must be explainable so that an inspector can trust and adopt the methodology. The method used to predict the plutonium amount was elastic-net penalized regression [17]. Elastic-net was chosen for its ability to deal with multicollinearity among the independent features while also providing feature selection for interpretability. Additionally, since the elastic-net penalty is applied to a simple linear model, the regression coefficients can be used to interpret the relationships of the assembly powers to the amount of plutonium generated. As a result, if the inspector is constrained in the number of assemblies that can be sampled, those coefficients may be used to target assemblies that are strong indicators of high plutonium production.
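The workflow described above maps onto a standard scikit-learn pipeline. The sketch below is a minimal reconstruction under the stated setup (1597 samples, 33 predictors, 30–70 train/test split, training-only normalization); it is not the project's actual code, the regularization weights are placeholders, and random data stand in for the Serpent runs:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# X: cycle count plus BOC/EOC control-rod heights and 15 BOC/EOC assembly
# powers (33 columns); y: total plutonium generated per run.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1597, 33)), rng.normal(size=1597)

# 30% of samples train the model; the remaining 70% are held out.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.30, random_state=0
)

# Normalize with training statistics only, then apply them to the test set.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Elastic-net: the L1 term prunes redundant columns; the L2 term handles
# multicollinearity between spatially correlated assemblies.
model = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(X_train, y_train)
pred = model.predict(X_test)

print("R^2 :", r2_score(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("columns retained:", int(np.sum(model.coef_ != 0)), "of 33")
```

The nonzero entries of model.coef_ are the interpretability artifact: sorting them by magnitude identifies the assembly positions that most strongly indicate plutonium production.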
The regression model trained on the Serpent runs provided an R² of 0.99 when comparing the observed plutonium generation in the test data with the model-predicted values, as shown in Figs. 12 and 13. Furthermore, the 30–70 training-test split provided a root-mean-square error (RMSE) of 0.067; increasing the proportion of data assigned to the training set produced no statistically significant change in the variance of the errors. As such, 479 samples were found sufficient to describe the relationship between the assemblies and the plutonium generation. The elastic-net reduced the total number of columns required for prediction from 33 to 24 (a 27% reduction). Figure 14 displays the normalized regression coefficients, where the red-to-blue color gradient indicates negative-to-positive coefficient size and green indicates that the variable was removed completely. Note how the model tends to place more weight on the center arms of the assemblies. Additionally, there is a relationship between adjacent assemblies when moving from the beginning-of-cycle to the end-of-cycle coefficients.
Future work will include a classification of assembly characteristics to determine whether the declared reactor-assembly characteristics (i.e., once-burned fuel, twice-burned fuel, control rods, etc.) can be confirmed using data analytics for the previously described 1597 Serpent runs. These efforts will focus on the identification of HALEU diversion through pin diversion and will include: (1) a qualitative assessment in which data labels are predicted and accuracy is assessed via a confusion-matrix approach and (2) a probabilistic assessment indicating a percent confidence in a predicted assembly label. Additionally, we plan to implement spatial analyses of the reactor core by incorporating predicted plutonium metrics for each assembly and interpolating those results to produce a heat map. This heat map, in conjunction with discrepancies identified in the classification step, will facilitate the development of a model that elaborates on suspicions of malicious activity by specifying locations in the reactor core with a high probability of misuse.
Additionally, future work will transfer these technologies to a physical asset. The current methodologies focus on framework development using simple analyses to demonstrate the feasibility of the approach. Currently, data are generated via a Monte Carlo approach applied to a modeled dataset, making them “cleaner” than what can be expected from a physical reactor core. Part of that transition will involve modifying our methods to accommodate noise and incorporating uncertainty assessments into the interpretations, with respect to IAEA safeguards standards.
Nonproliferation Considerations
The IAEA is tasked with evaluating each state as a whole to assure that it is meeting its nonproliferation commitments under the Treaty on the Non-Proliferation of Nuclear Weapons. One part of this evaluation includes inspections at declared nuclear facilities within the state. The IAEA's ability to design and implement effective and efficient inspections depends on its understanding of the facility's technology and of how an adversary (operator) might divert nuclear material at a facility or misuse the facility to process undeclared nuclear material.
To date, the IAEA has relied on subject-matter experts (SMEs) to perform a diversion path analysis (DPA). Ultimately, the operator knows the facility's capabilities better than anyone, which puts the IAEA at a disadvantage: its SMEs have generic, not facility-specific, knowledge and cannot match that of the operator. The application of a digital twin represents a technological leap, the first part of which is bringing the IAEA's understanding of the functioning of a nuclear facility to a level at or even beyond the operator's. This does require a high-fidelity model. But with such a model in hand, a physics-based DPA can be conducted on a broad range of diversion and misuse scenarios, along with the identification of indicators of these events. Besides adversary events, process upsets can also be studied to improve the capability to separate them from adversary events. In addition, with the indicators in hand, the available sensors can be evaluated for their detection capabilities, and gaps can be identified in time for facility design changes, allowing early incorporation of safeguards-by-design principles so that required sensors can be added in a cost-effective manner.
Another important aspect of the design and usability of this software is trust. How can an IAEA inspector—who is not an expert in coding, digital twins, and the like—trust the system? From our prior perspective as an IAEA inspector, the visualization component is vital in providing a level of trust in the ML algorithms by highlighting areas of concern and showing direct reactor data (such as control-rod position, temperature, and flowrate for an assembly). In addition, users need a way to validate over time that the twin and the ML continue to function as designed. Such a periodic validation test could be based on injecting synthetic data covering three cases—nominal operation, diversion, and misuse—thereby assuring that no tampering with the code has taken place.
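Such a check could be scripted. The sketch below is purely illustrative of the proposed periodic test (the injection endpoint, payload, and label names are hypothetical): synthetic records for the three cases are pushed through the twin, and the returned classifications are compared against the known ground truth:

```python
import requests

TWIN = "https://twin.example.gov"  # hypothetical validation endpoint

# Known-answer synthetic cases for the periodic validation run.
CASES = ["nominal", "diversion", "misuse"]


def run_validation() -> bool:
    """Inject each synthetic case and confirm the twin still labels it
    correctly; a mismatch suggests drift or tampering."""
    for expected in CASES:
        # Push the stored synthetic dataset for this case through the
        # twin and read back the classification it produces.
        response = requests.post(
            f"{TWIN}/validate", json={"dataset": f"synthetic-{expected}"}
        ).json()
        if response["label"] != expected:
            return False
    return True
```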
Once the DPA is completed and the ML algorithms are validated, the twin can be used to autonomously monitor the facility's data streams. This is the second leap: it not only gives IAEA safeguards a near real-time anomaly-detection capability but also makes its determinations more accurate and frees up valuable inspector resources to focus on other tasks in a time of severe budget constraints. Early and accurate detection also acts as a deterrent to potential adversaries. With the potential for a rapid worldwide expansion of power generation and industrial processing using advanced reactors, a digital twin becomes essential for this new era of nuclear growth.
Summary
The digital twin holistically combines computational physics models, ML, and advanced visualization into a complete, virtualized model of a nuclear reactor. A total of 1597 different Serpent runs were stored via Deep Lynx for analysis. The data were integrated into both an input interface for Serpent configuration and a 2D/3D output interface for result capture. The output interface displays the reactor model results in both 2D and 3D representations, with the control-rod position displayed in 3D for SME analysis. A 30–70 training-test split was used, with 30% of the data reserved for supervised learning. Early quantitative conclusions from the elastic-net ML model trained on the Serpent runs gave an R² of 0.99 in predicting plutonium generation.
While this virtual digital twin does not yet combine data from an operating reactor, the hub-and-spoke Deep Lynx model can allow for seamless incorporation of operational data streams in the future. The project is also working to seamlessly connect the HPC environment, where Serpent runs, to the cloud environment, with its end-user visualization and ML components. These enhancements will provide flexibility for additional reactor-type configurations.
The development of this digital-twin technology has applicability for integration on DOE Advanced Reactor Demonstration Program (ARDP) reactors, the VTR, and the DoD's Project Pele. Both the TerraPower Natrium project (under ARDP) and the VTR use reactor-vessel designs similar to the Serpent physics model described in this article. Furthermore, the design of this twin is extensible to the high-temperature gas-cooled reactor (HTGR) designs under development by other ARDP awardees and the Pele design. This twin technology is expected to provide a transformational leap in the IAEA's understanding of nuclear facilities and monitoring of advanced-reactor safeguards, lowering proliferation risk and facilitating new advanced-reactor designs.
Acknowledgment
We gratefully acknowledge the support and guidance from the National Nuclear Security Administration (NNSA), Office of Defense Nuclear Nonproliferation Research and Development.
Conflict of Interest
There are no conflicts of interest.