Deep Learning Bolsters Laser-Driven Ion Acceleration Study

LIVERMORE, Calif., June 3, 2021 — Researchers at Lawrence Livermore National Laboratory (LLNL) have applied neural networks to the study of high-intensity, short-pulse laser-plasma acceleration, specifically ion acceleration from solid targets. In most instances, neural networks are used to study data sets. In the current work, the LLNL team used them to explore sparsely sampled parameter space as a surrogate for a full simulation or experiment.

“The work primarily serves as a simple demonstration of how we can use machine learning techniques such as neural networks to augment the tools we already have,” said LLNL postdoctoral appointee Blagoje Djordjevic. “Computationally expensive simulations such as particle-in-cell codes will remain a necessary aspect of our work, but with even a simple network we are able to train a surrogate model that can reliably fill out interesting swaths of phase space.”
This image shows a parameter scan of maximum ion energy as a function of laser pulse duration and intensity, generated by a neural network surrogate model. Overlaid are data points from the simulation ensemble used to train the neural network. Courtesy of Lawrence Livermore National Laboratory.

In the work, Djordjevic generated an ensemble of more than 1,000 particle-in-cell simulations using the EPOCH (Extendable PIC Open Collaboration) laser-plasma simulation code. The simulations yielded a data set that covered a broad range of experimental parameters spanning several orders of magnitude, from which values such as ion energy and electron temperature could be extracted. The data set was then used to train a multilayer, fully connected neural network, which acted as a surrogate model.
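For readers unfamiliar with the surrogate idea, the sketch below shows, in broad strokes, what training such a model can look like in Python with PyTorch. It is a minimal illustration under assumed inputs (log-scaled laser intensity, pulse duration, and preplasma scale length) and a synthetic stand-in for the PIC outputs; the network width, depth, and training settings are not the paper's configuration.

import numpy as np
import torch
import torch.nn as nn

# Hypothetical ensemble: each row holds (log10 laser intensity, pulse duration,
# preplasma scale length); the target stands in for the maximum ion energy of one PIC run.
rng = np.random.default_rng(0)
X = rng.uniform([18.0, 30.0, 0.1], [21.0, 500.0, 5.0], size=(1000, 3))
y = 10.0 * (X[:, 0] - 18.0) + 0.01 * X[:, 1]      # synthetic stand-in for the PIC output

# Normalize inputs because they span several orders of magnitude.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
X_t = torch.tensor(X_norm, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

# Multilayer, fully connected network acting as the surrogate model.
surrogate = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(surrogate(X_t), y_t)
    loss.backward()
    opt.step()

# The trained surrogate evaluates in microseconds, so dense parameter scans
# become cheap compared with rerunning the particle-in-cell code.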

The surrogate was able to map the dependence of ion energy on laser intensity and pulse duration over several orders of magnitude. The researchers noticed interesting behavior in the dependence on the preplasma gradient length scale, which they explored further with more elaborate techniques such as ensemble surrogates and transfer learning. They found that the accelerated ion energy depends nonlinearly on the profile of the underdense preplasma the laser interacts with before it hits the main target. Although one might expect a resonance value near the relativistic plasma skin depth, it was notable that the network reliably reproduced this result despite the sparsity of the data.
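As a loose illustration of two of the techniques mentioned above, the following sketch trains a small ensemble of surrogates that differ only in their random initialization and evaluates them on a dense intensity/pulse-duration grid; where the members disagree, the sparse training data leaves the prediction less certain. All data, ranges, and sizes here are synthetic assumptions, not the published setup.

import itertools
import numpy as np
import torch

def train_surrogate(X_t, y_t, seed, steps=2000):
    """Train one fully connected surrogate; ensemble members differ only by seed."""
    torch.manual_seed(seed)
    model = torch.nn.Sequential(
        torch.nn.Linear(3, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(X_t), y_t).backward()
        opt.step()
    return model

# Synthetic stand-in for the normalized simulation ensemble (see previous sketch).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3)).astype(np.float32)
y = 2.0 * X[:, 0] + 0.5 * X[:, 1]
X_t = torch.tensor(X)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

# Ensemble of surrogates trained from different random initializations.
models = [train_surrogate(X_t, y_t, seed) for seed in range(5)]

# Dense scan over (intensity, pulse duration) at a fixed preplasma scale length,
# far denser than the ~1,000 PIC runs alone could cover.
intensity = np.linspace(-1.5, 1.5, 100)        # normalized units
duration = np.linspace(-1.5, 1.5, 100)
grid = np.array([[i, d, 0.0] for i, d in itertools.product(intensity, duration)],
                dtype=np.float32)

with torch.no_grad():
    preds = torch.stack([m(torch.tensor(grid)).squeeze(1) for m in models])
mean_energy = preds.mean(dim=0)                # the surrogate's parameter scan
spread = preds.std(dim=0)                      # wide spread flags low-confidence regions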

Finally, in a proof of concept, the researchers showed how the surrogate could be employed to extract important physical information from experimental data that is difficult to observe directly, such as the gradient length scale.
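A hedged sketch of how such an inference might look: with a trained surrogate in hand, one can scan the hard-to-observe quantity (here, a normalized gradient length scale) and keep the values whose predicted ion energy matches a measured shot. The function, units, and tolerance below are illustrative and are not drawn from the paper.

import numpy as np
import torch

def infer_scale_length(surrogate, intensity, duration, measured_energy, tolerance,
                       scan=np.linspace(-1.5, 1.5, 400)):
    """Return candidate (normalized) scale lengths consistent with a measured shot."""
    inputs = torch.tensor([[intensity, duration, L] for L in scan],
                          dtype=torch.float32)
    with torch.no_grad():
        predicted = surrogate(inputs).squeeze(1).numpy()
    # Keep the scale lengths whose predicted ion energy matches the measurement.
    return scan[np.abs(predicted - measured_energy) < tolerance]

# Usage, assuming `surrogate` is one of the trained models from the earlier sketches:
# candidates = infer_scale_length(surrogate, intensity=0.3, duration=-0.2,
#                                 measured_energy=1.1, tolerance=0.05)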


“Using a sparse but broad data set of simulations, we were able to train a neural network to reliably reproduce the trained results as well as generate results for unsampled regions of parameter space with reasonable confidence,” Djordjevic said. “This resulted in a surrogate model, which we used to rapidly explore regions of interest.”

According to Derek Mariscal, who serves as Djordjevic’s mentor, the work outlines a completely new approach to the way the physics of short-pulse, high-intensity laser interactions is studied. Machine learning approaches are seeing wide adoption in science, and the researchers believe this is a foundationally important step toward high-speed, high-accuracy, high-energy-density science.

Over the past 20 years, most short-pulse laser experiments assumed that the delivered laser pulses were essentially Gaussian in shape, Mariscal said, though this is largely an unvalidated assumption.

“The [Laboratory Directed Research and Development (LDRD)] project is aimed at delivering tailored sources from shaped high-intensity laser short pulses while paying close attention to the as-delivered laser pulses,” he said. “We have found through modeling and a limited set of experiments that these pulse details can have a profound impact on the resulting electron and ion sources.”

In the immediate term, implementation of the work will benefit two LLNL projects: an LDRD project led by Mariscal, in which large ensembles will be used to model the dependence of ion acceleration on shaped laser pulses, and a project led by LLNL physicists Tammy Ma and Timo Bremer, in which these ensembles will be used to train neural networks for virtual diagnostics and operations control.

Laser-plasma acceleration already has an important application in the inertial confinement fusion mission: the National Ignition Facility (NIF) uses relatively short, picosecond-long laser pulses to accelerate hot electrons, which in turn generate x-rays for imaging the capsule implosion at the facility’s center.

"In our immediate future we will be generating a new set of simulations to support two experiments our team will be fielding this summer on high-repetition-rate laser systems,” Djordjevic said. “The most important aspect of this project is that we will be shaping short, femtosecond-scale laser pulses, where NIF’s lasers are shaped on the nanosecond scale. This will require us to run even more simulations where we not only vary standard parameters such as target foil thickness and laser intensity and duration, but also spectral phase contributions to the laser profile.”

The research was published in Physics of Plasmas (www.doi.org/10.1063/5.0045449).

Published: June 2021
Glossary
deep learning
Deep learning is a subset of machine learning that involves the use of artificial neural networks to model and solve complex problems. The term "deep" refers to the use of deep neural networks, which are neural networks with multiple layers (deep architectures). These networks have the ability to automatically learn hierarchical representations of data.
machine learning
Machine learning (ML) is a subset of artificial intelligence (AI) that focuses on the development of algorithms and statistical models that enable computers to improve their performance on a specific task through experience or training. Instead of being explicitly programmed to perform a task, a machine learning system learns from data and examples. The primary goal of machine learning is to develop models that can generalize patterns from data and make predictions or decisions without being explicitly programmed for each specific case.
plasma
A gas made up of electrons and ions.
Research & TechnologyLasersdeep learningmachine learningneural networkssimulationultrafast lasersplasmaion accelerationLawrence Livermore National LabLawrence Livermore National LaboratoryLawrence LivermoreLLNLPhysics of PlasmasAmericas
