Citations
This panel presents information regarding the papers that have cited the interatomic potential (IP) whose page you are on.
The OpenKIM machine learning-based Deep Citation framework is used to determine whether the citing article actually used the IP in computations (denoted by "USED") or only cites it as background (denoted by "NOT USED"). For more details on Deep Citation and how to work with this panel, click the documentation link at the top of the panel.
The word cloud to the right is generated from the abstracts of the IP's principal source(s) (given below in "How to Cite") and of the citing articles determined to have used the IP. It gives users a quick sense of the types of physical phenomena to which this IP is applied.
The bar chart shows the number of articles that cited the IP per year. Each bar is divided into green (articles that USED the IP) and blue (articles that did NOT USE the IP).
Users are encouraged to correct errors in the Deep Citation determination by clicking the speech icon next to a citing article and providing updated information. This feedback will be incorporated into the next Deep Citation learning cycle, which occurs on a regular basis.
OpenKIM acknowledges the support of the Allen Institute for AI through the Semantic Scholar project, which provides citation information and, when available, the full text of articles used to train the Deep Citation ML algorithm.
Powered by the OpenKIM Deep Citation framework, this panel provides information on past usage of this interatomic potential (IP). The word cloud indicates typical applications of the potential. The bar chart shows citations per year of this IP; bars are divided into articles that used the IP (green) and those that did not (blue). The complete list of articles that cited this IP is provided below, along with the Deep Citation determination of usage. See the Deep Citation documentation for more information.
149 Citations (8 used)
Help us determine which of the papers that cite this potential actually used it to perform calculations. If you know, click the speech icon next to the citing article below.
USED (low confidence) J. Tang, G. Li, Q. Wang, J. Zheng, L. Cheng, and R. Guo, “Effect of Four-Phonon Scattering on Anisotropic Thermal Transport in Bulk Hexagonal Boron Nitride by Machine Learning Interatomic Potential,” SSRN Electronic Journal. 2023. link Times cited: 3

USED (low confidence) R. Guo, G. Li, J. Tang, Y. Wang, and X. Song, “Small-data-based Machine Learning Interatomic Potentials for Graphene Grain Boundaries Enabled by Structural Unit Model,” Carbon Trends. 2023. link Times cited: 2

USED (low confidence) M. Shiranirad, C. Burnham, and N. J. English, “Machine-learning-based many-body energy analysis of argon clusters: Fit for size?,” Chemical Physics. 2022. link Times cited: 3

USED (low confidence) G. S. Dhaliwal, P. Nair, and C. V. Singh, “Uncertainty and sensitivity analysis of mechanical and thermal properties computed through Embedded Atom Method potential,” Computational Materials Science. 2019. link Times cited: 9

USED (low confidence) J. Chapman, R. Batra, B. Uberuaga, G. Pilania, and R. Ramprasad, “A comprehensive computational study of adatom diffusion on the aluminum (1 0 0) surface,” Computational Materials Science. 2019. link Times cited: 9

USED (low confidence) Y. Zeng, Q. Li, and K. Bai, “Prediction of interstitial diffusion activation energies of nitrogen, oxygen, boron and carbon in bcc, fcc, and hcp metals using machine learning,” Computational Materials Science. 2018. link Times cited: 34

USED (low confidence) S. Natarajan and J. Behler, “Self-Diffusion of Surface Defects at Copper–Water Interfaces,” Journal of Physical Chemistry C. 2017. link Times cited: 31 Abstract: Solid–liquid interfaces play an important role in many fields like electrochemistry, corrosion, and heterogeneous catalysis. For understanding the related processes, detailed insights into the elementary steps at the atomic level are mandatory. Here we unravel the properties of prototypical surface-defects like adatoms and vacancies at a number of copper–water interfaces including the low-index Cu(111), Cu(100), and Cu(110), as well as the stepped Cu(211) and Cu(311) surfaces. Using a first-principles quality neural network potential constructed from density functional theory reference data in combination with molecular dynamics and metadynamics simulations, we investigate the defect diffusion mechanisms and the associated free energy barriers. Further, the solvent structure and the mobility of the interfacial water molecules close to the defects are analyzed and compared to the defect-free surfaces. We find that, like at the copper–vacuum interface, hopping mechanisms are preferred compared to exchange m...

USED (low confidence) G. Pilania et al., “Using Machine Learning To Identify Factors That Govern Amorphization of Irradiated Pyrochlores,” Chemistry of Materials. 2016. link Times cited: 31 Abstract: Structure–property relationships are a key materials science concept that enables the design of new materials. In the case of materials for application in radiation environments, correlating radiation tolerance with fundamental structural features of a material enables materials discovery. Here, we use a machine learning model to examine the factors that govern amorphization resistance in the complex oxide pyrochlore (A2B2O7) in a regime in which amorphization occurs as a consequence of defect accumulation. We examine the fidelity of predictions based on cation radii and electronegativities, the oxygen positional parameter, and the energetics of disordering and amorphizing the material. No one factor alone adequately predicts amorphization resistance. We find that when multiple families of pyrochlores (with different B cations) are considered, radii and electronegativities provide the best prediction, but when the machine learning model is restricted to only the B = Ti pyrochlores, the energetics of disor...

NOT USED (low confidence) Y. Yang, B. Xu, and H. Zong, “Physics infused machine learning force fields for 2D materials monolayers,” Journal of Materials Informatics. 2023. link Times cited: 0 Abstract: Large-scale atomistic simulations of two-dimensional (2D) materials rely on highly accurate and efficient force fields. Here, we present a physics-infused machine learning framework that enables the efficient development and interpretability of interatomic interaction models for 2D materials. By considering the characteristics of chemical bonds and structural topology, we have devised a set of efficient descriptors. This enables accurate force field training using a small dataset. The machine learning force fields show great success in describing the phase transformation and domain switching behaviors of monolayer Group IV monochalcogenides, e.g., GeSe and PbTe. Notably, this type of force field can be readily extended to other non-transition 2D systems, such as hexagonal boron nitride (h BN), leveraging their structural similarity. Our work provides a straightforward but accurate extension of simulation time and length scales for 2D materials.

NOT USED (low confidence) M. C. Barry, J. R. Gissinger, M. Chandross, K. Wise, S. Kalidindi, and S. Kumar, “Voxelized atomic structure framework for materials design and discovery,” Computational Materials Science. 2023. link Times cited: 0

NOT USED (low confidence) R. Feng et al., “May the Force be with You: Unified Force-Centric Pre-Training for 3D Molecular Conformations,” ArXiv. 2023. link Times cited: 1 Abstract: Recent works have shown the promise of learning pre-trained models for 3D molecular representation. However, existing pre-training models focus predominantly on equilibrium data and largely overlook off-equilibrium conformations. It is challenging to extend these methods to off-equilibrium data because their training objective relies on assumptions of conformations being the local energy minima. We address this gap by proposing a force-centric pretraining model for 3D molecular conformations covering both equilibrium and off-equilibrium data. For off-equilibrium data, our model learns directly from their atomic forces. For equilibrium data, we introduce zero-force regularization and forced-based denoising techniques to approximate near-equilibrium forces. We obtain a unified pre-trained model for 3D molecular representation with over 15 million diverse conformations. Experiments show that, with our pre-training objective, we increase forces accuracy by around 3 times compared to the un-pre-trained Equivariant Transformer model. By incorporating regularizations on equilibrium data, we solved the problem of unstable MD simulations in vanilla Equivariant Transformers, achieving state-of-the-art simulation performance with 2.45 times faster inference time than NequIP. As a powerful molecular encoder, our pre-trained model achieves on-par performance with state-of-the-art property prediction tasks.
NOT USED (low confidence) J. López-Zorrilla, X. Aretxabaleta, I. W. Yeu, I. Etxebarria, H. Manzano, and N. Artrith, “ænet-PyTorch: A GPU-supported implementation for machine learning atomic potentials training.,” The Journal of chemical physics. 2023. link Times cited: 4 Abstract: In this work, we present ænet-PyTorch, a PyTorch-based implementation for training artificial neural network-based machine learning interatomic potentials. Developed as an extension of the atomic energy network (ænet), ænet-PyTorch provides access to all the tools included in ænet for the application and usage of the potentials. The package has been designed as an alternative to the internal training capabilities of ænet, leveraging the power of graphic processing units to facilitate direct training on forces in addition to energies. This leads to a substantial reduction of the training time by one to two orders of magnitude compared to the central processing unit implementation, enabling direct training on forces for systems beyond small molecules. Here, we demonstrate the main features of ænet-PyTorch and show its performance on open databases. Our results show that training on all the force information within a dataset is not necessary, and including between 10% and 20% of the force information is sufficient to achieve optimally accurate interatomic potentials with the least computational resources.

NOT USED (low confidence) Z. Xiao et al., “Advances and applications of computational simulations in the inhibition of lithium dendrite growth,” Ionics. 2022. link Times cited: 3

NOT USED (low confidence) M.-S. Lee, M. Kim, and K. Min, “Evaluation of Principal Features for Predicting Bulk and Shear Modulus of Inorganic Solids with Machine Learning,” Materials Today Communications. 2022. link Times cited: 3

NOT USED (low confidence) S. Sharma et al., “Machine Learning Methods for Multiscale Physics and Urban Engineering Problems,” Entropy. 2022. link Times cited: 0 Abstract: We present an overview of four challenging research areas in multiscale physics and engineering as well as four data science topics that may be developed for addressing these challenges. We focus on multiscale spatiotemporal problems in light of the importance of understanding the accompanying scientific processes and engineering ideas, where “multiscale” refers to concurrent, non-trivial and coupled models over scales separated by orders of magnitude in either space, time, energy, momenta, or any other relevant parameter. Specifically, we consider problems where the data may be obtained at various resolutions; analyzing such data and constructing coupled models led to open research questions in various applications of data science. Numeric studies are reported for one of the data science techniques discussed here for illustration, namely, on approximate Bayesian computations.

NOT USED (low confidence) J. Fox, B. Zhao, B. G. del Rio, S. Rajamanickam, R. Ramprasad, and L. Song, “Concentric Spherical Neural Network for 3D Representation Learning,” 2022 International Joint Conference on Neural Networks (IJCNN). 2022. link Times cited: 1 Abstract: Learning 3D representations of point clouds that generalize well to arbitrary orientations is a challenge of practical importance in domains ranging from computer vision to molecular modeling. The proposed approach uses a concentric spherical spatial representation, formed by nesting spheres discretized the icosahedral grid, as the basis for structured learning over point clouds. We propose rotationally equivariant convolutions for learning over the concentric spherical grid, which are incorporated into a novel architecture for representation learning that is robust to general rotations in 3D. We demonstrate the effectiveness and extensibility of our approach to problems in different domains, such as 3D shape recognition and predicting fundamental properties of molecular systems.

NOT USED (low confidence) K. Fujioka and R. Sun, “Interpolating Moving Ridge Regression (IMRR): A machine learning algorithm to predict energy gradients for ab initio molecular dynamics simulations,” Chemical Physics. 2022. link Times cited: 3

NOT USED (low confidence) A. Mirzoev, B. Gelchinski, and A. A. Rempel, “Neural Network Prediction of Interatomic Interaction in Multielement Substances and High-Entropy Alloys: A Review,” Doklady Physical Chemistry. 2022. link Times cited: 2

NOT USED (low confidence) C. Zeng, X. Chen, and A. Peterson, “A nearsighted force-training approach to systematically generate training data for the machine learning of large atomic structures.,” The Journal of chemical physics. 2022. link Times cited: 4 Abstract: A challenge of atomistic machine-learning (ML) methods is ensuring that the training data are suitable for the system being simulated, which is particularly challenging for systems with large numbers of atoms. Most atomistic ML approaches rely on the nearsightedness principle ("all chemistry is local"), using information about the position of an atom's neighbors to predict a per-atom energy. In this work, we develop a framework that exploits the nearsighted nature of ML models to systematically produce an appropriate training set for large structures. We use a per-atom uncertainty estimate to identify the most uncertain atoms and extract chunks centered around these atoms. It is crucial that these small chunks are both large enough to satisfy the ML's nearsighted principle (that is, filling the cutoff radius) and are large enough to be converged with respect to the electronic structure calculation. We present data indicating when the electronic structure calculations are converged with respect to the structure size, which fundamentally limits the accuracy of any nearsighted ML calculator. These new atomic chunks are calculated in electronic structures, and crucially, only a single force-that of the central atom-is added to the growing training set, preventing the noisy and irrelevant information from the piece's boundary from interfering with ML training. The resulting ML potentials are robust, despite requiring single-point calculations on only small reference structures and never seeing large training structures. We demonstrated our approach via structure optimization of a 260-atom structure and extended the approach to clusters with up to 1415 atoms.

NOT USED (low confidence) X. Zang, Y. Dong, C. Jian, N. Ferralis, and J. Grossman, “Upgrading carbonaceous materials: Coal, tar, pitch, and beyond,” Matter. 2022. link Times cited: 17

NOT USED (low confidence) J. A. Vita and D. Trinkle, “Exploring the necessary complexity of interatomic potentials,” Computational Materials Science. 2021. link Times cited: 8

NOT USED (low confidence) M. A. Chowdhury et al., “Recent machine learning guided material research - A review,” Computational Condensed Matter. 2021. link Times cited: 3

NOT USED (low confidence) L.-Y. Xue et al., “ReaxFF-MPNN machine learning potential: a combination of reactive force field and message passing neural networks.,” Physical chemistry chemical physics : PCCP. 2021. link Times cited: 3 Abstract: Reactive force field (ReaxFF) is a powerful computational tool for exploring material properties. In this work, we proposed an enhanced reactive force field model, which uses message passing neural networks (MPNN) to compute the bond order and bond energies. MPNN are a variation of graph neural networks (GNN), which are derived from graph theory. In MPNN or GNN, molecular structures are treated as a graph and atoms and chemical bonds are represented by nodes and edges. The edge states correspond to the bond order in ReaxFF and are updated by message functions according to the message passing algorithms. The results are very encouraging; the investigation of the potential, such as the potential energy surface, reaction energies and equation of state, are greatly improved by this simple improvement. The new potential model, called reactive force field with message passing neural networks (ReaxFF-MPNN), is provided as an interface in an atomic simulation environment (ASE) with which the original ReaxFF and ReaxFF-MPNN potential models can do MD simulations and geometry optimizations within the ASE. Furthermore, machine learning, based on an active learning algorithm and gradient optimizer, is designed to train the model. We found that the active learning machine not only saves the manual work to collect the training data but is also much more effective than the general optimizer.

NOT USED (low confidence) S. K. Achar, L. Zhang, and J. Johnson, “Efficiently Trained Deep Learning Potential for Graphane,” The Journal of Physical Chemistry C. 2021. link Times cited: 12

NOT USED (low confidence) M. Gilbert et al., “Perspectives on multiscale modelling and experiments to accelerate materials development for fusion,” Journal of Nuclear Materials. 2021. link Times cited: 33

NOT USED (low confidence) K. Yang et al., “Self-supervised learning and prediction of microstructure evolution with convolutional recurrent neural networks,” Patterns. 2021. link Times cited: 32
NOT USED (low confidence) K. Fujioka, Y. Luo, and R. Sun, “Active Machine Learning for Chemical Dynamics Simulations. I. Estimating the Energy Gradient,” ChemRxiv. 2021. link Times cited: 0 Abstract: Ab initio molecular dymamics (AIMD) simulation studies are a direct way to visualize chemical reactions and help elucidate non-statistical dynamics that does not follow the intrinsic reaction coordinate. However, due to the enormous amount of the ab initio energy gradient calculations needed for AIMD, it has been largely restrained to limited sampling and low level of theory (i.e., density functional theory with small basis sets). To overcome this issue, a number of machine learning (ML) methods have been employed to predict the energy gradient of the system of interest. In this manuscript, we outline the theoretical foundations of a novel ML method which trains from a varying set of atomic positions and their energy gradients, called interpolating moving ridge regression (IMRR), and directly predicts the energy gradient of a new set of atomic positions. Several key theoretical findings are presented regarding the inputs used to train IMRR and the predicted energy gradient. A hyperparameter used to guide IMRR is rigorously examined as well. The method is then applied to three bimolecular reactions studied with AIMD, including HBr+ + CO2, H2S + CH, and C4H2 + CH, to demonstrate IMRR’s performance on different chemical systems of different sizes. This manuscript also compares the computational cost of the energy gradient calculation with IMRR vs. ab initio, and the results highlight IMRR as a viable option to greatly increase the efficiency of AIMD.

NOT USED (low confidence) Y. Choi et al., “CHARMM-GUI Polymer Builder for Modeling and Simulation of Synthetic Polymers.,” Journal of chemical theory and computation. 2021. link Times cited: 48 Abstract: Molecular modeling and simulations are invaluable tools for polymer science and engineering, which predict physicochemical properties of polymers and provide molecular-level insight into the underlying mechanisms. However, building realistic polymer systems is challenging and requires considerable experience because of great variations in structures as well as length and time scales. This work describes Polymer Builder in CHARMM-GUI (http://www.charmm-gui.org/input/polymer), a web-based infrastructure that provides a generalized and automated process to build a relaxed polymer system. Polymer Builder not only provides versatile modeling methods to build complex polymer structures, but also generates realistic polymer melt and solution systems through the built-in coarse-grained model and all-atom replacement. The coarse-grained model parametrization is generalized and extensively validated with various experimental data and all-atom simulations. In addition, the capability of Polymer Builder for generating relaxed polymer systems is demonstrated by density calculations of 34 homopolymer melt systems, characteristic ratio calculations of 170 homopolymer melt systems, a morphology diagram of poly(styrene-b-methyl methacrylate) block copolymers, and self-assembly behavior of amphiphilic poly(ethylene oxide-b-ethylethane) block copolymers in water. We hope that Polymer Builder is useful to carry out innovative and novel polymer modeling and simulation research to acquire insight into structures, dynamics, and underlying mechanisms of complex polymer-containing systems.

NOT USED (low confidence) F. Musil, A. Grisafi, A. P. Bartók, C. Ortner, G. Csányi, and M. Ceriotti, “Physics-Inspired Structural Representations for Molecules and Materials.,” Chemical reviews. 2021. link Times cited: 210 Abstract: The first step in the construction of a regression model or a data-driven analysis, aiming to predict or elucidate the relationship between the atomic-scale structure of matter and its properties, involves transforming the Cartesian coordinates of the atoms into a suitable representation. The development of atomic-scale representations has played, and continues to play, a central role in the success of machine-learning methods for chemistry and materials science. This review summarizes the current understanding of the nature and characteristics of the most commonly used structural and chemical descriptions of atomistic structures, highlighting the deep underlying connections between different frameworks and the ideas that lead to computationally efficient and universally applicable models. It emphasizes the link between properties, structures, their physical chemistry, and their mathematical description, provides examples of recent applications to a diverse set of chemical and materials science problems, and outlines the open questions and the most promising research directions in the field.

NOT USED (low confidence) J. Ding et al., “Machine learning for molecular thermodynamics,” Chinese Journal of Chemical Engineering. 2021. link Times cited: 16
NOT USED (low confidence) B. G. del Rio, C. Kuenneth, H. Tran, and R. Ramprasad, “An Efficient Deep Learning Scheme To Predict the Electronic Structure of Materials and Molecules: The Example of Graphene-Derived Allotropes.,” The journal of physical chemistry. A. 2020. link Times cited: 12 Abstract: Computations based on density functional theory (DFT) are transforming various aspects of materials research and discovery. However, the effort required to solve the central equation of DFT, namely the Kohn-Sham equation, which remains a major obstacle for studying large systems with hundreds of atoms in a practical amount of time with routine computational resources. Here, we propose a deep learning architecture that systematically learns the input-output behavior of the Kohn-Sham equation and predicts the electronic density of states, a primary output of DFT calculations, with unprecedented speed and chemical accuracy. The algorithm also adapts and progressively improves in predictive power and versatility as it is exposed to new diverse atomic configurations. We demonstrate this capability for a diverse set of carbon allotropes spanning a large configurational and phase space. The electronic density of states, along with the electronic charge density, may be used downstream to predict a variety of materials properties, bypassing the Kohn-Sham equation, leading to an ultrafast and high-fidelity DFT emulator.

NOT USED (low confidence) S. A. Etesami, M. Laradji, and E. Asadi, “Reliability of molecular dynamics interatomic potentials for modeling of titanium in additive manufacturing processes,” Computational Materials Science. 2020. link Times cited: 5

NOT USED (low confidence) W. Li, Y. Ando, and S. Watanabe, “Effects of density and composition on the properties of amorphous alumina: A high-dimensional neural network potential study.,” The Journal of chemical physics. 2020. link Times cited: 5 Abstract: Amorphous alumina (a-AlOx), which plays important roles in several technological fields, shows a wide variation of density and composition. However, their influences on the properties of a-AlOx have rarely been investigated from a theoretical perspective. In this study, high-dimensional neural network potentials were constructed to generate a series of atomic structures of a-AlOx with different densities (2.6 g/cm3-3.3 g/cm3) and O/Al ratios (1.0-1.75). The structural, vibrational, mechanical, and thermal properties of the a-AlOx models were investigated, as well as the Li and Cu diffusion behavior in the models. The results showed that density and composition had different degrees of effects on the different properties. The structural and vibrational properties were strongly affected by composition, whereas the mechanical properties were mainly determined by density. The thermal conductivity was affected by both the density and composition of a-AlOx. However, the effects on the Li and Cu diffusion behavior were relatively unclear.

NOT USED (low confidence) M. C. Barry, K. Wise, S. Kalidindi, and S. Kumar, “Voxelized Atomic Structure Potentials: Predicting Atomic Forces with the Accuracy of Quantum Mechanics Using Convolutional Neural Networks.,” The journal of physical chemistry letters. 2020. link Times cited: 9 Abstract: This paper introduces Voxelized Atomic Structure (VASt) potentials as a machine learning (ML) framework for developing interatomic potentials. The VASt framework utilizes a voxelized representation of the atomic structure directly as the input to a convolutional neural network (CNN). This allows for high fidelity representations of highly complex and diverse spatial arrangements of the atomic environments of interest. The CNN implicitly establishes the low-dimensional features needed to correlate each atomic neighborhood to its net atomic force. The selection of the salient features of the atomic structure (i.e., feature engineering) in the VASt framework is implicit, comprehensive, automated, scalable, and highly efficient. The calibrated convolutional layers learn the complex spatial relationships and multibody interactions that govern the physics of atomic systems with remarkable fidelity. We show that VASt potentials predict highly accurate forces on two phases of silicon carbide and the thermal conductivity of silicon over a range of isotropic strain.

NOT USED (low confidence) J. P. Allers, J. A. Harvey, F. Garzon, and T. Alam, “Machine learning prediction of self-diffusion in Lennard-Jones fluids.,” The Journal of chemical physics. 2020. link Times cited: 32 Abstract: Different machine learning (ML) methods were explored for the prediction of self-diffusion in Lennard-Jones (LJ) fluids. Using a database of diffusion constants obtained from the molecular dynamics simulation literature, multiple Random Forest (RF) and Artificial Neural Net (ANN) regression models were developed and characterized. The role and improved performance of feature engineering coupled to the RF model development was also addressed. The performance of these different ML models was evaluated by comparing the prediction error to an existing empirical relationship used to describe LJ fluid diffusion. It was found that the ANN regression models provided superior prediction of diffusion in comparison to the existing empirical relationships.

NOT USED (low confidence) P. O. Dral, A. Owens, A. Dral, and G. Csányi, “Hierarchical machine learning of potential energy surfaces.,” The Journal of chemical physics. 2020. link Times cited: 49 Abstract: We present hierarchical machine learning (hML) of highly accurate potential energy surfaces (PESs). Our scheme is based on adding predictions of multiple Δ-machine learning models trained on energies and energy corrections calculated with a hierarchy of quantum chemical methods. Our (semi-)automatic procedure determines the optimal training set size and composition of each constituent machine learning model, simultaneously minimizing the computational effort necessary to achieve the required accuracy of the hML PES. Machine learning models are built using kernel ridge regression, and training points are selected with structure-based sampling. As an illustrative example, hML is applied to a high-level ab initio CH3Cl PES and is shown to significantly reduce the computational cost of generating the PES by a factor of 100 while retaining similar levels of accuracy (errors of ∼1 cm-1).

NOT USED (low confidence) P. Pattnaik, S. Raghunathan, T. Kalluri, P. Bhimalapuram, C. V. Jawahar, and U. Priyakumar, “Machine Learning for Accurate Force Calculations in Molecular Dynamics Simulations.,” The journal of physical chemistry. A. 2020. link Times cited: 32 Abstract: The computationally expensive nature of ab initio molecular dynamics simulations severely limits its ability to simulate large system sizes and long time scales, both of which are necessary to imitate experimental conditions. In this work, we explore an approach to make use of the data obtained using the quantum mechanical density functional theory (DFT) on small systems and use deep learning to subsequently simulate large systems by taking liquid argon as a test case. A suitable vector representation was chosen to represent the surrounding environment of each Ar atom, and a ΔNetFF machine learning model where, the neural network was trained to predict the difference in resultant forces obtained by DFT and classical force fields was introduced. Molecular dynamics simulations were then performed using forces from the neural network for various system sizes and time scales depending on the properties we calculated. A comparison of properties obtained from the classical force field and the neural network model was presented alongside available experimental data to validate the proposed method.

NOT USED (low confidence) M. Hodapp and A. Shapeev, “In operando active learning of interatomic interaction during large-scale simulations,” Machine Learning: Science and Technology. 2020. link Times cited: 17 Abstract: A well-known drawback of state-of-the-art machine-learning interatomic potentials is their poor ability to extrapolate beyond the training domain. For small-scale problems with tens to hundreds of atoms this can be solved by using active learning which is able to select atomic configurations on which a potential attempts extrapolation and add them to the ab initio-computed training set. In this sense an active learning algorithm can be viewed as an on-the-fly interpolation of an ab initio model. For large-scale problems, possibly involving tens of thousands of atoms, this is not feasible because one cannot afford even a single density functional theory (DFT) computation with such a large number of atoms. This work marks a new milestone toward fully automatic ab initio-accurate large-scale atomistic simulations. We develop an active learning algorithm that identifies local subregions of the simulation region where the potential extrapolates. Then the algorithm constructs periodic configurations out of these local, non-periodic subregions, sufficiently small to be computable with plane-wave DFT codes, in order to obtain accurate ab initio energies. We benchmark our algorithm on the problem of screw dislocation motion in bcc tungsten and show that our algorithm reaches ab initio accuracy, down to typical magnitudes of numerical noise in DFT codes. We show that our algorithm reproduces material properties such as core structure, Peierls barrier, and Peierls stress. This unleashes new capabilities for computational materials science toward applications which have currently been out of scope if approached solely by ab initio methods.

NOT USED (low confidence) D. Kamal, A. Chandrasekaran, R. Batra, and R. Ramprasad, “A charge density prediction model for hydrocarbons using deep neural networks,” Machine Learning: Science and Technology. 2020. link Times cited: 17 Abstract: The electronic charge density distribution ρ(r) of a given material is among the most fundamental quantities in quantum simulations from which many large scale properties and observables can be calculated. Conventionally, ρ(r) is obtained using Kohn–Sham density functional theory (KS-DFT) based methods. But, the high computational cost of KS-DFT renders it intractable for systems involving thousands/millions of atoms. Thus, recently there has been efforts to bypass expensive KS equations, and directly predict ρ(r) using machine learning (ML) based methods. Here, we build upon one such scheme to create a robust and reliable ρ(r) prediction model for a diverse set of hydrocarbons, involving huge chemical and morphological complexity (saturated, unsaturated molecules, cyclo-groups and amorphous and semi-crystalline polymers). We utilize a grid-based fingerprint to capture the atomic neighborhood around an arbitrary point in space, and map it to the reference ρ(r) obtained from standard DFT calculations at that point. Owing to the grid-based learning, dataset sizes exceed billions of points, which is trained using deep neural networks in conjunction with a incremental learning based approach. The accuracy and transferability of the ML approach is demonstrated on not only a diverse test set, but also on a completely unseen system of polystyrene under different strains. Finally, we note that the general approach adopted here could be easily extended to other material systems, and can be used for quick and accurate determination of ρ(r) for DFT charge density initialization, computing dipole or quadrupole, and other observables for which reliable density functional are known.

NOT USED (low confidence) G. Pilania, P. Balachandran, J. Gubernatis, and T. Lookman, “Data-Based Methods for Materials Design and Discovery: Basic Ideas and General Methods.” 2020. link Times cited: 11 Abstract: Machine learning methods are changing the way we design and discover new materials. This book provides an overview of approaches successfully used in addressing materials problems (alloys,...
NOT USED (low confidence) M. E. Khatib and W. A. Jong, “ML4Chem: A Machine Learning Package for Chemistry and Materials Science,” ArXiv. 2020. link Times cited: 3 Abstract: ML4Chem is an open-source machine learning library for chemistry and materials science. It provides an extendable platform to develop and deploy machine learning models and pipelines and is targeted to the non-expert and expert users. ML4Chem follows user-experience design and offers the needed tools to go from data preparation to inference. Here we introduce its atomistic module for the implementation, deployment, and reproducibility of atom-centered models. This module is composed of six core building blocks: data, featurization, models, model optimization, inference, and visualization. We present their functionality and ease of use with demonstrations utilizing neural networks and kernel ridge regression algorithms.

NOT USED (low confidence) J. Chapman, R. Batra, and R. Ramprasad, “Machine learning models for the prediction of energy, forces, and stresses for Platinum,” Computational Materials Science. 2020. link Times cited: 18

NOT USED (low confidence) T. J. Oweida, A.-U. Mahmood, M. D. Manning, S. Rigin, and Y. G. Yingling, “Merging Materials and Data Science: Opportunities, Challenges, and Education in Materials Informatics,” MRS Advances. 2020. link Times cited: 6 Abstract: Since the launch of the Materials Genome Initiative (MGI) the field of materials informatics (MI) emerged to remove the bottlenecks limiting the pathway towards rapid materials discovery. Although the machine learning (ML) and optimization techniques underlying MI were developed well over a decade ago, programs such as the MGI encouraged researchers to make the technical advancements that make these tools suitable for the unique challenges in materials science and engineering. Overall, MI has seen a remarkable rate in adoption over the past decade. However, for the continued growth of MI, the educational challenges associated with applying data science techniques to analyse materials science and engineering problems must be addressed. In this paper, we will discuss the growing use of materials informatics in academia and industry, highlight the need for educational advances in materials informatics, and discuss the implementation of a materials informatics course into the curriculum to jump-start interested students with the skills required to succeed in materials informatics projects.

NOT USED (low confidence) F. Guo et al., “Intelligent-ReaxFF: Evaluating the reactive force field parameters with machine learning,” Computational Materials Science. 2020. link Times cited: 29

NOT USED (low confidence) A. Marcolongo, T. Binninger, F. Zipoli, and T. Laino, “Simulating Diffusion Properties of Solid‐State Electrolytes via a Neural Network Potential: Performance and Training Scheme,” ChemSystemsChem. 2019. link Times cited: 24 Abstract: The recently published DeePMD model (this https URL), based on a deep neural network architecture, brings the hope of solving the time-scale issue which often prevents the application of first principle molecular dynamics to physical systems. With this contribution we assess the performance of the DeePMD potential on a real-life application and model diffusion of ions in solid-state electrolytes. We consider as test cases the well known Li10GeP2S12, Li7La3Zr2O12 and Na3Zr2Si2PO12. We develop and test a training protocol suitable for the computation of diffusion coefficients, which is one of the key properties to be optimized for battery applications, and we find good agreement with previous computations. Our results show that the DeePMD model may be a successful component of a framework to identify novel solid-state electrolytes.
NOT USED (low confidence) D. Dubbeldam, K. S. Walton, T. Vlugt, and S. Calero, “Design, Parameterization, and Implementation of Atomic Force Fields for Adsorption in Nanoporous Materials,” Advanced Theory and Simulations. 2019. link Times cited: 42 Abstract: Molecular simulations are an excellent tool to study adsorption and diffusion in nanoporous materials. Examples of nanoporous materials are zeolites, carbon nanotubes, clays, metal‐organic frameworks (MOFs), covalent organic frameworks (COFs) and zeolitic imidazolate frameworks (ZIFs). The molecular confinement these materials offer has been exploited in adsorption and catalysis for almost 50 years. Molecular simulations have provided understanding of the underlying shape selectivity, and adsorption and diffusion effects. Much of the reliability of the modeling predictions depends on the accuracy and transferability of the force field. However, flexibility and the chemical and structural diversity of MOFs add significant challenges for engineering force fields that are able to reproduce experimentally observed structural and dynamic properties. Recent developments in design, parameterization, and implementation of force fields for MOFs and zeolites are reviewed.

NOT USED (low confidence) V. L. Deringer, M. A. Caro, and G. Csányi, “Machine Learning Interatomic Potentials as Emerging Tools for Materials Science,” Advanced Materials. 2019. link Times cited: 245 Abstract: Atomic‐scale modeling and understanding of materials have made remarkable progress, but they are still fundamentally limited by the large computational cost of explicit electronic‐structure methods such as density‐functional theory. This Progress Report shows how machine learning (ML) is currently enabling a new degree of realism in materials modeling: by “learning” electronic‐structure data, ML‐based interatomic potentials give access to atomistic simulations that reach similar accuracy levels but are orders of magnitude faster. A brief introduction to the new tools is given, and then, applications to some select problems in materials science are highlighted: phase‐change materials for memory devices; nanoparticle catalysts; and carbon‐based electrodes for chemical sensing, supercapacitors, and batteries. It is hoped that the present work will inspire the development and wider use of ML‐based interatomic potentials in diverse areas of materials research.

NOT USED (low confidence) A. S. Christensen, L. A. Bratholm, F. A. Faber, and O. A. von Lilienfeld, “FCHL revisited: Faster and more accurate quantum machine learning.,” The Journal of chemical physics. 2019. link Times cited: 201 Abstract: We introduce the FCHL19 representation for atomic environments in molecules or condensed-phase systems. Machine learning models based on FCHL19 are able to yield predictions of atomic forces and energies of query compounds with chemical accuracy on the scale of milliseconds. FCHL19 is a revision of our previous work [F. A. Faber et al., J. Chem. Phys. 148, 241717 (2018)] where the representation is discretized and the individual features are rigorously optimized using Monte Carlo optimization. Combined with a Gaussian kernel function that incorporates elemental screening, chemical accuracy is reached for energy learning on the QM7b and QM9 datasets after training for minutes and hours, respectively. The model also shows good performance for non-bonded interactions in the condensed phase for a set of water clusters with a mean absolute error (MAE) binding energy error of less than 0.1 kcal/mol/molecule after training on 3200 samples. For force learning on the MD17 dataset, our optimized model similarly displays state-of-the-art accuracy with a regressor based on Gaussian process regression. When the revised FCHL19 representation is combined with the operator quantum machine learning regressor, forces and energies can be predicted in only a few milliseconds per atom. The model presented herein is fast and lightweight enough for use in general chemistry problems as well as molecular dynamics simulations.

NOT USED (low confidence) T. D. Huan, R. Batra, J. Chapman, C. Kim, A. Chandrasekaran, and R. Ramprasad, “Iterative-Learning Strategy for the Development of Application-Specific Atomistic Force Fields,” The Journal of Physical Chemistry C. 2019. link Times cited: 18 Abstract: Emerging data-driven approaches in materials science have triggered the development of numerous machine-learning force fields. In practice, they are constructed by training a statistical model on a reference database to predict potential energy and/or atomic forces. Although most of the force fields can accurately recover the properties of the training set, some of them are becoming useful for actual molecular dynamics simulations. In this work, we employ a simple iterative-learning strategy for the development of machine-learning force fields targeted at specific simulations (applications). The strategy involves (1) preparing and fingerprinting a diverse reference database of atomic configurations and forces, (2) generating a pool of machine-learning force fields by learning the reference data, (3) validating the force fields against a series of targeted applications, and (4) selectively and recursively improving the force fields that are unsuitable for a given application while keeping their performance...

NOT USED (low confidence) A. Goryaeva, J. Maillet, and M. Marinica, “Towards better efficiency of interatomic linear machine learning potentials,” Computational Materials Science. 2019. link Times cited: 34

NOT USED (low confidence) Y. Zuo et al., “A Performance and Cost Assessment of Machine Learning Interatomic Potentials.,” The journal of physical chemistry. A. 2019. link Times cited: 413 Abstract: Machine learning of the quantitative relationship between local environment descriptors and the potential energy surface of a system of atoms has emerged as a new frontier in the development of interatomic potentials (IAPs). Here, we present a comprehensive evaluation of ML-IAPs based on four local environment descriptors --- atom-centered symmetry functions (ACSF), smooth overlap of atomic positions (SOAP), the Spectral Neighbor Analysis Potential (SNAP) bispectrum components, and moment tensors --- using a diverse data set generated using high-throughput density functional theory (DFT) calculations. The data set comprising bcc (Li, Mo) and fcc (Cu, Ni) metals and diamond group IV semiconductors (Si, Ge) is chosen to span a range of crystal structures and bonding. All descriptors studied show excellent performance in predicting energies and forces far surpassing that of classical IAPs, as well as predicting properties such as elastic constants and phonon dispersion curves. We observe a general trade-off between accuracy and the degrees of freedom of each model, and consequently computational cost. We will discuss these trade-offs in the context of model selection for molecular dynamics and other applications.

NOT USED (low confidence) M. Bogojeski, L. Vogt-Maranto, M. Tuckerman, K. Müller, and K. Burke, “Quantum chemical accuracy from density functional approximations via machine learning,” Nature Communications. 2019. link Times cited: 156

NOT USED (low confidence) V. N. Robinson, H. Zong, G. Ackland, G. Woolman, and A. Hermann, “On the chain-melted phase of matter,” Proceedings of the National Academy of Sciences. 2019. link Times cited: 19 Abstract: Significance Several elements form host–guest structures under pressure. Upon heating, the guest atoms can “melt,” while the host atoms remain crystalline. In this partially molten state, the “molten” guest atoms remain confined to 1D channels, which suggests thermodynamically impossible 1D melting. The complicated crystal structures, with incommensurate ratios between host and guest atoms, prohibit simulations with electronic structure methods. We develop here a classical interatomic forcefield for the element potassium using machine-learning techniques and simulate the chain-melted state with up to 20,000 atoms. We show that in the chain-melted state, guest-atom correlations are lost in three dimensions, providing the entropy necessary for its thermodynamic stability. Various single elements form incommensurate crystal structures under pressure, where a zeolite-type “host” sublattice surrounds a “guest” sublattice comprising 1D chains of atoms. On “chain melting,” diffraction peaks from the guest sublattice vanish, while those from the host remain. Diffusion of the guest atoms is expected to be confined to the channels in the host sublattice, which suggests 1D melting. Here, we present atomistic simulations of potassium to investigate this phenomenon and demonstrate that the chain-melted phase has no long-ranged order either along or between the chains. This 3D disorder provides the extensive entropy necessary to make the chain melt a true thermodynamic phase of matter, yet with the unique property that diffusion remains confined to 1D only. Calculations necessitated the development of an interatomic forcefield using machine learning, which we show fully reproduces potassium’s phase diagram, including the chain-melted state and 14 known phase transitions.
NOT USED (low confidence) H. Wang, X. Guo, L. Zhang, H. Wang, and J. Xue, “Deep learning inter-atomic potential model for accurate irradiation damage simulations,” Applied Physics Letters. 2019. link Times cited: 32 Abstract: We propose a hybrid scheme that interpolates smoothly the Ziegler-Biersack-Littmark (ZBL) screened nuclear repulsion potential with a newly developed deep learning potential energy model. The resulting DP-ZBL model can not only provide overall good performance on the predictions of near-equilibrium material properties but also capture the right physics when atoms are extremely close to each other, an event that frequently happens in computational simulations of irradiation damage events. We applied this scheme to the simulation of the irradiation damage processes in the face-centered-cubic aluminium system, and found better descriptions in terms of the defect formation energy, evolution of collision cascades, displacement threshold energy, and residual point defects, than the widely-adopted ZBL modified embedded atom method potentials and its variants. Our work provides a reliable and feasible scheme to accurately simulate the irradiation damage processes and opens up new opportunities to solve the predicament of lacking accurate potentials for enormous newly-discovered materials in the irradiation effect field.

NOT USED (low confidence) L. Chen, S. Venkatram, C. Kim, R. Batra, A. Chandrasekaran, and R. Ramprasad, “Electrochemical Stability Window of Polymeric Electrolytes,” Chemistry of Materials. 2019. link Times cited: 61 Abstract: The electrochemical stability window (ESW) is a fundamental consideration for choosing polymers as solid electrolytes in lithium-ion batteries. Morphological and chemical aspects of the polymer matrix and its complex interactions with lithium salts make it difficult to estimate the ESW of the polymer electrolyte, either computationally or experimentally. In this work, we propose a practical computational procedure to estimate the ESW due to just one dominant factor, i.e., the polymer matrix, using first-principles density functional theory computations. Diverse model polymers (10) were investigated, namely, polyethylene, polyketone, poly(ethylene oxide), poly(propylene oxide), poly(vinyl alcohol), polycaprolactone, poly(methyl methacrylate), poly(ethyl acrylate), poly(vinyl chloride), and poly(vinylidene fluoride). For each case, an increasingly complex hierarchy of structural models was considered to elucidate the impact of polymer chemistry and the morphological complexity on the ESW. Favorable agreemen...

NOT USED (low confidence) S. Chmiela, H. E. S. Felix, I. Poltavsky, K. Müller, and A. Tkatchenko, “sGDML: Constructing accurate and data efficient molecular force fields using machine learning,” Comput. Phys. Commun. 2018. link Times cited: 153

NOT USED (low confidence) W. Li and Y. Ando, “Comparison of different machine learning models for the prediction of forces in copper and silicon dioxide.,” Physical chemistry chemical physics : PCCP. 2018. link Times cited: 17 Abstract: Recently, the machine learning (ML) force field has emerged as a powerful atomic simulation approach because of its high accuracy and low computational cost. However, there have been relatively fewer applications to multicomponent materials. In this study, we construct and compare ML force fields for both an elemental material (Cu) and binary material (SiO2) with varied inputs and regression models. The atomic environments are described by structural fingerprints that take into account the bond angle, and then, different ML techniques, including linear regression, a neural network and a mixture model method, are used to learn the structure-force relationship. We found that using angular structural fingerprints and a mixture model method significantly improves the accuracy of ML force fields. Additionally, we discuss an effective structural fingerprint auto-selection method based on the least absolute shrinkage and selection operator and the genetic algorithm. The atomic simulations conducted for ML force fields are in excellent agreement with ab initio calculations. As a result of the simulation with our ML force field for the structural and vibrational properties of amorphous SiO2, simulated annealing with a slow cooling rate improved the ring statistics in the amorphous structure and the phonon density of states.

NOT USED (low confidence) Z. You, “Machine learning and statistical analysis in material property prediction.” 2018. link Times cited: 0 Abstract: With the development of algorithms, models and data-driven efforts in other areas, machine learning is beginning to make impacts in materials science and engineering. In this work, we review the basic steps of using machine learning in materials science. We also develop several machine learning methods to predict the two physically-distinct properties of transparent conductors: formation enthalpy, which is an indication of stability, and bandgap energy, which is an indication of optical transparency. These include regression-based models such as the ordinary least squares (OLS) regression model, stepwise selection model, Ridge model and Lasso model, and tree-based models such as the random forest model and gradient boosted model (GBM). We discuss the advantages and potential problems of each model and provide suggestions for possible applications.

NOT USED (low confidence) L. Chen, R. Batra, R. Ranganathan, G. Sotzing, Y. Cao, and R. Ramprasad, “Electronic Structure of Polymer Dielectrics: The Role of Chemical and Morphological Complexity,” Chemistry of Materials. 2018. link Times cited: 26 Abstract: The electronic structure of polymers contains signatures that correlate with their short-term and long-term integrity when subjected to large electric stresses. A detailed picture of the electronic structure of realistic models of polymers has been difficult to obtain, mainly due to the chemical and morphological complexity encountered in polymers. In this work, we have undertaken a comprehensive analysis of the electronic structure of six model polymers displaying chemical and morphological diversity, namely, polyethylene (PE), polypropylene (PP), polystyrene (PS), poly(methyl methacrylate) (PMMA), polyethylene terephthalate (PET), and polybutylene terephthalate (PBT), using first-principles density functional theory computations and classical molecular dynamics simulations. In particular, we have studied the role of monomer chemistry, tacticity, and large-scale morphological disorders in shaping the electronic structure of these polymers. We find that monomer chemistry and morphological disorder coopera...
link Times cited: 10 Abstract: We have designed a new method to fit the energy and atomic f… read moreAbstract: We have designed a new method to fit the energy and atomic forces using a single artificial neural network (SANN) for any number of chemical species present in a molecular system. The traditional approach for fitting the potential energy surface for a multicomponent system using artificial neural network (ANN) is to consider n number of networks for n number of chemical species in the system. This shoots the computational cost and makes it difficult to apply to a system containing more number of species. We present a new strategy of using a SANN to compute energy and forces of a chemical system. Since atomic forces are significant for geometry optimizations and molecular dynamics simulations for any chemical system, their accurate prediction is of utmost importance. So, to predict the atomic forces, we have modified the traditional way of fitting forces from underlying energy expression. We have applied our strategy to study geometry optimizations and dynamics in gold-silver nanoalloys and thiol protected gold nanoclusters. Also, force fitting has made it possible to train smaller sized systems and extrapolate the parameters to make accurate predictions for larger systems. This proposed strategy has definitely made the mapping and fitting of atomic forces easier and can be applied to a wide variety of molecular systems. read less NOT USED (low confidence) P. O. Dral, M. Barbatti, and W. Thiel, “Nonadiabatic Excited-State Dynamics with Machine Learning,” The Journal of Physical Chemistry Letters. 2018. link Times cited: 101 Abstract: We show that machine learning (ML) can be used to accurately… read moreAbstract: We show that machine learning (ML) can be used to accurately reproduce nonadiabatic excited-state dynamics with decoherence-corrected fewest switches surface hopping in a 1-D model system. We propose to use ML to significantly reduce the simulation time of realistic, high-dimensional systems with good reproduction of observables obtained from reference simulations. Our approach is based on creating approximate ML potentials for each adiabatic state using a small number of training points. We investigate the feasibility of this approach by using adiabatic spin-boson Hamiltonian models of various dimensions as reference methods. read less NOT USED (low confidence) L. T. Ward et al., “Matminer: An open source toolkit for materials data mining,” Computational Materials Science. 2018. link Times cited: 416 NOT USED (low confidence) C. Desgranges and J. Delhommelle, “A new approach for the prediction of partition functions using machine learning techniques.,” The Journal of chemical physics. 2018. link Times cited: 16 Abstract: Using machine learning (ML), we predict the partition functi… read moreAbstract: Using machine learning (ML), we predict the partition functions and, thus, all thermodynamic properties of atomic and molecular fluids over a wide range of temperatures and pressures. Our approach is based on training neural networks using, as a reference, the results of a few flat-histogram simulations. The neural network weights so obtained are then used to predict fluid properties that are shown to be in excellent agreement with the experiment and with simulation results previously obtained on argon, carbon dioxide, and water. 
In particular, the ML predictions for the Gibbs free energy, Helmholtz free energy, and entropy are shown to be highly accurate over a wide range of conditions and states for bulk phases as well as for the conditions of phase coexistence. Our ML approach thus provides access instantly to G, A, and S, thereby eliminating the need to carry out any additional simulations to explore the dependence of the fluid properties on the conditions of temperature and pressure. This is of particular interest, for e.g., the screening of new materials, as well as in the parameterization of force fields, for which this ML approach provides a rapid way to assess the impact of new sets of parameters on the system properties. read less NOT USED (low confidence) K. Gubaev, E. Podryabinkin, G. Hart, and A. Shapeev, “Accelerating high-throughput searches for new alloys with active learning of interatomic potentials,” Computational Materials Science. 2018. link Times cited: 208 NOT USED (low confidence) Y. Zhang, A. Khorshidi, G. Kastlunger, and A. Peterson, “The potential for machine learning in hybrid QM/MM calculations.,” The Journal of chemical physics. 2018. link Times cited: 40 Abstract: Hybrid quantum-mechanics/molecular-mechanics (QM/MM) simulat… read moreAbstract: Hybrid quantum-mechanics/molecular-mechanics (QM/MM) simulations are popular tools for the simulation of extended atomistic systems, in which the atoms in a core region of interest are treated with a QM calculator and the surrounding atoms are treated with an empirical potential. Recently, a number of atomistic machine-learning (ML) tools have emerged that provide functional forms capable of reproducing the output of more expensive electronic-structure calculations; such ML tools are intriguing candidates for the MM calculator in QM/MM schemes. Here, we suggest that these ML potentials provide several natural advantages when employed in such a scheme. In particular, they may allow for newer, simpler QM/MM frameworks while also avoiding the need for extensive training sets to produce the ML potential. The drawbacks of employing ML potentials in QM/MM schemes are also outlined, which are primarily based on the added complexity to the algorithm of training and re-training ML models. Finally, two simple illustrative examples are provided which show the power of adding a retraining step to such "QM/ML" algorithms. read less NOT USED (low confidence) M. Reveil and P. Clancy, “Classification of spatially resolved molecular fingerprints for machine learning applications and development of a codebase for their implementation.” 2018. link Times cited: 10 Abstract: Direct mapping between material structures and properties fo… read moreAbstract: Direct mapping between material structures and properties for various classes of materials is often the ultimate goal of materials researchers. Recent progress in the field of machine learning has created a unique path to develop such mappings based on empirical data. This new opportunity warranted the need for the development of advanced structural representations suitable for use with current machine learning algorithms. A number of such representations termed “molecular fingerprints” or descriptors have been proposed over the years for this purpose. In this paper, we introduce a classification framework to better explain and interpret existing fingerprinting schemes in the literature, with a focus on those with spatial resolution. 
We then present the implementation of SEING, a new codebase to computing those fingerprints, and we demonstrate its capabilities by building k-nearest neighbor (k-NN) models for force prediction that achieve a generalization accuracy of 0.1 meV A−1 and an R2 score as high as 0.99 at testing. Our results indicate that simple and generally overlooked k-NN models could be very promising compared to approaches such as neural networks, Gaussian processes, and support vector machines, which are more commonly used for machine learning-based predictions in computational materials science. read less NOT USED (low confidence) G. Pilania, K. Mcclellan, C. Stanek, and B. Uberuaga, “Physics-informed machine learning for inorganic scintillator discovery.,” The Journal of chemical physics. 2018. link Times cited: 31 Abstract: Applications of inorganic scintillators-activated with lanth… read moreAbstract: Applications of inorganic scintillators-activated with lanthanide dopants, such as Ce and Eu-are found in diverse fields. As a strict requirement to exhibit scintillation, the 4f ground state (with the electronic configuration of [Xe]4fn 5d0) and 5d1 lowest excited state (with the electronic configuration of [Xe]4fn-1 5d1) levels induced by the activator must lie within the host bandgap. Here we introduce a new machine learning (ML) based search strategy for high-throughput chemical space explorations to discover and design novel inorganic scintillators. Building upon well-known physics-based chemical trends for the host dependent electron binding energies within the 4f and 5d1 energy levels of lanthanide ions and available experimental data, the developed ML model-coupled with knowledge of the vacuum referred valence and conduction band edges computed from first principles-can rapidly and reliably estimate the relative positions of the activator's energy levels relative to the valence and conduction band edges of any given host chemistry. Using perovskite oxides and elpasolite halides as examples, the presented approach has been demonstrated to be able to (i) capture systematic chemical trends across host chemistries and (ii) effectively screen promising compounds in a high-throughput manner. While a number of other application-specific performance requirements need to be considered for a viable scintillator, the scheme developed here can be a practically useful tool to systematically down-select the most promising candidate materials in a first line of screening for a subsequent in-depth investigation. read less NOT USED (low confidence) F. Legrain, A. Roekeghem, S. Curtarolo, J. Carrete, G. Madsen, and N. Mingo, “Vibrational Properties of Metastable Polymorph Structures by Machine Learning,” Journal of chemical information and modeling. 2018. link Times cited: 13 Abstract: Despite vibrational properties being critical for the ab ini… read moreAbstract: Despite vibrational properties being critical for the ab initio prediction of finite-temperature stability as well as thermal conductivity and other transport properties of solids, their inclusion in ab initio materials repositories has been hindered by expensive computational requirements. Here we tackle the challenge, by showing that a good estimation of force constants and vibrational properties can be quickly achieved from the knowledge of atomic equilibrium positions using machine learning. 
A random-forest algorithm trained on 121 different mechanically stable structures of KZnF3 reaches a mean absolute error of 0.17 eV/Å2 for the interatomic force constants, and it is less expensive than training the complete force field for such compounds. The predicted force constants are then used to estimate phonon spectral features, heat capacities, vibrational entropies, and vibrational free energies, which compare well with the ab initio ones. The approach can be used for the rapid estimation of stability at finite temperatures. read less NOT USED (low confidence) S. Chmiela, H. E. Sauceda, K. Müller, and A. Tkatchenko, “Towards exact molecular dynamics simulations with machine-learned force fields,” Nature Communications. 2018. link Times cited: 474 NOT USED (low confidence) L. Shen and W. Yang, “Molecular Dynamics Simulations with Quantum Mechanics/Molecular Mechanics and Adaptive Neural Networks.,” Journal of chemical theory and computation. 2018. link Times cited: 92 Abstract: Direct molecular dynamics (MD) simulation with ab initio qua… read moreAbstract: Direct molecular dynamics (MD) simulation with ab initio quantum mechanical and molecular mechanical (QM/MM) methods is very powerful for studying the mechanism of chemical reactions in a complex environment but also very time-consuming. The computational cost of QM/MM calculations during MD simulations can be reduced significantly using semiempirical QM/MM methods with lower accuracy. To achieve higher accuracy at the ab initio QM/MM level, a correction on the existing semiempirical QM/MM model is an attractive idea. Recently, we reported a neural network (NN) method as QM/MM-NN to predict the potential energy difference between semiempirical and ab initio QM/MM approaches. The high-level results can be obtained using neural network based on semiempirical QM/MM MD simulations, but the lack of direct MD samplings at the ab initio QM/MM level is still a deficiency that limits the applications of QM/MM-NN. In the present paper, we developed a dynamic scheme of QM/MM-NN for direct MD simulations on the NN-predicted potential energy surface to approximate ab initio QM/MM MD. Since some configurations excluded from the database for NN training were encountered during simulations, which may cause some difficulties on MD samplings, an adaptive procedure inspired by the selection scheme reported by Behler [ Behler Int. J. Quantum Chem. 2015 , 115 , 1032 ; Behler Angew. Chem., Int. Ed. 2017 , 56 , 12828 ] was employed with some adaptions to update NN and carry out MD iteratively. We further applied the adaptive QM/MM-NN MD method to the free energy calculation and transition path optimization on chemical reactions in water. The results at the ab initio QM/MM level can be well reproduced using this method after 2-4 iteration cycles. The saving in computational cost is about 2 orders of magnitude. It demonstrates that the QM/MM-NN with direct MD simulations has great potentials not only for the calculation of thermodynamic properties but also for the characterization of reaction dynamics, which provides a useful tool to study chemical or biochemical systems in solution or enzymes. read less NOT USED (low confidence) S. Jindal and S. Bulusu, “An algorithm to use higher order invariants for modelling potential energy surface of nanoclusters,” Chemical Physics Letters. 2018. link Times cited: 3 NOT USED (low confidence) A. Mannodi-Kanakkithodi, T. D. Huan, and R. 
Ramprasad, “Mining materials design rules from data: The example of polymer dielectrics,” Chemistry of Materials. 2017. link Times cited: 44 Abstract: Mining of currently available and evolving materials databas… read moreAbstract: Mining of currently available and evolving materials databases to discover structure–chemistry–property relationships is critical to developing an accelerated materials design framework. The design of new and advanced polymeric dielectrics for capacitive energy storage has been hampered by the lack of sufficient data encompassing wide enough chemical spaces. Here, data mining and analysis techniques are applied on a recently presented computational data set of around 1100 organic polymers, organometallic polymers, and related molecular crystals, in order to obtain qualitative understanding of the origins of dielectric and electronic properties. By probing the relationships between crucial chemical and structural features of materials and their dielectric constant and band gap, design rules are devised for optimizing either property. Learning from this data set provides guidance to experiments and to future computations, as well as a way of expanding the pool of promising polymer candidates for dielectric ... read less NOT USED (low confidence) Y.-H. Tang, D. Zhang, and G. Karniadakis, “An Atomistic Fingerprint Algorithm for Learning Ab Initio Molecular Force Fields,” The Journal of chemical physics. 2017. link Times cited: 25 Abstract: Molecular fingerprints, i.e., feature vectors describing ato… read moreAbstract: Molecular fingerprints, i.e., feature vectors describing atomistic neighborhood configurations, is an important abstraction and a key ingredient for data-driven modeling of potential energy surface and interatomic force. In this paper, we present the density-encoded canonically aligned fingerprint algorithm, which is robust and efficient, for fitting per-atom scalar and vector quantities. The fingerprint is essentially a continuous density field formed through the superimposition of smoothing kernels centered on the atoms. Rotational invariance of the fingerprint is achieved by aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame computed from a kernel minisum optimization procedure. We show that this approach is superior over principal components analysis-based methods especially when the atomistic neighborhood is sparse and/or contains symmetry. We propose that the "distance" between the density fields be measured using a volume integral of their pointwise difference. This can be efficiently computed using optimal quadrature rules, which only require discrete sampling at a small number of grid points. We also experiment on the choice of weight functions for constructing the density fields and characterize their performance for fitting interatomic potentials. The applicability of the fingerprint is demonstrated through a set of benchmark problems. read less NOT USED (low confidence) C. Kim, T. D. Huan, S. Krishnan, and R. Ramprasad, “A hybrid organic-inorganic perovskite dataset,” Scientific Data. 2017. link Times cited: 109 NOT USED (low confidence) R. Ramakrishnan and O. von Lilienfeld, “Machine Learning, Quantum Chemistry, and Chemical Space.” 2017. link Times cited: 61 NOT USED (low confidence) E. Podryabinkin and A. Shapeev, “Active learning of linearly parametrized interatomic potentials,” Computational Materials Science. 2016. link Times cited: 351 NOT USED (low confidence) S. Chmiela, A. 
Tkatchenko, H. E. Sauceda, I. Poltavsky, K. T. Schütt, and K. Müller, “Machine learning of accurate energy-conserving molecular force fields,” Science Advances. 2016. link Times cited: 808 Abstract: The law of energy conservation is used to develop an efficie… read moreAbstract: The law of energy conservation is used to develop an efficient machine learning approach to construct accurate force fields. Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å̊−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. read less NOT USED (low confidence) S. De, F. Musil, T. Ingram, C. Baldauf, and M. Ceriotti, “Mapping and classifying molecules from a high-throughput structural database,” Journal of Cheminformatics. 2016. link Times cited: 28 NOT USED (low confidence) T. Mueller, A. Kusne, and R. Ramprasad, “Machine Learning in Materials Science,” Reviews in Computational Chemistry. 2016. link Times cited: 205 NOT USED (low confidence) A. Ajagekar and F. You, “Quantum computing and quantum artificial intelligence for renewable and sustainable energy: A emerging prospect towards climate neutrality,” Renewable and Sustainable Energy Reviews. 2022. link Times cited: 17 NOT USED (low confidence) X. Liu, Q. Wang, and J. Zhang, “Machine Learning Interatomic Force Fields for Carbon Allotropic Materials.” 2021. link Times cited: 0 NOT USED (low confidence) P. O. Dral, “Quantum chemistry assisted by machine learning.” 2020. link Times cited: 14 NOT USED (low confidence) S. Chmiela, “Towards exact molecular dynamics simulations with invariant machine-learned models.” 2019. link Times cited: 8 Abstract: Molecular dynamics (MD) simulations constitute the cornersto… read moreAbstract: Molecular dynamics (MD) simulations constitute the cornerstone of contemporary atomistic modeling in chemistry, biology, and materials science. However, one of the widely recognized and increasingly pressing issues in MD simulations is the lack of accuracy of underlying classical interatomic potentials, which hinders truly predictive modeling of dynamics and function of (bio)molecular systems. Classical potentials often fail to faithfully capture key quantum effects in molecules and materials. In this thesis, we develop a combined machine learning (ML) and quantum mechanics approach that enables the direct reconstruction of flexible molecular force fields from high-level ab initio calculations. 
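A toy illustration of the energy-conservation idea running through the GDML-related entries above (the sGDML code paper, the Science Advances article, and the thesis that begins this entry): fit a single scalar energy model and obtain forces from its negative gradient, so energies and forces cannot become inconsistent. This minimal sketch uses an ordinary Gaussian-kernel ridge fit on an invented descriptor and numerical gradients; it is not the GDML algorithm, which learns directly in the gradient (force) domain.

```python
# Minimal sketch of a conservative ML force field: one learned scalar energy,
# forces taken as its negative gradient. Descriptor, data, and hyperparameters
# are toy assumptions, not those of GDML/sGDML.
import numpy as np

rng = np.random.default_rng(0)


def descriptor(pos):
    """Toy descriptor: inverse pairwise distances of an N x 3 configuration."""
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    return 1.0 / r[iu]


def kernel(X, Y, gamma=20.0):
    """Gaussian kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


# Made-up training data: random 3-atom configurations with a placeholder
# "reference energy" standing in for ab initio values.
configs = [rng.normal(scale=0.2, size=(3, 3)) + 2.0 * np.eye(3) for _ in range(30)]
X_train = np.array([descriptor(p) for p in configs])
E_train = np.array([np.sum(1.0 / descriptor(p)) for p in configs])

# Kernel ridge fit of the scalar energy surface.
alpha = np.linalg.solve(kernel(X_train, X_train) + 1e-6 * np.eye(len(X_train)), E_train)


def energy(pos):
    """Learned scalar energy for one configuration."""
    return float(kernel(descriptor(pos)[None, :], X_train)[0] @ alpha)


def forces(pos, eps=1e-4):
    """Conservative forces: negative (numerical) gradient of the one learned energy."""
    f = np.zeros_like(pos)
    for i in range(pos.shape[0]):
        for j in range(3):
            step = np.zeros_like(pos)
            step[i, j] = eps
            f[i, j] = -(energy(pos + step) - energy(pos - step)) / (2.0 * eps)
    return f


print(forces(configs[0]))
```

Because every force component is a derivative of the same scalar field, the predicted forces integrate back to the predicted energy, which is the property GDML enforces exactly by construction.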
We approach this challenge by incorporating fundamental physical symmetries and conservation laws into ML techniques. Using conservation of energy – a fundamental property of closed classical and quantum mechanical systems – we derive an efficient gradient-domain machine learning (GDML) model. The challenge of constructing conservative force fields is accomplished by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. We proceed with the development of a multi-partite matching algorithm that enables a fully automated recovery of physically relevant point-group and fluxional symmetries from the training dataset into a symmetric variant of our model. The developed symmetric GDML (sGDML) approach faithfully reproduces global force fields at quantum-chemical CCSD(T) level of accuracy and allows converged MD simulations with fully quantized electrons and nuclei. We present MD simulations, for flexible molecules with up to a few dozen atoms and provide insights into the dynamical behavior of these molecules. Our approach provides the key missing ingredient for achieving spectroscopic accuracy in molecular simulations. read less NOT USED (low confidence) N. Browning, “Applications of Artificial Intelligence to Computational Chemistry.” 2019. link Times cited: 2 Abstract: The calculation of the electronic structure of chemical syst… read moreAbstract: The calculation of the electronic structure of chemical systems, necessitates computationally expensive approximations to the time-independent electronic Schrödinger equation in order to yield static properties in good agreement with experimental results. These methods can also be coupled with molecular dynamics, to provide a first principles description of thermodynamic properties, dynamics and chemical reactions. Evidently, the cost of the underlying electronic structure method limits the time frame over which a system can be studied, and hence, certain chemical processes may be out-of-reach using a particular method. Furthermore, when one is interested in designing new molecules with interesting properties, a systematic enumeration of an inordinately large chemical space is typically required. The combination of both expensive electronic structure calculations and large chemical spaces results in an insurmountable barrier in computational cost. The application of artificial intelligence (AI) in computational chemistry has, over the past 20 years, seen an explosion in interest and scope with respect to these two issues. Intelligent algorithms capable of efficiently sampling chemical spaces, coupled with machine learning (ML) techniques to cheapen the calculation of electronic structure evaluations, enable both rapid throughput to search for new molecules with particular properties and in the case of ML, an increase in the timescales that can be simulated via molecular dynamics. In this thesis, computer programs have been developed that enable the application of AI algorithms to chemical and biological problems. In particular, a versatile evolutionary algorithm toolbox called EVOLVE has been developed. As a first test-case study, genetic algorithms were used to efficiently sample the vast chemical sequence space of an isolated α-helical peptide, from which insights are gained to rationalise the stability of particular genetically optimised peptides in a variety of implicit solvent environments. 
Genetic algorithms were then applied to the compositional optimisation of training sets used in machine learning models of molecular properties. The resulting optimal training sets are shown to significantly reduce out-of-sample errors on all thermodynamic and electronic properties considered. Furthermore, they reveal that there are systematic trends in the distribution of these optimally-representative molecules. Inspired by the success of machine learning, an ML-enhanced multiple time step approach for performing accurate ab initio molecular dynamics was developed. Two schemes representing different force partitioning were investigated. In the first scheme, the ML method provides an estimation of the slow (high level) force components acting on a system, while in the second, read less NOT USED (low confidence) C. Desgranges and J. Delhommelle, “Determination of mixture properties via a combined Expanded Wang-Landau simulations-Machine Learning approach,” Chemical Physics Letters. 2019. link Times cited: 5 NOT USED (low confidence) S. Carr, “Applying Bayesian Machine Learning Methods to Theoretical Surface Science.” 2015. link Times cited: 0 Abstract: OF THE THESIS Applying Bayesian Machine Learning Methods to … read moreAbstract: OF THE THESIS Applying Bayesian Machine Learning Methods to Theoretical Surface Science by Shane Frederic F. Carr Master of Science in Computer Science Washington University in St. Louis, December 2015 Research Advisors: Dr. Roman Garnett and Dr. Cynthia Lo Machine learning is a rapidly evolving field in computer science with increasingly many applications to other domains. In this thesis, I present a Bayesian machine learning approach to solving a problem in theoretical surface science: calculating the preferred active site on a catalyst surface for a given adsorbate molecule. I formulate the problem as a low-dimensional objective function. I show how the objective function can be approximated into a certain confidence interval using just one iteration of the self-consistent field (SCF) loop in density functional theory (DFT). I then use Bayesian optimization to perform a global search for the solution. My approach outperforms the current state-of-the-art method, constrained minima hopping, for CO on ferric oxide by a factor of 75 to 1. This thesis is the first documented application of Bayesian optimization to surface science. read less NOT USED (high confidence) Y. Liu, X. He, and Y. Mo, “Discrepancies and error evaluation metrics for machine learning interatomic potentials,” npj Computational Materials. 2023. link Times cited: 1 NOT USED (high confidence) V. Korolev, Y. M. Nevolin, T. Manz, and P. Protsenko, “Parametrization of Nonbonded Force Field Terms for Metal-Organic Frameworks Using Machine Learning Approach,” Journal of chemical information and modeling. 2021. link Times cited: 5 Abstract: The enormous structural and chemical diversity of metal-orga… read moreAbstract: The enormous structural and chemical diversity of metal-organic frameworks (MOFs) forces researchers to actively use simulation techniques as often as experiments. MOFs are widely known for their outstanding adsorption properties, so a precise description of the host-guest interactions is essential for high-throughput screening aimed at ranking the most promising candidates. However, highly accurate ab initio calculations cannot be routinely applied to model thousands of structures due to the demanding computational costs. 
Furthermore, methods based on force field (FF) parametrization suffer from low transferability. To resolve this accuracy-efficiency dilemma, we applied a machine learning (ML) approach: extreme gradient boosting. The trained models reproduced the atom-in-material quantities, including partial charges, polarizabilities, dispersion coefficients, quantum Drude oscillator, and electron cloud parameters, with accuracy similar to the reference data set. The aforementioned FF precursors make it possible to thoroughly describe noncovalent interactions typical for MOF-adsorbate systems: electrostatic, dispersion, polarization, and short-range repulsion. The presented approach can also readily facilitate hybrid atomistic simulation/ML workflows. read less NOT USED (high confidence) V. Gallego, R. Naveiro, C. Roca, D. R. Insua, and N. Campillo, “AI in drug development: a multidisciplinary perspective,” Molecular Diversity. 2021. link Times cited: 11 NOT USED (high confidence) Y. Zamora, L. T. Ward, G. Sivaraman, I. T. Foster, and H. Hoffmann, “Proxima: accelerating the integration of machine learning in atomistic simulations,” Proceedings of the ACM International Conference on Supercomputing. 2021. link Times cited: 7 Abstract: Atomistic-scale simulations are prominent scientific applica… read moreAbstract: Atomistic-scale simulations are prominent scientific applications that require the repetitive execution of a computationally expensive routine to calculate a system's potential energy. Prior work shows that these expensive routines can be replaced with a machine-learned surrogate approximation to accelerate the simulation at the expense of the overall accuracy. The exact balance of speed and accuracy depends on the specific configuration of the surrogate-modeling workflow and the science itself, and prior work leaves it up to the scientist to find a configuration that delivers the required accuracy for their science problem. Unfortunately, due to the underlying system dynamics, it is rare that a single surrogate configuration presents an optimal accuracy/latency trade-off for the entire simulation. In practice, scientists must choose conservative configurations so that accuracy is always acceptable, forgoing possible acceleration. As an alternative, we propose Proxima, a systematic and automated method for dynamically tuning a surrogate-modeling configuration in response to real-time feedback from the ongoing simulation. Proxima estimates the uncertainty of applying a surrogate approximation in each step of an iterative simulation. Using this information, the specific surrogate configuration can be adjusted dynamically to ensure maximum speedup while sustaining a required accuracy metric. We evaluate Proxima using a Monte Carlo sampling application and find that Proxima respects a wide range of user-defined accuracy goals while achieving speedups of 1.02--5.5X relative to a standard read less NOT USED (high confidence) W. F. Reinhart, “Unsupervised learning of atomic environments from simple features,” Computational Materials Science. 2021. link Times cited: 10 NOT USED (high confidence) J. Westermayr, M. Gastegger, K. T. Schütt, and R. Maurer, “Perspective on integrating machine learning into computational chemistry and materials science.,” The Journal of chemical physics. 2021. 
link Times cited: 72 Abstract: Machine learning (ML) methods are being used in almost every… read moreAbstract: Machine learning (ML) methods are being used in almost every conceivable area of electronic structure theory and molecular simulation. In particular, ML has become firmly established in the construction of high-dimensional interatomic potentials. Not a day goes by without another proof of principle being published on how ML methods can represent and predict quantum mechanical properties-be they observable, such as molecular polarizabilities, or not, such as atomic charges. As ML is becoming pervasive in electronic structure theory and molecular simulation, we provide an overview of how atomistic computational modeling is being transformed by the incorporation of ML approaches. From the perspective of the practitioner in the field, we assess how common workflows to predict structure, dynamics, and spectroscopy are affected by ML. Finally, we discuss how a tighter and lasting integration of ML methods with computational chemistry and materials science can be achieved and what it will mean for research practice, software development, and postgraduate training. read less NOT USED (high confidence) Y. Mishin, “Machine-Learning Interatomic Potentials for Materials Science,” Electrical Engineering eJournal. 2021. link Times cited: 103 NOT USED (high confidence) Y.-S. Lin, G. P. P. Pun, and Y. Mishin, “Development of a physically-informed neural network interatomic potential for tantalum,” Computational Materials Science. 2021. link Times cited: 9 NOT USED (high confidence) R. Singh et al., “Neural-network model for force prediction in multi-principal-element alloys,” Computational Materials Science. 2021. link Times cited: 3 NOT USED (high confidence) J. Wu, Y. Zhang, L. Zhang, and S. Liu, “Deep learning of accurate force field of ferroelectric
HfO2,” Physical Review B. 2020. link Times cited: 21 Abstract: The discovery of ferroelectricity in HfO2-based thin films opens up new opportunities for using this silicon-compatible ferroelectric to realize low-power logic circuits and high-density nonvolatile memories. The functional performances of ferroelectrics are intimately related to their dynamic responses to external stimuli such as electric fields at finite temperatures. Molecular dynamics is an ideal technique for investigating dynamical processes on large length and time scales, though its applications to new materials are often hindered by the limited availability and accuracy of classical force fields. Here we present a deep neural network-based interatomic force field of HfO2 learned from ab initio data using a concurrent learning procedure. The model potential is able to predict structural properties such as elastic constants, equation of states, phonon dispersion relationships, and phase transition barriers of various hafnia polymorphs with accuracy comparable with density functional theory calculations. The validity of this model potential is further confirmed by the reproduction of experimental sequences of temperature-driven ferroelectric-paraelectric phase transitions of HfO2 with isobaric-isothermal ensemble molecular dynamics simulations. We suggest a general approach to extend the model potential of HfO2 to related material systems including dopants and defects. NOT USED (high confidence) J. Chapman and R. Ramprasad, “Multiscale Modeling of Defect Phenomena in Platinum Using Machine Learning of Force Fields,” JOM. 2020. link Times cited: 5 NOT USED (high confidence) S. Lee et al., “Applying Machine Learning Algorithms to Predict Potential Energies and Atomic Forces during C-H Activation,” Journal of the Korean Physical Society. 2020. link Times cited: 2 Abstract: Molecular dynamics (MD) simulations are useful in understanding the interaction between solid materials and molecules. However, performing MD simulations is possible only when interatomic potentials are available and constructing such interatomic potentials usually requires additional computational work. Recently, generating interatomic potentials was shown to be much easier when machine learning (ML) algorithms were used. In addition, ML algorithms require new descriptors for improved performance. Here, we present an ML approach with several categories of atomic descriptors to predict the parameters necessary for MD simulations, such as the potential energies and the atomic forces. We propose several atomic descriptors based on structural information and find that better descriptors can be generated from eXtreme gradient boosting (XGBoost). Moreover, we observe fewer descriptors that perform better in predicting the potential energies and the forces during methane activation processes on a catalytic Pt(111) surface. These results were consistently observed in two different ML algorithms: fully-connected neural network (FNN) and XGBoost. Taking into account the advantages of FNN and XGBoost, we propose an efficient ML model for estimating potential energies. Our findings will be helpful in developing new ML potentials for long-time MD simulations. NOT USED (high confidence) G. P. P. Pun, V. Yamakov, J. Hickman, E.
Glaessgen, and Y. Mishin, “Development of a general-purpose machine-learning interatomic potential for aluminum by the physically informed neural network method,” Physical Review Materials. 2020. link Times cited: 13 Abstract: Interatomic potentials constitute the key component of large… read moreAbstract: Interatomic potentials constitute the key component of large-scale atomistic simulations of materials. The recently proposed physically-informed neural network (PINN) method combines a high-dimensional regression implemented by an artificial neural network with a physics-based bond-order interatomic potential applicable to both metals and nonmetals. In this paper, we present a modified version of the PINN method that accelerates the potential training process and further improves the transferability of PINN potentials to unknown atomic environments. As an application, a modified PINN potential for Al has been developed by training on a large database of electronic structure calculations. The potential reproduces the reference first-principles energies within 2.6 meV per atom and accurately predicts a wide spectrum of physical properties of Al. Such properties include, but are not limited to, lattice dynamics, thermal expansion, energies of point and extended defects, the melting temperature, the structure and dynamic properties of liquid Al, the surface tensions of the liquid surface and the solid-liquid interface, and the nucleation and growth of a grain boundary crack. Computational efficiency of PINN potentials is also discussed. read less NOT USED (high confidence) J. Chapman and R. Ramprasad, “Nanoscale Modeling of Surface Phenomena in Aluminum Using Machine Learning Force Fields,” The Journal of Physical Chemistry C. 2020. link Times cited: 7 Abstract: The study of nano-scale surface phenomena is essential in un… read moreAbstract: The study of nano-scale surface phenomena is essential in understanding the physical processes that aid in technologically relevant applications, such as catalysis, material growth, and failure nuc... read less NOT USED (high confidence) H. Sugisawa, T. Ida, and R. Krems, “Gaussian process model of 51-dimensional potential energy surface for protonated imidazole dimer.,” The Journal of chemical physics. 2020. link Times cited: 23 Abstract: The goal of the present work is to obtain accurate potential… read moreAbstract: The goal of the present work is to obtain accurate potential energy surfaces (PESs) for high-dimensional molecular systems with a small number of ab initio calculations in a system-agnostic way. We use probabilistic modeling based on Gaussian processes (GPs). We illustrate that it is possible to build an accurate GP model of a 51-dimensional PES based on 5000 randomly distributed ab initio calculations with a global accuracy of <0.2 kcal/mol. Our approach uses GP models with composite kernels designed to enhance the Bayesian information content and represents the global PES as a sum of a full-dimensional GP and several GP models for molecular fragments of lower dimensionality. We demonstrate the potency of these algorithms by constructing the global PES for the protonated imidazole dimer, a molecular system with 19 atoms. We illustrate that GP models thus constructed can extrapolate the PES from low energies (<10 000 cm-1), yielding a PES at high energies (>20 000 cm-1). 
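A minimal sketch related to the Gaussian-process potential-energy-surface fitting described in the Sugisawa et al. entry above: regress energies on a configuration variable with a GP and read off the model's own uncertainty. The one-dimensional coordinate, the Morse-like target, and the kernel settings are toy assumptions; the original work uses composite kernels and fragment models for a 51-dimensional surface.

```python
# Sketch only: GP regression of energies with a predictive uncertainty, loosely
# mirroring GP-based PES fitting. The 1-D "descriptor" and Morse-like target
# are invented stand-ins for real configurations and ab initio energies.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Toy training data: bond length r versus a Morse-like reference energy.
r_train = rng.uniform(0.8, 3.0, size=(40, 1))
e_train = (1.0 - np.exp(-1.5 * (r_train[:, 0] - 1.2))) ** 2

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8, normalize_y=True)
gp.fit(r_train, e_train)

# Predict on a coarse grid and report the model's own error bar.
r_grid = np.linspace(0.8, 3.0, 5).reshape(-1, 1)
e_mean, e_std = gp.predict(r_grid, return_std=True)
for r, m, s in zip(r_grid[:, 0], e_mean, e_std):
    print(f"r = {r:.2f} A: E = {m:.3f} +/- {s:.3f} (model units)")
```

In an actual PES application the single coordinate would be replaced by a rotation- and permutation-invariant descriptor, and the predictive standard deviation flags regions where further ab initio data are needed.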
This opens the prospect for new applications of GPs, such as mapping out phase transitions by extrapolation or accelerating Bayesian optimization, for high-dimensional physics and chemistry problems with a restricted number of inputs, i.e., for high-dimensional problems where obtaining training data is very difficult. read less NOT USED (high confidence) V. Korolev, A. Mitrofanov, E. Marchenko, N. Eremin, V. Tkachenko, and S. Kalmykov, “Transferable and Extensible Machine Learning-Derived Atomic Charges for Modeling Hybrid Nanoporous Materials,” Chemistry of Materials. 2020. link Times cited: 21 Abstract: Nanoporous materials have attracted significant interest as … read moreAbstract: Nanoporous materials have attracted significant interest as an emerging platform for adsorption-related applications. The high-throughput computational screening became a standard technique to acce... read less NOT USED (high confidence) H. E. Sauceda, M. Gastegger, S. Chmiela, K. Müller, and A. Tkatchenko, “Molecular force fields with gradient-domain machine learning (GDML): Comparison and synergies with classical force fields.,” The Journal of chemical physics. 2020. link Times cited: 28 Abstract: Modern machine learning force fields (ML-FF) are able to yie… read moreAbstract: Modern machine learning force fields (ML-FF) are able to yield energy and force predictions at the accuracy of high-level ab initio methods, but at a much lower computational cost. On the other hand, classical molecular mechanics force fields (MM-FF) employ fixed functional forms and tend to be less accurate, but considerably faster and transferable between molecules of the same class. In this work, we investigate how both approaches can complement each other. We contrast the ability of ML-FF for reconstructing dynamic and thermodynamic observables to MM-FFs in order to gain a qualitative understanding of the differences between the two approaches. This analysis enables us to modify the generalized AMBER force field by reparametrizing short-range and bonded interactions with more expressive terms to make them more accurate, without sacrificing the key properties that make MM-FFs so successful. read less NOT USED (high confidence) A. S. Christensen and A. V. Lilienfeld, “On the role of gradients for machine learning of molecular energies and forces,” Machine Learning: Science and Technology. 2020. link Times cited: 78 Abstract: The accuracy of any machine learning potential can only be a… read moreAbstract: The accuracy of any machine learning potential can only be as good as the data used in the fitting process. The most efficient model therefore selects the training data that will yield the highest accuracy compared to the cost of obtaining the training data. We investigate the convergence of prediction errors of quantum machine learning models for organic molecules trained on energy and force labels, two common data types in molecular simulations. When training models for the potential energy surface of a single molecule, we find that the inclusion of atomic forces in the training data increases the accuracy of the predicted energies and forces 7-fold, compared to models trained on energy only. Surprisingly, for models trained on sets of organic molecules of varying size and composition in non-equilibrium conformations, inclusion of forces in the training does not improve the predicted energies of unseen molecules in new conformations. Predicted forces, however, improve about 7-fold. 
For the systems studied, we find that force labels and energy labels contribute equally per label to the convergence of the prediction errors. The optimal choice of what type of training data to include depends on several factors: the computational cost of acquiring the force and energy labels for training, the application domain, the property of interest and the complexity of the machine learning model. Based on our observations we describe key considerations for the creation of new datasets for potential energy surfaces of molecules which maximize the efficiency of the resulting machine learning models. read less NOT USED (high confidence) I. Novikov, K. Gubaev, E. Podryabinkin, and A. Shapeev, “The MLIP package: moment tensor potentials with MPI and active learning,” Machine Learning: Science and Technology. 2020. link Times cited: 220 Abstract: The subject of this paper is the technology (the ‘how’) of c… read moreAbstract: The subject of this paper is the technology (the ‘how’) of constructing machine-learning interatomic potentials, rather than science (the ‘what’ and ‘why’) of atomistic simulations using machine-learning potentials. Namely, we illustrate how to construct moment tensor potentials using active learning as implemented in the MLIP package, focusing on the efficient ways to automatically sample configurations for the training set, how expanding the training set changes the error of predictions, how to set up ab initio calculations in a cost-effective manner, etc. The MLIP package (short for Machine-Learning Interatomic Potentials) is available at https://mlip.skoltech.ru/download/. read less NOT USED (high confidence) H. E. Sauceda, V. Vassilev-Galindo, S. Chmiela, K. Müller, and A. Tkatchenko, “Dynamical strengthening of covalent and non-covalent molecular interactions by nuclear quantum effects at finite temperature,” Nature Communications. 2020. link Times cited: 24 NOT USED (high confidence) J. Chapman and R. Ramprasad, “Predicting the dynamic behavior of the mechanical properties of platinum with machine learning.,” The Journal of chemical physics. 2020. link Times cited: 2 Abstract: Over the last few decades, computational tools have been ins… read moreAbstract: Over the last few decades, computational tools have been instrumental in understanding the behavior of materials at the nano-meter length scale. Until recently, these tools have been dominated by two levels of theory: quantum mechanics (QM) based methods and semi-empirical/classical methods. The former are time-intensive but accurate and versatile, while the latter methods are fast but are significantly limited in veracity, versatility, and transferability. Recently, machine learning (ML) methods have shown the potential to bridge the gap between these two chasms due to their (i) low cost, (ii) accuracy, (iii) transferability, and (iv) ability to be iteratively improved. In this work, we further extend the scope of ML for atomistic simulations by capturing the temperature dependence of the mechanical and structural properties of bulk platinum through molecular dynamics simulations. We compare our results directly with experiments, showcasing that ML methods can be used to accurately capture large-scale materials phenomena that are out of reach of QM calculations. We also compare our predictions with those of a reliable embedded atom method potential. 
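A schematic of the uncertainty-driven active-learning loop summarized in the Novikov et al. MLIP entry above, reduced to its generic form: fit a cheap surrogate, find the candidate configuration where it is least certain, label that point with the expensive reference method, and repeat. The Gaussian-process variance used here as the selection signal is a stand-in assumption (MLIP itself uses a MaxVol-based extrapolation grade), and the `reference_energy` oracle and data are invented.

```python
# Generic sketch of uncertainty-driven active learning for a surrogate PES
# (illustrative only; not the MLIP package's API or selection criterion).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)


def reference_energy(x):
    """Stand-in for an expensive ab initio call (hypothetical 1-D toy surface)."""
    return np.sin(3.0 * x) + 0.5 * x ** 2


pool = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)   # candidate configurations
train_x = list(rng.choice(pool[:, 0], size=3))       # small seed training set
train_y = [reference_energy(x) for x in train_x]

for step in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.7), alpha=1e-6)
    gp.fit(np.array(train_x).reshape(-1, 1), np.array(train_y))
    mean, std = gp.predict(pool, return_std=True)
    worst = int(np.argmax(std))                       # most uncertain candidate
    if std[worst] < 0.05:                             # stop once the model is confident
        break
    train_x.append(pool[worst, 0])                    # label it with the expensive oracle
    train_y.append(reference_energy(pool[worst, 0]))

print(f"training set size after active learning: {len(train_x)}")
```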
We conclude this work by discussing how ML methods can be used to push the boundaries of nano-scale materials research by bridging the gap between QM and experimental methods. read less NOT USED (high confidence) C. Zhai, T. Li, H. Shi, and J. Yeo, “Discovery and design of soft polymeric bio-inspired materials with multiscale simulations and artificial intelligence.,” Journal of materials chemistry. B. 2020. link Times cited: 28 Abstract: Materials chemistry is at the forefront of the global "… read moreAbstract: Materials chemistry is at the forefront of the global "Fourth Industrial Revolution", in part by establishing a "Materials 4.0" paradigm. A key aspect of this paradigm is developing methods to effectively integrate hardware, software, and biological systems. Towards this end, we must have intimate knowledge of the virtual space in materials design: materials omics (materiomics), materials informatics, computational modelling and simulations, artificial intelligence (AI), and big data. We focus on the discovery and design of next-generation bio-inspired materials because the design space is so huge as to be almost intractable. With nature providing researchers with specific guiding principles, this material design space may be probed most efficiently through digital, high-throughput methods. Therefore, to enhance awareness and adoption of digital approaches in soft polymeric bio-inspired materials discovery and design, we detail multiscale simulation techniques in soft matter from the molecular level to the macroscale. We also highlight the unique role that artificial intelligence and materials databases will play in molecular simulations as well as soft materials discovery. Finally, we showcase several case studies that concretely apply computational modelling and simulations for integrative soft bio-inspired materials design with experiments. read less NOT USED (high confidence) S. Jindal and S. Bulusu, “Structural evolution in gold nanoparticles using artificial neural network based interatomic potentials.,” The Journal of chemical physics. 2020. link Times cited: 5 Abstract: Relativistic effects of gold make its behavior different fro… read moreAbstract: Relativistic effects of gold make its behavior different from other metals. Unlike silver and copper, gold does not require symmetrical structures as the stable entities. We present the evolution of gold from a cluster to a nanoparticle by considering a majority of stable structural possibilities. Here, an interatomic potential (artificial neural network), trained on quantum mechanical data comprising small to medium sized clusters, gives exceptional results for larger size clusters. We have explored the potential energy surface for "magic" number clusters 309, 561, and 923. This study reveals that these clusters are not completely symmetric, but they require a distorted symmetric core with amorphous layers of atoms over it. The amorphous geometries tend to be more stable in comparison to completely symmetric structures. The first ever gold cluster to hold an icosahedron-Au13 was identified at Au60 [S. Pande et al., J. Phys. Chem. Lett. 10, 1820 (2019)]. Through our study, we have found a plausible evolution of a symmetric core as the size of the nanoparticle increases. The stable cores were found at Au160, Au327, and Au571, which can be recognized as new magic numbers. Au923 is found to have a stable symmetric core of 147 atoms covered with layers of atoms that are not completely amorphous. 
This shows the preference of symmetric structures as the size of the nanoparticle increases (<3.3 nm). read less NOT USED (high confidence) S. Chmiela, H. E. Sauceda, A. Tkatchenko, and K.-R. Muller, “Accurate Molecular Dynamics Enabled by Efficient Physically Constrained Machine Learning Approaches,” Machine Learning Meets Quantum Physics. 2019. link Times cited: 9 NOT USED (high confidence) T. Cova and A. Pais, “Deep Learning for Deep Chemistry: Optimizing the Prediction of Chemical Patterns,” Frontiers in Chemistry. 2019. link Times cited: 108 Abstract: Computational Chemistry is currently a synergistic assembly … read moreAbstract: Computational Chemistry is currently a synergistic assembly between ab initio calculations, simulation, machine learning (ML) and optimization strategies for describing, solving and predicting chemical data and related phenomena. These include accelerated literature searches, analysis and prediction of physical and quantum chemical properties, transition states, chemical structures, chemical reactions, and also new catalysts and drug candidates. The generalization of scalability to larger chemical problems, rather than specialization, is now the main principle for transforming chemical tasks in multiple fronts, for which systematic and cost-effective solutions have benefited from ML approaches, including those based on deep learning (e.g. quantum chemistry, molecular screening, synthetic route design, catalysis, drug discovery). The latter class of ML algorithms is capable of combining raw input into layers of intermediate features, enabling bench-to-bytes designs with the potential to transform several chemical domains. In this review, the most exciting developments concerning the use of ML in a range of different chemical scenarios are described. A range of different chemical problems and respective rationalization, that have hitherto been inaccessible due to the lack of suitable analysis tools, is thus detailed, evidencing the breadth of potential applications of these emerging multidimensional approaches. Focus is given to the models, algorithms and methods proposed to facilitate research on compound design and synthesis, materials design, prediction of binding, molecular activity, and soft matter behavior. The information produced by pairing Chemistry and ML, through data-driven analyses, neural network predictions and monitoring of chemical systems, allows (i) prompting the ability to understand the complexity of chemical data, (ii) streamlining and designing experiments, (ii) discovering new molecular targets and materials, and also (iv) planning or rethinking forthcoming chemical challenges. In fact, optimization engulfs all these tasks directly. read less NOT USED (high confidence) H. E. Sauceda, S. Chmiela, I. Poltavsky, K.-R. Muller, and A. Tkatchenko, “Construction of Machine Learned Force Fields with Quantum Chemical Accuracy: Applications and Chemical Insights,” Machine Learning Meets Quantum Physics. 2019. link Times cited: 10 NOT USED (high confidence) C. Brunken and M. Reiher, “Self-Parametrizing System-Focused Atomistic Models.,” Journal of chemical theory and computation. 2019. link Times cited: 20 Abstract: Computational studies of chemical reactions in complex envir… read moreAbstract: Computational studies of chemical reactions in complex environments such as proteins, nanostructures, or on surfaces require accurate and efficient atomistic models applicable to the nanometer scale. 
In general, an accurate parametrization of the atomistic entities will not be available for arbitrary system classes, but demands a fast automated system-focused parametrization procedure to be quickly applicable, reliable, flexible, and reproducible. Here, we develop and combine an automatically parametrizable quantum chemically derived molecular mechanics model with machine-learned corrections under autonomous uncertainty quantification and refinement. Our approach first generates an accurate, physically motivated model from a minimum energy structure and its corresponding Hessian matrix by a partial Hessian fitting procedure of the force constants. This model is then the starting point to generate a large number of configurations for which additional off-minimum reference data can be evaluated on the fly. A Delta-machine learning model is trained on these data to provide a correction to energies and forces including uncertainty estimates. During the procedure, the flexibility of the machine learning model is tailored to the amount of available training data. The parametrization of large systems is enabled by a fragmentation approach. Due to their modular nature, all model construction steps allow for model improvement in a rolling fashion. Our approach may also be employed for the generation of system-focused electrostatic molecular mechanics embedding environments in a quantum mechanical molecular-mechanical hybrid model for arbitrary atomistic structures at the nanoscale. read less NOT USED (high confidence) Y. Elbaz, D. Furman, and M. C. Toroker, “Modeling Diffusion in Functional Materials: From Density Functional Theory to Artificial Intelligence,” Advanced Functional Materials. 2019. link Times cited: 26 Abstract: Diffusion describes the stochastic motion of particles and i… read moreAbstract: Diffusion describes the stochastic motion of particles and is often a key factor in determining the functionality of materials. Modeling diffusion of atoms can be very challenging for heterogeneous systems with high energy barriers. In this report, popular computational methodologies are covered to study diffusion mechanisms that are widely used in the community and both their strengths and weaknesses are presented. In static approaches, such as electronic structure theory, diffusion mechanisms are usually analyzed within the nudged elastic band (NEB) framework on the ground electronic surface usually obtained from a density functional theory (DFT) calculation. Another common approach to study diffusion mechanisms is based on molecular dynamics (MD) where the equations of motion are solved for every time step for all the atoms in the system. Unfortunately, both the static and dynamic approaches have inherent limitations that restrict the classes of diffusive systems that can be efficiently treated. Such limitations could be remedied by exploiting recent advances in artificial intelligence and machine learning techniques. Here, the most promising approaches in this emerging field for modeling diffusion are reported. It is believed that these knowledge‐intensive methods have a bright future ahead for the study of diffusion mechanisms in advanced functional materials. read less NOT USED (high confidence) A. Glielmo, C. Zeni, ’A. Fekete, and A. Vita, “Building Nonparametric n-Body Force Fields Using Gaussian Process Regression,” Machine Learning Meets Quantum Physics. 2019. link Times cited: 8 NOT USED (high confidence) Q. Nguyen, S. De, J. Lin, and V. 
Cevher, “Chemical machine learning with kernels: The impact of loss functions,” International Journal of Quantum Chemistry. 2019. link Times cited: 1 Abstract: Machine learning promises to accelerate materials discovery … read moreAbstract: Machine learning promises to accelerate materials discovery by allowing computational efficient property predictions from a small number of reference calculations. As a result, the literature has spent a considerable effort in designing representations that capture basic physical properties so far. In stark contrast, our work focuses on the less-studied learning formulations in this context in order to exploit inner structures in the prediction errors. In particular, we propose to directly optimize basic loss functions of the prediction error metrics typically used in the literature, such as the mean absolute error or the worst case error. In some instances, a proper choice of the loss function can directly reduce reasonably the prediction performance in the desired metric, albeit at the cost of additional computations during training. To support this claim, we describe the statistical learning theoretic foundations, and provide supporting numerical evidence with the prediction of atomization energies for a database of small organic molecules. read less NOT USED (high confidence) E. Schmidt, “Atomistic modelling of precipitation in Ni-base superalloys.” 2019. link Times cited: 0 Abstract: The presence of the ordered γ ′ phase (Ni3Al) in Ni-base sup… read moreAbstract: The presence of the ordered γ ′ phase (Ni3Al) in Ni-base superalloys is fundamental to the performance of engineering components such as turbine disks and blades which operate at high temperatures and loads. Hence for these alloys it is important to optimize their microstructure and phase composition. This is typically done by varying their chemistry and heat treatment to achieve an appropriate balance between γ ′ content and other constituents such as carbides, borides, oxides and topologically close packed phases. In this work we have set out to investigate the onset of γ ′ ordering in Ni-Al single crystals and in Ni-Al bicrystals containing coincidence site lattice grain boundaries (GBs) and we do this at high temperatures, which are representative of typical heat treatment schedules including quenching and annealing. For this we use the atomistic simulation methods of molecular dynamics (MD) and density functional theory (DFT). In the first part of this work we develop robust Bayesian classifiers to identify the γ ′ phase in large scale simulation boxes at high temperatures around 1500 K. We observe significant γ ′ ordering in the simulations in the form of clusters of γ ′-like ordered atoms embedded in a γ host solid solution and this happens within 100 ns. Single crystals are found to exhibit the expected homogeneous ordering with slight indications of chemical composition change and a positive correlation between the Al concentration and the concentration of γ ′ phase. In general, the ordering is found to take place faster in systems with GBs and preferentially adjacent to the GBs. The sole exception to this is the Σ3 (111) tilt GB, which is a coherent twin. An analysis of the ensemble and time lag average displacements of the GBs reveals mostly ‘anomalous diffusion’ behaviour. Increasing the Al content from pure Ni to Ni 20 at.% Al was found to either consistently increase or decrease the mobility of the GB as seen from the changing slope of the time lag displacement average. 
The movement of the GB can then be characterized as either ‘super’ or ‘sub-diffusive’ and is interpreted in terms of diffusion induced grain boundary migration, which is posited as a possible precursor to the appearance of serrated edge grain boundaries. In the second part of this work we develop a method for the training of empirical interatomic … NOT USED (high confidence) F. Bianchini, A. Glielmo, J. Kermode, and A. Vita, “Enabling QM-accurate simulation of dislocation motion in γ−Ni and α−Fe
using a hybrid multiscale approach,” Physical Review Materials. 2019. link Times cited: 11 Abstract: We present an extension of the ‘learn on the fly’ method to … read moreAbstract: We present an extension of the ‘learn on the fly’ method to the study of the motion of dislocations in metallic systems, developed with the aim of producing information-efficient force models that can be systematically validated against reference QM calculations. Nye tensor analysis is used to dynamically track the quantum region centered at the core of a dislocation, thus enabling quantum mechanics/molecular mechanics simulations. The technique is used to study the motion of screw dislocations in Ni-Al systems, relevant to plastic deformation in Ni-based alloys, at a variety of temperature/strain conditions. These simulations reveal only a moderate spacing ( ∼ 5 A ) between Shockley partial dislocations, at variance with the predictions of traditional molecular dynamics (MD) simulation using interatomic potentials, which yields a much larger spacing in the high stress regime. The discrepancy can be rationalized in terms of the elastic properties of an hcp crystal, which influence the behavior of the stacking fault region between Shockley partial dislocations. The transferability of this technique to more challenging systems is addressed, focusing on the expected accuracy of such calculations. The bcc α − Fe phase is a prime example, as its magnetic properties at the open surfaces make it particularly challenging for embedding-based QM/MM techniques. Our tests reveal that high accuracy can still be obtained at the core of a dislocation, albeit at a significant computational cost for fully converged results. However, we find this cost can be reduced by using a machine learning approach to progressively reduce the rate of expensive QM calculations required during the dynamical simulations, as the size of the QM database increases. read less NOT USED (high confidence) Z. Aitken, V. Sorkin, and Y.-W. Zhang, “Atomistic modeling of nanoscale plasticity in high-entropy alloys,” Journal of Materials Research. 2019. link Times cited: 32 Abstract: Lattice structures, defect structures, and deformation mecha… read moreAbstract: Lattice structures, defect structures, and deformation mechanisms of high-entropy alloys (HEAs) have been studied using atomistic simulations to explain their remarkable mechanical properties. These atomistic simulation techniques, such as first-principles calculations and molecular dynamics allow atomistic-level resolution of structure, defect configuration, and energetics. Following the structure–property paradigm, such understandings can be useful for guiding the design of high-performance HEAs. Although there have been a number of atomistic studies on HEAs, there is no comprehensive review on the state-of-the-art techniques and results of atomistic simulations of HEAs. This article is intended to fill the gap, providing an overview of the state-of-the-art atomistic simulations on HEAs. In particular, we discuss how atomistic simulations can elucidate the nanoscale mechanisms of plasticity underlying the outstanding properties of HEAs, and further present a list of interesting problems for forthcoming atomistic simulations of HEAs. read less NOT USED (high confidence) A. Chandrasekaran, D. Kamal, R. Batra, C. Kim, L. Chen, and R. Ramprasad, “Solving the electronic structure problem with machine learning,” npj Computational Materials. 2019. link Times cited: 181 NOT USED (high confidence) H. E. 
Sauceda, S. Chmiela, I. Poltavsky, K. Müller, and A. Tkatchenko, “Molecular force fields with gradient-domain machine learning: Construction and application to dynamics of small molecules with coupled cluster forces.,” The Journal of chemical physics. 2019. link Times cited: 77 Abstract: We present the construction of molecular force fields for sm… read moreAbstract: We present the construction of molecular force fields for small molecules (less than 25 atoms) using the recently developed symmetrized gradient-domain machine learning (sGDML) approach [Chmiela et al., Nat. Commun. 9, 3887 (2018) and Chmiela et al., Sci. Adv. 3, e1603015 (2017)]. This approach is able to accurately reconstruct complex high-dimensional potential-energy surfaces from just a few 100s of molecular conformations extracted from ab initio molecular dynamics trajectories. The data efficiency of the sGDML approach implies that atomic forces for these conformations can be computed with high-level wavefunction-based approaches, such as the "gold standard" coupled-cluster theory with single, double and perturbative triple excitations [CCSD(T)]. We demonstrate that the flexible nature of the sGDML model recovers local and non-local electronic interactions (e.g., H-bonding, proton transfer, lone pairs, changes in hybridization states, steric repulsion, and n → π* interactions) without imposing any restriction on the nature of interatomic potentials. The analysis of sGDML molecular dynamics trajectories yields new qualitative insights into dynamics and spectroscopy of small molecules close to spectroscopic accuracy. read less NOT USED (high confidence) C. Zeni, K. Rossi, A. Glielmo, and F. Baletto, “On machine learning force fields for metallic nanoparticles,” Advances in Physics: X. 2019. link Times cited: 25 Abstract: ABSTRACT Machine learning algorithms have recently emerged a… read moreAbstract: ABSTRACT Machine learning algorithms have recently emerged as a tool to generate force fields which display accuracies approaching the ones of the ab-initio calculations they are trained on, but are much faster to compute. The enhanced computational speed of machine learning force fields results key for modelling metallic nanoparticles, as their fluxionality and multi-funneled energy landscape needs to be sampled over long time scales. In this review, we first formally introduce the most commonly used machine learning algorithms for force field generation, briefly outlining their structure and properties. We then address the core issue of training database selection, reporting methodologies both already used and yet unused in literature. We finally report and discuss the recent literature regarding machine learning force fields to sample the energy landscape and study the catalytic activity of metallic nanoparticles. Graphical abstract read less NOT USED (high confidence) J. Ma, P. Zhang, Y. Tan, A. W. Ghosh, and G. Chern, “Machine learning electron correlation in a disordered medium,” Physical Review B. 2018. link Times cited: 13 Abstract: Learning from data has led to a paradigm shift in computatio… read moreAbstract: Learning from data has led to a paradigm shift in computational materials science. In particular, it has been shown that neural networks can learn the potential energy surface and interatomic forces through examples, thus bypassing the computationally expensive density functional theory calculations. 
Combining many-body techniques with a deep learning approach, we demonstrate that a fully-connected neural network is able to learn the complex collective behavior of electrons in strongly correlated systems. Specifically, we consider the Anderson-Hubbard (AH) model, which is a canonical system for studying the interplay between electron correlation and strong localization. The ground states of the AH model on a square lattice are obtained using the real-space Gutzwiller method. The obtained solutions are used to train a multi-task multi-layer neural network, which subsequently can accurately predict quantities such as the local probability of double occupation and the quasiparticle weight, given the disorder potential in the neighborhood as the input. read less NOT USED (high confidence) H. Salmenjoki, M. Alava, and L. Laurson, “Mimicking complex dislocation dynamics by interaction networks,” The European Physical Journal B. 2018. link Times cited: 2 NOT USED (high confidence) I. Novikov and A. Shapeev, “Improving accuracy of interatomic potentials: more physics or more data? A case study of silica,” Materials Today Communications. 2018. link Times cited: 35 NOT USED (high confidence) G. P. P. Pun, R. Batra, R. Ramprasad, and Y. Mishin, “Physically informed artificial neural networks for atomistic modeling of materials,” Nature Communications. 2018. link Times cited: 188 NOT USED (high confidence) R. Jadrich, B. A. Lindquist, and T. M. Truskett, “Unsupervised machine learning for detection of phase transitions in off-lattice systems. I. Foundations.,” The Journal of chemical physics. 2018. link Times cited: 33 Abstract: We demonstrate the utility of an unsupervised machine learni… read moreAbstract: We demonstrate the utility of an unsupervised machine learning tool for the detection of phase transitions in off-lattice systems. We focus on the application of principal component analysis (PCA) to detect the freezing transitions of two-dimensional hard-disk and three-dimensional hard-sphere systems as well as liquid-gas phase separation in a patchy colloid model. As we demonstrate, PCA autonomously discovers order-parameter-like quantities that report on phase transitions, mitigating the need for a priori construction or identification of a suitable order parameter-thus streamlining the routine analysis of phase behavior. In a companion paper, we further develop the method established here to explore the detection of phase transitions in various model systems controlled by compositional demixing, liquid crystalline ordering, and non-equilibrium active forces. read less NOT USED (high confidence) R. Jadrich, B. A. Lindquist, W. Piñeros, D. Banerjee, and T. M. Truskett, “Unsupervised machine learning for detection of phase transitions in off-lattice systems. II. Applications.,” The Journal of chemical physics. 2018. link Times cited: 17 Abstract: We outline how principal component analysis can be applied t… read moreAbstract: We outline how principal component analysis can be applied to particle configuration data to detect a variety of phase transitions in off-lattice systems, both in and out of equilibrium. 
Specifically, we discuss its application to study (1) the nonequilibrium random organization (RandOrg) model that exhibits a phase transition from quiescent to steady-state behavior as a function of density, (2) orientationally and positionally driven equilibrium phase transitions for hard ellipses, and (3) a compositionally driven demixing transition in the non-additive binary Widom-Rowlinson mixture. read less NOT USED (high confidence) I. Novikov, Y. V. Suleimanov, and A. Shapeev, “Automated calculation of thermal rate coefficients using ring polymer molecular dynamics and machine-learning interatomic potentials with active learning.,” Physical chemistry chemical physics : PCCP. 2018. link Times cited: 25 Abstract: We propose a methodology for the fully automated calculation… read moreAbstract: We propose a methodology for the fully automated calculation of thermal rate coefficients of gas phase chemical reactions, which is based on combining ring polymer molecular dynamics (RPMD) and machine-learning interatomic potentials actively learning on-the-fly. Based on the original computational procedure implemented in the RPMDrate code, our methodology gradually and automatically constructs the potential energy surfaces (PESs) from scratch with the data set points being selected and accumulated during the RPMDrate simulation. Such an approach ensures that our final machine-learning model provides a reliable description of the PES that avoids artifacts during exploration of the phase space by RPMD trajectories. We tested our methodology on two representative thermally activated chemical reactions studied recently by RPMDrate at temperatures within the interval of 300-1000 K. The corresponding PESs were generated by fitting to only a few thousand automatically generated structures (less than 5000) while the RPMD rate coefficients showed deviation from the reference values within the typical convergence error of RPMDrate. In future, we plan to apply our methodology to chemical reactions that proceed via complex-formation thus providing a completely general tool for calculating RPMD thermal rate coefficients for any polyatomic gas phase chemical reaction. read less NOT USED (high confidence) A. Bartók, J. Kermode, N. Bernstein, and G. Csányi, “Machine Learning a General-Purpose Interatomic Potential for Silicon,” Physical Review X. 2018. link Times cited: 291 Abstract: The success of first principles electronic structure calcula… read moreAbstract: The success of first principles electronic structure calculation for predictive modeling in chemistry, solid state physics, and materials science is constrained by the limitations on simulated length and time scales due to computational cost and its scaling. Techniques based on machine learning ideas for interpolating the Born-Oppenheimer potential energy surface without explicitly describing electrons have recently shown great promise, but accurately and efficiently fitting the physically relevant space of configurations has remained a challenging goal. Here we present a Gaussian Approximation Potential for silicon that achieves this milestone, accurately reproducing density functional theory reference results for a wide range of observable properties, including crystal, liquid, and amorphous bulk phases, as well as point, line, and plane defects. 
We demonstrate that this new potential enables calculations that would be extremely expensive with a first principles electronic structure method, such as finite temperature phase boundary lines, self-diffusivity in the liquid, formation of the amorphous by slow quench, and dynamic brittle fracture. We show that the uncertainty quantification inherent to the Gaussian process regression framework gives a qualitative estimate of the potential's accuracy for a given atomic configuration. The success of this model shows that it is indeed possible to create a useful machine-learning-based interatomic potential that comprehensively describes a material, and serves as a template for the development of such models in the future. read less NOT USED (high confidence) J. J. Möller, W. Körner, G. Krugel, D. Urban, and C. Elsässer, “Compositional optimization of hard-magnetic phases with machine-learning models,” Acta Materialia. 2018. link Times cited: 37 NOT USED (high confidence) A. Glielmo, C. Zeni, and A. Vita, “Efficient nonparametric n -body force fields from machine learning,” Physical Review B. 2018. link Times cited: 92 Abstract: The authors present a scheme to construct classical $n$-body… read moreAbstract: The authors present a scheme to construct classical $n$-body force fields using Gaussian Process (GP) Regression, appropriately mapped over explicit n-body functions (M-FFs). The procedure is possible, and will yield accurate forces, whenever prior knowledge allows to restrict the interactions to a finite order $n$, so that the ``universal approximator'' resolving power of standard GPs or Neural Networks is not needed. Under these conditions, the proposed construction preserves flexibility of training, systematically improvable accuracy, and a clear framework for validation of the underlying machine learning technique. Moreover, the M-FFs are as fast as classical parametrized potentials, since they avoid lengthy summations over database entries or weight parameters. read less NOT USED (high confidence) G. Pilania and X.-Y. Liu, “Machine learning properties of binary wurtzite superlattices,” Journal of Materials Science. 2018. link Times cited: 23 NOT USED (high confidence) A. Mannodi-Kanakkithodi et al., “Scoping the polymer genome: A roadmap for rational polymer dielectrics design and beyond,” Materials Today. 2017. link Times cited: 134 NOT USED (high confidence) S. Fujikake, V. L. Deringer, T. Lee, M. Krynski, S. Elliott, and G. Csányi, “Gaussian approximation potential modeling of lithium intercalation in carbon nanostructures.,” The Journal of chemical physics. 2017. link Times cited: 73 Abstract: We demonstrate how machine-learning based interatomic potent… read moreAbstract: We demonstrate how machine-learning based interatomic potentials can be used to model guest atoms in host structures. Specifically, we generate Gaussian approximation potential (GAP) models for the interaction of lithium atoms with graphene, graphite, and disordered carbon nanostructures, based on reference density functional theory data. Rather than treating the full Li-C system, we demonstrate how the energy and force differences arising from Li intercalation can be modeled and then added to a (prexisting and unmodified) GAP model of pure elemental carbon. Furthermore, we show the benefit of using an explicit pair potential fit to capture "effective" Li-Li interactions and to improve the performance of the GAP model. 
This provides proof-of-concept for modeling guest atoms in host frameworks with machine-learning based potentials and in the longer run is promising for carrying out detailed atomistic studies of battery materials. read less NOT USED (high confidence) J. Wu, L. Shen, and W. Yang, “Internal force corrections with machine learning for quantum mechanics/molecular mechanics simulations.,” The Journal of chemical physics. 2017. link Times cited: 28 Abstract: Ab initio quantum mechanics/molecular mechanics (QM/MM) mole… read moreAbstract: Ab initio quantum mechanics/molecular mechanics (QM/MM) molecular dynamics simulation is a useful tool to calculate thermodynamic properties such as potential of mean force for chemical reactions but intensely time consuming. In this paper, we developed a new method using the internal force correction for low-level semiempirical QM/MM molecular dynamics samplings with a predefined reaction coordinate. As a correction term, the internal force was predicted with a machine learning scheme, which provides a sophisticated force field, and added to the atomic forces on the reaction coordinate related atoms at each integration step. We applied this method to two reactions in aqueous solution and reproduced potentials of mean force at the ab initio QM/MM level. The saving in computational cost is about 2 orders of magnitude. The present work reveals great potentials for machine learning in QM/MM simulations to study complex chemical processes. read less NOT USED (high confidence) T. D. Huan, R. Batra, J. Chapman, S. Krishnan, L. Chen, and R. Ramprasad, “A universal strategy for the creation of machine learning-based atomistic force fields,” npj Computational Materials. 2017. link Times cited: 201 NOT USED (high confidence) S. Jindal, S. Chiriki, and S. Bulusu, “Spherical harmonics based descriptor for neural network potentials: Structure and dynamics of Au147 nanocluster.,” The Journal of chemical physics. 2017. link Times cited: 38 Abstract: We propose a highly efficient method for fitting the potenti… read moreAbstract: We propose a highly efficient method for fitting the potential energy surface of a nanocluster using a spherical harmonics based descriptor integrated with an artificial neural network. Our method achieves the accuracy of quantum mechanics and speed of empirical potentials. For large sized gold clusters (Au147), the computational time for accurate calculation of energy and forces is about 1.7 s, which is faster by several orders of magnitude compared to density functional theory (DFT). This method is used to perform the global minimum optimizations and molecular dynamics simulations for Au147, and it is found that its global minimum is not an icosahedron. The isomer that can be regarded as the global minimum is found to be 4 eV lower in energy than the icosahedron and is confirmed from DFT. The geometry of the obtained global minimum contains 105 atoms on the surface and 42 atoms in the core. A brief study on the fluxionality in Au147 is performed, and it is concluded that Au147 has a dynamic surface, thus opening a new window for studying its reaction dynamics. read less NOT USED (high confidence) A. Peterson, R. Christensen, and A. Khorshidi, “Addressing uncertainty in atomistic machine learning.,” Physical chemistry chemical physics : PCCP. 2017. 
link Times cited: 99 Abstract: Machine-learning regression has been demonstrated to precise… read moreAbstract: Machine-learning regression has been demonstrated to precisely emulate the potential energy and forces that are output from more expensive electronic-structure calculations. However, to predict new regions of the potential energy surface, an assessment must be made of the credibility of the predictions. In this perspective, we address the types of errors that might arise in atomistic machine learning, the unique aspects of atomistic simulations that make machine-learning challenging, and highlight how uncertainty analysis can be used to assess the validity of machine-learning predictions. We suggest this will allow researchers to more fully use machine learning for the routine acceleration of large, high-accuracy, or extended-time simulations. In our demonstrations, we use a bootstrap ensemble of neural network-based calculators, and show that the width of the ensemble can provide an estimate of the uncertainty when the width is comparable to that in the training data. Intriguingly, we also show that the uncertainty can be localized to specific atoms in the simulation, which may offer hints for the generation of training data to strategically improve the machine-learned representation. read less NOT USED (high confidence) G. Pilania, J. Gubernatis, and T. Lookman, “Multi-fidelity machine learning models for accurate bandgap predictions of solids,” Computational Materials Science. 2017. link Times cited: 224 NOT USED (high confidence) N. Browning, R. Ramakrishnan, O. A. von Lilienfeld, and U. Roethlisberger, “Genetic Optimization of Training Sets for Improved Machine Learning Models of Molecular Properties.,” The journal of physical chemistry letters. 2016. link Times cited: 58 Abstract: The training of molecular models of quantum mechanical prope… read moreAbstract: The training of molecular models of quantum mechanical properties based on statistical machine learning requires large data sets which exemplify the map from chemical structure to molecular property. Intelligent a priori selection of training examples is often difficult or impossible to achieve, as prior knowledge may be unavailable. Ordinarily representative selection of training molecules from such data sets is achieved through random sampling. We use genetic algorithms for the optimization of training set composition consisting of tens of thousands of small organic molecules. The resulting machine learning models are considerably more accurate: in the limit of small training sets, mean absolute errors for out-of-sample predictions are reduced by up to ∼75%. We discuss and present optimized training sets consisting of 10 molecular classes for all molecular properties studied. We show that these classes can be used to design improved training sets for the generation of machine learning models of the same properties in similar but unrelated molecular sets. read less NOT USED (high confidence) N. Portman and I. Tamblyn, “Sampling algorithms for validation of supervised learning models for Ising-like systems,” J. Comput. Phys. 2016. link Times cited: 13 NOT USED (high confidence) A. Glielmo, P. Sollich, and A. Vita, “Accurate interatomic force fields via machine learning with covariant kernels,” Physical Review B. 2016. 
link Times cited: 147 Abstract: We present a novel scheme to accurately predict atomic force… read moreAbstract: We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group $\mathit{SO}(d)$ for the relevant dimensionality $d$. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems. read less NOT USED (high confidence) V. Botu, J. Chapman, and R. Ramprasad, “A study of adatom ripening on an Al (1 1 1) surface with machine learning force fields,” Computational Materials Science. 2016. link Times cited: 31 NOT USED (high confidence) V. Botu, R. Batra, J. Chapman, and R. Ramprasad, “Machine Learning Force Fields: Construction, Validation, and Outlook,” Journal of Physical Chemistry C. 2016. link Times cited: 361 Abstract: Force fields developed with machine learning methods in tand… read moreAbstract: Force fields developed with machine learning methods in tandem with quantum mechanics are beginning to find merit, given their (i) low cost, (ii) accuracy, and (iii) versatility. Recently, we proposed one such approach, wherein, the vectorial force on an atom is computed directly from its environment. Here, we discuss the multistep workflow required for their construction, which begins with generating diverse reference atomic environments and force data, choosing a numerical representation for the atomic environments, down selecting a representative training set, and lastly the learning method itself, for the case of Al. The constructed force field is then validated by simulating complex materials phenomena such as surface melting and stress–strain behavior, that truly go beyond the realm of ab initio methods, both in length and time scales. To make such force fields truly versatile an attempt to estimate the uncertainty in force predictions is put forth, allowing one to identify areas of poor performance... read less NOT USED (high confidence) L. Shen, J. Wu, and W. Yang, “Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.,” Journal of chemical theory and computation. 2016. link Times cited: 89 Abstract: Molecular dynamics simulation with multiscale quantum mechan… read moreAbstract: Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency. Its accuracy can be improved with a correction to reach the ab initio QM/MM level. 
The computational cost on the ab initio calculation for the correction determines the efficiency. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent accordance with the reference data that are obtained from the ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of 1 or 2 orders of magnitude. It demonstrates that the neural network method combined with the semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations. read less NOT USED (high confidence) T. Suzuki, R. Tamura, and T. Miyazaki, “Machine learning for atomic forces in a crystalline solid: Transferability to various temperatures,” International Journal of Quantum Chemistry. 2016. link Times cited: 25 Abstract: Recently, machine learning has emerged as an alternative, po… read moreAbstract: Recently, machine learning has emerged as an alternative, powerful approach for predicting quantum-mechanical properties of molecules and solids. Here, using kernel ridge regression and atomic fingerprints representing local environments of atoms, we trained a machine-learning model on a crystalline silicon system to directly predict the atomic forces at a wide range of temperatures. Our idea is to construct a machine-learning model using a quantum-mechanical dataset taken from canonical-ensemble simulations at a higher temperature, or an upper bound of the temperature range. With our model, the force prediction errors were about 2% or smaller with respect to the corresponding force ranges, in the temperature region between 300 K and 1650 K. We also verified the applicability to a larger system, ensuring the transferability with respect to system size. read less NOT USED (high confidence) S. De, A. Bartók, G. Csányi, and M. Ceriotti, “Comparing molecules and solids across structural and alchemical space.,” Physical chemistry chemical physics : PCCP. 2015. link Times cited: 429 Abstract: Evaluating the (dis)similarity of crystalline, disordered an… read moreAbstract: Evaluating the (dis)similarity of crystalline, disordered and molecular compounds is a critical step in the development of algorithms to navigate automatically the configuration space of complex materials. For instance, a structural similarity metric is crucial for classifying structures, searching chemical space for better compounds and materials, and driving the next generation of machine-learning techniques for predicting the stability and properties of molecules and materials. In the last few years several strategies have been designed to compare atomic coordination environments. 
In particular, the smooth overlap of atomic positions (SOAPs) has emerged as an elegant framework to obtain translation, rotation and permutation-invariant descriptors of groups of atoms, underlying the development of various classes of machine-learned inter-atomic potentials. Here we discuss how one can combine such local descriptors using a regularized entropy match (REMatch) approach to describe the similarity of both whole molecular and bulk periodic structures, introducing powerful metrics that enable the navigation of alchemical and structural complexities within a unified framework. Furthermore, using this kernel and a ridge regression method we can predict atomization energies for a database of small organic molecules with a mean absolute error below 1 kcal mol−1, reaching an important milestone in the application of machine-learning techniques for the evaluation of molecular properties. NOT USED (high confidence) Y. Yang et al., “Taking materials dynamics to new extremes using machine learning interatomic potentials,” Journal of Materials Informatics. 2021. link Times cited: 5 Abstract: Understanding materials dynamics under extreme conditions of pressure, temperature, and strain rate is a scientific quest that spans nearly a century. Atomic simulations have had a considerable impact on this endeavor because of their ability to uncover materials’ microstructure evolution and properties at the scale of the relevant physical phenomena. However, this is still a challenge for most materials as it requires modeling large atomic systems (up to millions of particles) with improved accuracy. In many cases, the availability of sufficiently accurate but efficient interatomic potentials has become a serious bottleneck for performing these simulations as traditional potentials fail to represent the multitude of bonding. A new class of potentials has emerged recently, based on a different paradigm from the traditional approach. The new potentials are constructed by machine learning with a high degree of fidelity from quantum-mechanical calculations. In this review, a brief introduction to the central ideas underlying machine learning interatomic potentials is given. In particular, the coupling of machine learning models with domain knowledge to improve accuracy, computational efficiency, and interpretability is highlighted. Subsequently, we demonstrate the effectiveness of the domain knowledge-based approach in certain select problems related to the kinetic response of warm dense materials. It is hoped that this review will inspire further advances in the understanding of matter under extreme conditions.