Citations
This panel presents information regarding the papers that have cited the interatomic potential (IP) whose page you are on.
The OpenKIM machine-learning-based Deep Citation framework is used to determine whether a citing article actually used the IP in computations (denoted by "USED") or only provides it as a background citation (denoted by "NOT USED"). For more details on Deep Citation and how to work with this panel, click the documentation link at the top of the panel.
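As a purely illustrative sketch (not the actual Deep Citation implementation, whose internals are not described here), the USED/NOT USED determination can be thought of as a text-classification problem over the citing article's text. The snippets, labels, and pipeline below are hypothetical and use scikit-learn only to make the idea concrete:

```python
# Illustrative only: a toy USED / NOT USED classifier over citing-article text.
# This is NOT OpenKIM's Deep Citation code; training snippets and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "All molecular dynamics simulations were performed with this potential in LAMMPS.",
    "The potential is mentioned only as background on interatomic models.",
]
train_labels = ["USED", "NOT USED"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

new_article = "We computed thermal conductivity using this interatomic potential."
print(clf.predict([new_article])[0])           # predicted label
print(clf.predict_proba([new_article]).max())  # confidence score, cf. the "low confidence" tags
```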
The word cloud to the right is generated from the abstracts of the IP's principal source(s) (given below in "How to Cite") and of the citing articles determined to have used the IP, in order to give users a quick sense of the types of physical phenomena to which this IP is applied.
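A word cloud of this kind can be reproduced offline from a set of abstracts; the minimal sketch below assumes the third-party wordcloud and matplotlib packages, and the abstract snippets and output file name are placeholders:

```python
# Minimal sketch: build a word cloud from abstracts of the principal source(s)
# and of citing articles that USED the IP (text below is placeholder, not real data).
from wordcloud import WordCloud
import matplotlib.pyplot as plt

abstracts = [
    "Thermal transport properties of monolayer GeS and SnS ...",
    "Uncertainty quantification for machine learning interatomic potentials ...",
]

wc = WordCloud(width=800, height=400, background_color="white",
               collocations=False).generate(" ".join(abstracts))
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.savefig("ip_wordcloud.png", dpi=150, bbox_inches="tight")
```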
The bar chart shows the number of articles that cited the IP per year. Each bar is divided into green (articles that USED the IP) and blue (articles that did NOT USE the IP).
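The same yearly breakdown can be reproduced as a stacked bar chart; a minimal matplotlib sketch follows, where the per-year counts are invented for illustration and are not this IP's actual citation data:

```python
# Minimal sketch of a citations-per-year stacked bar chart (counts are illustrative only).
import matplotlib.pyplot as plt

years    = [2019, 2020, 2021, 2022, 2023]
used     = [0, 0, 1, 4, 1]    # articles that USED the IP (green)
not_used = [1, 0, 9, 12, 6]   # articles that did NOT USE the IP (blue)

plt.bar(years, used, color="green", label="USED")
plt.bar(years, not_used, bottom=used, color="blue", label="NOT USED")
plt.xlabel("Year")
plt.ylabel("Citing articles")
plt.legend()
plt.savefig("citations_per_year.png", dpi=150, bbox_inches="tight")
```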
Users are encouraged to correct errors in the Deep Citation determinations by clicking the speech icon next to a citing article and providing updated information. This feedback is incorporated into the next Deep Citation learning cycle, which occurs on a regular basis.
OpenKIM acknowledges the support of the Allen Institute for AI through the Semantic Scholar project for providing citation information and full text of articles when available, which are used to train the Deep Citation ML algorithm.
This panel provides information on past usage of this interatomic potential (IP) powered by the OpenKIM Deep Citation framework. The word cloud indicates typical applications of the potential. The bar chart shows citations per year of this IP (bars are divided into articles that used the IP (green) and those that did not (blue)). The complete list of articles that cited this IP is provided below along with the Deep Citation determination on usage. See the Deep Citation documentation for more information.
39 Citations (6 used)
Help us determine which of the papers that cite this potential actually used it to perform calculations. If you know, click the speech icon next to the article and let us know.
USED (low confidence) D. Huo et al., “Evaluation of pre-neutron-emission mass distributions in induced fission of typical actinides based on Monte Carlo dropout neural network,” The European Physical Journal A. 2023. link Times cited: 0

USED (low confidence) A. Catalysis et al., “Atomistic Insights into the Oxidation of Flat and Stepped Platinum Surfaces Using Large-Scale Machine Learning Potential-Based Grand-Canonical Monte Carlo,” ACS Catalysis. 2022. link Times cited: 7
Abstract: Understanding catalyst surface structure changes under reactive conditions has become an important topic with the increasing interest in operando measurement and modeling. In this work, we develop a workflow to build machine learning potentials (MLPs) for simulating complicated chemical systems with large spatial and time scales, in which the committee model strategy equips the MLP with uncertainty estimation, enabling the active learning protocol. The methods are applied to constructing PtOx MLP based on explored configurations from bulk oxides to amorphous oxidized surfaces, which cover most ordered high-oxygen-coverage platinum surfaces within an accessible energy range. This MLP is used to perform large-scale grand canonical Monte Carlo simulations to track detailed structure changes during oxidations of flat and stepped Pt surfaces, which is normally inaccessible to costly ab initio calculations. These structural evolution trajectories reveal the stages of surface oxidation without laborious manual construction of surface models. We identify the building blocks of oxide formation and elucidate the surface oxide formation mechanism on Pt surfaces. The insightful interpretations would deeply help us understand the oxide formation on other metal surfaces. We demonstrate that these large-scale simulations would be a powerful tool to investigate realistic structures and the formation mechanisms of complicated systems.

USED (low confidence) D. Varivoda, R. Dong, S. S. Omee, and J. Hu, “Materials Property Prediction with Uncertainty Quantification: A Benchmark Study,” ArXiv. 2022. link Times cited: 9
Abstract: Uncertainty quantification (UQ) has increasing importance in the building of robust high-performance and generalizable materials property prediction models. It can also be used in active learning to train better models by focusing on gathering new training data from uncertain regions. There are several categories of UQ methods, each considering different types of uncertainty sources. Here, we conduct a comprehensive evaluation on the UQ methods for graph neural network-based materials property prediction and evaluate how they truly reflect the uncertainty that we want in error bound estimation or active learning. Our experimental results over four crystal materials datasets (including formation energy, adsorption energy, total energy, and bandgap properties) show that the popular ensemble methods for uncertainty estimation are NOT always the best choice for UQ in materials property prediction. For the convenience of the community, all the source code and datasets can be accessed freely at https://github.com/usccolumbia/materialsUQ.

USED (low confidence) W. Li and C.-G. Yang, “Thermal transport properties of monolayer GeS and SnS: A comparative study based on machine learning and SW interatomic potential models,” AIP Advances. 2022. link Times cited: 5
Abstract: Phonon transport properties of two-dimensional materials can play a crucial role in the thermal management of low-dimensional electronic devices and thermoelectric applications. In this study, both the empirical Stillinger–Weber (SW) and machine learning interatomic potentials are employed to investigate the lattice thermal conductivity of monolayer GeS and SnS through solving the phonon Boltzmann transport equation. The accuracy of the two types of interatomic potentials and their performance for the evaluation of thermal conductivity are verified by analyzing phonon harmonic and anharmonic properties. Our results indicate that the thermal conductivity can be predicted more accurately with a machine learning approach, while the SW potential gives rise to an overestimated value for both monolayers. In addition, the in-plane anisotropy of thermal transport properties existing in these monolayers can be confirmed by both potential models. Moreover, the origins of the deviation existing in calculated thermal conductivities, including both the effects of interatomic potential models and monolayer compositions, are elucidated through uncovering the underlying phonon transport mechanisms. This study highlights that in contrast to the machine learning approach, more careful verification is required for the simulation of thermal transport properties when empirical interatomic potential models are employed.

USED (low confidence) M. Tsitsvero, “Learning inducing points and uncertainty on molecular data,” ArXiv. 2022. link Times cited: 0
Abstract: Uncertainty control and scalability to large datasets are the two main issues for the deployment of Gaussian process models into the autonomous material and chemical space exploration pipelines. One way to address both of these issues is by introducing the latent inducing variables and choosing the right approximation for the marginal log-likelihood objective. Here, we show that variational learning of the inducing points in the high-dimensional molecular descriptor space significantly improves both the prediction quality and uncertainty estimates on test configurations from a sample molecular dynamics dataset. Additionally, we show that inducing points can learn to represent the configurations of the molecules of different types that were not present within the initialization set of inducing points. Among several evaluated approximate marginal log-likelihood objectives, we show that the predictive log-likelihood provides both the predictive quality comparable to the exact Gaussian process model and excellent uncertainty control. Finally, we comment on whether a machine learning model makes predictions by interpolating the molecular configurations in high-dimensional descriptor space. We show that despite our intuition, and even for densely sampled molecular dynamics datasets, most of the predictions are done in the extrapolation regime.

USED (low confidence) T. Swinburne, “Uncertainty and anharmonicity in thermally activated dynamics,” Computational Materials Science. 2021. link Times cited: 4

NOT USED (low confidence) H. Sandström, M. Rissanen, J. Rousu, and P. Rinke, “Data-Driven Compound Identification in Atmospheric Mass Spectrometry,” Advanced Science. 2023. link Times cited: 0
Abstract: Aerosol particles found in the atmosphere affect the climate and worsen air quality. To mitigate these adverse impacts, aerosol particle formation and aerosol chemistry in the atmosphere need to be better mapped out and understood. Currently, mass spectrometry is the single most important analytical technique in atmospheric chemistry and is used to track and identify compounds and processes. Large amounts of data are collected in each measurement of current time-of-flight and orbitrap mass spectrometers using modern rapid data acquisition practices. However, compound identification remains a major bottleneck during data analysis due to lacking reference libraries and analysis tools. Data-driven compound identification approaches could alleviate the problem, yet remain rare to non-existent in atmospheric science. In this perspective, the authors review the current state of data-driven compound identification with mass spectrometry in atmospheric science and discuss current challenges and possible future steps toward a digital era for atmospheric mass spectrometry.

NOT USED (low confidence) C. Hong et al., “Applications and training sets of machine learning potentials,” Science and Technology of Advanced Materials: Methods. 2023. link Times cited: 0
Abstract: Recently, machine learning potentials (MLPs) have been attracting interest as an alternative to the computationally expensive density-functional theory (DFT) calculations. The data-driven approach in MLPs requires carefully curated training datasets, which define the valid domain of simulations. Therefore, acquiring training datasets that comprehensively span the domain of the desired simulations is important. In this review, we attempt to set guidelines for the systematic construction of training datasets according to target simulations. To this end, we extensively analyze the training sets in previous literature according to four application types: thermal properties, diffusion properties, structure prediction, and chemical reactions. In each application, we summarize characteristic reference structures and discuss specific parameters for DFT calculations such as MD conditions. We hope this review serves as a comprehensive guide for researchers and practitioners aiming to harness the capabilities of MLPs in material simulations. IMPACT STATEMENT: This review reports on the selection of training sets for machine learning potentials tailored to their specific applications, which is currently not standardized in the rapidly evolving field.

NOT USED (low confidence) J. A. Vita et al., “ColabFit exchange: Open-access datasets for data-driven interatomic potentials,” The Journal of Chemical Physics. 2023. link Times cited: 2
Abstract: Data-driven interatomic potentials (IPs) trained on large collections of first principles calculations are rapidly becoming essential tools in the fields of computational materials science and chemistry for performing atomic-scale simulations. Despite this, apart from a few notable exceptions, there is a distinct lack of well-organized, public datasets in common formats available for use with IP development. This deficiency precludes the research community from implementing widespread benchmarking, which is essential for gaining insight into model performance and transferability, and also limits the development of more general, or even universal, IPs. To address this issue, we introduce the ColabFit Exchange, the first database providing open access to a large collection of systematically organized datasets from multiple domains that is especially designed for IP development. The ColabFit Exchange is publicly available at https://colabfit.org, providing a web-based interface for exploring, downloading, and contributing datasets. Composed of data collected from the literature or provided by community researchers, the ColabFit Exchange currently (September 2023) consists of 139 datasets spanning nearly 70 000 unique chemistries, and is intended to continuously grow. In addition to outlining the software framework used for constructing and accessing the ColabFit Exchange, we also provide analyses of the data, quantifying the diversity of the database and proposing metrics for assessing the relative diversity of multiple datasets. Finally, we demonstrate an end-to-end IP development pipeline, utilizing datasets from the ColabFit Exchange, fitting tools from the KLIFF software package, and validation tests provided by the OpenKIM framework.

NOT USED (low confidence) T. Rensmeyer, B. Craig, D. Kramer, and O. Niggemann, “High Accuracy Uncertainty-Aware Interatomic Force Modeling with Equivariant Bayesian Neural Networks,” ArXiv. 2023. link Times cited: 1
Abstract: Even though Bayesian neural networks offer a promising framework for modeling uncertainty, active learning and incorporating prior physical knowledge, few applications of them can be found in the context of interatomic force modeling. One of the main challenges in their application to learning interatomic forces is the lack of suitable Monte Carlo Markov chain sampling algorithms for the posterior density, as the commonly used algorithms do not converge in a practical amount of time for many of the state-of-the-art architectures. As a response to this challenge, we introduce a new Monte Carlo Markov chain sampling algorithm in this paper which can circumvent the problems of the existing sampling methods. In addition, we introduce a new stochastic neural network model based on the NequIP architecture and demonstrate that, when combined with our novel sampling algorithm, we obtain predictions with state-of-the-art accuracy as well as a good measure of uncertainty.

NOT USED (low confidence) M. C. Venetos, M. Wen, and K. Persson, “Machine Learning Full NMR Chemical Shift Tensors of Silicon Oxides with Equivariant Graph Neural Networks,” The Journal of Physical Chemistry A. 2023. link Times cited: 1
Abstract: The nuclear magnetic resonance (NMR) chemical shift tensor is a highly sensitive probe of the electronic structure of an atom and furthermore its local structure. Recently, machine learning has been applied to NMR in the prediction of isotropic chemical shifts from a structure. Current machine learning models, however, often ignore the full chemical shift tensor for the easier-to-predict isotropic chemical shift, effectively ignoring a multitude of structural information available in the NMR chemical shift tensor. Here we use an equivariant graph neural network (GNN) to predict full 29Si chemical shift tensors in silicate materials. The equivariant GNN model predicts full tensors to a mean absolute error of 1.05 ppm and is able to accurately determine the magnitude, anisotropy, and tensor orientation in a diverse set of silicon oxide local structures. When compared with other models, the equivariant GNN model outperforms the state-of-the-art machine learning models by 53%. The equivariant GNN model also outperforms historic analytical models by 57% for isotropic chemical shift and 91% for anisotropy. The software is available as a simple-to-use open-source repository, allowing similar models to be created and trained with ease.

NOT USED (low confidence) L. O. Agbolade et al., “Recent advances in density functional theory approach for optoelectronics properties of graphene,” Heliyon. 2023. link Times cited: 1

NOT USED (low confidence) M. Wen, E. Spotte-Smith, S. M. Blau, M. J. McDermott, A. Krishnapriyan, and K. Persson, “Chemical reaction networks and opportunities for machine learning,” Nature Computational Science. 2023. link Times cited: 11

NOT USED (low confidence) S. Thaler, G. Doehner, and J. Zavadlav, “Scalable Bayesian Uncertainty Quantification for Neural Network Potentials: Promise and Pitfalls,” Journal of Chemical Theory and Computation. 2022. link Times cited: 3
Abstract: Neural network (NN) potentials promise highly accurate molecular dynamics (MD) simulations within the computational complexity of classical MD force fields. However, when applied outside their training domain, NN potential predictions can be inaccurate, increasing the need for Uncertainty Quantification (UQ). Bayesian modeling provides the mathematical framework for UQ, but classical Bayesian methods based on Markov chain Monte Carlo (MCMC) are computationally intractable for NN potentials. By training graph NN potentials for coarse-grained systems of liquid water and alanine dipeptide, we demonstrate here that scalable Bayesian UQ via stochastic gradient MCMC (SG-MCMC) yields reliable uncertainty estimates for MD observables. We show that cold posteriors can reduce the required training data size and that for reliable UQ, multiple Markov chains are needed. Additionally, we find that SG-MCMC and the Deep Ensemble method achieve comparable results, despite shorter training and less hyperparameter tuning of the latter. We show that both methods can capture aleatoric and epistemic uncertainty reliably, but not systematic uncertainty, which needs to be minimized by adequate modeling to obtain accurate credible intervals for MD observables. Our results represent a step toward accurate UQ that is of vital importance for trustworthy NN potential-based MD simulations required for decision-making in practice.

NOT USED (low confidence) Z. Shui, D. S. Karls, M. Wen, I. A. Nikiforov, E. Tadmor, and G. Karypis, “Injecting Domain Knowledge from Empirical Interatomic Potentials to Neural Networks for Predicting Material Properties,” ArXiv. 2022. link Times cited: 2
Abstract: For decades, atomistic modeling has played a crucial role in predicting the behavior of materials in numerous fields ranging from nanotechnology to drug discovery. The most accurate methods in this domain are rooted in first-principles quantum mechanical calculations such as density functional theory (DFT). Because these methods have remained computationally prohibitive, practitioners have traditionally focused on defining physically motivated closed-form expressions known as empirical interatomic potentials (EIPs) that approximately model the interactions between atoms in materials. In recent years, neural network (NN)-based potentials trained on quantum mechanical (DFT-labeled) data have emerged as a more accurate alternative to conventional EIPs. However, the generalizability of these models relies heavily on the amount of labeled training data, which is often still insufficient to generate models suitable for general-purpose applications. In this paper, we propose two generic strategies that take advantage of unlabeled training instances to inject domain knowledge from conventional EIPs to NNs in order to increase their generalizability. The first strategy, based on weakly supervised learning, trains an auxiliary classifier on EIPs and selects the best-performing EIP to generate energies to supplement the ground-truth DFT energies in training the NN. The second strategy, based on transfer learning, first pretrains the NN on a large set of easily obtainable EIP energies, and then fine-tunes it on ground-truth DFT energies. Experimental results on three benchmark datasets demonstrate that the first strategy improves baseline NN performance by 5% to 51% while the second improves baseline performance by up to 55%. Combining them further boosts performance.

NOT USED (low confidence) Y. Hu, J. Musielewicz, Z. W. Ulissi, and A. Medford, “Robust and scalable uncertainty estimation with conformal prediction for machine-learned interatomic potentials,” Machine Learning: Science and Technology. 2022. link Times cited: 14
Abstract: Uncertainty quantification (UQ) is important to machine learning (ML) force fields to assess the level of confidence during prediction, as ML models are not inherently physical and can therefore yield catastrophically incorrect predictions. Established a-posteriori UQ methods, including ensemble methods, the dropout method, the delta method, and various heuristic distance metrics, have limitations such as being computationally challenging for large models due to model re-training. In addition, the uncertainty estimates are often not rigorously calibrated. In this work, we propose combining the distribution-free UQ method, known as conformal prediction (CP), with the distances in the neural network’s latent space to estimate the uncertainty of energies predicted by neural network force fields. We evaluate this method (CP+latent) along with other UQ methods on two essential aspects, calibration, and sharpness, and find this method to be both calibrated and sharp under the assumption of independent and identically-distributed (i.i.d.) data. We show that the method is relatively insensitive to hyperparameters selected, and test the limitations of the method when the i.i.d. assumption is violated. Finally, we demonstrate that this method can be readily applied to trained neural network force fields with traditional and graph neural network architectures to obtain estimates of uncertainty with low computational costs on a training dataset of 1 million images to showcase its scalability and portability. Incorporating the CP method with latent distances offers a calibrated, sharp and efficient strategy to estimate the uncertainty of neural network force fields. In addition, the CP approach can also function as a promising strategy for calibrating uncertainty estimated by other approaches.

NOT USED (low confidence) Z. Fan et al., “GPUMD: A package for constructing accurate machine-learned potentials and performing highly efficient atomistic simulations,” The Journal of Chemical Physics. 2022. link Times cited: 46
Abstract: We present our latest advancements of machine-learned potentials (MLPs) based on the neuroevolution potential (NEP) framework introduced in Fan et al. [Phys. Rev. B 104, 104309 (2021)] and their implementation in the open-source package gpumd. We increase the accuracy of NEP models both by improving the radial functions in the atomic-environment descriptor using a linear combination of Chebyshev basis functions and by extending the angular descriptor with some four-body and five-body contributions as in the atomic cluster expansion approach. We also detail our efficient implementation of the NEP approach in graphics processing units as well as our workflow for the construction of NEP models and demonstrate their application in large-scale atomistic simulations. By comparing to state-of-the-art MLPs, we show that the NEP approach not only achieves above-average accuracy but also is far more computationally efficient. These results demonstrate that the gpumd package is a promising tool for solving challenging problems requiring highly accurate, large-scale atomistic simulations. To enable the construction of MLPs using a minimal training set, we propose an active-learning scheme based on the latent space of a pre-trained NEP model. Finally, we introduce three separate Python packages, viz., gpyumd, calorine, and pynep, that enable the integration of gpumd into Python workflows.

NOT USED (low confidence) M. Müser, S. Sukhomlinov, and L. Pastewka, “Interatomic potentials: achievements and challenges,” Advances in Physics: X. 2022. link Times cited: 12
Abstract: Interatomic potentials approximate the potential energy of atoms as a function of their coordinates. Their main application is the effective simulation of many-atom systems. Here, we review empirical interatomic potentials designed to reproduce elastic properties, defect energies, bond breaking, bond formation, and even redox reactions. We discuss popular two-body potentials, embedded-atom models for metals, bond-order potentials for covalently bonded systems, polarizable potentials including charge-transfer approaches for ionic systems and quantum-Drude oscillator models mimicking higher-order and many-body dispersion. Particular emphasis is laid on the question what constraints ensue from the functional form of a potential, e.g., in what way Cauchy relations for elastic tensor elements can be violated and what this entails for the ratio of defect and cohesive energies, or why the ratio of boiling to melting temperature tends to be large for potentials describing metals but small for short-ranged pair potentials. The review is meant to be pedagogical rather than encyclopedic. This is why we highlight potentials with functional forms sufficiently simple to remain amenable to analytical treatments. Our main objective is to provide a stimulus for how existing approaches can be advanced or meaningfully combined to extent the scope of simulations based on empirical potentials.

NOT USED (low confidence) A. B. Li, L. Miroshnik, B. Rummel, G. Balakrishnan, S. Han, and T. Sinno, “A unified theory of free energy functionals and applications to diffusion,” Proceedings of the National Academy of Sciences of the United States of America. 2022. link Times cited: 3
Abstract: Significance: The free energy functional is a central component of continuum dynamical models used to describe phase transitions, microstructural evolution, and pattern formation. However, despite the success of these models in many areas of physics, chemistry, and biology, the standard free energy frameworks are frequently characterized by physically opaque parameters and incorporate assumptions that are difficult to assess. Here, we introduce a mathematical formalism that provides a unifying umbrella for constructing free energy functionals. We show that Ginzburg–Landau framework is a special case of this umbrella and derive a generalization of the widely employed Cahn–Hilliard equation. More broadly, we expect the framework will also be useful for generalizing higher-order theories, establishing formal connections to microscopic physics, and coarse graining.

NOT USED (low confidence) A. Thompson et al., “LAMMPS - A flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales,” Computer Physics Communications. 2021. link Times cited: 2377

NOT USED (low confidence) J. Xu, X. Cao, and P. Hu, “Perspective on computational reaction prediction using machine learning methods in heterogeneous catalysis,” Physical Chemistry Chemical Physics (PCCP). 2021. link Times cited: 24
Abstract: Heterogeneous catalysis plays a significant role in the modern chemical industry. Towards the rational design of novel catalysts, understanding reactions over surfaces is the most essential aspect. Typical industrial catalytic processes such as syngas conversion and methane utilisation can generate a large reaction network comprising thousands of intermediates and reaction pairs. This complexity not only arises from the permutation of transformations between species but also from the extra reaction channels offered by distinct surface sites. Despite the success in investigating surface reactions at the atomic scale, the huge computational expense of ab initio methods hinders the exploration of such complicated reaction networks. With the proliferation of catalysis studies, machine learning as an emerging tool can take advantage of the accumulated reaction data to emulate the output of ab initio methods towards swift reaction prediction. Here, we briefly summarise the conventional workflow of reaction prediction, including reaction network generation, ab initio thermodynamics and microkinetic modelling. An overview of the frequently used regression models in machine learning is presented. As a promising alternative to full ab initio calculations, machine learning interatomic potentials are highlighted. Furthermore, we survey applications assisted by these methods for accelerating reaction prediction, exploring reaction networks, and computational catalyst design. Finally, we envisage future directions in computationally investigating reactions and implementing machine learning algorithms in heterogeneous catalysis.

NOT USED (low confidence) A. M. Miksch, T. Morawietz, J. Kästner, A. Urban, and N. Artrith, “Strategies for the construction of machine-learning potentials for accurate and efficient atomic-scale simulations,” Machine Learning: Science and Technology. 2021. link Times cited: 45
Abstract: Recent advances in machine-learning interatomic potentials have enabled the efficient modeling of complex atomistic systems with an accuracy that is comparable to that of conventional quantum-mechanics based methods. At the same time, the construction of new machine-learning potentials can seem a daunting task, as it involves data-science techniques that are not yet common in chemistry and materials science. Here, we provide a tutorial-style overview of strategies and best practices for the construction of artificial neural network (ANN) potentials. We illustrate the most important aspects of (a) data collection, (b) model selection, (c) training and validation, and (d) testing and refinement of ANN potentials on the basis of practical examples. Current research in the areas of active learning and delta learning are also discussed in the context of ANN potentials. This tutorial review aims at equipping computational chemists and materials scientists with the required background knowledge for ANN potential construction and application, with the intention to accelerate the adoption of the method, so that it can facilitate exciting research that would otherwise be challenging with conventional strategies.

NOT USED (low confidence) X. Liu, Q. Wang, and J. Zhang, “Machine Learning Interatomic Force Fields for Carbon Allotropic Materials.” 2021. link Times cited: 0

NOT USED (high confidence) Y. Liu, X. He, and Y. Mo, “Discrepancies and error evaluation metrics for machine learning interatomic potentials,” npj Computational Materials. 2023. link Times cited: 1

NOT USED (high confidence) H. Zhai and J. Yeo, “Multiscale mechanics of thermal gradient coupled graphene fracture: A molecular dynamics study,” International Journal of Applied Mechanics. 2022. link Times cited: 2
Abstract: The thermo-mechanical coupling mechanism of graphene fracture under thermal gradients possesses rich applications whereas is hard to study due to its coupled non-equilibrium nature. We employ non-equilibrium molecular dynamics to study the fracture of graphene by applying a fixed strain rate under different thermal gradients by employing different potential fields. It is found that for AIREBO and AIREBO-M, the fracture stresses do not strictly follow the positive correlations with the initial crack length. Strain-hardening effects are observed for “REBO-based” potential models of small initial defects, which is interpreted as blunting effect observed for porous graphene. The temperature gradients are observed to not show clear relations with the fracture stresses and crack propagation dynamics. Quantized fracture mechanics verifies our molecular dynamics calculations. We provide a unique perspective that the transverse bond forces share the loading to account for the nonlinear increase of fracture stress with shorter crack length. Anomalous kinetic energy transportation along crack tips is observed for “REBO-based” potential models, which we attribute to the high interatomic attractions in the potential models. The fractures are honored to be more “brittle-liked” carried out using machine learning interatomic potential (MLIP), yet incapable of simulating post-fracture dynamical behaviors. The mechanical responses using MLIP are observed to be not related to temperature gradients. The temperature configuration of equilibration simulation employing the dropout uncertainty neural network potential with a dropout rate of 0.1 is reported to be the most accurate compared with the rest. This work is expected to inspire further investigation of non-equilibrium dynamics in graphene with practical applications in various engineering fields.

NOT USED (high confidence) A. J. W. Zhu, S. L. Batzner, A. Musaelian, and B. Kozinsky, “Fast Uncertainty Estimates in Deep Learning Interatomic Potentials,” The Journal of Chemical Physics. 2022. link Times cited: 15
Abstract: Deep learning has emerged as a promising paradigm to give access to highly accurate predictions of molecular and material properties. A common short-coming shared by current approaches, however, is that neural networks only give point estimates of their predictions and do not come with predictive uncertainties associated with these estimates. Existing uncertainty quantification efforts have primarily leveraged the standard deviation of predictions across an ensemble of independently trained neural networks. This incurs a large computational overhead in both training and prediction, resulting in order-of-magnitude more expensive predictions. Here, we propose a method to estimate the predictive uncertainty based on a single neural network without the need for an ensemble. This allows us to obtain uncertainty estimates with virtually no additional computational overhead over standard training and inference. We demonstrate that the quality of the uncertainty estimates matches those obtained from deep ensembles. We further examine the uncertainty estimates of our methods and deep ensembles across the configuration space of our test system and compare the uncertainties to the potential energy surface. Finally, we study the efficacy of the method in an active learning setting and find the results to match an ensemble-based strategy at order-of-magnitude reduced computational cost.

NOT USED (high confidence) K. S. Csizi and M. Reiher, “Universal QM/MM approaches for general nanoscale applications,” Wiley Interdisciplinary Reviews: Computational Molecular Science. 2022. link Times cited: 6
Abstract: Quantum mechanics/molecular mechanics (QM/MM) hybrid models allow one to address chemical phenomena in complex molecular environments. Whereas this modeling approach can cope with a large system size at moderate computational costs, the models are often tedious to construct and require manual preprocessing and expertise. As a result, transferability to new application areas can be limited and the many parameters are not easy to adjust to reference data that are typically scarce. Therefore, it is desirable to devise automated procedures of controllable accuracy, which enables such modeling in a standardized and black-box-type manner. Although diverse best-practice protocols have been set up for the construction of individual components of a QM/MM model (e.g., the MM potential, the type of embedding, the choice of the QM region), automated procedures that reconcile all steps of the QM/MM model construction are still rare. Here, we review the state of the art of QM/MM modeling with a focus on automation. We elaborate on MM model parametrization, on atom-economical physically-motivated QM region selection, and on embedding schemes that incorporate mutual polarization as critical components of the QM/MM model. In view of the broad scope of the field, we mostly restrict the discussion to methodologies that build de novo models based on first-principles data, on uncertainty quantification, and on error mitigation with a high potential for automation. Ultimately, it is desirable to be able to set up reliable QM/MM models in a fast and efficient automated way without being constrained by specific chemical or technical limitations.

NOT USED (high confidence) S.-H. Lee, V. Olevano, and B. Sklénard, “A generalizable, uncertainty-aware neural network potential for GeSbTe with Monte Carlo dropout,” Solid-State Electronics. 2022. link Times cited: 2

NOT USED (high confidence) Y. Duan, M. N. Ridao, M. Eaton, and M. Bluck, “Non-intrusive semi-analytical uncertainty quantification using Bayesian quadrature with application to CFD simulations,” International Journal of Heat and Fluid Flow. 2022. link Times cited: 1

NOT USED (high confidence) Y. Kurniawan et al., “Bayesian, frequentist, and information geometric approaches to parametric uncertainty quantification of classical empirical interatomic potentials,” The Journal of Chemical Physics. 2021. link Times cited: 6
Abstract: In this paper, we consider the problem of quantifying parametric uncertainty in classical empirical interatomic potentials (IPs) using both Bayesian (Markov Chain Monte Carlo) and frequentist (profile likelihood) methods. We interface these tools with the Open Knowledgebase of Interatomic Models and study three models based on the Lennard-Jones, Morse, and Stillinger-Weber potentials. We confirm that IPs are typically sloppy, i.e., insensitive to coordinated changes in some parameter combinations. Because the inverse problem in such models is ill-conditioned, parameters are unidentifiable. This presents challenges for traditional statistical methods, as we demonstrate and interpret within both Bayesian and frequentist frameworks. We use information geometry to illuminate the underlying cause of this phenomenon and show that IPs have global properties similar to those of sloppy models from fields, such as systems biology, power systems, and critical phenomena. IPs correspond to bounded manifolds with a hierarchy of widths, leading to low effective dimensionality in the model. We show how information geometry can motivate new, natural parameterizations that improve the stability and interpretation of uncertainty quantification analysis and further suggest simplified, less-sloppy models.

NOT USED (high confidence) L. Fiedler, K. Shah, M. Bussmann, and A. Cangi, “Deep dive into machine learning density functional theory for materials science and chemistry,” Physical Review Materials. 2021. link Times cited: 18
Abstract: With the growth of computational resources, the scope of electronic structure simulations has increased greatly. Artificial intelligence and robust data analysis hold the promise to accelerate large-scale simulations and their analysis to hitherto unattainable scales. Machine learning is a rapidly growing field for the processing of such complex datasets. It has recently gained traction in the domain of electronic structure simulations, where density functional theory takes the prominent role of the most widely used electronic structure method. Thus, DFT calculations represent one of the largest loads on academic high-performance computing systems across the world. Accelerating these with machine learning can reduce the resources required and enables simulations of larger systems. Hence, the combination of density functional theory and machine learning has the potential to rapidly advance electronic structure applications such as in-silico materials discovery and the search for new chemical reaction pathways. We provide the theoretical background of both density functional theory and machine learning on a generally accessible level. This serves as the basis of our comprehensive review including research articles up to December 2020 in chemistry and materials science that employ machine-learning techniques. In our analysis, we categorize the body of research into main threads and extract impactful results. We conclude our review with an outlook on exciting research directions in terms of a citation analysis.

NOT USED (high confidence) D. Wang et al., “A hybrid framework for improving uncertainty quantification in deep learning-based QSAR regression modeling,” Journal of Cheminformatics. 2021. link Times cited: 15

NOT USED (high confidence) Y. Duan, J. S. Ahn, M. D. Eaton, and M. Bluck, “Quantification of the uncertainty within a SAS-SST simulation caused by the unknown high-wavenumber damping factor,” Nuclear Engineering and Design. 2021. link Times cited: 2

NOT USED (high confidence) L. Kahle and F. Zipoli, “Quality of uncertainty estimates from neural network potential ensembles,” Physical Review E. 2021. link Times cited: 11
Abstract: Neural network potentials (NNPs) combine the computational efficiency of classical interatomic potentials with the high accuracy and flexibility of the ab initio methods used to create the training set, but can also result in unphysical predictions when employed outside their training set distribution. Estimating the epistemic uncertainty of a NNP is required in active learning or on-the-fly generation of potentials. Inspired from their use in other machine-learning applications, NNP ensembles have been used for uncertainty prediction in several studies, with the caveat that ensembles do not provide a rigorous Bayesian estimate of the uncertainty. To test whether NNP ensembles provide accurate uncertainty estimates, we train such ensembles in four different case studies and compare the predicted uncertainty with the errors on out-of-distribution validation sets. Our results indicate that NNP ensembles are often overconfident, underestimating the uncertainty of the model, and require to be calibrated for each system and architecture. We also provide evidence that Bayesian NNPs, obtained by sampling the posterior distribution of the model parameters using Monte Carlo techniques, can provide better uncertainty estimates.

NOT USED (high confidence) M. Wen, Y. Afshar, R. Elliott, and E. Tadmor, “KLIFF: A framework to develop physics-based and machine learning interatomic potentials,” Comput. Phys. Commun. 2021. link Times cited: 12

NOT USED (high confidence) C.-gen Qian, B. Mclean, D. Hedman, and F. Ding, “A comprehensive assessment of empirical potentials for carbon materials,” APL Materials. 2021. link Times cited: 22
Abstract: Carbon materials and their unique properties have been extensively studied by molecular dynamics, thanks to the wide range of available carbon bond order potentials (CBOPs). Recently, with the increase in popularity of machine learning (ML), potentials such as Gaussian approximation potential (GAP), trained using ML, can accurately predict results for carbon. However, selecting the right potential is crucial as each performs differently for different carbon allotropes, and these differences can lead to inaccurate results. This work compares the widely used CBOPs and the GAP-20 ML potential with density functional theory results, including lattice constants, cohesive energies, defect formation energies, van der Waals interactions, thermal stabilities, and mechanical properties for different carbon allotropes. We find that GAP-20 can more accurately predict the structure, defect properties, and formation energies for a variety of crystalline phase carbon compared to CBOPs. Importantly, GAP-20 can simulate the thermal stability of C60 and the fracture of carbon nanotubes and graphene accurately, where CBOPs struggle. However, similar to CBOPs, GAP-20 is unable to accurately account for van der Waals interactions. Despite this, we find that GAP-20 outperforms all CBOPs assessed here and is at present the most suitable potential for studying thermal and mechanical properties for pristine and defective carbon.

NOT USED (high confidence) M. Buze, T. Woolley, and L. A. Mihai, “A stochastic framework for atomistic fracture,” SIAM J. Appl. Math. 2021. link Times cited: 0
Abstract: We present a stochastic modeling framework for atomistic propagation of a Mode I surface crack, with atoms interacting according to the Lennard-Jones interatomic potential at zero temperature. Specifically, we invoke the Cauchy-Born rule and the maximum entropy principle to infer probability distributions for the parameters of the interatomic potential. We then study how uncertainties in the parameters propagate to the quantities of interest relevant to crack propagation, namely, the critical stress intensity factor and the lattice trapping range. For our numerical investigation, we rely on an automated version of the so-called numerical-continuation enhanced flexible boundary (NCFlex) algorithm.

NOT USED (high confidence) M. Wen and E. Tadmor, “Hybrid neural network potential for multilayer graphene,” Physical Review B. 2019. link Times cited: 40
Abstract: Monolayer and multilayer graphene are promising materials for applications such as electronic devices, sensors, energy generation and storage, and medicine. In order to perform large-scale atomistic simulations of the mechanical and thermal behavior of graphene-based devices, accurate interatomic potentials are required. Here, we present a new interatomic potential for multilayer graphene structures referred to as “hNN-Gr_x.” This hybrid potential employs a neural network to describe short-range interactions and a theoretically-motivated analytical term to model long-range dispersion. The potential is trained against a large dataset of monolayer graphene, bilayer graphene, and graphite configurations obtained from ab initio total-energy calculations based on density functional theory (DFT). The potential provides accurate energy and forces for both intralayer and interlayer interactions, correctly reproducing DFT results for structural, energetic, and elastic properties such as the equilibrium layer spacing, interlayer binding energy, elastic moduli, and phonon dispersions to which it was not fit. The potential is used to study the effect of vacancies on thermal conductivity in monolayer graphene and interlayer friction in bilayer graphene. The potential is available through the OpenKIM interatomic potential repository at this https URL.

NOT USED (high confidence) H. Fan, M. Ferianc, Z. Que, X. Niu, M. L. Rodrigues, and W. Luk, “Accelerating Bayesian Neural Networks via Algorithmic and Hardware Optimizations,” IEEE Transactions on Parallel and Distributed Systems. 2022. link Times cited: 4
Abstract: Bayesian neural networks (BayesNNs) have demonstrated their advantages in various safety-critical applications, such as autonomous driving or healthcare, due to their ability to capture and represent model uncertainty. However, standard BayesNNs require to be repeatedly run because of Monte Carlo sampling to quantify their uncertainty, which puts a burden on their real-world hardware performance. To address this performance issue, this paper systematically exploits the extensive structured sparsity and redundant computation in BayesNNs. Different from the unstructured or structured sparsity existing in standard convolutional NNs, the structured sparsity of BayesNNs is introduced by Monte Carlo Dropout and its associated sampling required during uncertainty estimation and prediction, which can be exploited through both algorithmic and hardware optimizations. We first classify the observed sparsity patterns into three categories: dropout sparsity, layer sparsity and sample sparsity. On the algorithmic side, a framework is proposed to automatically explore these three sparsity categories without sacrificing algorithmic performance. We demonstrated that structured sparsity can be exploited to accelerate CPU designs by up to 49 times, and GPU designs by up to 40 times. On the hardware side, a novel hardware architecture is proposed to accelerate BayesNNs, which achieves a high hardware performance using the runtime adaptable hardware engines and the intelligent skipping support. Upon implementing the proposed hardware design on an FPGA, our experiments demonstrated that the algorithm-optimized BayesNNs can achieve up to 56 times speedup when compared with unoptimized Bayesian nets. Comparing with the optimized GPU implementation, our FPGA design achieved up to 7.6 times speedup and up to 39.3 times higher energy efficiency.