The promise of AI for new materials
Written by Vineeth Venugopal
The floating mountains of Pandora in James Cameron’s movie Avatar were made of Unobtanium, a room-temperature superconductor. The powerful nation of Wakanda in the film Black Panther had suits made of Vibranium, a metal that absorbed kinetic energy. While fiction, these stories contain a kernel of truth: whether as lightweight alloys in airplanes, plastics in households, magnets in generators, or semiconductors in computers, materials precipitate, catalyze, and drive technological revolutions.
Despite this, the discovery of new materials has been painfully slow, often relying more on serendipity than on scientific ingenuity. The potential space of new materials is vast: there are an estimated 10^100 possible combinations of the known elements, each of which can interact with the others and be affected by physical parameters such as temperature and pressure. Finding a room-temperature superconductor in this vast space through trial and error, for instance, is impossible; it might be easier to find the proverbial needle in a haystack.
The development of artificial intelligence (AI) specific to materials science, especially in recent years, is beginning to change the way materials researchers operate, think, and organize their information. Given the plethora of machine learning algorithms available today, scientists have wondered: could we do away with experiments altogether? Given a database of known superconductors, with their compositions, crystal structures, and relevant physical and chemical properties, including the electronic structure calculated from quantum mechanics, can an algorithm learn to predict new materials with superconducting properties? In one such work, Olexandr Isayev, Stefano Curtarolo, Corey Oses, and others explored this option through machine learning and virtual materials screening. The models developed through such statistical approaches have pointed us toward new materials that were later validated through experimentation.
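To make the screening idea concrete, here is a minimal sketch of such a workflow: featurize known compounds, train a classifier to flag likely high-temperature superconductors, and rank untested candidates for experimental follow-up. The file names, descriptor columns, and temperature cutoff are hypothetical placeholders for illustration, not the data or features used by Isayev, Curtarolo, and Oses.

```python
# Minimal sketch of ML-based virtual screening for superconductors.
# File names, feature columns, and the 20 K cutoff are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Each row: one known compound, featurized from its composition/structure.
known = pd.read_csv("known_superconductors.csv")          # hypothetical file
features = ["mean_atomic_mass", "mean_electronegativity",
            "valence_electron_count", "volume_per_atom"]  # hypothetical descriptors
X = known[features]
y = (known["critical_temperature_K"] > 20).astype(int)    # label: "high-Tc" or not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("held-out ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Screen a library of untested compounds and rank them for follow-up experiments.
candidates = pd.read_csv("candidate_compounds.csv")       # hypothetical file
candidates["score"] = model.predict_proba(candidates[features])[:, 1]
print(candidates.sort_values("score", ascending=False).head(10))
```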
However, these models are only as good as the data they are fed, and there is no consensus on what constitutes a complete database of materials for predictive purposes. Moreover, most materials are complex systems with several layers of structure: a piezoelectric ceramic derives its properties not just from its chemical composition and crystal structure but also from the grain size and shape; the presence of pores, vacancies, and other defects; the size and shape of domains; the nature of its clamping to a substrate; and its electromechanical history, to name a few. Such complexity strains the modeling approach; the multivariable optimization problem it leads to can dwarf our existing computational resources.
The application of new deep learning approaches is beginning to transform labor-intensive domains such as microstructure analysis and density functional theory (DFT) calculations. Maxim Ziatdinov, Rama Vasudevan, and Sergei Kalinin at Oak Ridge National Laboratory recently trained a convolutional neural network (CNN) to identify defects in transmission electron microscope images. CNNs are a type of deep learning architecture that can extract useful features from multidimensional images. Other groups have used similar approaches, with minimal human intervention in the interpretation, to relate microstructure to properties such as ionic conduction in ceramics and thermal and electrical behavior. The end result is that microscopes are becoming smarter and microscopy more autonomous: in the future, microscopes will not just show us microstructures but also point out the salient features in these images.
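As a rough illustration of the kind of model involved, the following is a toy CNN, written in PyTorch, that labels small grayscale image patches as “defect” or “no defect.” The architecture, patch size, and random stand-in data are assumptions made for this sketch; it is not the network used by the Oak Ridge group.

```python
# Toy convolutional network for labeling microscope image patches as
# "defect" vs. "no defect". Illustrative only; not the architecture
# used by Ziatdinov, Vasudevan, and Kalinin.
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input patches

    def forward(self, x):
        x = self.features(x)            # (N, 32, 16, 16)
        return self.classifier(x.flatten(1))

# One training step on a batch of 64x64 grayscale patches with 0/1 labels.
model = DefectCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(8, 1, 64, 64)    # stand-in for real TEM image patches
labels = torch.randint(0, 2, (8,))     # stand-in defect labels
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()
print("batch loss:", loss.item())
```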
Neural networks have also been applied in machine-driven analyses of x-ray diffraction patterns, identifying the principal phases present and quantifying their respective volume fractions. The analytic machines of the future will not just generate data; they will be their own technical experts. Kieron Burke of the University of California, Irvine, and his team have applied deep learning to approximate the density functionals used to calculate the distribution of electrons in substances. Such DFT calculations are often the best way to model materials and are widely used in many branches of physics and chemistry. Consequently, the development of machine learning approaches to DFT can vastly enlarge our existing databases and modeling capability.
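The flavor of such functional learning can be conveyed with a toy regression problem: map a discretized density-like profile to a scalar “energy.” The sketch below uses kernel ridge regression on synthetic data as a simple stand-in for the deep learning models mentioned above; it does not reproduce the Burke group’s models or any real DFT quantities.

```python
# Toy sketch of learning a map from a discretized "density" profile to an
# "energy," using kernel ridge regression on synthetic data. Nothing here
# reproduces the Burke group's models or real DFT data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 100)
dx = grid[1] - grid[0]

def toy_density(center, width, amplitude):
    """Gaussian bump on [0, 1], standing in for an electron-density profile."""
    return amplitude * np.exp(-0.5 * ((grid - center) / width) ** 2)

def toy_energy(density):
    """Made-up 'functional' of the density (a Thomas-Fermi-like power law)."""
    return (density ** (5 / 3)).sum() * dx

# Generate a synthetic dataset of density profiles and their target energies.
params = rng.uniform([0.25, 0.08, 0.5], [0.75, 0.30, 1.5], size=(300, 3))
X = np.array([toy_density(c, w, a) for c, w, a in params])
y = np.array([toy_energy(d) for d in X])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.01)
model.fit(X_train, y_train)
print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```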
Finally, automation is beginning to reach experimentation itself, with new robot “scientists” that can generate their own hypotheses, design experiments, record results, and modify their hypotheses based on those results. Work by researchers such as Larisa Soldatova has demonstrated autonomous experimental systems in biomedical applications, where they have been used, among other things, to identify orphan genes in organisms. Benji Maruyama developed the Autonomous REsearch System (ARES) to grow carbon nanotubes at controlled rates, while other groups have applied similar systems to organic syntheses and to the development of nickel-titanium-based shape-memory alloys.
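At its core, such a robot scientist is a closed loop of proposing, measuring, and revising. The skeleton below illustrates that loop with an invented, simulated “instrument” and a naive hill-climbing update; it is a conceptual sketch only, not ARES or any published system.

```python
# Skeleton of a closed experiment loop in the spirit of a robot scientist:
# propose a condition, run the experiment, record the result, and revise the
# working hypothesis. The simulated "instrument" and growth model are
# invented placeholders; this is not ARES itself.
import random

def run_experiment(temperature_C):
    """Stand-in for a real instrument: returns a noisy nanotube growth rate."""
    return -((temperature_C - 750.0) ** 2) / 1000.0 + random.gauss(0.0, 0.5)

best_T, best_rate = 600.0, float("-inf")
lab_notebook = []  # every proposal and its measured outcome gets recorded

for iteration in range(20):
    # Current hypothesis: the optimum lies near the best temperature so far.
    proposal = best_T + random.uniform(-50.0, 50.0)
    rate = run_experiment(proposal)
    lab_notebook.append((proposal, rate))
    if rate > best_rate:               # revise the hypothesis when evidence improves
        best_T, best_rate = proposal, rate

print(f"best temperature so far: {best_T:.0f} C, growth rate: {best_rate:.2f}")
```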
The US Air Force Research Laboratory’s Autonomous Research System, or ARES, uses artificial intelligence to design, execute, and analyze experiments at a pace much faster than traditional scientific research methods. This robotic research machine is revolutionizing materials science research and demonstrates the benefits of human-machine interaction for rapid advancement and development of knowledge today. Credit: US Air Force photo/Marisa Novobilski
Optimal learning is concerned with the strategic, sequential selection of which experiments to perform when optimizing materials properties. Bayesian optimization and other optimal learning approaches, explored in the work of many researchers including Alán Aspuru-Guzik and Kristofer Reyes, are poised to revolutionize materials science when combined with the prolific data-generation capabilities of robot scientists. These robot graduate students are prompting us to question and reflect on the very nature of the knowledge we take for granted, and new ontologies are being developed to systematically categorize and classify data. Aspuru-Guzik and his team at Harvard University, for example, have created ChemOS, an open-source software platform that aims for efficient interaction between researchers, AI algorithms, and robotic hardware. Clearly, since the early work of pioneering researchers such as Krishna Rajan, who coined the term materials informatics, the field has come a long way.
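A minimal Bayesian optimization loop might look like the sketch below: fit a Gaussian process to the experiments performed so far, then choose the next experiment by expected improvement. The one-dimensional objective here is a synthetic stand-in for a measured property, not a workflow from any of the groups mentioned above.

```python
# Minimal Bayesian-optimization loop for sequential experiment selection:
# fit a Gaussian process to past experiments, then pick the next condition
# by expected improvement. The objective is a synthetic stand-in.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measured_property(x):
    """Pretend experiment: property as a function of one process variable."""
    return np.sin(3 * x) + 0.5 * x

X_grid = np.linspace(0, 3, 300).reshape(-1, 1)   # candidate experiments
X_done = np.array([[0.3], [2.7]])                # experiments already run
y_done = measured_property(X_done).ravel()

for step in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
    gp.fit(X_done, y_done)
    mu, sigma = gp.predict(X_grid, return_std=True)

    # Expected improvement over the best result observed so far.
    best = y_done.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    x_next = X_grid[np.argmax(ei)]               # next experiment to run
    X_done = np.vstack([X_done, [x_next]])
    y_done = np.append(y_done, measured_property(x_next[0]))

print("best condition found:", X_done[np.argmax(y_done)][0],
      "property:", y_done.max())
```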
We are still a long way from Unobtanium, but it does not seem unobtainable either.
Symposium LN02, the Artificial Intelligence for Materials Development Forum, scheduled for Tuesday and Wednesday at the 2018 MRS Spring Meeting & Exhibit, is designed to inform and inspire researchers in this exciting area.