Symposium LN02: Artificial Intelligence for Materials Development Forum

The Future of Materials Informatics: Research through Artificial Intelligence

Written by Dale E. Karas

Artificial intelligence (AI) is a trending topic at professional conferences and special academic lectures, and it pervades popular culture, both because of how aggressively its adoption is being pursued and because of the ethical issues raised by implementing such technologies. With present-day automation taking over menial industrial labor, especially tasks that raise moral questions about eliminating risk and human error, what does its seemingly inevitable development mean for humanity?

Near the turn of the century, such questions about AI were abruptly brought to the fore by the televised 1997 rematch between IBM’s “Deep Blue” chess supercomputer and reigning world champion Garry Kasparov (considered by many to be one of the best chess players in history, alongside Bobby Fischer and Paul Morphy). A year prior, the previous version of Deep Blue had underperformed and underwhelmed audiences, succumbing to mistakes that expert human players would effortlessly avoid. The contrast between the 1996 match and the incredible upset a year later made the prospects of AI seem all the more revolutionary: how could a computer, originally prone to a series of comedic errors, now surpass humans at such a complex series of tasks?

Due to advances in the semiconductor industry in manufacturing faster and smaller integrated circuits, and to fabrication in optics and photonics becoming ever more cost-efficient and amenable to boosting processing speeds, memory switching, and data storage capacity, the newer field of computational “neural networks” has gained considerable interest. These are a family of algorithms used for machine learning, the acquisition and processing of large data sets, and pattern recognition.

Two decades after the IBM-Kasparov match (which Kasparov accused IBM’s team of winning by cheating, that is, by consulting human grandmasters during play in the interest of raising the company’s stock price through the publicity), the DeepMind team at Google met with comparable controversy over what they posited as their own analogous technological breakthrough. Their “Alpha” platform, built on computational neural networks, had mastered the game weiqi (referred to as “Go” in the Americas) with the AlphaGo architecture months earlier, and the team then leveraged the same technology for chess with the AlphaZero incarnation, which beat the world’s top open-source engine, Stockfish, in a 100-game match. As with IBM’s Deep Blue, AlphaZero was rapidly retired after its victory, further fueling suggestions of conspiracies at work. Notable FIDE title holders responded that an outdated version of Stockfish had been used and that the time controls and hardware configuration were suboptimal for the engine. Even so, the more significant breakthrough was not that classical open-source engines were obsolete, but that computational neural networks could be more adept at solving fairly complex problems than had previously been demonstrated.

At this year’s Materials Research Society (MRS) Meeting in Phoenix, Ariz., distinguished computer scientists and materials specialists hosted a special “Materials Informatics” symposium on artificial intelligence development for research prospects. The panel included such speakers as Carla Gomes (Cornell University), Subbarao Kambhampati (Arizona State University), Patrick Riley (Google Research), and Krishna Rajan and Kristofer Reyes (University at Buffalo, State University of New York).

 

What are some common myths and misunderstandings of AI?

While there are many misconceptions, lots of hype, and a belief that certain novel algorithms and processing routines create the “magic” of AI, it must be noted that many of the current models we deem “AI” will happily learn whatever they are trained on, which means they will learn inconsistencies, and input errors will scale dramatically. Kambhampati remarked, “Human intelligence [in contrast] is not just learning from data. We lampoon proposals as ‘post-2012’ if they give too much credit to rapid learning processes.” On how AI relates to deep learning implemented with neural networks, Gomes stated that “it is dangerous for this community to think everything is going to be solved by deep learning. All of it is merely regression analysis. You would not apply this to many methods of teaching, and this is where researchers need to be flexible with their approaches of implementation.” Lastly, Reyes mentioned “the usual conflation between ‘big data’ and ‘deep learning,’ as they are very separate topics.” The statistics used in many of the aforementioned methods are based on Bayesian inference techniques, and while causality can be determined through successive experimentation, much data processing can only establish correlation, which alone does not imply causation.
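The last point is worth making concrete: two measured quantities can correlate strongly simply because a hidden third variable drives both. The short Python sketch below is a hypothetical illustration (not drawn from the panel) that simulates such a confounder to show why correlation alone cannot establish causation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden confounder, e.g., an uncontrolled processing temperature
confounder = rng.normal(size=10_000)

# Two measured properties, each driven by the confounder plus noise;
# neither property has any direct causal effect on the other.
property_a = 2.0 * confounder + rng.normal(scale=0.5, size=10_000)
property_b = -1.5 * confounder + rng.normal(scale=0.5, size=10_000)

# The observed correlation is strong even though A does not cause B;
# only a controlled experiment (varying A while holding the confounder
# fixed) could establish causality.
r = np.corrcoef(property_a, property_b)[0, 1]
print(f"Pearson correlation between A and B: {r:.2f}")
```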

 

What is it going to take to get us from “data acquisition” to “understanding knowledge”? For instance, will AI robots be teaching us physics and chemistry in the next century?

A good example of this transition came from the idea of data arrangement. Rajan posed the following question: “Could we teach the periodic table without knowing chemistry? People struggle with knowing the set of data; but what are the larger implications, and why is it important?” Just as Dmitri Mendeleev understood where the gaps were, many chemistry Nobel Prizes were essentially awarded for filling those gaps by securing the missing elements, and the same idea applies to using data processing to help turn data into understanding. The AI community is largely aware of this challenge, with major research areas focused on bridging the gap.

 

How much faster will research in general progress with AI?

The panel agreed that we can largely expect a faster accumulation of data, as is already happening in the biological community. Rajan noted, “The accumulation of data shouldn’t be the goal. If one needs more data, we should instead be concentrating on what the problem is. Can we improve upon connections prior to mass accumulation?” He also lamented that “many experiments are not designed for high throughput. Data awareness is really based on where the data exists, and how does one train people to understand data, rather than just teaching pure methods?” Kambhampati added: “It is important to remember that it wasn’t that people were not creating data before. It’s just that the data wasn’t being collected! Human beings are making trillions of bytes of data nowadays with mobile technologies and electronic forms of documentation. Data can be captured in many forms, and this has surged the ‘second coming’ of neural networks.” Researchers, of all people, are fortunate that data capture has become an obsessive and effortless practice, one that helps support such a new rulebook.

 

Some publishers are interested in posting experimental raw data online. Can you comment on opportunities and/or challenges with this?

While there was agreement that this effort has strong pedagogical value, Riley commented that “the trend to put raw data online is important for experimental repeatability, yet most members of this community believe that it has relatively low value for powering future discovery, especially in potentially preventing novelties. New methods get better all the time, so we should be cautious about the practice of formalizing raw data.” While raw data accumulation might support demonstrations of scientific rigor, it would probably not be helpful in the long term and would add complexity to deciding what the most crucial data storage needs are.

 

What kind of data, in particular, is suitable for training a neural network? How can materials scientists support AI opportunities?

Rajan noted: “Let’s pick problems that I’ve already worked on. Can I use these methods to discover something I didn’t know? Data science shows that fundamental physics parameters get masked when there is a plethora of variables.” It was stated that the role of materials scientists should be to contribute what they already know, rather than artificially scrounging for popular techniques from the AI community.

In the coming decades, researchers will thoroughly put these ideas to the test. Establishing an array of formalisms for research methodology will not only be pivotal for developing more unified policies and ethics on the subject of AI, but will also help scientists understand the nature of posing research questions and establish a means of measuring research success. The panel concluded that the most immediate accelerator of data may be encouraging researchers to be more transparent about failures, that is, about what does not work in research. If there is a way of capturing all the false steps along with the good ones, it will advance many long-term goals for deep learning. Perhaps even “pseudo” experiments that seek comparable outputs given other inputs would be of enormous value, for a publishing venue free of page limits and critical publisher reviews. We may also come to accept the lack of proprietary technical disclosure, as with the game-playing engines of IBM and Google, once such a formalism for supporting best research practices under these new criteria is thoroughly understood in terms of both risk and value.


Symposium SM06: The Future of Neuroengineering—Relevant In Vivo Technology

Andrew Steckl, University of Cincinnati

Exploring a Real Artificial Brain—Challenges and Opportunities Using a Semi-Soft Approach

Written by Frieda Wiley

Unlike many other areas of materials science and sensory-based research, artificial brain research can be fairly abstract and difficult to qualify. Researchers at the University of Cincinnati seek to explore whether they can build an artificial brain that mimics the functions of an authentic human brain but that can be implanted and connected to the real thing.

In addition to the abstract nature of the topic, materials scientists face additional hurdles. These include identifying the appropriate materials and the electronic properties those materials need; determining how much energy will be consumed; and elucidating how to accomplish all of the connections required for the artificial brain to assimilate the activities of a human brain.

Part of the assimilation process lies in the structure of the artificial brain, which, like a human brain, should have a hydrophilic exterior and a hydrophobic interior. Functionality is provided by an electrochemical transistor (ECT) based on core/sheath organic fibers.

Data regarding successful integration practices are lacking, contributing to the many unknowns in this task. However, attempting to answer two groups of questions may help scientists collect the data they need to move forward. The first addresses lower-level functions, such as those carried out by nodes or neurons, and explores the connectivity relationships between axons and synapses. The second concerns higher-level functions, such as learning and memory, which will help address additional questions.

Current approaches to neuromorphic research include digital devices, digital computer simulation (i.e., software), and analog devices, which may be either organic or inorganic.

While researchers have identified some of the unknowns in this multifactorial equation, they have yet to elucidate interconnections, testing, and integration.


Symposium SM05: Biomaterials for Tissue Interface Regeneration

Hae Won Hwang, Korea Institute of Science and Technology

Realization of the Tissue-Regenerative All-Metallic Implants

Written by Frieda Wiley

There are three steps in the process of healing bone tissue: the formation of a hematoma, callus formation and angiogenesis, and ossification and bone remodeling. Researchers are employing reactive oxygen species-based functional metallic implants to accomplish this task. Challenges to commercializing this process make long-term quantitative analysis necessary.

Researchers observed that in the bone-healing process using a titanium-based implant, the concentration of hydrogen peroxide decreases. This presents a problem, as bone healing requires long-term exposure to hydrogen peroxide. Facilitating the diffusion of hydrogen peroxide can help overcome this challenge.

Researchers conducted two experiments to analyze hydrogen peroxide generation. The first experiment compared the activity of stable, biologically inert implant materials, such as titanium alloys, with that of more reactive ones. The team also considered biodegradable metals such as magnesium, zinc, and iron, and ultimately selected zinc. Compared with the control, zinc’s corrosion period extends into the angiogenesis stage, lasting about two weeks.

The second part of the experiment explored diffusion, utilizing fibrin gel because the polymerized fibrin forms a hemostatic clot over a wound site. This is significant because fibrin is the most abundant extracellular matrix after the formation of hematoma in the fractured bone area.

Researchers found that utilizing zinc extended the production of hydrogen peroxide by 2 weeks.


Symposium EN17: Fundamental Materials Science to Enable the Performance and Safety of Nuclear Technologies

Highlights of the Thursday morning session

Written by Dale E. Karas

Concluding the Symposium for the week, the Thursday morning session included many computational and experimental strategies to assess materials for improving the operation and criticality safety of nuclear technologies.

Many strategies employing density functional theory (DFT) simulations, which infer atomistic-level energy interactions, are used to assess uranium silicide (U3Si2) as an alternative fuel to uranium dioxide (UO2), based on its high thermal conductivity, for use in novel reactors.

David Andersson, a staff scientist at Los Alamos National Laboratory (LANL), works within the Fuels Focus Area of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. He explores the characteristics of the higher fissile density of U3Si2 by calculating the reaction kinetics and thermodynamics of interacting elemental U and Si point defects, and is developing a framework of extended calculation summaries usable by other researchers.

Michel Freyss, from CEA Cadarache, discussed analysis of a uranium-plutonium mixed oxide fuel for the novel Generation IV reactors being implemented in France. Because the fuel is difficult to handle due to its high radiation toxicity, efforts to characterize its reaction conditions were performed with a modified DFT approach known as DFT+U, in which strong on-site Coulombic interactions require additional energy terms from the Hubbard model (which describes transitions between conducting and insulating states, or the movement of electrons between different atomic sites).
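To give a feel for what the Hubbard correction looks like in its simplest, rotationally invariant (Dudarev) form, the Python sketch below evaluates the added energy term, (U_eff/2) times the sum over spin channels of Tr(n) - Tr(n·n), which vanishes for integer orbital occupations and penalizes fractional ones. The occupation matrices and the U_eff value are hypothetical illustrations, not values from the talk.

```python
import numpy as np

def dudarev_hubbard_correction(occupations, u_eff):
    """Simplified (Dudarev) DFT+U energy correction, in eV.

    occupations : per-spin occupation matrices n of the correlated orbitals
                  on one atom (e.g., the 5f shell of uranium).
    u_eff       : effective Hubbard parameter U - J, in eV.

    E_U = (u_eff / 2) * sum over spins of [ Tr(n) - Tr(n @ n) ]
    """
    correction = 0.0
    for n in occupations:
        n = np.asarray(n, dtype=float)
        correction += 0.5 * u_eff * (np.trace(n) - np.trace(n @ n))
    return correction

# Hypothetical diagonal occupation matrices for the two spin channels of an
# f shell (7 orbitals); the fractional occupations are what the +U term penalizes.
n_up = np.diag([1.0, 1.0, 0.6, 0.2, 0.1, 0.05, 0.05])
n_down = np.diag([0.9, 0.4, 0.3, 0.1, 0.05, 0.0, 0.0])

print(f"E_U = {dudarev_hubbard_correction([n_up, n_down], u_eff=4.0):.3f} eV")
```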

Vancho Kocevski, a postdoctoral researcher at the University of South Carolina, described Fe-Cr-Al-Y alloyed cladding materials for use with the U3Si2 candidate fuel. Material phase diagrams based on the U-Fe-Si ternary system were inferred computationally from DFT methods to develop a theoretically derived CALPHAD (CALculation of PHAse Diagrams) description.

This collection of methods is adaptable to many simulation calculations that support experimental recommendations for implementing nuclear materials.


Everything is Bigger in Arizona—Not Texas 

Whoever coined the phrase "Everything's bigger in Texas" must have never visited Arizona. I arrived at such a conclusion this morning in the Phoenix Convention Center's lobby when the largest bee I'd ever seen chased me away from my laptop. 

"What is that?" I cried as I did a frantic, crippled chicken dance in front of the Starbucks. My erratic gesticulations earned a few strange looks, but other than that, people remained composed—and seated. 

Now that I think about it, it probably did look like I was having a "reaction" to an illicit substance.

Suddenly, a concerned employee approached me with a confused expression obviously attributed to what she thought was the worst John Travolta Saturday Night Fever impersonation she'd ever seen in her life.  

"Are you okay?" she asked. 

By this point, I had lost the ability to speak and could only point in the direction of the offending creature as I ran around the lobby. Sadly, my dancing was so bad that it took the poor woman a few minutes to realize I was actually pointing at something instead of shaking my groove thang.  That said, I must say I gave a convincing performance of dropping it like it was hot. 

"It's a bee," an amused onlooker standing in the coffee line casually replied. 

"Then it must be full of steroids and taking growth hormones!" I managed to exclaim. My reply earned a few casual chuckles, but still no one came to my rescue. My only solution became clear: I had to fend for myself—whatever that meant.  

Despite the urgency of the situation, the event took me back to when I lived in Tucson and found a gargantuan scorpion perched on my bath towel after returning home from a trip. And as you may have guessed, that scorpion was much larger than any scorpion I'd seen in Texas, too. Fast forward nine years, and the Grand Canyon State has maintained its lead in providing a habitat for humongous creatures. Arizona: 2, Texas: 0.

Luckily, the bee suddenly lost interest in my awkward dancing episode—much to my advantage. It flew across the balcony, allowing me to regain my composure and resume my work.  

Later, a security guard approached me about the ferocious flying arthropod. Apparently, the lady who'd approached me earlier had sought him for help. Unfortunately, I was so consumed by the fight for my life that I didn't even realize she'd left.  

While the security guard's valiant search for the fugitive produced no results, he assured me he would escort the bee to the door in the event it returned for an encore attack.  

Right now, the bee's whereabouts remain unknown; but given his astronomical size, I can assure you he is extremely easy to identify—and that you won't find him in the Lone Star State.

 


Innovation in Materials Characterization Award—Symposium X Presentation

David G. Cahill, University of Illinois at Urbana-Champaign

From Isotopically-Enriched Crystals to Fullerene Derivatives and (Almost) Everything in Between—Measurement of Thermal Conductivity in Time-Domain Thermoreflectance

Written by Ashley White

The Innovation in Materials Characterization Award honors an outstanding advance in materials characterization that notably increases knowledge of the structure; composition; in situ behavior under outside stimulus; electronic, mechanical, or chemical behavior; or other characterization feature of materials. David G. Cahill of the University of Illinois at Urbana-Champaign received the honor “for developing transformative methods for characterizing the thermal transport properties of materials and their interfaces using time-domain thermoreflectance (TDTR) and related approaches.” His Wednesday Symposium X talk highlighted the operating principles of the method and covered his group’s accomplishments in advancing the technique into a nearly universal, high-throughput tool for measuring thermal properties of materials.

The TDTR technique uses an ultrafast titanium-sapphire laser oscillator to produce very short (200 fs), high-repetition-rate (80 MHz) pulses of light. The pulses are split into two paths: a pump path, which injects heat into the sample, and a probe path, which measures some aspect of the sample, reporting back its temperature. Thermal transport properties of materials can be extracted by analyzing the temperature evolution, both as a function of the delay time between the arrival of the pump and the probe, and as a function of the frequency at which the pump beam is modulated. In addition, the setup enables picosecond acoustics, which can provide a measurement of thin-film thickness and interface quality.
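Real TDTR analysis fits the measured signal to a multilayer heat-diffusion model that accounts for the metal transducer film, pulse accumulation, and the pump modulation frequency. As a much-simplified, hypothetical illustration of the underlying idea, the Python sketch below fits a toy probe signal to the impulse response of a bare semi-infinite solid, whose surface temperature decays as Q/(e·sqrt(pi·t)) with thermal effusivity e = sqrt(k·rho·c), and then extracts a thermal conductivity; the material parameters are assumed, not taken from the talk.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO, C = 2330.0, 710.0      # assumed density (kg/m^3) and heat capacity (J/kg/K)
Q_ABS = 1.0                 # assumed absorbed pump pulse energy per unit area (J/m^2)

def surface_decay(t, effusivity):
    """Toy model: surface temperature of a semi-infinite solid after an
    instantaneous surface heat pulse, dT(t) = Q / (e * sqrt(pi * t)),
    where e = sqrt(k * rho * c) is the thermal effusivity."""
    return Q_ABS / (effusivity * np.sqrt(np.pi * t))

# Synthetic "probe" data over pump-probe delay times of 0.1-4 ns, with 2% noise
k_true = 140.0                                  # W/m/K, a silicon-like value
e_true = np.sqrt(k_true * RHO * C)
delay = np.linspace(0.1e-9, 4e-9, 200)
rng = np.random.default_rng(1)
signal = surface_decay(delay, e_true) * (1 + 0.02 * rng.normal(size=delay.size))

# Fit the decay to recover the effusivity, then the thermal conductivity
(e_fit,), _ = curve_fit(surface_decay, delay, signal, p0=[1e4])
k_fit = e_fit**2 / (RHO * C)
print(f"recovered thermal conductivity: {k_fit:.0f} W/m/K (true: {k_true:.0f})")
```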

Cahill explained how his group has developed TDTR to measure the thermal conductivity of almost any material with a smooth surface, including small crystals, and how the measurement can be performed with high throughput and in real time. He went into detail on three particular examples.

The first example showcased TDTR’s ability to measure the thermal conductivity of small crystals. Several years ago, theoretical predictions had suggested that boron arsenide—a very challenging material to synthesize—might have a higher room-temperature thermal conductivity than diamond. The calculations were later revealed to make some incorrect assumptions about phonon effects, but Cahill and his group set out to use TDTR to experimentally measure the thermal conductivity of small powders of boron arsenide and related materials, on the order of 100 µm in diameter. Although they observed a much lower-than-predicted thermal conductivity of boron arsenide, they discovered that boron phosphide’s thermal conductivity at room temperature is more than 500 W/mK, which is higher than that of copper and silicon carbide.

Cahill next described a high-throughput technique for studying the thermal conductivity dependence of a wide variety of polymer thin films on their molecular bonding and molecular structure. By combining measurements at different frequencies, Cahill’s group is able to extract both the thermal conductivity and heat capacity of the films, and to probe a wide array of materials structures to predict and identify materials compositions and structures that may provide desired properties. This approach has allowed them to search for ultralow-conductivity materials, mapping out the thermal properties of a large number of polymers and cage-structured molecules like fullerene derivatives.

Cahill concluded with a final application, discussing the utility of using TDTR to make measurements in real time, in situ, or under operando conditions to follow the evolution of a material’s behavior. The example he cited was measuring the changes in thermal conductivity of liquid crystal networks in real time in response to changing temperature or external fields.

MRS acknowledges the generosity of Professors Gwo-Ching Wang and Toh-Ming Lu for endowing this award.


Symposium SM06: The Future of Neuroengineering—Relevant In Vivo Technology

Luisa Torsi, University of Bari, Italy

Ultra-low detection limits and selectivity with organic bio-electronic sensors

Written by Frieda Wiley

The use of bioelectronics for sensing applications is a technology that, in principle, can be applied in vivo. Studying electrostatic and electronic properties with biosensing helps researchers improve sensitivity by elucidating specific characteristics. In one particular application of such technologies, the ultimate goal is to create an electronic output that serves as a “stick test” or diagnostic tool for physicians. However, physicians rarely use such tools at present because of their poor quality.

Researchers see the quality deficiency as an opportunity for improvement. Torsi’s team has developed multi-parametric devices with high sensitivity. Changes in capacitance can give clues about the binding event.

Torsi’s team takes a unique approach, using an electrolyte to gate the device structure, thereby altering the behavior of the device. Electrolyte-gated devices and electrochemical devices are clear illustrations of this principle.

Torsi’s research includes exploring dielectric changes that occur in the system and the electrostatic changes.

With proteins in particular, everything depends on the arrangement of charges. Torsi is looking at using the technology to detect C-reactive protein in plasma; this inflammatory marker is often used to help confirm the diagnosis of a heart attack.

In single nanoscopic transistors, successful ligand-receptor interactions require exposing the nanoscopic interface to a large number of biomarkers in order to have a feasible permeability. This means either a large number of receptors or a large number of ligands must be present.

According to Torsi, Mother Nature already has adopted this concept: Antennal receptors in moths sense single molecules of pheromones.


Symposium EN11: Nanomaterials for the Water and Energy Nexus

Highlights of the Wednesday session

Written by Dale E. Karas

This year’s Materials Research Society (MRS) Meeting held a new symposium on sustainable energy materials that are also applicable to water treatment and conservation needs, entitled “Nanomaterials for the Water and Energy Nexus.” It featured a variety of material concepts, as well as synthesis and characterization techniques, with broad implications for the energy generation, transport, and storage sectors.

Many solar energy technologies, such as photovoltaics, solar concentrators, and solar-thermal storage, rely heavily on nanomaterials to improve energy conversion efficiency. Light-trapping structures, approaches to improving solar-thermal energy transfer, and examinations of new materials that support electrical and optical transport in semiconductor-based solar cell devices were among the exciting topics featured. Much of the photovoltaic work centers on perovskites (materials typically based on the mineral calcium titanate), which have been rigorously studied for their promise in boosting solar cell efficiencies. Materials that drive water treatment technologies were also of considerable interest: Jian Wang from the Hong Kong University of Science and Technology presented on a related class of perovskite oxides as enhanced catalysts for water electrolysis, highlighting how controlling oxygen vacancies in these popular perovskites can tune chemical reaction kinetics and boost the response of inorganic material transport.

The afternoon session opener featured Yi Cui, a researcher at Stanford University, presenting on an array of nanoscale materials for energy applications. He commented on how their novel properties are directly affected by structuring. While both vendor-specific and custom instrumentation has been developed to investigate the optical, electronic, and mechanical behavior of synthesized materials, and can perform many involved analyses, these capabilities must also be compatible with the fabrication techniques used to modify the materials’ composition; control over shape and size is essential to the operation of nanomaterials.

Wednesday’s session concluded with many techniques for water distillation and purification, supporting the addition of solar concentrator technologies used in tandem for these processes.