Graduate Student Awards


Gold Winners:

Alex Ganose, University College London
Zachary Hood, Georgia Institute of Technology
Zhiyuan Liu, Nanyang Technological University
Seongjun Park, Massachusetts Institute of Technology
Ottman Tertuliano, California Institute of Technology *Nowick
Hsinhan Tsai, Rice University
Ying Wang, University of California, Berkeley
Saien Xie, Cornell University

Silver Winners:

Renjie Chen, University of California, San Diego
Jonghyun Choi, University of Illinois at Urbana-Champaign
Zeyu Deng, University of Cambridge
Sreetosh Goswami, National University of Singapore
Bumjin Jang, ETH Zurich
Dohyung Kim, University of California, Berkeley
Shankar Lalitha Sridhar, University of Colorado Boulder
Dingchang Lin, Stanford University
Xiaolong Liu, Northwestern University
Erfan Mohammadi, University of Illinois at Urbana-Champaign
Hongjie Peng, Tsinghua University
Sean Rodrigues, Georgia Institute of Technology
Michael Cai Wang, University of Illinois at Urbana-Champaign
Xiaoxue Wang, Massachusetts Institute of Technology
Shuai Yuan, Texas A&M University
Hyunwoo Yuk, Massachusetts Institute of Technology


Mid-Career Researcher Award—Symposium X Presentation

David Mooney, Harvard University
Biomaterials for Mechanoregeneration

Written by Don Monroe

In his talk about the work that led to his Mid-Career Researcher Award, David Mooney of Harvard University described “mechanoregeneration: the idea of mechanical signals controlling the regeneration of tissues and organs in the body.” In addition to fundamental experiments, he showed how these insights are leading to therapeutic devices.

There is growing recognition that cells respond not only to chemical signals but to mechanical cues. These can be either responses of the surrounding materials to forces the cells generate or externally imposed forces. “Materials can be used to control these mechanical signals,” Mooney said.

Mooney’s team has exploited three-dimensional (3D) alginate hydrogels, which are block-copolymer polysaccharides. Crosslinking occurs between one type of block, but not the other, letting the researchers tune the stiffness to see its effect on stem-cell proliferation, migration, and differentiation. Unlike other systems, “the architecture and the macromolecular transport in these gels does not change as we vary the extent of crosslinking,” Mooney stressed. In addition, cells do not adhere to the underlying gel backbone, but only to small synthetic peptides that the researchers covalently attach.

Experiments “demonstrated unequivocally in 3D that we could control the fate of these cells simply by controlling the stiffness of the gel in which the cells were encapsulated,” Mooney said. For example, mesenchymal stem cells differentiated into fat cells in a soft matrix, but into bone-forming cells in a stiffer matrix.

Beyond the static stiffness, Mooney showed that “fundamental elements of cell biology were dramatically altered by the stress relaxation or viscoelasticity of these hydrogels.” Modifications of this relaxation by changing molecular weight and introducing PEG spacers changed the speed of bone regrowth, and modified the ability of the cells to remodel their matrix.
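Stress relaxation of this kind is often idealized as a Maxwell element, whose stress at constant strain decays exponentially with a characteristic relaxation time. The sketch below is a textbook idealization for illustration, not the specific constitutive model used in Mooney's work, and the numbers are invented:

```python
import math

def maxwell_stress(sigma0, t, tau):
    """Stress remaining at time t in a Maxwell element held at constant strain:
    sigma(t) = sigma0 * exp(-t / tau), where tau is the relaxation time."""
    return sigma0 * math.exp(-t / tau)

# A faster-relaxing gel (smaller tau) sheds the same initial stress sooner.
fast = maxwell_stress(100.0, 10.0, 5.0)   # kPa, illustrative numbers only
slow = maxwell_stress(100.0, 10.0, 50.0)
```

Tuning molecular weight or adding spacers, as described above, effectively shifts tau, which is why the same static stiffness can produce different cell responses.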

Clinical application of stem cells today, however, is dominated by IV infusion of individual cells, and almost all of the injected cells are gone within a day. This problem can be addressed by surgically implanting hydrogels, but Mooney illustrated a successful alternative in which the cells are individually embedded in hydrogel using droplet microfluidics. This results in better cell survival, and lets researchers use, for single cells, the gel-modification tricks developed for populations in culture.

Cells and tissue are sensitive not only to the mechanical response to forces they generate, but also to externally applied forces, as Mooney illustrated for muscle tissue. This insight led to a project, spearheaded by Ellen Roche (now at the Massachusetts Institute of Technology), to develop pneumatically actuated soft-robotic devices to assist heart function and regeneration.

The envisioned device combines two capabilities. First, a mechanical sleeve around the heart provides extra pumping assistance without directly contacting the blood. The team showed that this device significantly enhanced cardiac output in pigs. Second, therapies can be locally provided without additional surgery. This capability is especially important for cellular therapies, Mooney said, where single applications are often insufficient for regeneration.

In his final topic, Mooney described a key requirement for applying forces to tissues, which is adhesion to wet and dynamic tissues. Mooney, former colleague Jeong-Yun Sun, and their collaborators combined the ionically crosslinked polysaccharide network with a covalently crosslinked protein network. The hydrogel “crosslinks can dissipate energy, but the bonds are pretty weak,” Mooney noted. The covalent crosslinks are stronger, but the materials are brittle. The combination provides “an unprecedented level of toughness.” Combining this material with chemistry to bridge the underlying tissue and the hydrogel resulted in tough adhesives that “dramatically outperform anything that has been previously described,” including superglue, Mooney said.

The Mid-Career Researcher Award recognizes exceptional achievements in materials research made by mid-career professionals. The Mid-Career Researcher Award is made possible through an endowment established by MilliporeSigma (Sigma-Aldrich Materials Science). This year’s award was given “for pioneering contributions to the field of biomaterials, especially in the incorporation of biological design principles into materials and the use of biomaterials in mechanobiology, tissue engineering and therapeutics.”

 


Thursday's Poster Award Winners


CM01.12.03       
Spencer Hawkins, Universal Technology Corporation, Air Force Research Laboratory (AFRL)
Understanding the Onset of Damage at the Nanoscale in Polyurethane Composites Containing Silver Nanoparticle-Coated Carbon Nanofiber          


MA02.08.30
Xiaoming Zhao, Queen Mary University of London
Fullerene Single-Crystalline Nanostructures for Organic Electronics          


NM03.11.13
Charles Fan, Albuquerque Academy, Angstrom Thin Film Technologies LLC
Hierarchical Biomimetic Nanostructures for Oxygen Membranes              


SM02.01.04
Shayak Samaddar, Purdue University
Bladder Tumor Targeted Cyclic di-GMP Liposomes           


Honorable Mention
SM01.12.01
Holly Golecki, The Haverford School
Design and Characterization of Edible Soft Robotic 'Candy' Actuators     


Symposium SM05: Biomaterials for Tissue Interface Regeneration

Written by Frieda Wiley

Julia Glaum, Norwegian University of Science and Technology

Piezoelectric Performance and Cytotoxicity of Porous, Barium-Titanate-Based Ceramics for Biomedical Applications

The piezoelectric effect is the ability of certain materials, such as ceramics, bone, DNA, and crystals, to generate an electric charge in response to applied mechanical stress. Researchers are investigating this property, examining not only the piezoelectric performance but also the cytotoxicity of porous barium titanate-based ceramics for biomedical applications.

Successful tissue integration has some unusual requirements, including stability in liquid environments. The direct piezoelectric effect converts mechanical stress into electric charge, while the converse effect converts an electrical stimulus into mechanical vibration. These provide two different triggers for tissue repair.

Applications include medical ultrasound, in vivo sensing and in vivo energy harvesting, and tissue repair that mimics the stress-generated potentials in bone. The motivation of the study is to explore tissue implant materials that improve the bonding between the artificial implant and the bone.

Researchers used corn starch and poly(methyl methacrylate) (PMMA) as pore formers to create the porous template. The strength and polarization of the BaTiO3-based materials can be measured; the compounds are stable up to 35% porosity, beyond which the piezoelectric coefficient d33 drops because part of the matrix remains unpoled. A decrease in d33 is also observed as porosity increases, due to the pores formed in the matrix.
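The reported trend of d33 falling as porosity rises is sometimes captured by an empirical power-law dilution of the dense-ceramic coefficient. The sketch below is purely illustrative: the functional form, the exponent, and the ~190 pC/N value (a typical literature figure for dense BaTiO3) are assumptions, not results from the talk:

```python
def d33_porous(d33_dense, porosity, exponent=2.0):
    """Toy power-law dilution of the piezoelectric coefficient with porosity:
    d33_eff = d33_dense * (1 - p)**n. The form and the exponent n are
    illustrative fitting assumptions, not a model reported in the talk."""
    if not 0.0 <= porosity < 1.0:
        raise ValueError("porosity must lie in [0, 1)")
    return d33_dense * (1.0 - porosity) ** exponent

# d33 of a hypothetical 35%-porous sample, starting from ~190 pC/N dense.
d33_at_35pct = d33_porous(190.0, 0.35)
```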

Different pore formers lead to similar porosity; both corn starch and PMMA exhibited similar porosities. Additionally, cytotoxicity presents a major challenge. For example, it is easier to get more living cells on the ceramic sample than on polystyrene. Despite numerous challenges faced in developing different pore formers, cell proliferation and viability show promise. Leaching tests are forthcoming.


Symposium LN02: Artificial Intelligence for Materials Development Forum

The Future of Materials Informatics: Research through Artificial Intelligence

Written by Dale E. Karas

Artificial intelligence (AI) is a trending topic at professional conferences and special academic lectures, and it is pervasive in popular culture, both because of how overwhelmingly its adoption is being pursued and because of the ethical issues raised by implementing such technologies. With present-day automation taking over menial industrial labor, especially tasks intended to eliminate risk and human error, what does its seemingly inevitable development mean for humanity?

Near the turn of the century, such questions for AI were abruptly made concrete by the televised 1997 rematch between IBM's "Deep Blue" chess supercomputer and reigning world champion Garry Kasparov (considered by many to be among the best chess players in existence, along with Bobby Fischer and Paul Morphy). A year earlier, IBM's previous version of Deep Blue had underperformed and underwhelmed audiences, succumbing to mistakes that expert human players would effortlessly avoid. The contrast between the 1996 match and the incredible upset a year later made the prospects of AI all the more revolutionary: how could a computer, originally prone to a series of comedic errors, now surpass humans at such a complex series of tasks?

Due to advances in the semiconductor industry in manufacturing faster and smaller integrated circuits, and with fabrication in optics and photonics technologies becoming ever more cost-efficient and amenable to boosting computer processing speeds, memory switching, and data storage capacity, the newer field of computational "neural networks" has gained considerable interest. These are a family of algorithms used for machine learning, the acquisition and processing of large data sets, and pattern recognition.
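A minimal sketch of such a network, in pure Python with no libraries, trains a two-input, two-hidden-unit, one-output network on the logical AND pattern by backpropagation; the layer sizes, learning rate, epoch count, and seed are arbitrary demonstration choices, not anything discussed at the forum:

```python
import math
import random

def sigmoid(x):
    """Logistic activation, squashing any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def train_and_gate(epochs=5000, lr=0.5, seed=0):
    """Train a tiny 2-2-1 network on the logical AND pattern with
    per-sample gradient descent; returns (initial_loss, final_loss)."""
    random.seed(seed)
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [random.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    first = last = 0.0
    for epoch in range(epochs):
        loss = 0.0
        for x, t in data:
            # forward pass
            h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
                 for j in range(2)]
            y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
            loss += 0.5 * (y - t) ** 2
            # backward pass (squared-error loss, sigmoid activations)
            dy = (y - t) * y * (1 - y)
            for j in range(2):
                dh = dy * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh * x[0]
                w1[j][1] -= lr * dh * x[1]
                b1[j] -= lr * dh
            b2 -= lr * dy
        if epoch == 0:
            first = loss
        last = loss
    return first, last
```

The point of the toy is only that the network learns the pattern purely from examples; the same mechanics, scaled up enormously, underlie the systems discussed below.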

Two decades after the IBM-Kasparov match (Kasparov accused IBM's team of cheating by consulting human grandmasters, allegedly in the company's interest of raising its stock price through the publicity), the DeepMind team at Google met with comparable controversy over what it posited as its own analogous technological breakthrough. Its "Alpha" platform, built on computational neural networks, had mastered the game weiqi (referred to as "Go" in the Americas) with the AlphaGo architecture months prior, and leveraged the same technology for chess with the AlphaZero incarnation, beating the world's top open-source engine, Stockfish, in a match of 100 games. As with IBM's Deep Blue, AlphaZero was rapidly retired after its victory, fueling suggestions of conspiracies at work; notable FIDE title holders responded that an outdated version of Stockfish had been used and that the time controls and hardware configuration were suboptimal for Stockfish's needs. Even so, the more apparent breakthrough was not so much that classical open-source engines were obsolete, but that computational neural networks could be more adept at solving fairly complex problems than had been previously demonstrated.

At this year’s Materials Research Society (MRS) Meeting in Phoenix, Ariz., distinguished computer scientists and materials specialists hosted a special “Materials Informatics” symposium on artificial intelligence development for research prospects. The panel included such speakers as Carla Gomes (Cornell University), Subbarao Kambhampati (Arizona State University), Patrick Riley (Google Research), and Krishna Rajan and Kristofer Reyes (University at Buffalo, State University of New York).

 

What are some common myths and misunderstandings of AI?

While there are many misconceptions, plenty of hype, and a belief that certain novel algorithms and processing routines create the "magic" of AI, it must be noted that many current models we deem "AI" will happily learn whatever they are trained on; that means they will learn inconsistencies, and input errors will scale dramatically. Kambhampati remarked, "Human intelligence [in contrast] is not just learning from data. We lampoon proposals as 'post-2012' if they give too much credit to rapid learning processes." On how AI relates to deep learning implemented with neural networks, Gomes stated that "it is dangerous for this community to think everything is going to be solved by deep learning. All of it is merely regression analysis. You would not apply this to many methods of teaching, and this is where researchers need to be flexible with their approaches of implementation." Lastly, Reyes mentioned "the usual conflation between 'big data' and 'deep learning,' as they are very separate topics." The statistics underlying many of these methods are based on Bayesian inference techniques, and while causality can be determined through successive experimentation, much data processing can only determine correlation, which alone cannot establish causation.
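A toy simulation makes the correlation-versus-causation point concrete: two variables that never influence each other, but share a hidden confounder, still show a strong Pearson correlation. The scenario and variable names are invented purely for illustration:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
# A hidden confounder drives both observed variables; neither causes the other.
confounder = [random.gauss(0, 1) for _ in range(1000)]
sales = [c + random.gauss(0, 0.3) for c in confounder]     # e.g., "ice cream sales"
sunburns = [c + random.gauss(0, 0.3) for c in confounder]  # e.g., "sunburn cases"
r = pearson(sales, sunburns)  # strong correlation despite zero direct causation
```

No amount of extra data on the two observed variables alone would reveal that the correlation is confounded; that requires intervention, which is exactly the panel's point about successive experimentation.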

 

What is it going to take to get us from "data acquisition" to "understanding knowledge"? For instance, will AI robots be teaching us physics and chemistry in the next century?

A good example of this transition came from ideas about how data are arranged. Rajan posed the following question: "Could we teach the periodic table without knowing chemistry? People struggle with knowing the set of data; but what are the larger implications, and why is it important?" Just as Dmitri Mendeleev understood where the gaps were, many chemistry Nobel Prizes essentially rewarded filling in those gaps by securing the missing elements, and the same idea applies to using data processing to help turn data into understanding. The AI community is largely aware of this challenge, with major research areas focused on bridging the gap.

 

How much faster will research in general progress with AI?

The panel deemed that we can largely expect a faster accumulation of data, as is already happening in the biological community. Rajan noted, "The accumulation of data shouldn't be the goal. If one needs more data, we should instead be concentrating on what the problem is. Can we improve upon connections prior to mass accumulation?" He also lamented that "many experiments are not designed for high throughput. Data awareness is really based on where the data exists, and how does one train people to understand data, rather than just teaching pure methods?" Kambhampati additionally stated: "It is important to remember that it wasn't that people were not creating data before. It's just that the data wasn't being collected! Human beings are making trillions of bytes of data nowadays with mobile technologies and electronic forms of documentation. Data can be captured in many forms, and this has surged the 'second coming' of neural networks." Researchers, of all people, are fortunate that data capture has become an obsessive and effortless practice, which helps support such a new rulebook.

 

Some publishers are interested in posting experimental raw data online. Can you comment on opportunities and/or challenges with this?

While there was agreement that this effort holds strong pedagogical value, Riley commented that "trends to put raw data online are important for experimental repeatability, yet most members of this community believe it has relatively low value for powering future discovery, especially in potentially preventing novelties. New methods get better all the time, so we should be cautious about the practice of formalizing raw data." Raw data accumulation could possibly support demonstrations of scientific rigor, but it would probably not be helpful in the long term, and it would add complexity to deciding what the most crucial data storage needs are.

 

What kind of data, in particular, is suitable for training neural networks? How can materials scientists support AI opportunities?

Rajan noted: "Let's pick problems that I've already worked on. Can I use these methods to discover something I didn't know? Data science shows that fundamental physics parameters get masked when there is a plethora of variables." It was suggested that the role of materials scientists should be to contribute what they already know, rather than artificially scrounging for popular techniques from the AI community.

In the coming decades, researchers will thoroughly put these ideas to the test. Establishing an array of formalisms for research methodology will be pivotal not only for developing more unified policies and ethics on the subject of AI, but also for helping scientists understand how to pose research questions and how to measure research success. The panel concluded that the most immediate accelerant for data may be encouraging transparency about failures, that is, about what does not work in research. If there were a way of capturing all the false steps along with the good ones, it would serve many long-term goals for deep learning. Even "pseudo" experiments that find comparable outputs given other inputs could be of enormous value, for a publishing venue free of page limits and critical publisher reviews. We may also come to accept the lack of proprietary technical disclosure, as with the game-theory data engines of IBM and Google, once such a formalism supporting best research practices is thoroughly understood for both its risks and its value.


Symposium SM06: The Future of Neuroengineering—Relevant In Vivo Technology

Andrew Steckl, University of Cincinnati

Exploring a Real Artificial Brain—Challenges and Opportunities Using a Semi-Soft Approach

Written by Frieda Wiley

Unlike many other areas of materials science and sensory-based research, artificial brain research can be fairly abstract and difficult to quantify. Researchers at the University of Cincinnati seek to explore whether they can build an artificial brain that mimics the functions of an authentic human brain yet can be implanted and connected to the real thing.

In addition to the abstract nature of the topic, materials scientists face further hurdles: identifying the appropriate materials and the electronic properties those materials need; determining how much energy will be consumed; and elucidating how to accomplish the multitude of connections required for the artificial brain to assimilate the activities of a human brain.

Part of the assimilation process lies in the structure of the artificial brain, which, like a human brain, should have a hydrophilic exterior and a hydrophobic interior. Functionality is provided by an electrochemical transistor (ECT) based on core/sheath organic fibers.

Data regarding successful integration practices are lacking, contributing to the multiple unknowns in this task. However, attempting to answer two groups of questions may help scientists collect the data they need to move forward: addressing lower-level functions, such as those carried out by nodes or neurons, and exploring the connectivity relationships between axons and synapses. Higher-level functions such as learning and memory will help address additional questions.

Current approaches to neuromorphic research include digital devices, digital computer simulation (i.e., software), and analog devices, which may be either organic or inorganic.

While researchers have identified some of the unknowns in this multifactorial equation, they have yet to elucidate interconnections, testing, and integration.


Symposium SM05: Biomaterials for Tissue Interface Regeneration

Hae Won Hwang, Korea Institute of Science and Technology

Realization of the Tissue-Regenerative All-Metallic Implants

Written by Frieda Wiley

There are three steps in the process of healing bone tissue: formation of a hematoma, callus formation and angiogenesis, and ossification and bone remodeling. Researchers are employing reactive oxygen species-based functional metallic implants to accomplish this task. Challenges to commercializing this approach make long-term quantitative analysis a necessity.

Researchers observed that in the bone-healing process using a titanium-based implant, the concentration of hydrogen peroxide decreases. This presents a problem, as the bone-healing process requires long-term exposure to hydrogen peroxide to heal. Facilitating the diffusion of hydrogen peroxide can help overcome this challenge.

Researchers conducted two experiments to analyze hydrogen peroxide generation. The first compared stable, biologically inert implant materials, such as titanium alloys, with reactive, biodegradable metals; the team considered magnesium, zinc, and iron, and ultimately selected zinc. Compared to the control, zinc's corrosion duration of roughly two weeks falls within the angiogenesis phase of healing.

The second part of the experiment explored diffusion, utilizing fibrin gel because the polymerized fibrin forms a hemostatic clot over a wound site. This is significant because fibrin is the most abundant extracellular matrix after the formation of hematoma in the fractured bone area.
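Hydrogen peroxide transport through such a fibrin gel is commonly idealized with Fick's second law. Below is a minimal explicit finite-difference sketch in which a fixed-concentration boundary stands in for the corroding implant surface; the geometry, diffusion coefficient, and boundary conditions are illustrative assumptions, not the researchers' model:

```python
def diffuse_1d(conc, d, dx, dt, steps, surface_value):
    """Explicit (FTCS) finite-difference update of Fick's second law in 1D,
    with a fixed concentration at x=0 (the corroding implant surface) and
    a zero-flux condition at the far boundary."""
    r = d * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for d*dt/dx^2 > 0.5"
    c = list(conc)
    for _ in range(steps):
        c[0] = surface_value          # H2O2 held at the source concentration
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[-1] = new[-2]             # zero-flux outer boundary
        c = new
    return c

# Illustrative parameters: D ~ 1e-9 m^2/s (order of small solutes in water),
# 0.1 mm grid spacing, 1 s time step, 1000 s of simulated release.
profile = diffuse_1d([0.0] * 20, 1e-9, 1e-4, 1.0, 1000, 1.0)
```

The resulting profile decreases monotonically away from the implant surface, which is the qualitative picture behind sustaining long-term hydrogen peroxide exposure at the fracture site.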

Researchers found that utilizing zinc extended the production of hydrogen peroxide by 2 weeks.


Symposium EN17: Fundamental Materials Science to Enable the Performance and Safety of Nuclear Technologies

Highlights of the Thursday morning session

Written by Dale E. Karas

Concluding the Symposium for the week, the Thursday morning session included many computational and experimental strategies to assess materials for improving the operation and criticality safety of nuclear technologies.

Many strategies employ density functional theory (DFT) simulations, which infer atomistic-level energy interactions, to assess uranium silicide (U3Si2) as an alternative fuel to uranium dioxide (UO2), based on its high thermal conductivity, for use in novel reactors.

David Andersson, a staff scientist at Los Alamos National Laboratory (LANL), works within the Fuels Focus Area in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. He explores the characteristics of U3Si2's higher fissile density by calculating the reaction kinetics and thermodynamics of interacting elemental U and Si point defects, developing a framework of extended calculation summaries usable by other researchers.
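Point-defect energetics in such DFT studies are conventionally summarized as a formation energy computed from supercell total energies, E_f = E_defect - E_perfect - sum_i n_i * mu_i, where n_i atoms of species i are added (n_i < 0 for removal) at chemical potential mu_i. A minimal sketch with invented numbers, ignoring charged-defect and finite-size corrections:

```python
def defect_formation_energy(e_defect, e_perfect, atoms_exchanged, chem_potentials):
    """Formation energy of a neutral point defect from DFT supercell energies:
    E_f = E_defect - E_perfect - sum_i n_i * mu_i,
    where n_i > 0 atoms of species i were added to the supercell (n_i < 0
    removed) at chemical potential mu_i. All energies in eV; the numbers
    used below are illustrative, not values from the talks."""
    exchange = sum(n * chem_potentials[species]
                   for species, n in atoms_exchanged.items())
    return e_defect - e_perfect - exchange

# Hypothetical uranium vacancy: one U atom removed from the supercell.
e_f_vacancy = defect_formation_energy(-100.0, -102.0, {"U": -1}, {"U": -8.0})
# Hypothetical silicon interstitial: one Si atom added.
e_f_interstitial = defect_formation_energy(-110.0, -102.0, {"Si": 1}, {"Si": -5.0})
```

Collecting such formation energies over many defect configurations is what feeds the kinetics and thermodynamics frameworks described above.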

Michel Freyss, from CEA Cadarache, discussed analysis of a uranium-plutonium mixed oxide fuel for novel Generation IV reactors, as being implemented in France. Because the fuel is difficult to handle due to its high radiotoxicity, efforts to characterize its reaction conditions were performed with a modified DFT approach known as DFT+U, in which stronger on-site Coulomb interactions require additional energy terms from the Hubbard model (which describes transitions between conducting and insulating states, or electron transfer between different atomic regions).

Vancho Kocevski, a postdoctoral researcher at the University of South Carolina, described Fe-Cr-Al-Y alloyed cladding materials for use with the U3Si2 candidate fuel. Material phase diagrams based on the U-Fe-Si ternary system were computationally inferred with DFT methods to develop a theoretically derived CALPHAD (CALculation of PHAse Diagrams) description.

Collectively, these methods are adaptable to many simulation calculations that support experimental work on the implementation of nuclear materials.