ML4Sci #18: Unveiling the Predictive Power of Static Structure in Glassy Systems; Learning to Simulate and Design for Structural Engineering

Using graphs to represent buildings and glasses; also, reverse engineering additive manufacturing designs with LSTMs

Hi, I’m Charles Yang and I’m sharing (roughly) weekly issues about applications of artificial intelligence and machine learning to problems of interest for scientists and engineers.

If you enjoy reading ML4Sci, send us a ❤️. Or forward it to someone who you think might enjoy it!

Share ML4Sci

As COVID-19 continues to spread, let’s all do our part to help protect those who are most vulnerable to this pandemic. Wash your hands frequently (maybe after reading this?), wear a mask, check in on someone (potentially virtually), and continue to practice social distancing.


Unveiling the predictive power of static structure in glassy systems

Published April 06, 2020

From the good folks at DeepMind, publishing in Nature Physics

Graph neural networks (GNNs) have found a variety of use cases in the molecular sciences, as we’ve covered in the past [see ML4Sci #6 for quantum chemistry and ML4Sci #4 for discovering new antibiotics]. They’re also apparently trending in ICLR 2020 submissions. This new paper from DeepMind uses message-passing graph neural networks to simulate glass phase dynamics.

Understanding how glasses form is a fundamental question in materials science, related to crystallization and how structure impacts properties. This new work demonstrates better performance than traditional physics-based models at predicting the evolution of glasses over longer time frames. The authors also interrogate the trained graph neural network by varying material parameters to determine the feature importances (in the sciences, we call these mechanisms).
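For readers new to message passing, a single round can be sketched in a few lines of plain Python. This is a generic illustration of the mechanism only, not DeepMind’s actual architecture, which uses learned neural update functions, edge features, and many rounds of propagation:

```python
# Minimal message-passing sketch on a toy graph (illustrative only).
# Each node carries a feature vector; one round updates every node by
# aggregating its neighbors' features and combining with its own.

def message_passing_step(node_feats, edges):
    """One round: aggregate incoming neighbor features, then update each node.

    node_feats: dict node -> list[float]
    edges: list of (src, dst) pairs (directed)
    """
    dim = len(next(iter(node_feats.values())))
    # 1) Message phase: each edge sends the source node's features to the target.
    incoming = {n: [0.0] * dim for n in node_feats}
    for src, dst in edges:
        for i, v in enumerate(node_feats[src]):
            incoming[dst][i] += v
    # 2) Update phase: combine own features with aggregated messages.
    #    A real GNN uses a learned network here; we simply average.
    return {
        n: [(own + msg) / 2.0 for own, msg in zip(node_feats[n], incoming[n])]
        for n in node_feats
    }

# Toy "glass": 3 particles, with edges between particles within a cutoff radius.
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
updated = message_passing_step(feats, edges)
```

Stacking several such rounds lets information propagate beyond immediate neighbors, which is what lets the model pick up on the medium-range structure that matters in glasses.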

[paper][blog][code][quanta magazine]

Learning to Simulate and Design for Structural Engineering

Published July 09, 2020

From Autodesk Research

Treating buildings as graphs apparently also does pretty well. This work from Autodesk uses a graph neural network with a heavily customized loss function to optimize building designs. The loss function is designed to cover different constraints and optimization concerns for buildings. The motivation for this project, in addition to *labor automation*, is to reduce building material usage, which has a fairly large climate impact.
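The flavor of such a constraint-aware loss can be sketched as a weighted sum of competing objectives. The terms and weights below are illustrative stand-ins, not the actual loss from the Autodesk paper:

```python
# Sketch of a multi-term design loss: minimize material usage while heavily
# penalizing structural constraint violations (illustrative stand-in terms,
# not the Autodesk paper's actual formulation).

def design_loss(material_volume, drift_violation, stress_violations,
                w_material=1.0, w_drift=10.0, w_stress=100.0):
    """Weighted sum of objectives: material use is the quantity to minimize,
    while constraint violations (e.g. excess story drift, overstressed
    members) carry much larger weights so feasible designs are preferred."""
    return (w_material * material_volume
            + w_drift * max(0.0, drift_violation)   # penalize only violations
            + w_stress * float(stress_violations))

loss = design_loss(material_volume=120.0, drift_violation=0.02,
                   stress_violations=3)
```

The weighting is where the engineering judgment lives: set the constraint weights too low and the optimizer happily trades safety for material savings.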

AI is disrupting engineering fields whose numerical models have reached “good enough” status and which find themselves saturated with increasingly incremental optimization schemes out in the real world. Autodesk, as a software provider for a wide variety of engineering firms, is ostensibly trying to further its defensive moat by developing AI-powered software for its customers before it’s disrupted by new competitors.

[paper][ICML version]

🔒Reverse engineering of additive manufactured composite part by toolpath reconstruction using imaging and machine learning

Published June 21, 2020

The barrier to entry for high-precision manufacturing tends to be quite high, both in terms of capital cost and intellectual capital. One need look no further than the US’s persistent attempts to prevent China from obtaining the extremely expensive equipment needed to produce high-end semiconductors for an example of how these barriers to entry can be difficult to surmount without help, even for motivated nation-states like China. But what happens when additive manufacturing (AM), which has much lower barriers to entry both in terms of intellectual capital and financial resources, begins to be used in export-sensitive industries like aerospace, automotive, and defense?

Because additive manufacturing tends to be cheaper and enables much larger design spaces, the barrier to entry is not the hardware, but actually the software that is used to implement designs. [See Andreessen Horowitz essay on why software is eating the world, published 9 years ago now, but still remarkably prescient]. Software is so critical to AM that Lawrence Livermore National Lab has even published a survey of additive manufacturing attacks and security problems.

This paper covers how LSTMs, trained on CT scans of a 3D printed product, can be used to reverse engineer the designs of 3D-printed samples, specifically the toolpath used to print a sample. The asymmetry in difficulty between model training and reverse engineering is strikingly reminiscent of how cloud-based ML models, painstakingly crafted and trained by researchers, can be easily reverse engineered by any user in a sample-efficient manner through knowledge distillation.
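The core framing is sequence prediction: features extracted from successive CT slices go in, toolpath positions come out. The toy recurrent cell below shows that data flow with fixed weights; the paper’s actual model is a trained LSTM with learned gates, and its input features differ:

```python
import math

# Toy sketch of toolpath reconstruction framed as sequence prediction
# (illustrative only -- the paper's actual LSTM model and features differ).
# Input: a sequence of scalar "CT-slice features"; output: one predicted
# print-head value per step, carried through a single recurrent cell.

def rnn_step(h, x, w_h=0.5, w_x=0.5, b=0.0):
    """One recurrent update: new hidden state from previous state and input."""
    return math.tanh(w_h * h + w_x * x + b)

def predict_toolpath(slice_features):
    """Run the toy recurrent cell over a feature sequence.

    A trained LSTM would use learned gates and weights; the constants here
    exist only to make the sequential dependence visible."""
    h = 0.0
    outputs = []
    for x in slice_features:
        h = rnn_step(h, x)
        outputs.append(h)  # read the hidden state out as the "prediction"
    return outputs

path = predict_toolpath([0.2, 0.4, 0.1])
```

Because each output depends on the hidden state accumulated from earlier slices, the model can exploit the layer-by-layer ordering that an AM toolpath leaves imprinted in the part.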

Similarly, engineers at an automotive company may painstakingly design the right toolpath and procedure to print a mission-critical 3D part. A competitor (or hostile foreign nation-state) can gain years of R&D simply by buying the product and reverse-engineering the toolpath, without ever having to steal the software. In other words, the hardware is imprinted with the software it is designed with.

I previously covered how AI will transform manufacturing into a service; this work using AI to reverse-engineer AM definitely complicates the picture. See also: Andreessen Horowitz on why AI is not a defensible barrier, scroll down to find a story on Porsche’s new 3D printed parts that were designed with AI.

Side note: I wrote recently about the University of California’s suspension of their publishing agreement with Elsevier, which means I don’t have institutional access to the paper. As a result, I had to access this paper through a 3rd party, who has now taken it down. This does make it slightly difficult to edit what I wrote here, but I see these as growing pains as institutions begin pushing back against closed-access scientific publishing.

[pdf]

📰In the News

ML

PrototypeML: A neural network IDE [NeurIPS Preprint]. It used to be that everyone wrote their own code for neural networks. Then came tensorflow, pytorch, and keras. Now, we have code-free visual IDE’s for building neural networks! Figure 1 from the preprint:

🍎Machine Learning Research at Apple

Google’s AI principles: an update that highlights their work on Model cards and an internal tech ethics course

Google’s custom TPU supercomputer dominates MLPerf hardware benchmarking competition. BERT, a model architecture for NLP designed by Google researchers and used to improve Google’s search engine, can be trained in 30 seconds on Google’s custom TPU supercomputers. Think about what this might look like if they applied their compute, expertise, and data to something else 👀

A nice MIT Tech Review covering the hubbub around OpenAI’s GPT-3 model

🐦Shortcuts: How Neural Networks love to cheat [TheGradient]. With an illustrative example of using pigeons as neural network classifiers.

Deep Learning's Most Important Ideas - A Brief Historical Review

Science

🎉The paper “Learning Graph Models for Template-Free Retrosynthesis”, which we covered in ML4Sci #15, was awarded the ICML 2020 Best Original Research Paper for Graph Representation Learning (GRL). Congrats to the authors!

An interesting contrarian article from AlgorithmWatch: “Can AI mitigate the climate crisis? Not really”. It’s important even for those of us working on AI to be careful about “hypeparameter tuning” and recognize the larger context of the problems we work on.

Porsche tests 3D printed pistons [CNET]. Keep an eye out for how AI and parallel trends such as automation and additive manufacturing will change how we couple or vertically integrate products and production.
See also: ML4Sci #15 for a paper that highlights exactly these two trends and how it might change material design, my essay on how AI is creating a new “Science as a Service” industry

From the article:

The process starts with redesigning the piston with the advantages of 3D printing in mind. Porsche used computer simulation and artificial intelligence to optimize the piston's supporting shape and underlying topology, removing material where not necessary and adding where needed. The result is what the automaker calls a "bionic" design utilizing an organic structural shape… Porsche was also able to integrate a cooling duct into the piston's structure and separately engineer special twin-jet oil nozzles. These additive manufacturing feats would be impossible, or at least extremely difficult, to pull off with machining or casting, the automaker claims… Each 3D printed aluminum piston is 10-percent lighter than the forged piston that the GT2 RS normally uses and runs more than 20 degrees cooler in the piston ring area thanks to the new duct. That means the engine can rev higher, unlocking additional power.

SciML.ai - a Julia ecosystem for scientific ML. Check out the online presentations from the 7th annual JuliaCon, which cover developments in the young Julia language

🦠An Early Warning Approach to Monitor COVID-19 Activity with Multiple Digital Traces in Near Real-Time - using data fusion across social media and population models to determine spread of COVID-19

A new article in the Lancet on using AI for prostate cancer diagnosis. From the abstract:

… a blinded clinical validation study and deployment of an artificial intelligence (AI)-based algorithm in a pathology laboratory for routine clinical use to aid prostate diagnosis…We report on the first case of missed cancer that was detected by the algorithm.

The Science of Science

Building a Better Search Engine for Semantic Scholar, from Allen AI Institute [Medium][github]

💯World-renowned mathematician Terence Tao reflects on almost failing his “general exam” (qualifying exam) at Princeton as a graduate student

A bit old, but still relevant: “Ideas for Improving the Field of Machine Learning: Summarizing Discussion from the NeurIPS 2019 Retrospectives Workshop”. For those who don’t know, synthesis of ideas across different disciplines requires a fair amount of reflection, and it’s something I try to infuse throughout this newsletter. Oftentimes, it’s helpful to see what kinds of challenges others face, and more importantly, how they learn from these challenges. Here are the 4 main challenges identified in this paper (many of which I’ve touched on throughout this newsletter):

  1. Incentivizing openness and alternate forms of scholarship

  2. Re-structuring the review process

  3. Participation from academia and industry

  4. Training computer scientists to be better scientists

🌎Out in the World of Tech

Eric Schmidt, former CEO of Google, writes a NYT Op-ed: “I Used to Run Google. Silicon Valley Could Lose to China.”

Four Top Tech C.E.O.s Testify in Antitrust Inquiry [NYT][TechCrunch]
[TheVerge] does a post-inquiry reflection on how it went

🛰️Amazon closer to launching satellites, upping internet reach

Thanks for Reading!

I hope you’re as excited as I am about the future of machine learning for solving exciting problems in science. You can find the archive of all past issues here and click here to subscribe to the newsletter.

Have any questions, feedback, or suggestions for articles? Contact me at ml4science@gmail.com or on Twitter @charlesxjyang21