ML4Sci #23: Synthesis planning with Literature-trained Neural Networks; Kohn-Sham Equations as regularizer;
Also, the 2020 State of AI Report
Hi, I’m Charles Yang and I’m sharing (roughly) weekly issues about applications of artificial intelligence and machine learning to problems of interest for scientists and engineers.
If you enjoy reading ML4Sci, send us a ❤️. Or forward it to someone who you think might enjoy it!
As COVID-19 continues to spread, let’s all do our part to help protect those who are most vulnerable to this epidemic. Wash your hands frequently (maybe after reading this?), wear a mask, check in on someone (potentially virtually), and continue to practice social distancing.
Inorganic Materials Synthesis Planning with Literature-Trained Neural Networks
First published Dec. 31, 2018
tl;dr: Some exciting work from the Olivetti group at MIT and collaborators - they apply advanced NLP techniques to a massive corpus of materials synthesis literature and use their models to propose synthesis routes for two theoretically proposed perovskite materials.
The NLP workflow in this paper is quite a tour de force: an RNN identifies the synthesis sections of papers, context-sensitive ELMo word embeddings are computed and passed into another RNN for named-entity recognition (tuned for materials-science-specific entities), and finally an unsupervised conditional VAE (CVAE) learns synthesis routes. After training on 2.5M materials science journal articles, they demonstrate that their NLP-based model provides more fine-grained suggestions than simple thermodynamic analysis. The authors also do some cool analysis of the synthesizability of different materials, demonstrating the power of continuous latent-space representations of knowledge that was previously buried in the literature.
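To make the last stage a bit more concrete, here is a minimal sketch of a conditional VAE over extracted synthesis actions, conditioned on a target-material embedding. This is my own toy illustration, not the authors' released code: the action vocabulary, dimensions, and multi-hot encoding are placeholder assumptions, and the real pipeline feeds this stage with the ELMo/NER outputs described above.

```python
# Toy sketch (not the authors' code) of a conditional VAE over synthesis actions,
# conditioned on an embedding of the target material.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ACTIONS = 50      # hypothetical size of the synthesis-action vocabulary
COND_DIM = 32       # hypothetical dimension of the target-material embedding
LATENT_DIM = 8

class SynthesisCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_ACTIONS + COND_DIM, 128), nn.ReLU(),
            nn.Linear(128, 2 * LATENT_DIM),          # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + COND_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),               # logits over synthesis actions
        )

    def forward(self, actions, material):
        # Encode the observed synthesis route together with its target material
        h = self.encoder(torch.cat([actions, material], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        logits = self.decoder(torch.cat([z, material], dim=-1))
        return logits, mu, logvar

def cvae_loss(logits, actions, mu, logvar):
    # Reconstruction of the multi-hot action vector + KL term pulling the
    # approximate posterior toward a standard normal prior
    recon = F.binary_cross_entropy_with_logits(logits, actions, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Dummy batch: 4 papers, each a multi-hot vector of extracted synthesis actions
actions = (torch.rand(4, N_ACTIONS) > 0.9).float()
material = torch.randn(4, COND_DIM)      # stand-in for a learned material embedding
model = SynthesisCVAE()
logits, mu, logvar = model(actions, material)
print(cvae_loss(logits, actions, mu, logvar).item())
```

Conditioning both the encoder and decoder on the material embedding is what makes it possible to sample candidate synthesis routes for a new target composition at inference time.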
Some meta-comments on this work:
they open-source the code, model, and embeddings (but importantly, not the articles themselves). Perhaps different research groups could train on private literature corpora but open-source the learned model representations? Still, this is by definition a privileged, gated approach to research
this paper was first posted on arXiv on Dec. 31, 2018. It was published in the peer-reviewed, closed-access Journal of Chemical Information and Modeling on Jan. 7, 2020, over a year later.
Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics
Published Sept. 17, 2020
Much of Density Functional Theory (DFT) rests on the improvements introduced by the Kohn-Sham equations. However, solving these equations is quite time-consuming, so naturally, everyone is trying to replace them with machine learning.
This work instead uses the Kohn-Sham equations as an implicit regularizer for the ML model. The authors demonstrate that by incorporating this prior knowledge, they achieve significantly better generalization from smaller training sets. A good example of how to tightly integrate ML with prior knowledge, why doing so often helps in scientific domains, and why ML isn’t going to be automating away domain expertise anytime soon (a toy sketch of the idea follows below).
Fig 1: KSR-global generalizes better than direct ML and weaker regularization variants
[paper]
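Here is a toy sketch of the "solver as regularizer" idea, under heavy simplifying assumptions: the fixed-point map below is a made-up stand-in for the Kohn-Sham self-consistent field loop (the paper works with a real 1D DFT implementation), but it shows the key trick of placing the learned term inside an unrolled iterative solver and backpropagating through the whole thing.

```python
# Toy sketch of "solver as regularizer" (not the paper's DFT code): a small network
# predicts a correction term inside an unrolled fixed-point iteration, and the loss
# is applied to the converged output, so gradients flow through the solver.
import torch
import torch.nn as nn

class LearnedCorrection(nn.Module):
    # Stand-in for a learned exchange-correlation-style term
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 16))

    def forward(self, density):
        return self.net(density)

def unrolled_scf(v_ext, correction, n_iter=10, mix=0.5):
    # Toy self-consistent loop: the density responds to an effective potential that
    # includes the learned correction; linear mixing for stability.
    density = torch.zeros_like(v_ext)
    for _ in range(n_iter):
        v_eff = v_ext + correction(density)
        new_density = torch.softmax(-v_eff, dim=-1)   # fake "solve" step
        density = mix * new_density + (1 - mix) * density
    return density

model = LearnedCorrection()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy training data: external potentials and reference densities
v_ext = torch.randn(32, 16)
target = torch.softmax(-v_ext + 0.1 * torch.randn(32, 16), dim=-1)

for step in range(5):
    pred = unrolled_scf(v_ext, model)    # backprop goes through all SCF iterations
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```

A "direct ML" baseline would instead map v_ext straight to the target with a single network; constraining the network to act only inside the physics loop is, roughly, what is meant by using the Kohn-Sham equations as an implicit regularizer.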
📰In the News
ML
🐦Trending on my Medium feed from Towards Data Science: “How to Make a GPT2 Twitter Bot” - I don’t think we’re ready for this…
An excellent read: the 2020 State of AI Report by Nathan Benaich and Ian Hogarth. Nice overview of the political economy of AI, the public-private tension in research, and future trends across different AI fields. Definitely a heavier bio emphasis - I would have liked to see more discussion of physics/materials science, but maybe that's just me
From AI@Google: massive, large-scale distributed training for RL. What caught my eye isn’t the reinforcement learning itself, but the fact that they benchmark their performance on a chip-placement task. Another vertical Google is trying to disrupt with AI, with big implications for Intel, NVIDIA, Apple, and Qualcomm.
Paper under review at ICLR 2021: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale - Transformers, the neural network architecture that has been upending the NLP community for the past few years, may have made another leap forward into computer vision as well (a minimal patch-embedding sketch follows after this list). As a tangent - I liked this YouTube video discussing why peer review is broken and how easy it is to figure out who actually wrote this paper, supposedly under double-blind review
PapersWithCode partners with arXiv to help researchers find research code much more easily [Medium]. I’ve attached a screenshot of the new UI for finding code for a paper I covered in this issue - this is a great development for tightening the loop between ideas and implementations!
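As promised above, a minimal sketch of the ViT patch-embedding step - the core "an image is a sequence of 16x16 word-like patches" idea. Sizes are illustrative, and the real model adds a class token, position embeddings, and a standard Transformer encoder on top.

```python
# Minimal patch-embedding sketch: split an image into 16x16 patches and project
# each patch to a token embedding. Sizes are illustrative, not the paper's config.
import torch
import torch.nn as nn

patch, dim = 16, 128
# A conv with kernel_size = stride = patch size is equivalent to slicing the image
# into non-overlapping patches and applying a shared linear projection to each.
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

img = torch.randn(1, 3, 224, 224)            # one RGB image
tokens = to_tokens(img)                      # (1, dim, 14, 14)
tokens = tokens.flatten(2).transpose(1, 2)   # (1, 196, dim): a 196-token "sentence"
print(tokens.shape)
# These tokens (plus a class token and position embeddings) would then feed a
# standard Transformer encoder.
```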
Science
Learning Optimal Solutions for Extremely Fast AC Optimal Power Flow - using a simple neural network to tackle the important problem of balancing power flow on the grid when dealing with dynamic, distributed renewable energy resources, e.g. rooftop solar panels (a toy sketch of this "learned solver surrogate" pattern follows after this list).
“Generating Crazy Structures”[ScienceMag] - Another even-keeled blog post from Derek Lowe about the flaws in generative models for organic chemistry
🔒“Rational design of transition metal single-atom electrocatalysts: a simulation-based, machine learning-accelerated study” with a 130,000x speedup in DFT calculations
Building the Mathematical Library of the Future[QuantaMagazine] - the computational paradigm reaches mathematical discovery
Superconducting nanowire spiking element for neural networks - building nano-scale hardware for spiking neural networks, which are biologically plausible, stochastic neural networks. Definitely evokes ideas from the Hardware Lottery paper about how hardware architectures shape which kinds of algorithms get developed. Keep an eye out for neuromorphic computing, along with more “traditional” deep-learning hardware-acceleration startups
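As a toy illustration of the "learned solver surrogate" pattern mentioned in the AC-OPF item above (not that paper's actual architecture or loss): a small network is trained to imitate the outputs of a conventional OPF solver, so at deployment a single forward pass replaces re-solving the optimization whenever loads change. A real system would also need to enforce grid feasibility constraints, which this sketch ignores.

```python
# Toy sketch of a learned OPF surrogate: a small network maps bus load demands to
# generator setpoints, trained on solutions from a conventional AC-OPF solver.
# Problem sizes and data below are placeholders.
import torch
import torch.nn as nn

N_LOADS, N_GENS = 30, 6        # hypothetical grid with 30 loads and 6 generators

surrogate = nn.Sequential(
    nn.Linear(N_LOADS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_GENS),    # predicted real-power setpoints for each generator
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Stand-in dataset: (load profile, solver-computed optimal dispatch) pairs
loads = torch.rand(1024, N_LOADS)
optimal_dispatch = torch.rand(1024, N_GENS)   # would come from an AC-OPF solver

for epoch in range(3):
    pred = surrogate(loads)
    loss = nn.functional.mse_loss(pred, optimal_dispatch)
    opt.zero_grad()
    loss.backward()
    opt.step()
# At deployment, a forward pass replaces re-solving the OPF each time loads change,
# which is where the speedup comes from.
print(loss.item())
```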
The Science of Science
Michael Nielsen’s 2019 Twitter thread about open science is a great explanation of…well, everything related to open access
📝From 2018: “Some scientists publish more than 70 papers a year. Here’s how—and why—they do it”[ScienceMag] - arguably the outcome of skewed incentive structures around tenure and how we evaluate early-career faculty
“Abuse isn’t an ‘advising style’: The consequences of MIT sheltering abuse behind mentorship”[Medium] - Again, skewed incentives
Philip Guo’s “The PhD Grind” - one grad student’s story of, you guessed it, misaligned incentives in academia and the dynamics between advisors and students. Written in 2012; he’s now a professor at UC San Diego
👁️“Seeing Theory: a visual introduction to probability and statistics” - one day, we will wonder why anyone ever learned anything from a static textbook
An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process - a great paper demonstrating how an open review process allows for systematic analyses of biases in the review process (gender, institution, reproducibility, etc.)
🌎Out in the World of Tech
Amazon funds New York AI center at Columbia University
NVIDIA announces (1) construction of a new supercomputer in Cambridge and (2) AI-powered videoconferencing that uses GANs to cut bandwidth use to roughly 1/10 (analogous to compressing high-resolution spectroscopic data)
The UK loses 16,000 reported COVID-19 cases because of an Excel error
Policy and Regulation
“The visa woes that shattered scientists’ American dreams”[NatureNews] - five human faces and stories behind immigration regulations
People are beginning to use deepfake videos in US political ads: [Parkland Shooting] [Phil Ehr campaign video]
Two articles on Trump and science from Nature News: [How Trump has damaged science], [4-year timeline of Trump and science]
Thanks for Reading!
I hope you’re as excited as I am about the future of machine learning for solving exciting problems in science. You can find the archive of all past issues here and click here to subscribe to the newsletter.
Have any questions, feedback, or suggestions for articles? Contact me at ml4science@gmail.com or on Twitter @charlesxjyang