ML4Sci #26: Fourier Neural Operator for PDEs; Multi-fidelity Graph Networks for Deep Learning the Experimental Properties of Ordered and Disordered Materials

Also, SketchGraph, ChemBERTa and AI for prostate cancer

Hi, I’m Charles Yang and I’m sharing (roughly) weekly issues about applications of artificial intelligence and machine learning to problems of interest for scientists and engineers.

If you enjoy reading ML4Sci, send us a ❤️. Or forward it to someone who you think might enjoy it!

Share ML4Sci

As COVID-19 continues to spread, let’s all do our part to help protect those who are most vulnerable to this epidemic. Wash your hands frequently (maybe after reading this?), wear a mask, check in on someone (potentially virtually), and continue to practice social distancing.

Fourier Neural Operator for Parametric Partial Differential Equations

Published October 20, 2020

This new work out of Anandkumar’s group at Caltech uses a pretty simple idea to enhance deep learning’s ability to predict how partial differential equation systems evolve. By using a “Fourier Neural Operator”, i.e. performing convolutions in the Fourier domain rather than the spatial domain, this paper demonstrates significantly faster inference for real-time predictions of such systems, without loss in accuracy.

This paper encapsulates a theme present throughout deep learning architecture design: better designs use a more natural/intelligent/fitting representation for the data and allow the “information” to flow better. There are nice analytical reasons why the Fourier domain makes sense for solving differential equations, and integrating it into a convolutional neural network is a nice example of physics-informed deep learning design. It’s another example of how leveraging the domain knowledge of the problem leads to different architectures than those traditionally used for standard computer vision problems. Also…the rapper MC Hammer tweeted about this paper.😲
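The core trick above can be sketched in a few lines. This is a toy 1-D version of a spectral convolution (not the authors’ implementation — their released code handles multiple dimensions, batching, and learned weights per channel): transform to the Fourier domain, multiply a truncated set of low-frequency modes by learned complex weights, and transform back.

```python
import numpy as np

def spectral_conv_1d(x, weights, n_modes):
    """Toy spectral convolution: FFT -> weight the lowest n_modes
    Fourier coefficients -> inverse FFT back to the spatial domain.
    x: (n_points,) real signal; weights: (n_modes,) complex."""
    x_ft = np.fft.rfft(x)                         # to Fourier domain
    out_ft = np.zeros_like(x_ft)
    out_ft[:n_modes] = x_ft[:n_modes] * weights   # learned pointwise multiply
    return np.fft.irfft(out_ft, n=x.size)         # back to spatial domain

# Usage: filter a noisy signal with random stand-in "learned" weights
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 2 * np.pi, 64)) + 0.1 * rng.standard_normal(64)
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)
y = spectral_conv_1d(x, w, n_modes=8)
```

Because the multiply happens per Fourier mode, the layer is resolution-independent and cheap (an FFT plus a pointwise product), which is where the fast inference comes from.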

[paper][code][MIT Tech Review][twitter thread]

Multi-fidelity Graph Networks for Deep Learning the Experimental Properties of Ordered and Disordered Materials

Published Sept. 11, 2020

Using graph neural networks to replace density functional theory (DFT) predictions for different crystal structures is becoming a common benchmark problem in the materials community. But there are many different DFT models out there: some slow and accurate, others fast and less accurate. How can we train a deep learning model to learn the underlying “true” value hidden within multiple DFT model predictions? This problem is exacerbated by the fact that we have large pre-existing datasets with low-fidelity DFT predictions and some small disparate datasets of higher-quality DFT predictions. Using all of them in some intelligent fashion is probably better than training a model for each individual dataset.

This work trains a graph neural network on multiple DFT fidelities by using an embedding layer for different DFT models and demonstrates enhanced performance when incorporating larger datasets with low-fidelity models. To validate their approach, the authors also test their model on experimentally measured disordered crystal systems and show that it fits the data well.

Having many different model resolutions is common throughout the physical sciences - this paper is one example of how learning to incorporate multiple models can lead to improved ML performance. Another example of how scientists have to deal with unique challenges and data provenance issues, which lead to different architectures compared to traditional tech-driven AI innovation.


SketchGraphs: A Large-Scale Dataset for Modeling Relational Geometry in Computer-Aided Design

Published July 16, 2020

ML4Sci as a field is still fairly theoretical and academic, but this work is a good step in using AI for commercialized engineering software, namely CAD modelling. The authors released a dataset of 15M CAD models and demonstrate AI’s ability to automate constraint inference and generative modelling using message-passing graph neural networks (which we’ve been seeing used quite often in a variety of fields). A good amount of engineering and design today requires expertise in numerical modelling software, and AI is beginning to “eat” into that too. Everyone likes to talk about AI for synthetic biology, but personally I’m more excited about AI for industrial engineering, like FEM and CAD.
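For readers new to message passing: the generic step underlying these models is small. Here is a minimal numpy sketch (my own illustration, not the SketchGraphs code) where nodes stand in for geometric primitives and edges for constraints between them: each node sums transformed messages from its neighbors, then updates its own features.

```python
import numpy as np

def message_pass(node_feats, edges, W_msg, W_upd):
    """One message-passing step: sum neighbor messages, then update."""
    n, d = node_feats.shape
    messages = np.zeros((n, d))
    for src, dst in edges:                 # a constraint linking src -> dst
        messages[dst] += node_feats[src] @ W_msg
    return np.tanh(node_feats @ W_upd + messages)

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))        # 4 primitives, 8-dim features
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]  # symmetric constraints
W_msg, W_upd = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
out = message_pass(feats, edges, W_msg, W_upd)
```

Stacking several such steps lets information about one primitive’s constraints propagate across the whole sketch, which is what makes constraint inference tractable.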

See also: Autodesk researchers use graph neural networks to do structural engineering (ML4Sci #18)


📰In the News


Weekly twitter thread: OpenAI’s GPT-3 follow-up work focuses on the importance of working closely with human labellers, and the ramifications for AI safety and value alignment

📈The Economist covers the exponential growth in cost of training AI models (good motivation for last issue’s coverage of DevOps for ML)

A nice analysis of NeurIPS 2020 authors, organizations, and countries [Medium]


🔋Following up from last week’s essay on Batteries x AI:

Two researchers at Carnegie Mellon have organized a weekly SciML webinar, with an impressive line-up of speakers both from academia and industry. Highly recommend checking out! [h/t Venkat Viswanathan]

The Montreal AI Ethics Institute releases its 2020 State of AI Ethics report [h/t Abhishek Gupta]

🏢🧬Building Data Genome Project 2 Data Set [Kaggle] Data can unlock intelligent applications for all aspects of our life, now including energy use for buildings

ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction. Not SOTA on molecular property prediction yet, but a good first step in using transformers for this specific task, continuing the trend of transformers becoming a dominant neural architecture for tasks beyond standard NLP. Proposed subtitle: “Transformers are eating the world”

💻National Energy Research Scientific Computing Center (NERSC) supports COVID-19 pandemic research

Ezra announces the first FDA-cleared AI system for prostate cancer segmentation

Nature Research Collection on different fields using Computational Science (read: AI papers in Nature)

The Science of Science

💊A nice short read on the story of Herceptin - “How are new medicines invented?”

A great essay by Danny Britz around replication, reproducibility, and incentive structures.

🧠A Nature article covering how neuroscience has shifted from manual undergraduate labelling to computational (and AI) accelerated workflows

“The importance of stupidity in scientific research” - humility as an epistemological virtue

🌎Out in the World of Tech

🦀A cute comic about how HTTPS works (avoid bad crabs!)

Adobe Photoshop releases AI-powered photo design

🚗Google’s Waymo releases extensive data on its self-driving car operations in Phoenix, Arizona [TheVerge] Also, Cruise announces it has received a California permit to operate driverless electric cars, framing the announcement in terms of climate change

Thanks for Reading!

I hope you’re as excited as I am about the future of machine learning for solving exciting problems in science. You can find the archive of all past issues here and click here to subscribe to the newsletter.

Have any questions, feedback, or suggestions for articles? Contact me at or on Twitter @charlesxjyang