ML4Sci #36: Using AI to Disprove Graph Conjectures; Physical Systems for Backpropagation; AI4Science Data Brief
This is my thesis week 💪
Hi, I'm Charles Yang and I'm sharing (roughly) monthly issues about applications of artificial intelligence and machine learning to problems of interest for scientists and engineers.
If you enjoy reading ML4Sci, send us a ❤️. Or forward it to someone who you think might enjoy it!
As COVID-19 continues to spread, let's all do our part to help protect those who are most vulnerable to this pandemic. Wash your hands frequently (maybe after reading this?), wear a mask, check in on someone (potentially virtually), and continue to practice social distancing.
Hi all,
Sorry it's been a quiet month - I am squirreled away trying to finish my master's thesis in the next week, hence the shorter issue today. I'm hoping to ramp the ML4Sci newsletter back up with deeper dives on different topics over the summer!
First, some housekeeping. I published a data brief with some folks over at CSET@Georgetown on how AI is changing scientific innovation and some ways the US government can help accelerate this process. Writing this newsletter turned out to be great practice for preparing this data brief!
And if you haven't already, hop on over to join our ML4Sci Slack workspace and meet others interested in this space!
Department of Machine Learning
Facebook's AI research team has made a big bet on self-supervised learning for computer vision, and it is starting to pay off. In a recent ML4Sci issue, we covered SEER, a self-supervised convolutional neural network that set a new state of the art among self-supervised models on ImageNet.
Now, Facebook has released a self-supervised transformer (DINO) that learns image segmentation without labels. A critical difference between CV and NLP has been language models' ability to learn from large amounts of unlabeled data - that gap is quickly closing with these new self-supervised CV techniques.
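For a concrete feel, here's a minimal sketch of the self-distillation objective behind this line of work - my reconstruction of a DINO-style loss, not Facebook's code, and `student`, `teacher`, and `augment` in the usage comment are placeholders:

```python
import torch.nn.functional as F

# DINO-style self-distillation: a student network is trained to match a
# momentum-averaged teacher across two augmentations of the same unlabeled
# image - no labels anywhere in the pipeline.

def self_distillation_loss(student_out, teacher_out, tau_s=0.1, tau_t=0.04):
    # Cross-entropy between the sharpened teacher distribution (no gradient)
    # and the student distribution; lower teacher temperature = sharper targets.
    t = F.softmax(teacher_out / tau_t, dim=-1).detach()
    s = F.log_softmax(student_out / tau_s, dim=-1)
    return -(t * s).sum(dim=-1).mean()

# One training step, schematically:
#   v1, v2 = augment(img), augment(img)
#   loss = self_distillation_loss(student(v1), teacher(v2))
#   loss.backward(); optimizer.step()
#   teacher weights <- EMA of student weights (the teacher gets no gradients)
```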
Also, Facebook published a 12T-parameter deep recommendation model. Lots of different tricks are needed to efficiently train and run a model this big; the age of industrialized AI is here.
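A quick back-of-the-envelope (illustrative numbers of mine, not Facebook's) on where 12T parameters can come from: in DLRM-style recommendation models, nearly all of the weights sit in sparse embedding tables for categorical features, which is why the tables get sharded across machines while the comparatively tiny dense network is replicated:

```python
# Rough arithmetic for a DLRM-style model (assumed numbers, for scale only).
num_tables = 100          # categorical features, each with its own table
rows_per_table = 10**9    # e.g. hashed user/item IDs
dim = 128                 # embedding dimension
embedding_params = num_tables * rows_per_table * dim
print(f"{embedding_params / 1e12:.1f}T parameters in embeddings alone")  # 12.8T
```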
After a new meme format from XKCD went viral, dozens of variants have been appearing online. Here's one that might be particularly relevant for readers of ML4Sci (also guilty of a few of these myself):
Near-Future Science
💼 Classified: Pfizer has an Applied Scientist - ML role in Cambridge
Reinforcement learning is starting to yield real returns for the scientific community. How does its use change the way we think about the scientific process and what scientific understanding means?
[Arxiv] “Constructions in Combinatorics via Neural Networks”. Using RL agents to find counterexamples to several open conjectures in extremal combinatorics and graph theory!
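To give a flavor of the approach: the paper uses the cross-entropy method, with a neural network generating graphs edge by edge and a reward measuring how close each graph comes to violating the conjectured inequality. Here's a minimal sketch, with a plain per-edge Bernoulli distribution standing in for the network and a hypothetical `score` function as the reward (any graph scoring above zero would be a counterexample):

```python
import numpy as np

# Cross-entropy method over graphs on N vertices, encoded as binary vectors
# over all possible edges. `score` is a toy placeholder; in practice it would
# be (conjectured bound) - (graph invariant), e.g. involving the largest
# eigenvalue or the matching number of the sampled graph.

N = 19                        # number of vertices
E = N * (N - 1) // 2          # number of possible edges
rng = np.random.default_rng(0)

def score(edge_bits):
    # Toy objective: prefers graphs with about E/3 edges. Replace with a
    # real conjecture-violation measure; > 0 would mean a counterexample.
    return -abs(int(edge_bits.sum()) - E // 3)

p = np.full(E, 0.5)           # one Bernoulli parameter per potential edge
for step in range(200):
    samples = (rng.random((500, E)) < p).astype(np.int8)   # sample 500 graphs
    scores = np.array([score(s) for s in samples])
    elite = samples[np.argsort(scores)[-50:]]              # keep the top 10%
    p = 0.9 * p + 0.1 * elite.mean(axis=0)                 # refit, smoothed
    if scores.max() > 0:
        print("candidate counterexample found at step", step)
        break
```

The refit step is the whole trick: the sampling distribution drifts toward whatever the reward likes, and the paper's neural network version essentially replaces `p` with a learned, context-dependent policy.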
[Arxiv] “Experimental Deep Reinforcement Learning for Error-Robust Gateset Design on a Superconducting Quantum Computer”. Deep RL agents can also be used to improve error-robustness in quantum computing!
💻 [Arxiv] “Deep physical neural networks enabled by a backpropagation algorithm for arbitrary physical systems”. Compute has traditionally meant electrons moving in silicon. New accelerators, like GPUs or Cerebras's massive new 2.6T-transistor chip, are essentially smarter ways of moving electrons around. This paper shows that we can implement deep learning in *any* physical dynamical system.
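Here's a hedged sketch of the hybrid scheme the paper describes (“physics-aware training”): run the forward pass on the real, non-differentiable physical system, but route gradients through a differentiable digital surrogate fit to the system's input-output behavior. `physical_system` and `surrogate` below are hypothetical stand-ins, not anything from the paper's code:

```python
import torch

def physical_system(x, theta):
    # Stand-in for a real experiment (optics, analog circuits, mechanics).
    # Note the noise: the real system never matches the surrogate exactly.
    return torch.tanh(x @ theta) + 0.01 * torch.randn(x.shape[0], theta.shape[1])

def surrogate(x, theta):
    # Differentiable digital twin, trained beforehand on measured (x, y) pairs.
    return torch.tanh(x @ theta)

class PhysicsAwareLayer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, theta):
        ctx.save_for_backward(x, theta)
        return physical_system(x, theta)      # real measurement, no autograd

    @staticmethod
    def backward(ctx, grad_out):
        x, theta = ctx.saved_tensors
        with torch.enable_grad():             # rebuild a graph for the surrogate
            x = x.detach().requires_grad_(True)
            theta = theta.detach().requires_grad_(True)
            y_sim = surrogate(x, theta)
        return torch.autograd.grad(y_sim, (x, theta), grad_out)

# Usage: theta plays the role of physical control knobs, updated by SGD.
theta = torch.randn(8, 4, requires_grad=True)
x = torch.randn(32, 8)
loss = PhysicsAwareLayer.apply(x, theta).pow(2).mean()
loss.backward()                               # gradients come from the surrogate
```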
Thanks for Reading!
I hope you're as excited as I am about the future of machine learning for solving exciting problems in science. You can find the archive of all past issues here and click here to subscribe to the newsletter.
Have any questions, feedback, or suggestions for articles? Contact me at ml4science@gmail.com or on Twitter @charlesxjyang