ML4Sci #30: MuZero, DeepMind and Learning the Rules of Science
Also, AI for power grids and earth sciences
Hi, I’m Charles Yang and I’m sharing (roughly) weekly issues about applications of artificial intelligence and machine learning to problems of interest for scientists and engineers.
If you enjoy reading ML4Sci, send us a ❤️. Or forward it to someone who you think might enjoy it!
As COVID-19 continues to spread, let’s all do our part to help protect those who are most vulnerable to this pandemic. Wash your hands frequently (maybe after reading this?), wear a mask, check in on someone (potentially virtually), and continue to practice social distancing.
[*] In other news, DeepMind released MuZero, the latest progeny of the original AlphaGo that shook the Go-playing world (and the deep learning community). Here’s a handy graphic from the DeepMind blog post that illustrates the improvements across the successive generations of models:
The incredible set of techniques MuZero draws on to achieve mastery over a broad ensemble of challenging games without knowing their rules could see important applications in scientific domains where we have high-throughput interactions with a physical environment whose dynamics are currently unknown. Of course, MuZero assumes you have easy access to a perfect simulator, which allows you to use all sorts of model scaling techniques. That may not be the case in scientific domains, where you are limited in throughput either because you’re operating in the real world or because of the computational cost of running a simulator. In any case, an agent that can interact optimally in environments with unknown dynamics could have a variety of applications across engineering and scientific fields (I’m specifically imagining anything involving fluids, high-energy particle physics, etc.).
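To make “mastery without knowing the rules” concrete, here is a minimal sketch of MuZero’s three learned functions: a representation function h that encodes observations into a latent state, a dynamics function g that predicts the next latent state and reward (standing in for the unknown rules), and a prediction function f that outputs a policy and value. This is an illustrative toy in PyTorch with made-up sizes and names, not DeepMind’s code; the real system uses deep residual networks and wraps these functions in Monte Carlo tree search over the latent space, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MuZeroModel(nn.Module):
    """Toy version of MuZero's three learned functions (h, g, f)."""

    def __init__(self, obs_dim=8, latent_dim=32, num_actions=4):
        super().__init__()
        self.num_actions = num_actions
        # h: raw observation -> latent state (the agent never sees the true game state)
        self.h = nn.Linear(obs_dim, latent_dim)
        # g: (latent state, action) -> next latent state and predicted reward;
        # this learned dynamics model stands in for the unknown rules/simulator
        self.g_state = nn.Linear(latent_dim + num_actions, latent_dim)
        self.g_reward = nn.Linear(latent_dim + num_actions, 1)
        # f: latent state -> policy logits and value estimate, used to guide search
        self.f_policy = nn.Linear(latent_dim, num_actions)
        self.f_value = nn.Linear(latent_dim, 1)

    def initial_inference(self, obs):
        s = torch.relu(self.h(obs))
        return s, self.f_policy(s), self.f_value(s)

    def recurrent_inference(self, s, action):
        a = F.one_hot(action, self.num_actions).float()
        x = torch.cat([s, a], dim=-1)
        s_next = torch.relu(self.g_state(x))
        return s_next, self.g_reward(x), self.f_policy(s_next), self.f_value(s_next)

# Planning unrolls the learned model; the real environment is never queried:
model = MuZeroModel()
s, policy_logits, value = model.initial_inference(torch.randn(1, 8))
s, reward, policy_logits, value = model.recurrent_inference(s, torch.tensor([2]))
```

The key design choice is that planning happens entirely in the learned latent space, which is exactly why the approach is interesting for domains where the “rules” (the dynamics) are not known in advance.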
Some meta science-of-science points:
Julian Schrittwieser, one of the co-authors, has a great blog post on the paper. DeepMind also has its own blog post. The fact that many people publish in scientific journals but then also write very readable, comprehensive blogs to complement them suggests that we publish for prestige/credentialing, and then write blog posts that people will actually read.
The MuZero paper published in Nature is…closed access. Good luck reading it if you don’t have institutional access. Thank God there are all these blogs out there that summarize the paper for us anyway!
Department of Machine Learning
OpenAI throws more compute at transformers, resulting in some incredible text-to-image synthesis
[NeurIPS 2020] A Christmas Grinch-themed invited talk about applying ML to the real world
Classified: Jack Kelly at Open Climate Fix is looking for an ML engineer in the UK
[NatureIndex] presents experiments using different AI summarizations of scientific articles
[Medium] A great overview of graph neural networks and the coming opportunities and challenges in 2021. A must-read for anyone trying to understand the frontier of GNNs (see the minimal message-passing sketch below). One great quote from the article:
“2020 has definitively and irreversibly turned graph representation learning into a first-class citizen in ML.” - Petar Veličković, Senior Researcher at DeepMind
Another great slide-deck, this time from the graph mining tutorial at NeurIPS 2020, presented by Google researchers. I’ve found that most of the world’s technical yet accessible knowledge is hidden in blog posts, slide-decks, and YouTube lectures (but rarely in scientific papers, at least not in any usefully comprehensible format that makes it easy to find what you’re looking for). Another meta-observation: a NeurIPS tutorial on a specific ML technique is presented by an entirely privately funded research team … and that is the norm in ML now.
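For readers new to the area, here is a minimal sketch of the core operation behind most of the GNN architectures these resources cover: one round of message passing, where each node aggregates its neighbors’ features and updates its own state. This is an illustrative toy (PyTorch, made-up sizes), not code from the linked overview or tutorial.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of neighborhood aggregation, the core GNN building block."""

    def __init__(self, dim=16):
        super().__init__()
        # Combines a node's current state with its aggregated neighbor messages
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: (num_nodes, dim) node features; adj: (num_nodes, num_nodes) 0/1 adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        messages = (adj @ x) / deg  # mean of each node's neighbors' features
        return torch.relu(self.update(torch.cat([x, messages], dim=-1)))

# Stacking layers lets information propagate along longer paths in the graph:
x = torch.randn(5, 16)                  # 5 nodes with random features
adj = (torch.rand(5, 5) > 0.5).float()  # a random toy graph
for layer in [MessagePassingLayer(), MessagePassingLayer()]:
    x = layer(x, adj)
```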
[SyncedReview] rounds up their top 10 AI failures of 2020
🤦Scientific Reports, a fairly high-ranked Nature family journal, just published a paper on using facial recognition to predict political orientation. Clearly peer review is failing to act as an adequate gatekeeper to filter out bad science like this. Not linking because this article doesn’t need any help in Google’s ranking algorithm. [1]
Near-Future Science
🌎 “Advancing AI for Earth Science” - a nice overview from a recent NASA-hosted workshop on how to integrate AI into a scientific field, with many themes in common with this newsletter: physically-aware ML, documentation and sharing data, incentive alignment for open-sourcing models and data, need for benchmarks
⚡Climate Change, ML, and the Power Grid (co-hosted by ClimateChangeAI and the Energy Innovation network)
See here for the learn-to-run-power-grid challenge
🐋AI-application-I-would-not-have-imagined: Chinese military researchers hide secret messages in marine mammal sounds
🌧️Weather and Climate Datasets for AI Research
The Science of Science
🧠[NatureNews] How the EU’s Human Brain Project fell apart - a story in how not to fund science
[Arxiv] “The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity” compares Big Tech’s actions with Big Tobacco’s playbook. Section 7.2 on faculty funding was personally surprising to me: 88% of CS faculty in AI have known past affiliations with Big Tech (that figure jumps to 97% for CS faculty in AI ethics).
[NatureNews] The Science family of journals announces changes to open-access (OA) in response to the European Union’s Plan S. Another win for the EU’s ambitious plan to overhaul scientific publishing. Just like in tech regulation, the US is lagging behind in setting standards, and as a result entities are increasingly following European-set standards.
[Slate] “Why I’m suffering from nanotechnology fatigue” - a 2016 article on overhyping nanotechnology. Replace “nanotechnology” with “AI” and it remains remarkably relevant:
Because of this obsession with “brand nanotechnology” (which of course is just referred to as “nanotechnology”), we seem to be caught up in an endless cycle of nanohype and nanodiscovery, where the promise of nanotech is always just over the horizon, and where the same nanonarrative is repeated with such regularity that it becomes almost myth-like….
Beneath the branding, there is important science and technology here that deserves greater awareness and attention. When you strip away the hype and mythology from nanotechnology, what you’re left with is a growing ability to understand and manipulate matter at the scale of atoms and molecules. This is nanoscale science, design, and engineering (as distinct from nanotechnology), and it’s already transforming the world around us.
[ML@CMU Blog] ICML 2020 experiments in peer review:
resubmission bias, i.e. bias against papers that have been submitted to previous conferences, exists
no significant herding effect in reviewer discussion
novice reviewers can do quite well when properly trained!
These experiments are an important step toward validating key ideas for reforming peer review, but they are fundamentally iterative improvements, not systematic/structural changes to the conference system (see below)
[ACM] Reboot the Conference Publication System. Quote:
"The reputation of the peer-review process is tarnished," concluded Hannah Bast. Practically everyone in computing research complains about the "reviewers," but the reviewers are us! If everyone is unhappy, then the problem must be systemic.
From the GitHub Octoverse 2020 report:
🌎Out in the World of Tech
🚗[NYT] Uber sells its self-driving car unit; [CNBC] Cruise (acquired by GM) begins testing autonomous vehicles in San Francisco
[Wired] Facebook red-teams its own AI systems
Commoditizing AI-as-a-Service and AI research labs is hard:
DeepMind’s annual report shows why it’s hard to run commercial AI labs profitably
[Wired] OpenAI changes its non-profit status to attract investment and compete for talent and compute
Policy and Regulation
🇬🇧 The UK AI Council releases its AI roadmap
[1] Since some people on Twitter seem confused about why this is bad, let me do my best to explain my thinking. As scientists, we know that all models make assumptions and that we should strive to build models that not only work on a held-out test set but for which we also have causative understanding and evidence. Is there any evidence that the way your face looks should cause, or even be correlated with, your political orientation? No, in the same way that we have no basis or evidence to believe that facial appearance is an indicator of criminality or sexual orientation. AI ethics is more than just “you have biased data”; it involves questions like “should I build this model?” and “should I collect this data?”. Publishing papers like this in scientific journals tarnishes the name, yes the very concept, of science as based on hypotheses and models, and lends a veneer of authority to a host of tangentially related eugenicist and racist policies.
[*] DeepMind’s MuZero paper is the “other news” because the real news, at least for me and fellow US-based readers, is the Capitol riot on Jan. 6 [NYT]. During a tumultuous year, it’s been difficult to decide when to inject content not strictly related to ML4Sci (COVID-19 being somewhat related). After all, you didn’t read all the way to the end of this newsletter (thanks for making it this far!) to hear my personal opinions on current political events. This newsletter exists for a specific, technical purpose, not as a megaphone for my every opinion. And yet, writing about science and AI requires understanding that the way we do science and deploy AI is intricately connected to the values of our society. There is no neutral territory, no sacrosanct temple that shields us from the burden of responsibility that comes with belonging to a liberal society. Indeed, even the above article on using facial recognition to determine political affiliation is imbued with a whole set of assumptions about politics, ethics, and epistemology. And sometimes there are events so important that the whole of society must intervene, as is already happening in response to the Capitol riot, with actions against Trump’s business assets by the PGA, banks, and other private investors [AP News].
And as scientists and engineers, the people who build the pieces of machinery, digital or otherwise, that keep our society running, we too have a responsibility. The time may come, and is perhaps soon coming, when every member of our society will have to decide whether we are content with simply maintaining our own comfortable standard of living or whether we are willing to lay claim to, and personally inherit, the mantle of defending the values of our democracy.
Apologies for the rambling to my non-American readers. Hopefully this gives some insight into what at least one American citizen is thinking.
Thanks for Reading!
I hope you’re as excited as I am about the future of machine learning for solving exciting problems in science. You can find the archive of all past issues here and click here to subscribe to the newsletter.
Have any questions, feedback, or suggestions for articles? Contact me at ml4science@gmail.com or on Twitter @charlesxjyang