While all eyes in the AI policy world are often fixed on Washington D.C., especially with the drastic shifts in U.S. science policy under the current administration, something remarkable has been quietly unfolding across the Atlantic. The UK has been making steady, savvy investments in AI for science. They’re increasingly charting a unique path forward, demonstrating public policy's crucial role in unlocking scientific discoveries through AI.
In many ways, their lack of massive frontier-scale private AI companies has forced a scrappier, more focused approach. This has led to an emphasis on public-private partnerships and investments in differentiated scientific models.
Here's a snapshot of what the UK has been up to in the past five months since Trump was inaugurated:[1]
£750M investment in UK’s first Exascale Supercomputer
Today, the U.S. leads the world in high-performance computing: the three fastest supercomputers in the world are all housed at U.S. national labs. The U.K.'s largest supercomputer currently ranks 11th.
Despite a rocky start, the U.K. is finally getting serious about investing in its public research compute capacity. The government has announced a £750M investment to build the country's first exascale supercomputer at the University of Edinburgh. While technical details are not expected until later this summer, this investment is a step towards providing more compute resources for researchers.[2]

£5M ARIA x Pillar VC: Encode AI for Science Fellowship
The U.K. government announced £5M in funding to support ARIA and Pillar VC's AI for Science fellowship, which places world-class AI researchers into scientific institutions. As I've written about previously, the history of compute, and of scientific discovery more broadly, is full of examples of serendipitous talent migration that shifts where discoveries happen and are commercialized. Encode is clearly geared towards attracting international AI talent to the UK.
It is notable that ARIA is partnering with a U.S. venture firm on this, which suggests there is market demand for public partnerships with innovative science funding agencies that is going unmet in the U.S.

£8M OpenBind Protein-Ligand Dataset
While it was the DeepMind team that created AlphaFold (and won the Nobel Prize for it), it was Brookhaven National Lab that created the Protein Data Bank (PDB), which provided the data for the CASP competition. Without this dataset, there would be no AlphaFold.
While simulating protein folding was a critical accomplishment, accelerating drug discovery requires understanding how proteins interact with other organic molecules (ligands) in order to identify candidates that achieve desired effects. To that end, the U.K. government has announced an £8M investment to leverage its national synchrotron source in Oxfordshire (equivalent to a U.S. national lab) to visualize and generate 500,000 experimental samples of protein-ligand interactions for AI model training. This dataset will power the next generation of AI models for discovering potential new small-molecule drugs.

£10M investment for biology self-driving labs in Liverpool investment zone
While models can discover new compounds (when trained on the right data), how we actually synthesize those compounds in the real world is not always clear. In the absence of datasets and/or synthesis routes, we can use self-driving labs to automate real-world experiments at scale.
Liverpool has long been a leader in self-driving labs, with groups like Andy Cooper's at the University of Liverpool. This investment will help catalyze a broader regional ecosystem around Liverpool, developing robotic labs to discover treatments for various infectious diseases and to accelerate human organoid research.

£4M investment in fellowships to study how AI changes the way we do science
How will AI change the way we do science? That is the question this entire Substack is dedicated to answering. I'm glad to see the UK government agrees with me on this and is putting £4M into funding early-career researchers to help answer it.[3]
It is worth noting this program is “run alongside a US-based cohort funded by the Alfred P. Sloan Foundation, creating a transatlantic research effort to examine AI’s impact on science.” A really clear juxtaposition of UK public science funding being complemented by philanthropic investment on the US end.
This is an impressive array of public partnerships and capital allocation towards AI for Science from the UK. Much of this flurry of activity is downstream of a few technical, motivated, young people in government who are tasked with working on hard AI problems.[4] The UK is not waiting for "AGI"; they are actively investing in the broader ecosystem needed to realize AI-powered acceleration of scientific discovery. And from this portfolio of investments, we can see the broad outlines of a UK AI for Science strategy:
Compute: spending capital to increase public compute capacity for researchers
Talent: a clear focus on attracting international AI talent to work on interesting scientific problems
Scientific Infrastructure: leveraging nationally funded scientific infrastructure to generate invaluable experimental data for AI model training. No large-scale, high-quality experimental dataset of protein-ligand interactions currently exists; the UK is using its national scientific infrastructure to create one. It is also doubling down on investments in self-driving labs, which help bridge AI hypothesis generation and real-world experimentation.
And, Iterate: funding researchers to investigate how AI is changing science, the results of which will inform future funding strategies.
And there is nothing stopping the U.S. from doing any of these things. We have the compute, scientific infrastructure, and talent, all that is missing is the focus.
[1] For reference, the UK's new Labour government took office in July 2024, so it is about one year into its term.
[2] And there is clear research demonstrating the value of marginal increases in public compute capacity.
[3] Nobody asked me, but a few questions I think would be interesting:
How do we change public supercomputer allocation strategies in light of AI scaling laws?
How should we change the way we think about scientific education? Many of the researchers I've interviewed on the show talk about the importance of their graduate students knowing how to program, despite none of them being in C.S. departments.
How do we "write science papers for AI", i.e., change the way we publish scientific methods and data in light of AI?
How does AI change scientific workflows? E.g., the balance of time spent reading papers vs. ideating vs. doing experiments; does computational modelling democratize science (or specialize it even further)?
How does AI change scientific collaboration and team composition? Does AI favor big team science or small labs, and how does it affect the skill sets teams need?
Does AI change where disruptive innovations come from? Will we find examples of arbitrage where previously unremarkable institutions or research groups begin to find more "diamonds in the rough" due to a differentiated strategic use of AI?
[4] UK AISI is another example of growing technical state capacity within the UK government.