Introduction
In this episode, I sit down with Keith Brown, associate professor of engineering at Boston University and principal investigator at KABlab, to discuss how his lab builds and operates self-driving experimental platforms, particularly using 3D printing to explore mechanical and material properties. He explains the use of Bayesian optimization in high-throughput campaigns involving tens of thousands of experiments, shares lessons from developing open-source hardware for mechanical testing and electrochemistry, and reflects on how graduate students’ roles evolve in automated research settings.
We also discuss model selection for small versus large data regimes, modular infrastructure, the limitations of centralized automation, and how out-of-distribution reasoning still sets human scientists apart. Here are three takeaways from our conversation:
1. Decentralized self-driving labs will drive the next wave of innovation
Centralized mega-labs are not necessarily the future; most progress has come from small, distributed groups.
Researchers innovate faster when experiments are hands-on and local.
Infrastructure can be shared without consolidating everything in one place.
2. 3D printing is emerging as a core engine of materials discovery
3D printers enable rapid, programmable variation in structure and composition.
Their voxel-level control makes them ideal for combinatorial screening.
Shared platforms allow reproducible studies across labs and scales.
3. Human scientists remain essential to shaping long-term experimental campaigns
Humans guide the experiment design, tuning, and interpretation.
Roles shift from operator to systems-level thinker and optimizer.
The most successful campaigns treat self-driving labs as a collaborator, not a black box.
Transcript
Charles:
Keith, thanks for joining.
Keith Brown:
Yeah, yeah. Thanks very much, Charles, for having me.
Could you describe how you set up your self-driving lab to optimize 3D printed structures for mechanical energy absorption? (1:00)
Charles:
I do want to talk about self-driving labs broadly, but maybe first we can acquaint listeners with your work. I thought we could start with two or three of the papers you’ve done around self-driving labs for optimizing 3D-printed structures for mechanical energy absorption. Can you walk us through the setup for those papers and tell us a little bit about that work?
Keith Brown:
Absolutely. When I started my independent career at Boston University, I got very interested in how we design mechanical structures — things like crumple zones in cars, padding in helmets. We still have to do lots of experiments to figure out how different structures and materials perform in those situations. That’s tedious, time-consuming, and wasteful.
So we worked on developing a self-driving lab to study the extreme mechanics of polymer structures. It combines several 3D printers (initially, we had five) that print structures automatically. They’re retrieved, photographed, weighed, and then tested in an Instron compression machine, which compresses them until they’re flat while measuring the force required to do so.
This lets us learn the mechanics of each structure and use that information to design the next one. It’s a closed-loop system that prints and tests new structures, aiming to find ones that absorb a lot of energy. The goal is to create crumple zones and similar systems that are lighter but just as effective.
We’ve been doing this since about 2018. At this point, we’ve run about 30,000 experiments with the system. Over the years, we’ve worked on different facets. Most recently, we’ve been developing helmet pads for the military to improve blunt impact resistance, for example when a soldier falls out of a vehicle and hits their head.
We’ve been able to design structures that perform very well in that capacity. We’ve also achieved world-record performance in energy absorption efficiency, meaning absorbing the most energy possible per unit volume. I’m happy to dive deeper into any of these aspects, but we’ve basically had a robot running continuously since 2018.
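To make the closed loop Keith describes concrete, here is a minimal Python sketch of how one print-test-learn cycle might be orchestrated. The function names are hypothetical placeholders, not KABlab's actual code.

```python
# Illustrative sketch of a self-driving-lab campaign loop.
# All three callables are hypothetical stand-ins for real subsystems.

def run_campaign(propose_next_design, print_structure, test_structure,
                 n_iterations=100):
    """Closed loop: propose a design, fabricate it, test it, learn from it."""
    designs, energies = [], []
    for _ in range(n_iterations):
        # 1. Choose the next structure based on everything measured so far.
        design = propose_next_design(designs, energies)
        # 2. Fabricate it on an available 3D printer.
        sample = print_structure(design)
        # 3. Compress it flat and record the energy it absorbed.
        energy = test_structure(sample)
        # 4. Grow the dataset; the next proposal sees this new point.
        designs.append(design)
        energies.append(energy)
    return designs, energies
```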
What were the challenges of integrating the robotic and printer systems together? (4:10)
Charles:
When I saw that paper, it seemed like it had one of the largest experimental campaigns for a self-driving lab that I’ve seen, especially with five different 3D printers and 30,000 structures. I'm curious: in the photo of the setup, you see five printers around a UR robot arm with a UTM [universal testing machine] next to it. What were the challenges of integrating those systems? How did you get the robot arm to pull the sample from the printer into the UTM, know when it was done, and so on?
Keith Brown:
Yeah, great question. Each system has its own software. The universal testing machine has very specific, closed software. Each 3D printer has its own software. We also had to control the machines, choose experiments, use machine vision, track weight and scale, and more. Altogether, there were around 12 distinct pieces of software that needed to operate together.
Each link in the chain required a lot of thought and effort from the team. One story that illustrates this well involves the Instron system. When we bought it, we were told it had an API for making software calls, but we couldn’t get that to work. Instead, we used a serial port on the side of the instrument: sending a high-voltage signal through it would start a test.
So instead of full software control, we simplified the interaction. We told it to start and then waited for it to signal that it was done. That worked reliably. A big part of building the automation system was choosing where to simplify. Did we need full control, or just enough to get the job done?
Ultimately, we had three different computers running different parts of the workflow, listening for signals, and sharing data. That doesn’t include the cloud services we used, like a shared computing cluster here at BU and even Google Drive in some cases. The students who built this system had to become skilled in everything from old-school serial ports to modern machine learning.
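The trigger-and-wait simplification Keith describes could be sketched with the pyserial library. The port name and the use of the RTS and CTS control lines below are assumptions for illustration, not the lab's actual wiring.

```python
import time
import serial  # pyserial

# Hypothetical trigger-and-wait pattern: pulse one control line to start
# the instrument, then poll another until it reports completion.
ser = serial.Serial("/dev/ttyUSB0")  # placeholder port name

def run_one_test(timeout_s=600):
    ser.rts = True       # raise the line to tell the instrument to start
    time.sleep(0.5)
    ser.rts = False
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if ser.cts:      # instrument raised its "done" line
            return True
        time.sleep(1.0)
    raise TimeoutError("Instrument did not report completion")
```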
How much of the project’s development time was spent just getting all the devices to communicate? Which part of the system was hardest to integrate? (6:20)
Charles:
How much time would you estimate was spent just getting the different systems to talk to each other? Out of the whole project, how much of it was integration overhead?
Keith Brown:
I can give a pretty firm answer. We went from everything in boxes to a functioning system in about three months. The team was made up of one doctoral student, one master's student, and two undergrads. That initial system was a bit brittle — if something broke or dropped, we didn’t have great recovery protocols — but the core integration was there.
We’ve made a lot of improvements since then, but the foundation was built in those three months.
Charles:
Which part of the system was hardest to integrate? I’d guess the UR [Universal Robots] robot is fairly mature and designed to work with different systems. Was it the 3D printers? The Instron? How would you rank their maturity?
Keith Brown:
The UR system is very sophisticated. We’re impressed daily by how well it works. There are different levels of control you can use. Some people use a Robot Operating System (ROS) and similar frameworks to micromanage every movement. But we realized we only needed four or five specific actions. So we programmed a series of waypoints and let it run through those.
Since we controlled where the printer put each part on the bed, we could script the robot’s movements very precisely. That’s still how we use it today. We have more scripts now and more complex logic, but the core idea is the same. It also has force feedback to avoid blindly executing commands, which helps with robustness.
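As a rough illustration of that waypoint approach, here is a sketch using the open-source ur_rtde Python bindings for UR arms. The IP address and poses are placeholders, and a real routine would also actuate a gripper.

```python
from rtde_control import RTDEControlInterface  # from the ur_rtde package

rtde_c = RTDEControlInterface("192.168.0.100")  # placeholder robot IP

# Each pose is [x, y, z, rx, ry, rz] in meters and axis-angle radians.
ABOVE_PRINT_BED = [0.40, -0.20, 0.30, 0.0, 3.14, 0.0]
GRASP_POSE      = [0.40, -0.20, 0.05, 0.0, 3.14, 0.0]
ABOVE_TESTER    = [0.10,  0.45, 0.30, 0.0, 3.14, 0.0]
TESTER_PLATEN   = [0.10,  0.45, 0.10, 0.0, 3.14, 0.0]

def transfer_sample():
    """Run a fixed series of linear moves: printer bed -> testing machine."""
    for pose in [ABOVE_PRINT_BED, GRASP_POSE, ABOVE_PRINT_BED,
                 ABOVE_TESTER, TESTER_PLATEN]:
        rtde_c.moveL(pose, 0.25, 0.5)  # pose, speed (m/s), acceleration
    # Gripper open/close calls omitted; they depend on the end effector.
```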
The printers are also highly automatable. That was one of the big reasons we chose this project. If you compare it to doing a similar experiment in chemistry, you run into issues with reagents sitting out, temperature control, and timing. But with a 3D printer, you can create a very complicated structure and test it almost automatically.
That said, there are still challenges. One big one for fused deposition modeling is removing prints from the bed. Sending a job to the printer is easy, but getting the part off the bed often requires a human with a scraper.
We tackled that by focusing first on architectures we knew we could remove easily, like cylindrical structures that the robot could lift with a prong. Later, we developed strategies for peeling parts off more gently. These are the kinds of things you don’t think about when you’re just printing for fun, but become very real problems when you're automating.
How did you calibrate five different 3D printers to ensure reproducibility across the self-driving lab? (10:30)
Charles:
And on the question of printers, you had five. One broader concern with self-driving labs is reproducibility. A setup might be consistent in one lab, but what happens if someone tries to replicate it with slightly different equipment? For your five printers in this lab, how did you handle calibration? I know that was part of the upfront work.
Keith Brown:
Yeah, that’s a great question. The short version is that there’s always going to be some uncertainty. The most we can do during calibrations is to make sure that what you print is exactly what you intended, every time. We also check for variability across printers.
To check the mass, we integrated a simple integral feedback system. Something gets printed, it’s weighed, and if it's under the target mass, we adjust the extrusion multiplier. That variable lets us account for inconsistencies in filament diameter. That way, we can keep the masses of all structures consistent.
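A minimal sketch of that kind of mass-feedback correction, with a made-up gain and update rule; the lab's actual controller may differ.

```python
def update_extrusion_multiplier(multiplier, target_mass_g, measured_mass_g,
                                gain=0.5):
    """Nudge the extrusion multiplier so printed mass converges on target.
    The proportional form and gain here are illustrative assumptions."""
    relative_error = (target_mass_g - measured_mass_g) / target_mass_g
    return multiplier * (1.0 + gain * relative_error)

# Example: a 2.00 g target printed at 1.90 g nudges the multiplier up 2.5%.
m = update_extrusion_multiplier(1.00, target_mass_g=2.00, measured_mass_g=1.90)
print(m)  # 1.025
```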
As for printer-to-printer variability, we explicitly tested that in our first paper. We compared five different printers using the same structures and didn’t see any statistical differences. That doesn’t mean there couldn’t be differences under other circumstances, but once we corrected for mass variation, there was no measurable difference in our results.
That said, there definitely are substantial differences across printer models. If you move from something like a MakerGear printer to a more modern platform like a Bambu printer, which uses a different architecture, you could see differences. But the key is comparing the final structure, its geometry and mass, to make sure it's truly the same thing.
Why did you choose Gaussian process regression? (12:40)
Charles:
Last thing on this topic before we move on. You used Gaussian process regression to drive the optimization. Given your experience now with 30,000 data points, do you think you'd pick a different kind of sampling method? There's been a lot of buzz around reinforcement learning lately. I’d love to hear your thinking on the algorithmic choice.
Keith Brown:
Great question. Gaussian processes, and Bayesian optimization more generally, are excellent when you’re working with small datasets. When you're starting out with 10, 100, or even 1,000 measurements, it's a no-brainer. Many papers have followed this path, and it has become a standard approach in the field for this kind of optimization.
As our dataset grew, especially toward the end of our campaign with 30,000 experiments, we noticed it was taking a long time just to make the next prediction. I gave a talk about that, and someone in the audience who was an expert in Bayesian optimization was shocked. He showed me that he could do predictions with a million data points on his phone. That led us to modern methods for sparse Gaussian process prediction and training, which make large datasets feasible.
Of course, there are some downsides to Gaussian process regression. The standard formulation assumes the function is infinitely differentiable, which limits you to very smooth function spaces. That’s not always realistic. For example, phase transitions are not smooth, and you need other models to capture that behavior accurately. Some researchers have developed hybrid models or alternative frameworks to deal with those cases.
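To make the smoothness point concrete, here is a small scikit-learn sketch. The common RBF kernel assumes an infinitely differentiable function, while a Matérn kernel with small ν relaxes that assumption. The data are toy values with a kink, loosely standing in for a phase transition.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

# Toy non-smooth target: a V-shaped kink at x = 0.5.
X = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.abs(X.ravel() - 0.5)

# RBF implies an infinitely differentiable function; Matern with nu=0.5
# only assumes continuity, so it tolerates kinks more gracefully.
for kernel in [RBF(length_scale=0.1), Matern(length_scale=0.1, nu=0.5)]:
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4).fit(X, y)
    mean, std = gp.predict([[0.5]], return_std=True)
    print(kernel, mean[0], std[0])
```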
Regarding reinforcement learning, we’ve explored it a bit. We’re not experts, but our understanding is that it still boils down to modeling a reward function and a state space. So it ultimately faces the same challenges: how do you model the space? In our most recent campaign, we ran a head-to-head comparison between a neural network and our existing Gaussian process method. The neural net didn’t outperform it. The variance was about the same.
At that point, most of the variance was from experimental fluctuation, not model uncertainty. That suggests we already knew the space well enough, and pushing further would just be overfitting. So, at least in the spaces we’ve worked in, we haven’t needed to adopt neural networks. That’s not to say they won’t eventually provide value. They just haven’t been necessary for us yet.
Charles:
It’s funny that even in 2024, with all the new tools out there, papers are still relying on good old Bayesian optimization and Gaussian processes, even with relatively large datasets.
Keith Brown:
Well, there’s a funny story behind that. Have you heard about the origin of Bayesian optimization? It’s often called "kriging," after Danie Krige, a South African mining engineer. He was trying to predict how much gold would be in the ground next to a spot where he had already mined. He figured out that you could use kernel functions to make local predictions with uncertainty.
That concept of local averaging under uncertainty is the foundation of what we now call Bayesian optimization. So the whole idea comes from mining for gold, which I think is hilarious, and also shows how old and robust the technique really is.
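For readers who want the formula, the local prediction with uncertainty that Krige pioneered is what we now write as the Gaussian process posterior. Given training inputs X, observations y, kernel k, and noise variance σ², the predictive mean and variance at a new point x* are:

$$\mu(x_*) = k(x_*, X)\left[K(X,X) + \sigma^2 I\right]^{-1} y$$

$$\sigma^2(x_*) = k(x_*, x_*) - k(x_*, X)\left[K(X,X) + \sigma^2 I\right]^{-1} k(X, x_*)$$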
What is the role of 3D printers in scientific labs today? (17:30)
Charles:
That’s a great story. And it’s always interesting to hear how different fields, like mining or geology, have influenced core machine learning tools. Let’s zoom out a bit. We’ve been talking about 3D printers in the context of this paper, but you’ve also written about their broader role in science. How do you see 3D printers in the modern scientific lab?
Keith Brown:
3D printers wear a lot of hats. In modern labs, they are essential for building flexible infrastructure. My students use them for everything from tube racks to custom fixtures. The ability to quickly design and print something that would have taken hours in the machine shop is a game changer.
That flexibility is critical in academic labs where setups change all the time. You have new students, new projects, and new configurations. Printers let you build and rebuild quickly, which is huge.
They are also incredibly powerful for exploring mechanics. You get access to an immense design space. Sure, there are limits in size, resolution, and printability, but in practice, the number of structures you can make is effectively infinite. That opens up profound questions in mechanics that just weren't accessible before.
And because 3D printing is inherently tied to manufacturability, it makes your discoveries more scalable. If you design a new structure, someone else can reproduce it using a printer. That makes it easier for your findings to be tested by others or even used in industry. Just working with 3D printers builds that translatability in.
Charles:
Right, and I imagine self-driving labs offer a similar kind of design-for-manufacturability benefit. They're at least more automated than having humans do everything by hand.
Keith Brown:
Exactly. Automation does help with reproducibility and manufacturability. But that benefit isn’t always guaranteed. Some of our work in self-driving labs involves extremely miniaturized experiments, preparing fluid samples at the picoliter scale or even smaller. That’s not manufacturable. You wouldn’t want to scale that up.
Additive manufacturing, on the other hand, can scale. People often scale it by just running many printers in parallel. That works. But there’s one more angle I want to highlight.
We’ve focused a lot on the mechanical side of 3D printing, but it’s also an incredible tool for materials development. A printer lets you do chemistry on a voxel-by-voxel basis. You can vary composition, change the polymer formulation, mix precursors — things that usually require huge infrastructure.
So it’s not just about printing structures. You can use the printer as a platform to screen processing conditions and material properties at small scales. That makes it a powerful tool for discovering new materials, not just for studying mechanics.
Charles:
One of our first guests was Sergei Kalinin, who also does interesting work using microscopes as high-throughput experimental platforms. I think you’re describing something similar here, where 3D printers are being used not just to discover structures, but to explore composition too. That’s a really innovative application.
What does the open-source self-driving lab ecosystem look like from your perspective? (22:25)
Charles:
Related to what you were saying earlier about the lab use of 3D printers, I know that a lot of new self-driving labs are built using homemade components, often with 3D-printed parts. I’m curious how you’ve engaged with that ecosystem. I know your group has developed some of your own hardware for self-driving labs. Can you walk us through what that ecosystem looks like to you, especially in terms of how many open-source components other science labs are putting out?
Keith Brown:
That’s a great question. There is a rich community — and I mean rich in spirit, even if not in funding — around open-source hardware. The goal is not just to make new tools, but to share them so others can reproduce and build on the work. That’s especially powerful in self-driving labs, where everything is custom but needs to be replicable.
In our case, we have been of two minds. For our 3D printing robot, which we call the BEAR system — in case I mention that later — we’ve made everything public. I don’t necessarily expect many people to duplicate it exactly, because not everyone has the same experimental goals. But we are helping at least one team who is trying to replicate it. We’ve been sharing code and resources with them.
What’s more likely is that people will adopt modular components or subsystems and combine them in their own ways. There are great open-source communities out there. One standout is the Jubilee Project. It looks like a 3D printer, but the tool head can be swapped automatically with others — so you can go from printing to fluid handling to imaging, all on one platform. It’s a very versatile experimental workflow developed by Nadya Peek and collaborators at the University of Washington.
That kind of modular, open hardware design has inspired us. We’ve also released some of our own modules. For example, we developed a mechanical testing platform for use with 96-well plates. You can download the files and build it for around $400. I know of at least two being built right now, which is incredibly rewarding. It shows that the time spent on documentation really pays off.
The broader self-driving lab community is increasingly built on this philosophy: hardware that works, that you can modify and share. That same ethos is very visible in the software space. A lot of the code is written in Python and shared openly on GitHub. Hardware lags a little behind in that regard, but more people are embracing it. Platforms like Opentrons have done a good job at this — their whole model is to be open and accessible.
Charles:
Something I’ve always wondered: when I see someone release an open-source hardware module for scientific experiments, I’m glad those communities exist to support and encourage that work. Especially when the tool can generalize across many kinds of self-driving labs. My suspicion is that while open-sourcing designs helps lower the barrier to entry, it still requires a lot of upfront effort. Do you think there’s room for a marketplace where these tools could be produced at scale, maybe commercially? Something more plug-and-play?
Keith Brown:
That’s a fascinating idea. I think you’re right — there’s definitely a price point where it would make more sense to buy certain components rather than build them yourself.
We’ve talked about this in my group. One system we developed is called PANDA, a low-cost fluid handling and electrochemistry platform that we use for polymer electrodeposition. We collaborated with another lab here at BU, and people kept asking for it. We ended up building a few and distributing them to collaborators. Now there are several operating on campus.
So we’ve asked ourselves: should we spin this out into a company? Could we sell these? Financially, I think there’s a market. But building a hardware startup is tougher than a software one. The upfront costs are higher, and the ecosystem isn’t as mature. But I think it’s worth exploring.
If you consider the student labor and time it takes to build an open-source system, even when you follow detailed instructions, it might actually be cheaper to just buy one at twice or three times the cost. Of course, one of the reasons we build things in-house is the pedagogical value. Students learn a lot from assembling, modifying, and understanding how these systems work.
So I don’t think the answer is always to commercialize. In our group, building hardware is part of what we love to do. But from the perspective of expanding impact, it would be amazing to say, “If you want this, here’s the catalog number — we’ll ship it to you.” That would be very exciting.
Charles:
It’s really interesting that there’s already this latent demand, and your lab is kind of a mini factory distributing this equipment.
Keith Brown:
I wouldn’t call it a factory, for legal reasons. We don’t sell anything. We co-develop instruments with collaborators and share them.
Charles:
Fair enough. I guess it’s not exactly venture-backable either. The market for scientific equipment is small, and as you said, rich mostly in community and spirit. But maybe something for a funder to think about if they’re listening.
Will self-driving labs be more centralized in large facilities or decentralized across many small labs? (29:40)
Charles:
Moving to a broader question: do you think self-driving labs will become more centralized, like large shared facilities, or remain distributed, where labs build and run their own setups? I know it’s a bit abstract, but I’m curious where you think the field is heading. Is it currently leaning more toward centralized development in large labs with significant capital, or more toward distributed boutique systems?
Keith Brown:
That’s a big debate in the community. If you look at the literature, most papers that report results from self-driving labs still come from smaller, decentralized setups. There are a few large efforts in the US and globally, but many are still getting off the ground. These are major infrastructure investments, tens or even hundreds of millions of dollars.
In contrast, moving from running experiments to automating those experiments within a research group is a pretty natural progression. The students already understand the experiments, the equipment, the edge cases. I worry that in a fully centralized model, students might plan an experiment, submit it to a facility, get results back, and not understand what went wrong — because they never did the experiment themselves.
People often draw analogies to centralized computing clusters, but the difference is that with computing, you can test the code on your own machine. If you can’t run any part of the experiment in your lab, you’re stuck interpreting black-box results. That’s not ideal.
Also, a lot of the innovation in self-driving labs is coming from people who are building the systems themselves. Most scientific instruments today are built for humans — microscopes, pipettes, everything. But the optimal scale for experimentation is often smaller, faster, and more precise. This is something a robot can handle better than a human. If we want to change the format of scientific infrastructure, we need to be experimenting in labs, not just in centralized hubs.
That said, there is definitely room for shared resources, especially for specialized processes. We’re actually turning our own BEAR system into a kind of centralized service. So if a lab wants to study mechanics but doesn’t have the capacity to do thousands of tests, they could send their samples to us and get consistent, high-quality data in return.
So there’s a spectrum. Decentralized labs can evolve into shared services. You don’t need a $100 million facility to make that happen. The field doesn’t need to be concentrated in a few elite locations. It should be spread across the whole country, and the whole world.
Charles:
Earlier, when we talked about all these different labs building their own equipment, it reminded me of the early days of computing. Back then, the first electronic computers were built by university groups, and each group had its own unique design. That led to a lot of innovation and a tight coupling between computing and scientific research.
Today, we think of supercomputers and compute clusters as centralized resources. But even then, each university usually has its own compute cluster, and some labs have their own. There’s a kind of hierarchy of flexibility and scale.
I think that’s a helpful metaphor for self-driving labs. In some ways, they're like compute clusters, or maybe like simulation software such as VASP, which implements DFT [density functional theory]. There are groups that focus entirely on DFT, but it’s also a tool that any group can use. It scales from a laptop to a supercomputer. I feel like that’s one of the key questions: how do we conceptualize self-driving labs? I think your example helped clarify that a lot.
Keith Brown:
Thanks. And I will say that our 3D printing system is, in many ways, one of the easiest examples to understand. You could walk into a middle or high school and find a 3D printer. If you told students, "Design something with mechanical properties—maybe it can hold your weight, but not two people’s," they could understand that. They can physically hold it, design it, and see it tested in a central facility.
That kind of tactile connection makes it much easier to understand than, say, a chemical reaction, which requires a bit more abstraction. That extra layer of abstraction makes it harder to communicate.
What does human-machine collaboration look like? And how do you train students to work in this field? (36:10)
Charles:
That leads nicely into a topic I wanted to explore, namely how humans interact with self-driving labs. You ran a campaign with 25,000 samples. Because of the broader AI discourse, I think people often imagine self-driving labs as automating science entirely. For that campaign and more broadly, how do you think about the role of the human researcher? What does human-machine collaboration look like?
Keith Brown:
That’s a great question and a very active area of research.
First off, there are no labs today that are truly self-directed. Just like self-driving cars still need a human to say where to go, self-driving labs still need people to define goals, craft experiments, and choose parameters. All of that is still set manually, especially at the beginning of a campaign.
That works well when the campaign lasts a few days. But during our multi-year campaign, we realized something interesting. If the system is running for weeks, you start to notice it making choices that may or may not align with your priorities. So every week we would check in (usually me and Kelsey, the graduate student leading the project) and review what the system was doing and whether we should tweak anything.
We weren’t in the loop, because the lab could run without us, but we were on the loop. We made decisions when we wanted to. That style of interaction will become more common as these campaigns get longer. You’re not just running 100 benchmark experiments; you’re searching for a new superconductor or a new lasing molecule over months or even a year.
In that context, the self-driving lab becomes another voice in the research group. And for the human researchers, it can be a more elevated experience. Kelsey, for example, had an experience much more like a PI than a typical graduate student. Instead of running individual experiments, she was thinking about learning strategies, campaign design, and data interpretation.
It was intellectually enriching. We explored optimization theory, machine learning, and human-computer interaction. We even wrote a paper about it called Driving School for Self-Driving Labs. The analogy was that these systems are like autonomous cars—not quite fully self-driving, but advanced enough to require new modes of engagement from the human operator. We wanted to document what those interactions look like and the decisions people still need to make.
Charles:
That’s a great example. It really elevates the abstraction level for graduate students. Instead of spending all their time running individual experiments, they can focus on campaign design and data interpretation.
That leads to a broader question: how do you train students to work in this kind of lab? The skill set seems to be expanding. Are there specific qualities you look for when recruiting grad students? And how do you help them build the combination of hardware, software, AI, and domain expertise that’s now required?
Keith Brown:
Great question. The number one trait I look for is tinkering. I want students who build things in their spare time — woodworking, 3D printing, electronics, coding — anything creative and technical. That shows a willingness to pick up new skills and apply them.
Once someone has that mindset, it’s easier to help them integrate those skills into research.
In terms of training, it’s definitely a challenge. Education is, in some ways, a zero-sum game. If you want someone to learn machine learning and robotics, something else has to give. But you can’t sacrifice core domain expertise. If a student is studying mechanics, they need to understand mechanics deeply. The same goes for materials science, chemistry, whatever the core application is.
That said, there absolutely needs to be room for statistical learning, machine learning, and optimization methods like Bayesian optimization. These should be taught at every level, from high school through graduate school. They are foundational skills across disciplines and are not taught widely enough yet.
Even simple machine learning techniques can be introduced with great tutorials. The deeper subtleties, like choosing and tuning hyperparameters, only come with experience. I can’t count how many times I’ve had a conversation with a student who says their model isn’t learning, and it turns out they haven’t touched the hyperparameters.
There’s a lot to learn, but much of it can be picked up by doing: through research experience and hands-on hardware work. I think I’ve been lucky to be in a mechanical engineering department. A lot of the students I work with are naturally inclined toward hardware and have training in robotics and physical systems.
The easy answer to building interdisciplinary teams is to collaborate—bring in some CS folks interested in AI and hardware folks from mechanical engineering. In principle, that’s great. But at the end of the day, everyone still has to write a thesis. So it’s not that simple. You can’t always fund a large team. If you’ve got a small team, people need to wear multiple hats.
Everyone on the team needs to be proficient enough in all these areas to hold conversations and understand trade-offs. There's also a lot of room for expanding skill sets through non-traditional experiences: bringing hobbies into the lab, watching YouTube videos, or running through lots of Python tutorials. A lot of learning can come from just doing.
Charles:
Yeah, I think it's great you mentioned earlier that you look for graduate students who are tinkerers. It really feels like we’re reviving the spirit of tinkering in graduate school, which is kind of what it was originally about.
What do you think will be the last thing AI automates in your field? (44:45)
Charles:
Maybe one last question we sometimes close with. What do you think is the last thing AI will be able to accomplish or automate in your field?
Keith Brown:
Right. The funny thing about AI and computation is that what machines find hard is often different from what we find hard. A calculator can instantly find the square root of a huge number, yet until recently computers struggled to identify whether a photo contained a bird. So I think there's a mismatch between what’s human-hard and what’s computer-hard.
In my field, mechanics and materials discovery, I think some of the most difficult challenges will be what we call "out-of-distribution" events. These are situations where evidence conflicts, and you need a new mental model to make sense of it. Think of paradigm-shifting discoveries like the photoelectric effect or the heliocentric model. Those moments require not just data, but new frameworks.
AI will likely struggle with that for a long time. And frankly, people do too. It takes millions of scientists to have a single breakthrough moment. That’s the kind of synthesis that’s still extremely difficult.
That said, there are things we consider hard — like reviewing a technical paper — that AI might actually be good at. Maybe not judging novelty, but certainly evaluating technical soundness. Hopefully, we can use AI to amplify our ability to parse massive amounts of knowledge and make better decisions faster. But that final leap, from mountains of data to a new paradigm — that’s going to remain challenging for AI.
Charles:
Awesome. All right, Keith, thanks so much for joining us.
Keith Brown:
Thanks, Charles. This was a really fun conversation.