Building an AI infrastructure for biotech. Bacteria that munch on cancer. How to best make a cold brew. All that and more in this week’s edition of The Prototype. To get it in your inbox, sign up here.
For labs at universities, biotech firms and big pharma companies alike, AI models are offering new possibilities for drug discovery.
Using those models, though, is rarely as simple as using a chatbot. They are generally fitted to different tasks, whether that’s predicting protein structures, optimizing chemical reactions or any of the other myriad tasks involved in researching new medicine. What’s more, they typically require additional training and a level of machine learning sophistication that many biologists lack.
That’s where Tamarind Bio steps in. It’s built a software infrastructure that enables scientists to seamlessly use multiple machine learning models. Think of it kind of like the Windows operating system, enabling scientists to use each model as just another application. The platform also enables researchers to program workflows, train models on their data and even integrate them into their own laboratories to conduct experiments that validate their findings.
Basically, cofounder Deniz Kavi told me, his company’s software allows scientists to use AI “without worrying about the infrastructure side or the plumbing, or just the painful software work that they don’t need to be dealing with on a day-to-day basis.”
Scientists not wanting to deal with the hassle of AI is Tamarind’s origin story. The original software was a project that Kavi worked on with his cofounder, Sherry Liu, for their lab at Stanford. They built the software simply to make it easier for their colleagues in the lab to use AI tools. “Basically it was a website to run some of the models we were using,” he said. “Then that caught on just from a word of mouth thing.”
That influx of interest led Kavi and Liu to found Tamarind two years ago. It was incubated at Y Combinator, and today the company announced it has raised a $12 million series A round, bringing its total investment backing to $13.6 million. The company doesn’t really need the capital, Kavi told me, because Tamarind Bio has been cash-flow positive almost since inception. That’s because its software is used not only in academic labs around the world but also at pharmaceutical and biotech companies like Boehringer Ingelheim, Bayer, Mammoth Biosciences and Adimab.
That said, the company does plan to take advantage of the investor interest to double its headcount from the current 12, keep improving its software and handle its growth–which was 700% over the past year.
Nan Li, an investor at venture firm Dimension Capital, which led the series A, said the most compelling thing about that growth statistic is that it was accomplished without a sales staff. This “is a story of a company that’s just growing because the product kicks ass and not because they raised tons of venture dollars or they’re super famous founders,” he told me.
Li doesn’t see challengers on the horizon for Tamarind right now–its biggest competition, he said, is mostly the in-house software that biotech companies build for themselves to handle models. He’s not worried about those, either: such “homegrown versions are brittle,” he said, eventually producing demand for Tamarind’s solution.
Kavi, for his part, is less worried about competition and more focused on what scientists working on new treatments and medicines can do with the tools his company builds. Tamarind, he said, wants “to be the single place where all computation happens on anything before human trials.”
Discovery of the Week: Programming Bacteria To Eat Cancer
Soil bacteria might be the key to treating some kinds of cancer, thanks to research at the University of Waterloo.
The middle of a tumor is an ideal place for a species of bacteria called Clostridium sporogenes to grow. Tumors are full of dead cells the bacteria can eat, and their centers are an oxygen-free space, which is good because oxygen kills Clostridium. But as the bacteria move further away from the core, they’re more likely to be exposed to oxygen and die before they can completely wipe out the cancer.
The good news is that the researchers solved this problem by genetically engineering a strain of Clostridium to be able to survive in an oxygen-rich environment, so it can keep munching on tumor cells. The slightly less good news is that if the bacteria are given to a patient as a treatment and inadvertently end up somewhere else, that could turn into an infection.
To fix that problem, they equipped a second batch with a genetic “safety switch” that requires the bacteria to start growing in a tumor before the oxygen-resistance gene turns on, and tested the switch by rigging it to produce a fluorescent protein rather than the oxygen-resistance one.
For their next step, the research team plans to combine the two genes in one strain of bacteria and start testing it on cancer cells to see if it’s as deadly–to tumors–as they hope.
How Reliable Is Military AI?
Secretary of Defense Pete Hegseth has given AI giant Anthropic until Friday to comply with the Pentagon’s demand that the military be allowed to use the company’s software for any lawful purpose. Currently, Anthropic doesn’t allow its models to be used for autonomous warfare or domestic surveillance.
In changes to its safety policies this week, Anthropic said it has to separate what it can realistically do itself from what requires industry and government buy-in. It argued that AI is currently being developed in an “anti-regulatory political climate” that prioritizes “AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”
Pondering the current reliability of AI models is sobering when it comes to military applications, especially considering the DOD uses the same models that told me I should slather vanilla Greek yogurt on my toast and top it with an egg for breakfast (Yuck). Earlier this week, we were exposed to the spectacle of the person in charge of Meta’s AI safety losing her entire email inbox to an errant AI agent, which doesn’t bolster confidence in using AI during stressful combat situations—or even sensitive planning scenarios.
It gets worse. In a paper published this week on the preprint server arXiv, researchers at King’s College London pitted major AI models against each other in simulated war games. The result? The models were far more aggressive than humans. Not only were they unlikely to surrender even in dire circumstances, but they also rarely de-escalated. In 95% of the games, the models resorted to nuclear weapons. (Somebody train them on Tic-Tac-Toe, stat.)
Look, modern AI systems can be incredible tools. But using them well requires strong guardrails and strict governance, especially in life-or-death situations, and right now it’s not at all clear how the Pentagon–and many other organizations–plan to do that.
The Hot Take: Investors Are Taking For Granted That Healthcare AI Will Scale
Each week, I ask an investor for their take on trends within their industries. Today I’m featuring thoughts from Terri Burke, a senior partner at Intuitive Ventures, which invests in healthcare startups focused on minimally invasive care.
What is being overhyped right now?
Healthtech valuations disconnected from healthcare economics. The assumption that anything with AI will successfully navigate the complex reality of healthcare—getting paid by insurance, integrating into clinical workflows, and scaling beyond a pilot customer (90% of which will never scale).
What should more people be talking about today?
Coronary microvascular dysfunction—a major cause of chest pain and heart failure with preserved ejection fraction that affects millions and has virtually no treatments. Our portfolio company Vahaticor is in a clinical study to understand and treat the disease in a new way.
What are we all going to be talking about in five years?
Where care happens. The hospital-to-outpatient shift will accelerate dramatically—not just for simple procedures, but for complex interventions we assume require hospitals today. Economics will force this shift: healthcare can’t afford hospital-based delivery for everything, and patients prefer not being in hospitals. Five years from now, we’ll be surprised by what we’re capable of doing in outpatient centers.
On My Radar
Self-Driving Software: U.K. company Wayve just raised $1.5 billion in investment to expand its operations. The company is developing “plug and play” self-driving software that can fit into a variety of vehicles, meaning it doesn’t have to worry about building its own fleet of cars. That strategy, the company’s CEO told my colleague Alan Ohnsman, “is the most scalable business model.”
Next Gen Data Centers: Microsoft is exploring using superconductors at its data centers in place of copper wiring, which would reduce their power requirements. (The potential gains are substantial, though superconductors do require liquid nitrogen to stay cool.) Meanwhile, startup Sophia Space just raised $10 million to tackle building data centers in space. The company says it has developed a system that can solve one of the biggest challenges for computation in space–getting rid of the heat.
Pro Science Tip: You Can Make Cold Brew Overnight
If you make your own cold brew, you probably know the rule of thumb: steep it for about 16 to 18 hours. Science suggests you don’t need all that time–it depends on the grind size. A recent study found that an extremely fine grind (30 mesh) reaches maximum extraction in just about six hours. But better chemistry isn’t always better flavor. The best taste, the researchers found, came from a coarser 20-mesh grind steeped for about eight hours. That’s still good news, though–it means you can start up a brew right before bedtime and have it ready in the morning.
What’s Entertaining Me This Week
I’m currently reading Operation Bounce House by Matt Dinniman (who also authors the Dungeon Crawler Carl books). The premise is pretty simple: In a sci-fi future, a mercenary company has been hired by Earth’s government to evict the inhabitants of a colony planet. Their method for doing so? Letting people buy the opportunity to kill some “terrorists” by remote-controlling giant robots, with extra options to customize them (for a fee, of course). I saw Dinniman at a recent tour stop promoting the book, where he said he was inspired by trash-talking teenagers in a Call of Duty lobby. I’m a little over halfway through but can’t put it down. It is at once hilarious, bleak and very timely.