
Today, the Artificial Intelligence Underwriting Company (AIUC) is emerging from stealth with a $15 million seed round led by Nat Friedman at NFDG, with participation from Emergence, Terrain, and notable angels including Anthropic cofounder Ben Mann and former CISOs from Google Cloud and MongoDB. The company’s goal? Build the insurance, audit, and certification infrastructure needed to bring AI agents safely into the enterprise world.
That’s right: Insurance policies for AI agents. AIUC cofounder and CEO Rune Kvist says that insurance for agents—that is, autonomous AI systems capable of making decisions and taking action without constant human oversight—is about to be big business. Kvist was the first product and go-to-market hire at Anthropic in 2022. His founding team also includes CTO Brandon Wang, a Thiel Fellow who previously founded a consumer underwriting business, and Rajiv Dattani, a former McKinsey partner who led work in the global insurance sector and served as COO of METR, a research nonprofit that evaluated OpenAI’s and Anthropic’s models before deployment.
Creating financial incentives to reduce the risks of AI agent adoption
At the heart of AIUC’s approach is a new risk and safety framework called AIUC-1, designed specifically for AI agents. It pulls together existing standards and regulations such as the NIST AI Risk Management Framework, the EU AI Act, and MITRE’s ATLAS threat model—then layers on auditable, agent-specific safeguards. The idea is simple: make it easy for enterprises to adopt AI agents with the same kind of trust signals they expect in cloud security or data privacy.
“The important thing about insurance is that it creates financial incentives to reduce the risk,” Kvist told Fortune. “That means that we’re going to be tracking, where does it go wrong, what are the problems you’re solving. And insurers can often enforce that you do take certain steps in order to get certified.”
While other startups are also working on AI insurance products, Kvist said none are building the kind of risk-preventing, agent-specific standard that AIUC-1 represents. “Insurance & standards go hand-in-hand to create confidence around AI adoption,” he said.
“AIUC-1 creates a standard for AI adoption,” said John Bautista, a partner at law firm Orrick who helped create the standard. “As businesses enter a brave new world of AI, there’s a ton of legal ambiguities that hold up adoption. With new laws and frameworks constantly emerging, companies need one clear standard that pulls it all together and makes adoption massively simple,” he said.
A need for independent vendors
The story of American progress, Kvist added, is also a story of insurance. Benjamin Franklin founded the country’s first mutual fire insurance company in response to devastating house fires. In the 20th century, specialized players like Underwriters Laboratories (UL) emerged from the insurance industry to test the safety of electric appliances. Car insurers built crash-test standards that shaped the modern auto industry.
AIUC is betting that history is about to repeat itself. “It’s not Toyota that does the car crash testing, it’s independent bodies,” Kvist pointed out. “I think there’s a need for an independent ecosystem of companies that are answering [the question], can we trust these AI agents?”
To make that happen, AIUC will offer a trifecta: standards, audits, and liability coverage. The AIUC-1 framework creates a technical and operational baseline. Independent audits test real-world performance—by trying to get agents to fail, hallucinate, leak data, or act dangerously. And insurance policies cover customers and vendors in the event an agent causes harm, with pricing that reflects how safe the system is.
If an AI sales agent accidentally exposes a customer’s personally identifiable information, for example, or if an AI assistant in finance fabricates a policy or misquotes tax information, this type of insurance policy could cover the fallout. The financial incentive, Kvist explained, is the point. Just as consumers get a better car insurance rate for having airbags and anti-lock brakes, AI systems that pass the AIUC-1 audit could get better insurance terms. That pushes AI vendors toward better practices, faster—and gives enterprises a concrete reason to adopt sooner, before their competitors do.
Using insurance to align incentives
AIUC’s view is that the market, not just government, can drive responsible development. Top-down regulation is “hard to get right,” said Kvist. But leaving it all to companies like OpenAI, Anthropic, and Google doesn’t work either—voluntary safety commitments are already being walked back. Insurance, he explained, creates a third way that aligns incentives and evolves with the technology.
Kvist likens AIUC-1 to SOC 2, the security compliance standard that gave startups a way to signal trust to enterprise buyers. He imagines a world in which AI agent liability insurance becomes as common—and as necessary—as cyber insurance is today, and predicts a market worth $500 billion by 2030, eclipsing even cyber insurance.
AIUC is already working with several enterprise customers and insurance partners (the company said it could not disclose their names yet), and is moving quickly to become the industry benchmark for AI agent safety.
Investors like Nat Friedman agree. As the former CEO of GitHub, Friedman saw the trust issues firsthand when launching GitHub Copilot. “All his customers were wary of adopting it,” Kvist recalled. “There were all these IP risks.” As a result, Friedman had been looking for an AI insurance startup to back for a couple of years. After a 90-minute pitch meeting, Friedman said he wanted to invest—which he did, in a seed round that closed in June, before he moved to join Alexandr Wang at Mark Zuckerberg’s new Meta Superintelligence Labs.
In a few years, said Kvist, insuring AI agents will be mainstream. “These agents are making a much bigger promise, which is ‘we’re going to do the work for you,’” he said. “We think the liability becomes much bigger, and therefore the interest is much bigger.”