Campbell Brown has spent her career chasing accurate information, first as a well-known TV journalist, then as Facebook’s first and only dedicated head of news. Now, as she watches AI reshape how people consume information, she sees history threatening to repeat itself. This time, she’s not waiting for someone else to fix it.
Her company, Forum AI, which she recently discussed with TechCrunch’s Tim Fernholz at a StrictlyVC night in San Francisco, evaluates how foundation models fare on what she calls “high-stakes topics”: geopolitics, mental health, economics, and employment, areas where “there aren’t clear yes-or-no answers” and where the truth is nuanced and complex.
The idea is to find the world’s leading experts, have them architect benchmarks, and then train AI judges to evaluate models at scale. For Forum AI’s geopolitical work, Brown has recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Antony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is to get the AI judges to roughly 90% agreement with the human experts, a threshold she says Forum AI has been able to reach.
Brown traces the origins of Forum AI, founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released publicly,” she recalled, “and I remember realizing really shortly after that this is going to be the funnel through which all information flows. And it’s not very good.” The implications for her own children made the moment feel almost existential. “My kids are going to be really stupid if we don’t figure out how to fix this,” she recalled thinking.
What frustrated her most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” whereas news and information are harder to get right. But harder, she argued, does not mean optional.
In fact, when Forum AI began evaluating the leading models, the results were not exactly encouraging. She cited Gemini pulling from Chinese Communist Party websites “for stories that have nothing to do with China,” and noted a left-wing political bias across nearly all models. There are also plenty of subtler failures, she said, including missing context, missing perspectives, and strawman arguments presented without acknowledgment. “There’s a long way to go,” she said. “But I also think there are some very easy fixes that would significantly improve the results.”
Brown spent enough years at Facebook to see firsthand what happens when a platform optimizes for the wrong thing. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she built no longer exists. The lesson, though social media has largely turned a blind eye to it, is that optimizing for engagement has been bad for society and has left many people less informed.
Her hope is that AI can break that cycle. “Right now it could go either way,” she said; companies could give users what they want, or they could “give people what’s real and what’s honest and what’s truthful.” She acknowledged that the idealistic version of that, an AI optimized for truth, might sound naïve. But she believes business can be an unlikely ally here. Companies that use AI for credit decisions, lending, insurance, and hiring care about liability, and “they want you to optimize to get it right.”
That corporate demand is also what Forum AI is betting its business on, although turning compliance interest into consistent revenue remains a challenge, especially given that much of the current market is still satisfied with checkbox audits and standardized benchmarks, which Brown considers insufficient.
The compliance landscape, she said, is “a joke.” When New York City passed the first law requiring bias audits of AI hiring tools, the state comptroller found that more than half of those audited had violations that went undetected. Real evaluation, she said, requires domain expertise to work through not only familiar scenarios but also the edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists won’t cut it.”
Brown, whose company last fall raised $3 million in a round led by Lerer Hippeau, is uniquely positioned to describe the disconnect between the AI industry’s self-image and the experience of most users. “You hear from the heads of the big tech companies, ‘this technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,’” she said. “But then a normal person just using a chatbot to ask basic questions still gets a lot of nonsense and wrong answers.”
Trust in AI is extraordinarily low, and she believes that skepticism is often justified. “The conversation is kind of happening in Silicon Valley about one thing, and a completely different conversation is happening among consumers.”
