
Collective Intelligence Project
The Collective Intelligence Project builds platforms that allow large groups of people to participate in decisions about AI systems. We partner with AI labs and governments to evaluate frontier models on questions automated benchmarks can’t capture — e.g. whether models respond safely to mental health crises, provide reliable legal advice across languages, or exhibit political bias. We’re building the democratic infrastructure for AI: evaluation platforms, systems for collecting and aggregating human judgment data at scale, interfaces for interpreting evaluation results, and tools for large-scale public deliberation about AI.