Active research projects
-
Assurance-Driven AI Development (ADAD)
In partnership with Apart Research
This research initiative is developing Assurance-Driven AI Development (ADAD), a framework that systematically integrates safety mechanisms throughout the AI lifecycle. We are investigating quantifiable metrics for evaluating AI system robustness against emergent risks, verifiable governance structures for high-capability models, and standardized testing protocols that preserve safety properties as capabilities scale.
Potential research outputs include collaborative methodologies for cross-institutional safety verification, prototype monitoring tools for alignment preservation, comprehensive taxonomies of AI safety failure modes with mitigation strategies, and empirical validation approaches for safety guarantees. By addressing critical gaps in current AI governance approaches, we aim to contribute evidence-based best practices that balance innovation with rigorous safety assurance, helping to establish a foundation for maintaining human control as AI systems continue to advance in capabilities.
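As a purely illustrative sketch of what one machine-readable taxonomy entry could look like, the snippet below records a failure mode alongside candidate mitigations; the schema, field names, severity scale, and example content are our own assumptions, not project outputs.

```python
from dataclasses import dataclass, field


@dataclass
class FailureMode:
    """One hypothetical entry in a taxonomy of AI safety failure modes."""
    name: str                   # short identifier, e.g. "reward_hacking"
    description: str            # what goes wrong and under which conditions
    lifecycle_stage: str        # where it tends to arise: "training", "evaluation", "deployment"
    severity: int               # coarse 1-5 rating (assumed scale, not a standard)
    mitigations: list[str] = field(default_factory=list)  # candidate mitigation strategies


# Example entry -- illustrative content only.
reward_hacking = FailureMode(
    name="reward_hacking",
    description="The system optimizes a proxy objective in ways that violate the intended goal.",
    lifecycle_stage="training",
    severity=4,
    mitigations=["adversarial reward probing", "held-out behavioral evaluations"],
)
```

A shared entry format along these lines would make mitigation strategies comparable across institutions, though the actual taxonomy design remains an open question for the project.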
-
MetaAlign: Distributed Coordination Framework for Scalable AI Safety Research
In partnership with Apart Research
The MetaAlign initiative proposes a meta-scientific framework for accelerating high-impact AI safety research through distributed coordination, addressing the challenge of allocating researcher attention across an expanding landscape of AI risks. The project integrates computational tools for identifying research gaps, cross-disciplinary collaboration platforms, and empirical validation methodologies. Building on these, it will develop quantitative metrics for assessing research impact and a hybrid sprint-based experimentation model that converts theoretical proposals into rigorous evaluations across interpretability, alignment benchmarking, and capability oversight. Combined with meta-analysis of completed sprints, the framework aims to become a self-improving research ecosystem that increases both the velocity and the directionality of safety contributions, scaling collaborative research while establishing standardized protocols for distributed scientific investigation that can transfer to other emerging technology domains.
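To make the idea of quantitative impact metrics concrete, here is a minimal sketch of how candidate research directions might be scored and ranked for a sprint; the factors, weights, and example values are illustrative assumptions rather than the project's actual methodology.

```python
from dataclasses import dataclass


@dataclass
class ResearchDirection:
    name: str
    neglectedness: float   # 0-1, how underexplored the area is (assumed factor)
    tractability: float    # 0-1, likelihood a sprint produces a usable result
    risk_reduction: float  # 0-1, estimated contribution to reducing AI risk


def impact_score(d: ResearchDirection,
                 weights: tuple[float, float, float] = (0.3, 0.3, 0.4)) -> float:
    """Weighted sum over illustrative factors; a real metric would need empirical validation."""
    w_n, w_t, w_r = weights
    return w_n * d.neglectedness + w_t * d.tractability + w_r * d.risk_reduction


candidates = [
    ResearchDirection("interpretability probes", 0.6, 0.8, 0.5),
    ResearchDirection("alignment benchmarking", 0.4, 0.7, 0.6),
    ResearchDirection("capability oversight", 0.7, 0.5, 0.8),
]

# Rank candidate directions for the next research sprint.
for d in sorted(candidates, key=impact_score, reverse=True):
    print(f"{d.name}: {impact_score(d):.2f}")
```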
-
VentureSpan: Evidence-Based Translation from Research Insight to Product
In partnership with Seldon Accelerator and Juniper Ventures
We're investigating the meta-processes of AI safety commercialization. By treating the translation pipeline itself as our research focus, we're uncovering the hidden factors that enable safety-focused startups to bridge the gap between academic innovation and commercial viability.
We conduct hypothesis testing on venture formation strategies, structured experiments with founder-matching methodologies, and analysis of market-readiness signals for emerging safety research results. A big part of the effort is assembling and rigorously validating leading indicators of success, spanning the path from catalyzing a founding team around a research insight to reaching product-market fit.
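As a sketch of what tracking such leading indicators could look like in practice, the toy example below records a few signals per venture and combines them into a composite readiness score; the indicator names, weights, and thresholds are hypothetical, not validated metrics from this work.

```python
from dataclasses import dataclass


@dataclass
class VentureSignals:
    """Hypothetical leading indicators for a safety-focused venture."""
    name: str
    founder_researcher_fit: float   # 0-1, strength of the founder/insight match
    pilot_customers: int            # paying or committed pilot users
    time_to_prototype_weeks: float  # speed from research insight to working demo


def market_readiness(v: VentureSignals) -> float:
    """Toy composite score; real indicators would be validated against observed outcomes."""
    speed = 1.0 / (1.0 + v.time_to_prototype_weeks / 12.0)   # faster prototyping scores higher
    traction = min(v.pilot_customers / 5.0, 1.0)             # saturates at 5 pilot customers
    return round(0.4 * v.founder_researcher_fit + 0.3 * traction + 0.3 * speed, 2)


example = VentureSignals("eval-tooling startup", founder_researcher_fit=0.8,
                         pilot_customers=3, time_to_prototype_weeks=10)
print(market_readiness(example))  # prints 0.66 for these illustrative inputs
```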
-
maybe soon: YOUR IDEA
We want to support and collaborate with traditional and non-traditional researchers and institutions working in our areas of focus - don’t hesitate to reach out!