AI assurance the aim of new DARPA program
News | February 11, 2026
ARLINGTON, Va. - The Defense Advanced Research Projects Agency (DARPA) has launched what it calls the Compositional Learning-And-Reasoning for AI Complex Systems Engineering (CLARA) program, inviting proposals for high-assurance artificial intelligence (AI) research.
According to DARPA's initial solicitation, CLARA aims to create a theory-driven, algorithmic, highly reusable, and scalable foundation for high-assurance, broadly applicable AI components used across defense and commercial domains. The agency notes that the current industry approach to AI is to tack specialized automated reasoning (AR) components onto a large language model (LLM) or similar machine learning (ML) system. These ML-centric systems, DARPA officials say, typically offer weak assurance and lack real safeguards.
With the CLARA research program, DARPA aims to tightly integrate AR and ML components into high-assurance AI, uniting the two branches of AI so that the speed and flexibility of ML is paired with verifiability grounded in AR proofs, which offer strong logical explainability and computational tractability.
CLARA is anticipated, according to the DARPA announcement, to create powerful methods for the hierarchical, fine-grained, highly transparent composition of important kinds of ML and AR components, including Bayesian networks, neural networks, and logic programs.
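As a loose illustration of the ML-AR composition pattern the announcement describes, the sketch below pairs a learned model's confidence score with a hard logical safeguard, so that an action is taken only when both components agree. Every name, rule, and data field here is hypothetical and invented for illustration; none of it comes from the CLARA solicitation.

```python
# Hypothetical sketch: an ML component proposes, an AR component verifies.
# All names and rules are illustrative, not drawn from CLARA.

def ml_confidence(track: dict) -> float:
    """Stand-in for a learned model: scores how suspicious a track is."""
    # Toy heuristic used in place of a trained network.
    score = 0.0
    if track["speed"] > 300:
        score += 0.5
    if track["transponder"] == "off":
        score += 0.4
    return min(score, 1.0)

def ar_check(track: dict) -> bool:
    """Stand-in for automated reasoning: a logical safeguard that must
    hold before the ML output is acted on."""
    # Invented rule: never flag a track inside a declared safe corridor.
    return not track["in_safe_corridor"]

def composed_decision(track: dict, threshold: float = 0.8) -> bool:
    """The composition: the ML score proposes, the AR rule can veto."""
    return ml_confidence(track) >= threshold and ar_check(track)

fast_dark = {"speed": 400, "transponder": "off", "in_safe_corridor": False}
in_corridor = {"speed": 400, "transponder": "off", "in_safe_corridor": True}
print(composed_decision(fast_dark))    # True: high ML score, AR rule holds
print(composed_decision(in_corridor))  # False: AR safeguard vetoes the ML score
```

The point of the sketch is the shape of the composition, not the toy rules: the symbolic check is a provable constraint the ML component cannot override, which is the kind of verifiable safeguard the solicitation contrasts with purely ML-centric systems.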
