Toronto-based Formic AI has unveiled Boreal, an explainable language model designed for organizations that require verifiable, auditable AI outputs in high-stakes professional environments.
Unlike conventional large language models that rely primarily on probabilistic text generation, Boreal converts unstructured documents into a structured knowledge graph. Each response is cross-checked against original source materials before being delivered, with reasoning steps logged so users can trace how an answer was produced from start to finish.
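The workflow described above, extracting facts into a graph, answering from that graph, and logging each step back to its source, can be sketched in miniature. This is an illustrative toy only; Formic AI has not published Boreal's implementation, and every class and field name here is hypothetical:

```python
# Toy "grounded answer" pipeline mimicking the described workflow:
# extract facts into a graph, answer from the graph, and keep an
# audit trail linking each step back to a source document.
# All names here are invented for illustration, not Boreal's API.
from dataclasses import dataclass, field

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str
    source: str  # document the fact was extracted from

@dataclass
class KnowledgeGraph:
    facts: list = field(default_factory=list)

    def add(self, fact: Fact) -> None:
        self.facts.append(fact)

    def query(self, subject: str, relation: str):
        """Return matching facts plus a log of the reasoning steps taken."""
        trail = [f"lookup: ({subject}, {relation}, ?)"]
        hits = [f for f in self.facts
                if f.subject == subject and f.relation == relation]
        for h in hits:
            trail.append(f"matched: {h.obj} (source: {h.source})")
        return hits, trail

kg = KnowledgeGraph()
kg.add(Fact("Boreal", "deployed_in", "air-gapped environments",
            "press_release.txt"))
hits, trail = kg.query("Boreal", "deployed_in")
# Every answer carries its trail, so a reviewer can trace it end to end.
```

The point of the sketch is the return shape: the answer never travels without the trail that produced it, which is the auditability property the company emphasizes.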
Formic AI says this approach shifts AI risk away from opaque “black box” outputs toward evidence-based interpretation. “Trust comes from being able to check the answer and the path that produced it,” said CEO Daniel Escott, adding that Boreal provides clear links back to source documents and a simple audit trail for review.
The platform combines a neuro-symbolic engine for precise, auditable analysis with a generative component for drafting and conversational tasks, according to CTO Varun Ranganathan. Deterministic checks are built in to block unsupported statements before they reach users, while a precomputed graph-traversal design reduces compute requirements and supports more efficient deployments.
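A deterministic gate of the kind described, one that blocks unsupported statements before they reach users, can be sketched as a simple set-membership check. This is a hypothetical simplification under the assumption that claims are already normalized into comparable strings; Boreal's actual verification logic is not public:

```python
# Hypothetical grounding gate: deliver only claims present in the
# verified fact set, block everything else. The function name and the
# flat-string claim format are assumptions for this sketch.
def grounding_gate(claims, supported_facts):
    """Split candidate claims into (delivered, blocked) lists."""
    delivered, blocked = [], []
    for claim in claims:
        if claim in supported_facts:
            delivered.append(claim)   # backed by the knowledge graph
        else:
            blocked.append(claim)     # unsupported: never shown to the user
    return delivered, blocked

facts = {"Boreal supports on-premises deployment"}
ok, rejected = grounding_gate(
    ["Boreal supports on-premises deployment",
     "Boreal eliminates all model error"],  # unsupported, should be blocked
    facts,
)
```

Because the check is a lookup rather than a second model call, it is deterministic and cheap, which is consistent with the efficiency claim attached to the precomputed graph-traversal design.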
Boreal can be deployed in on-premises or air-gapped environments and can also act as a governance layer around existing AI tools, adding grounding and auditability without forcing organizations to overhaul current workflows.
Formic AI also announced an academic partnership with York University's Connected Minds program, where Boreal will be supported through a Prototyping Award that includes funding and commercialization assistance.