About
CortexOne is an independent, community‑driven collective. We design and share open‑source tools in artificial intelligence and developer tooling — with a focus on explainability, privacy, and self‑hosting.
Our projects are created in our free time, sustained by curiosity and collaboration, and shared openly with the community.
Mission
We turn complex ideas into clean, auditable systems: reproducible pipelines, transparent reasoning, and documentation that earns trust. We favor deterministic processes, explainable outputs, and self‑hosted deployments.
We build our tooling core in Zig for its clarity and determinism, and expose language-agnostic CLIs and APIs so anyone can use them from any stack.
Focus Areas
- Code Understanding: Deep analysis of source structures — AST parsing, symbol graphs, call relationships, repository-wide reasoning (a small illustrative sketch follows this list).
- Knowledge Structuring: Transforming domain-specific data (e.g. complex schemas, proprietary formats, scientific datasets) into clean, queryable knowledge packs.
- Retrieval & Reasoning: Combining embeddings, vector stores, semantic search, retrieval-augmented generation (RAG), and multi-step reasoning to provide explainable results.
- Air-Gapped & Self-Hosted: Privacy-first architectures under full organizational control; inference that runs isolated from external networks.
- Explainability & Auditability: Pluggable reasoning and explainer models with documented assumptions, versioned releases, and reproducible builds.
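To make the code-understanding idea concrete, here is a minimal sketch of the general shape of the problem: parse source text into an AST and derive a tiny call-relationship graph. It is written in Python purely for readability and is not part of CortexOne's tooling; the names (`SOURCE`, `call_graph`) are hypothetical.

```python
# Illustrative only: derive a tiny call graph from Python source
# using the standard-library ast module.
import ast
from collections import defaultdict

SOURCE = """
def load(path):
    return open(path).read()

def analyze(path):
    text = load(path)
    return len(text)
"""

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each top-level function name to the names it calls."""
    tree = ast.parse(source)
    graph: dict[str, set[str]] = defaultdict(set)
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return dict(graph)

if __name__ == "__main__":
    # e.g. {'load': {'open'}, 'analyze': {'load', 'len'}} (set order may vary)
    print(call_graph(SOURCE))
```

Our actual tooling works on richer structures (symbol graphs, cross-file relations) and is written in Zig, but the shape of the task is the same: turn source text into structured, queryable facts.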
Stay in Touch
We love collaborating with developers, researchers, and enthusiasts.
Write us at hello@cortexone.ai.
Projects
We are preparing our first open-source project focused on deep code understanding.
Deep Code Understanding (open-source)
- Problem: Large codebases encode knowledge in structures, symbols, and implicit conventions that plain text search and token-based prediction models fail to capture. Code completion ≠ code understanding.
- Approach (a small illustrative sketch follows this list):
  - Extract and structure code (ASTs, symbols, relations) into compact knowledge packs
  - Build embeddings and vector stores for semantic search
  - Apply multi-step reasoning pipelines with retrieval-augmented generation (RAG)
  - Use distinct models for reasoning (reasoner) and explanation (explainer)
  - Support pluggable model backends, fully under organizational control
- Status: Active development; public preview planned.
- Tech: End‑to‑end pipeline in Zig; deterministic, modular, reproducible.
- Security: Air‑gapped inference possible; data stays in‑house; versioned artifacts for audits.
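The sketch below shows only the general shape of such a pipeline, not our implementation (which is in Zig and not yet published). Toy bag-of-words vectors stand in for learned embeddings, an in-memory list stands in for the vector store, and the stub `reason`/`explain` functions stand in for pluggable reasoner and explainer backends; all names (`KNOWLEDGE_PACK`, `retrieve`, `reason`, `explain`) are hypothetical.

```python
# Toy sketch of a retrieval-augmented pipeline with separate
# "reasoner" and "explainer" stages. Illustrative only.
import math
from collections import Counter

# A tiny "knowledge pack": snippets extracted from a codebase.
KNOWLEDGE_PACK = [
    "function load reads a file from disk and returns its contents",
    "function analyze calls load and counts the characters in the file",
    "module config parses command line flags into a settings struct",
]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """In-memory 'vector store': rank snippets by similarity to the query."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_PACK, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:k]

def reason(query: str, context: list[str]) -> str:
    """Stub reasoner: in a real pipeline this is a pluggable model backend."""
    return f"'{query}' involves: " + "; ".join(context)

def explain(answer: str, context: list[str]) -> str:
    """Stub explainer: reports which evidence the answer rests on."""
    return answer + f" (evidence: {len(context)} snippets from the knowledge pack)"

if __name__ == "__main__":
    query = "what does analyze do"
    context = retrieve(query)
    print(explain(reason(query, context), context))
```

Keeping retrieval, reasoning, and explanation as separate stages is what makes results auditable: each stage can be logged, versioned, and swapped independently.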
More details will appear here with the first public preview.
Contact
Open source is about people — we’d love to hear from you! Whether you want to collaborate, share ideas, or just say hi, drop us a note anytime.
You can also connect with us on GitHub.
Let’s build explainable, open-source AI together.