Traction
The Gap.
AI agents are taking real-world actions with zero proof. As agents move from chatbots to autonomous operations, the gap between what they do and what you can prove they did becomes an existential risk.
The Firewall.
HELM solves this. It sits between agents and the tools they use, blocking unsafe operations before they happen and returning a cryptographic receipt for every decision. Fail-closed. Proof first. Open source.
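The fail-closed pattern described above can be sketched in a few lines: deny by default, allow only what policy explicitly permits, and attach a verifiable receipt to every decision. This is an illustrative sketch only — the function names, allow-list, and HMAC-based receipt are hypothetical stand-ins, not HELM's actual API or signature scheme.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"          # stand-in for a real signing key
ALLOWED_ACTIONS = {"read_file", "send_report"}  # explicit allow-list

def check_action(action: str, args: dict) -> dict:
    # Fail-closed: anything not explicitly allowed is denied.
    allowed = action in ALLOWED_ACTIONS
    decision = {"action": action, "args": args, "allowed": allowed}
    # Sign a canonical encoding of the decision so the receipt is verifiable.
    payload = json.dumps(decision, sort_keys=True).encode()
    decision["receipt"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return decision

r1 = check_action("read_file", {"path": "/tmp/report.txt"})
r2 = check_action("wire_transfer", {"amount": 1_000_000})
print(r1["allowed"], r2["allowed"])  # True False
```

Note that the deny branch still produces a receipt: in a fail-closed design, blocked actions are evidence too.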
Leadership
Ivan Peychev
Technical founder who designed and built the entire HELM system — the fail-closed execution firewall for AI agents. Architected the 8-package trusted computing base, signed receipt engine, and multi-SDK platform from first principles.
Kirill Melnikov
Serial founder with deep finance and operations background. Manages capital formation, strategic alliances, and the operational backbone of Mindburn Labs.
Why this becomes the default
Mindburn Labs builds the safety layer for AI agents. We believe the trust gap in AI-driven systems is the defining infrastructure problem of the decade.
Investment Thesis
Every AI agent will need safety rules
As AI agents go from demos to real work, they need something that checks their actions before they run — not after.
Standards win, not features
HTTPS won because it was a standard, not a product. HELM's test levels create the same effect for AI safety.
Open-source adoption turns into paying customers
Every developer who installs HELM OSS is a future enterprise customer. Free adoption is the growth engine.
Proof is the moat
Others sell dashboards. We create tamper-proof records that work offline. You can't copy a proof system by building a nicer UI.
Trust at Machine Speed
Every AI action creates a tamper-proof record. Records link together into a full audit trail.
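Linking records into a tamper-evident trail is typically done with a hash chain: each record commits to the hash of the one before it, so altering any record breaks every later link. A minimal sketch under that assumption — illustrative only, not HELM's actual record format:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def record_hash(record: dict) -> str:
    # Hash a canonical JSON encoding so the digest is deterministic.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, event: dict) -> None:
    prev = record_hash(chain[-1]) if chain else GENESIS
    chain.append({"event": event, "prev": prev})

def verify_chain(chain: list) -> bool:
    # Recompute every link; an edited record invalidates all records after it.
    prev = GENESIS
    for record in chain:
        if record["prev"] != prev:
            return False
        prev = record_hash(record)
    return True

chain: list = []
append_record(chain, {"action": "read_file", "allowed": True})
append_record(chain, {"action": "wire_transfer", "allowed": False})
assert verify_chain(chain)

chain[0]["event"]["allowed"] = True_or_tampered = False  # tamper with an earlier record
assert not verify_chain(chain)
```

Verification needs only the records themselves, which is why such a trail can be checked offline, with no server in the loop.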
Let's Talk
If you're interested in the safety infrastructure for AI agents, we'd love to hear from you.
[email protected]
Work on verifiable autonomy
We're building the safety infrastructure for autonomous AI. Every action checked. Every record tamper-proof. Every proof replayable.
Why Mindburn Labs
Remote First
Work from anywhere. Async-first communication. We optimize for deep focus.
Hard Problems
Formal verification, tamper-proof records, reliable AI execution — problems with permanent solutions.
Research Time
20% dedicated research time. Publish papers, build prototypes, explore ideas.
Real Impact
Your code runs in production. Every AI action checked, every proof record verifiable.
Early Equity
Meaningful ownership in infrastructure that will power the next generation of autonomous systems.
No BS Culture
Small team, flat structure, high trust. Ship code that matters.
Open Roles
We hire for capability and curiosity. If you see a role that fits, reach out with what you've built.
First Technical Leader
CTO-caliber technical leader to own reliability, scale, and core architecture as we move past $250K in funding.
We look for engineers who think in systems, build reliable code, and ship like operators.
- Strong systems programming (Go, Rust, or C++)
- Experience with sandboxing, WASM, or secure execution
- Comfort with cryptographic primitives (signing, hashing, merkle trees)
- Track record of shipping production infrastructure
Implementation Engineer
Lead customer deployments, integrate HELM into partner infrastructure, and ensure seamless pilot rollouts for hedge funds and trading firms.
We look for engineers who think in systems, build reliable code, and ship like operators.
- Strong technical writing with developer audience focus
- Ability to read and understand Go/TypeScript codebases
- Experience building developer documentation or API references
- Bonus: experience with security/compliance tooling
ML/AI Researcher
PhD-level researcher to lead our model direction. Focus on applied AI, policy optimization, and advancing our proprietary safety models.
We look for engineers who think in systems, build reliable code, and ship like operators.
- PhD or equivalent experience in formal methods, PL theory, or verification
- Familiarity with model checking, theorem proving, or static analysis
- Ability to bridge formal work with practical engineering
- Interest in trust, governance, and autonomous systems
Problems We're Solving
These are the hard problems at the frontier of AI safety infrastructure. If you have ideas, we want to hear them.
Engine Engineering
Build the AI safety engine — action proposal pipelines, auto-block enforcement, resource metering.
Cryptography & Proofs
Design and implement linked proof chains and proof bundle formats.
Safety Testing & Verification
Build L1/L2/L3 test vectors, safety test runners, and formal verification tooling.
Applied AI Systems
Multi-vendor agent orchestration, trust sharing, and AI model analytics pipelines.
Infrastructure & DevOps
Multi-cluster deployment, CI/CD pipelines, observability, and fleet operations tooling.
Ready to build the future?
We're always looking for exceptional engineers and researchers. Send us what you've built — code speaks louder than resumes.