The Five Blindnesses — verification infrastructure for AI agents

This isn't AI.
This is EI.

AI coding agents aren't dumb. They're blind. We build the infrastructure that gives them sight — and the gates that force them to use it.

6 MCP servers · 40 MCP tools · 900+ tests · Open source

What We Do

Building the infrastructure for AI that actually works in production.

Verification Infrastructure

Six MCP servers that give AI coding agents sight into project history, runtime behavior, cross-service dependencies, real code quality, and web content: five blindnesses no model improvement will fix, plus enforcement that makes sure agents actually use that sight.

Local-First Architecture

SQLite + WAL + FTS5. No cloud. No Docker. No external databases. Every tool persists intelligence in a single file next to your code. Install via pip/npm and go.
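The single-file pattern can be sketched with Python's standard library. File and table names here are illustrative, not the tools' actual schemas:

```python
# One SQLite file beside your code: WAL lets readers work while the
# agent writes, FTS5 gives full-text search over accumulated knowledge.
import sqlite3

db = sqlite3.connect(".evointel.db")
db.execute("PRAGMA journal_mode=WAL")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS notes USING fts5(path, body)")
db.execute("INSERT INTO notes VALUES (?, ?)",
           ("src/auth.py", "pitfall: token refresh races under load"))
db.commit()

# Full-text search over everything learned so far.
hits = db.execute("SELECT path FROM notes WHERE notes MATCH 'pitfall'").fetchall()
db.close()
```

No server process, no Docker: the database is just a file you can back up, diff against, or delete.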

Enforced Quality Gates

Mutation testing, security scanning, and a dev loop with phase gates that reject empty claims. Agents must submit evidence — a sibling file read, test output, a Seraph grade — before they can advance.

The EvoIntel MCP Suite

Five tools for five blindnesses, plus one enforcer. Each gives AI coding agents sight into something no model improvement will ever fix, and the gates make sure they actually look.

Project History

Sentinel

AI agents are blind to project history. Sentinel mines git for conventions, pitfalls, architectural decisions, hot files, and co-change patterns — institutional memory for your AI.

10 MCP tools — 329 tests
Runtime Behavior

Niobe

AI agents are blind to runtime behavior. Niobe snapshots process metrics, ingests logs, detects anomalies, and compares before/after states — eyes on the running system.

8 MCP tools — 14 tests
Cross-Service Dependencies

Merovingian

AI agents are blind to cross-service dependencies. Merovingian maps API contracts, tracks consumer relationships, and detects breaking changes across repository boundaries.

10 MCP tools — 187 tests
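Contract diffing of this kind can be sketched in a few lines. The `diff_contract` helper and field layout are invented for illustration; this is not Merovingian's actual API:

```python
# Compare two versions of an API's response fields and flag the
# changes that break consumers (removals and type changes).
def diff_contract(old: dict, new: dict) -> list[str]:
    breaking = []
    for field, ftype in old.items():
        if field not in new:
            breaking.append(f"removed field: {field}")
        elif new[field] != ftype:
            breaking.append(f"type changed: {field} ({ftype} -> {new[field]})")
    return breaking  # purely added fields are non-breaking for consumers

v1 = {"id": "int", "email": "str", "plan": "str"}
v2 = {"id": "int", "email": "str", "tier": "str"}  # renamed plan -> tier

print(diff_contract(v1, v2))  # ['removed field: plan']
```

A rename looks harmless inside one repo; to every consumer of the contract it is a removal.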
Code Quality

Seraph

AI agents are blind to real code quality. Seraph runs mutation testing, static analysis, security scanning (bandit + semgrep + detect-secrets), flakiness detection, and risk scoring — because 'all tests pass' is not a safety guarantee.

4 MCP tools — 187 tests
Web Content

Anno

AI agents are blind to web content. Anno strips 93% of HTML noise so your agent reads clean, structured text — not 15,000 tokens of scripts and ads.

4 MCP tools — 101 tests
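The idea can be sketched with Python's stdlib HTMLParser. This is a toy, not Anno's implementation, and the tag list is an assumption:

```python
# Drop script/style/nav/footer noise from HTML, keep readable text.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    NOISE = {"script", "style", "nav", "footer"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # how many noise tags we are currently inside
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.NOISE:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.NOISE and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

page = "<html><script>track()</script><h1>Pricing</h1><p>Free tier.</p></html>"
p = TextExtractor()
p.feed(page)
print(" ".join(p.chunks))  # Pricing Free tier.
```

Fewer tokens in means more of the context window left for actual reasoning.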
Protocol Enforcement

Morpheus

AI agents skip steps under pressure. Morpheus tracks plan state in SQLite and enforces phase gates with evidence — agents must prove they checked before they can advance. Sequential ordering enforced, not advisory.

4 MCP tools — 83 tests

Install and start building

pip install git-sentinel        # Project history
pip install niobe               # Runtime observation
pip install merovingian         # Dependency intelligence
pip install seraph-ai           # Verification + security
pip install morpheus-mcp        # Protocol enforcement
npm install -g @evointel/anno   # Web content extraction

Also Building

Products powered by the EvoIntel stack.

In Development

Zado

ADHD-friendly budgeting with AI coaching. Five distinct coach personalities adapt to your financial style, with real bank account sync via Plaid.

5 AI coach personalities — real bank sync via Plaid
In Development

EIF

Ethical Intelligence Framework for medical device AI validation. Automates the compliance process that typically takes teams months of manual review.

Months of review automated — for regulated AI systems

How We Build

We don't build isolated tools. We build interconnected systems where every component makes the others smarter, safer, and more capable.

Tools That Compound

Every tool strengthens the others. Sentinel's pitfall history feeds Seraph's risk scoring. Seraph's grades feed back into Sentinel's confidence scores. Morpheus orchestrates all of them and saves what it learns for next time. The stack compounds — every session makes the next one better.

Intelligence That Persists

AI agents forget everything between sessions. Ours don't. Sentinel saves what the agent learns — patterns, pitfalls, fixes — and surfaces them next time. Error fingerprints match across files and sessions. Knowledge accumulates instead of resetting to zero.
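Error fingerprinting can work roughly like this; the normalization and hashing below are assumptions for illustration, not Sentinel's actual algorithm:

```python
# Normalize away file paths and line numbers so the "same" error
# matches across files, machines, and sessions.
import hashlib
import re

def fingerprint(traceback_line: str) -> str:
    normalized = re.sub(r'File "[^"]+", line \d+',
                        'File "<path>", line <n>', traceback_line)
    return hashlib.sha256(normalized.encode()).hexdigest()[:12]

a = fingerprint('File "/app/auth.py", line 88, in refresh KeyError: token')
b = fingerprint('File "/tmp/ci/auth.py", line 91, in refresh KeyError: token')
print(a == b)  # True
```

Two superficially different tracebacks collapse to one stable key, so a fix recorded once is found again wherever the error resurfaces.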

Enforcement, Not Guidelines

We caught our own AI agent rubber-stamping quality checks — claiming it read a sibling file without actually reading one. The fix wasn't a better prompt. It was Morpheus: phase gates with evidence requirements. The agent must prove it checked before it can advance. Same reason CI pipelines exist.

The Builder

Evolving Intelligence is built by Nicholas Smith — an AI engineer and ServiceNow developer who ships production systems. Six MCP servers published to PyPI and npm. 900+ tests. All open source.

The thesis: AI coding agents fail not because they're dumb, but because they're blind to things no model improvement will fix — project history, runtime behavior, cross-service dependencies, real code quality, and web content noise. The EvoIntel stack gives them sight and the gates to prove they used it.

Read the full argument in the white paper — including how the stack independently converges with Anthropic's published research on agentic coding challenges.

Let's build something that matters.

Whether you need AI infrastructure, ethical frameworks, or a technical co-builder — let's talk.

nicholas.smith@evolvingintelligence.ai