AIHub

The Engine Behind Everything We Build

Project Overview

Every application we build talks to AI. Different models, different providers, different strengths. AIHub is the centralized gateway that makes all of it possible - one integration layer that connects our entire product ecosystem to the AI models best suited for each task, while preserving the context and memory that makes those interactions actually useful over time.

Overview & Challenges

The Problem with Starting Over Every Time

Anyone who has worked seriously with AI tools has run into the same wall: context disappears. You have a productive conversation with one model, switch to another for a different task, and suddenly you're re-explaining everything from scratch. Multiply that across a dozen applications, each with its own AI needs, and you're spending more time rebuilding context than doing actual work.

We needed a system that could route requests to the right model at the right time - Claude for nuanced analysis, Gemini for large-context processing, DeepSeek for cost-effective reasoning - without losing what each conversation had already established. We also needed prompt governance: a single place to manage, version, and optimize the prompts powering every application in our ecosystem, rather than scattering them across individual codebases.

The challenge wasn't just technical. It was architectural. How do you build a gateway that's secure enough to handle multi-tenant authentication, flexible enough to support any model provider, and intelligent enough to remember what happened three conversations ago?

Summary

Institutional Memory as a Service

AIHub is the central nervous system of our product ecosystem. It handles authentication, prompt management, model routing, and - most critically - conversation persistence across platforms and sessions. Rather than treating each AI interaction as a blank slate, AIHub stores and retrieves context using vector similarity matching, so every application we build can pick up where it left off.

We wrote extensively about the thinking behind this system in our article on Institutional Memory as a Service. That piece walks through the technical decisions, the cost analysis, and why we chose to build this rather than rely on off-the-shelf solutions.

Solution & Results

One Gateway. Every Model. Persistent Memory.

AIHub is built on Laravel with PostgreSQL and pgvector for vector storage, connected to multiple AI providers through a unified API layer. Every application in our ecosystem - Misenous, Rondough, and everything else - authenticates through AIHub using HMAC-signed tokens, sends its requests through a centralized routing layer, and benefits from shared conversation history.
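To make the authentication model concrete, here is a toy sketch of HMAC-signed tokens for multi-tenant verification. The tenant registry, payload fields, and token layout are invented for illustration; the real system is a Laravel implementation with its own token format.

```python
import base64
import hashlib
import hmac
import json

# Per-tenant shared secrets (illustrative; real secrets live in secure config).
SECRETS = {"misenous": b"tenant-secret-example"}

def sign(tenant: str, payload: dict) -> str:
    """Serialize the payload and append an HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRETS[tenant], body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def verify(tenant: str, token: str):
    """Return the payload if the signature checks out, else None."""
    encoded, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRETS[tenant], body, hashlib.sha256).hexdigest()
    return json.loads(body) if hmac.compare_digest(sig, expected) else None
```

The key property is that each tenant can only produce tokens its own secret can sign, so the gateway can attribute every request, and its conversation history, to the right application.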

The RAG implementation deliberately departs from conventional wisdom. Instead of chunking documents into fragments and hoping the right pieces surface during retrieval, we store entire conversation contexts and use per-entity embeddings for precise semantic matching. When an application needs context from a previous session, it gets coherent, complete information rather than disconnected snippets.
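The retrieval pattern can be sketched as follows: each stored conversation carries one embedding per entity it mentions, a query vector is matched against those entity embeddings, and the whole conversation is returned rather than a chunk. The two-dimensional vectors and in-memory store are toy assumptions; the real system uses pgvector for similarity search.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy store: one row per (conversation, entity) pair, with a tiny 2-D
# embedding standing in for a real high-dimensional vector in pgvector.
STORE = [
    {"conversation": "conv-1", "entity": "character_motivations", "vec": [0.9, 0.1]},
    {"conversation": "conv-2", "entity": "plot_timeline", "vec": [0.1, 0.9]},
]

def retrieve(query_vec, top_k=1):
    """Rank by best-matching entity embedding, return whole conversations."""
    ranked = sorted(STORE, key=lambda r: cosine(query_vec, r["vec"]), reverse=True)
    return [r["conversation"] for r in ranked[:top_k]]
```

Because the unit of retrieval is the full conversation, a match on any entity surfaces the complete context around it, which is what makes the returned information coherent rather than fragmentary.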

Prompt governance lives here too. Every prompt template used across our products is versioned, managed, and optimized in one place. When we improve a prompt, every application that uses it benefits immediately.

The result is that our products don't just use AI - they learn from every interaction and carry that knowledge forward. A conversation you had in Misenous about a character's motivations is available the next time you ask about that character, even weeks later, even through a different model.

Project Details

Status

Internal - not publicly accessible

Built For

Our internal product ecosystem (Misenous, Rondough, and all InteractiveIterations applications)

Problem It Solves

Eliminates context loss across AI conversations, centralizes prompt management, and provides unified multi-model routing

Core Stack

Laravel, PostgreSQL with pgvector, S3, MCP (Model Context Protocol)

AI Providers

Claude (Anthropic), Gemini (Google), DeepSeek

Key Innovation

Full-document RAG with per-entity embeddings instead of traditional chunking

Security

Multi-tenant authentication with HMAC-signed tokens

Public Release

Planned for 2027 after extensive internal testing