You Can’t Just “Teach” AI Your Business
The most common misconception we hear, by far, and what good actually looks like.
Tell Me Something I Don’t Know…
If you’ve been using AI for more than a few weeks, you’ve already run into both of these.
The first is session amnesia. Every new conversation starts from zero, with no memory of anything you covered before. The context you built last Tuesday is gone. The instructions you spent an hour crafting are reset. That’s not a bug, it’s how the underlying technology is designed to work.
The second is context rot. Even within a single session, earlier instructions gradually lose influence as the conversation fills up. The model has a fixed working memory, and what came in at the start gets deprioritized in favor of what’s most recent. Outputs drift, becoming a little less specific and a little less aligned with the original brief.
There’s no error message. The output just quietly slides away from what you originally asked for.
Both problems feel like a context quantity problem, so the natural response is to add more context: load more at the start, be more comprehensive, build the second brain and give every agent access to all of it.
That's the trap.
Which brings us to where most teams are right now.
The Second Brain Solution…
You have a thousand sales meeting transcripts. Customer conversations. Internal process documentation. Years of institutional knowledge your team has accumulated.
The logic is compelling: if context is the problem, comprehensive context is the answer. So you build the knowledge base, connect it to everything, and give every agent you build access to the full library. The paralegal agent. The sales coaching agent. The proposal drafting agent. All of them, drawing from the same complete pool.
It feels like you've solved it: the AI should now know your business. Here's where it gets complicated.
More Isn’t More…
Think about what a paralegal agent actually needs: case notes, legal precedents, document templates, filing deadlines. That’s the job.
Now think about what most teams actually give it. Every time that agent runs a task, on every single request, it's drawing from the full second brain. Twelve months of sales transcripts, HR policy docs, finance reports, everything the business has ever documented, flooding in at once.
The agent isn’t browsing the knowledge base selectively. It’s drinking from the firehose on every task, all day long.
Those sales transcripts don’t sit quietly in the background, either. They actively interfere. Vocabulary from sales conversations bleeds into legal analysis, and patterns from deal negotiations compete for attention with the ones that actually matter for legal work.
This is context pollution: the presence of irrelevant information in an agent’s context that degrades the quality and precision of its outputs.
Session amnesia: Context doesn’t carry between sessions. Every conversation starts from zero unless memory is specifically engineered to persist.
Context rot: Within a session, earlier instructions lose influence as the conversation fills up. The longer it runs, the more outputs drift from the original brief.
Context pollution: Irrelevant information in the context window actively degrades output quality, independent of session length.
The fix isn't more prompting. It's agent-scoped context.
Not Noise. Interference…
Most people assume that irrelevant context is neutral, just background noise the model will quietly filter out because it’s obviously not relevant to the task at hand.
That assumption doesn’t hold up.
Research across 18 leading LLMs found a specific mechanism called distractor interference: semantically adjacent but irrelevant content doesn’t just occupy space in the context window, it actively misleads the model. Sales transcripts are full of people, relationships, competing positions, and outcomes, and legal work covers the same conceptual territory. The model has no way to know you intended them for separate purposes, so it incorporates both.
The result is legal analysis with sales-brained framing baked in.
The Fix Isn’t More Prompting…
Solving this requires decisions made before any agent is deployed: how memory persists between sessions, how each agent’s context is scoped to its actual function, what retrieval logic runs at runtime versus what’s pre-loaded.
The paralegal agent gets a legal context layer. The sales agent gets a sales context layer. The knowledge base isn’t one shared pool that everything draws from; it’s structured so each agent receives only what it needs for its specific job. This is agent-scoped context, and getting the boundaries right is what separates AI that keeps improving from AI that produces inconsistent output and frustrates the people using it.
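The scoping idea can be made concrete in a few lines. This is a minimal sketch, not any particular framework's API; the names (`AGENT_SCOPES`, `build_context`) are illustrative. Each agent maps to an explicit set of knowledge collections, and its context is assembled only from those:

```python
# Illustrative agent-scoped context: each agent declares which collections
# it may draw from; everything else in the knowledge base is invisible to it.
AGENT_SCOPES = {
    "paralegal": {"case_notes", "legal_precedents", "templates", "filing_deadlines"},
    "sales_coach": {"sales_transcripts", "playbooks"},
    "proposal_drafter": {"proposal_templates", "pricing", "case_studies"},
}

def build_context(agent: str, knowledge_base: dict[str, list[str]]) -> list[str]:
    """Return only the documents the agent's scope allows, never the full pool."""
    scope = AGENT_SCOPES[agent]
    return [doc
            for collection, docs in knowledge_base.items()
            if collection in scope
            for doc in docs]
```

The point of the structure is that the sales transcripts physically cannot reach the paralegal agent's context window, no matter how the knowledge base grows.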
A few principles that make the difference in practice:
- Scope by job function, not subject matter. A legal agent and a compliance agent may overlap thematically, but they have different tasks. Build context around what the agent does, not what it knows about.
- Retrieve at runtime rather than pre-loading everything. Static pre-loading floods the agent with context it may not need for the specific task. Runtime retrieval pulls only what’s relevant in the moment.
- The exclusion list matters as much as the inclusion list. If you haven’t defined what an agent should not see, the default is everything. Documenting both is how you maintain control as the knowledge base grows.
- Mirror context structure to workflow structure. If your operations run through five distinct functions, your knowledge architecture should have five distinct layers, not one shared pool.
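Two of the principles above, runtime retrieval and explicit include/exclude lists, can be sketched together. This is a toy illustration under stated assumptions: the relevance scoring is naive keyword overlap purely to keep the example self-contained (a real system would use embeddings), and the names here are hypothetical. The scoping logic is the point:

```python
# Hedged sketch: retrieval that runs at task time, filtered by explicit
# inclusion AND exclusion lists, returning only the top-k relevant documents.
from dataclasses import dataclass

@dataclass
class Doc:
    collection: str
    text: str

def retrieve(task: str, docs: list[Doc],
             include: set[str], exclude: set[str], k: int = 4) -> list[Doc]:
    # The exclusion list is enforced even if a collection appears in include.
    candidates = [d for d in docs
                  if d.collection in include and d.collection not in exclude]
    # Naive relevance: count of task words appearing in the document.
    task_words = set(task.lower().split())
    scored = sorted(candidates,
                    key=lambda d: len(task_words & set(d.text.lower().split())),
                    reverse=True)
    return scored[:k]
```

Nothing is pre-loaded: the agent asks for context per task, and only a handful of scoped, relevant documents come back instead of the whole pool.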
Context precision matters more than context volume. An agent with four highly relevant documents will reliably outperform an agent with a hundred documents where only four are actually relevant.
These are also the questions our Compass and Blueprint engagements are built to answer, before any agent goes live.
Compass
Map what each agent actually needs.
Defining which AI solutions best help you achieve company goals, and designing the specifics of the solution so that actually happens.
Blueprint
Build the architecture that enforces it.
Building it and rolling it out to the team.
Right Idea. Wrong Implementation.
Wanting your AI to know your business is the right instinct, and building a knowledge base is the right response to session amnesia. The direction isn’t wrong.
What's wrong is the assumption that more context, given to more agents, automatically produces better results.
That gap between instinct and implementation is a real, solvable problem. The sooner it’s addressed, the less you end up rebuilding later.