Educators · AI Agent

Keep Your AI on Your Curriculum (And Off the Internet)

A classroom AI answering from the internet is a different product from one answering from your curriculum. Here's how to configure the boundary.

Brandon · December 3, 2025 · 6 min read
TL;DR: Restricting your classroom AI to curriculum content requires two things: a well-organized knowledge base that covers what students will ask, and a retrieval instruction that tells the agent to answer only from that knowledge base and defer everything else. Both are configurable in Alysium without any coding. The result is an agent that's accurate for your course, scope-limited for academic integrity, and genuinely useful to students.

The solution: configure an Alysium knowledge agent from your course materials — uploaded syllabus, lecture notes, approved resources — with explicit instruction-level boundaries around off-curriculum content.

The difference between a useful classroom AI and a problematic one often comes down to a single configuration choice: does the agent answer from your curriculum, or from the internet?

An agent answering from the internet gives students the Wikipedia version of your subject — accurate in a general sense, but potentially out of alignment with your course's framing, terminology, and analytical approach. For courses with distinctive methodologies, that misalignment can actively confuse students about what they're supposed to be learning.

An agent answering from your curriculum gives students exactly what your course teaches, in the vocabulary your course uses, framed the way you frame it. That's the tool students actually need — and it's built through a combination of good knowledge base design and a clear retrieval instruction.

The most common AI scope failure isn't an agent that answers questions about unrelated topics — it's an agent that answers curriculum questions using general internet knowledge instead of course-specific materials. Both are scope failures, but the second one is harder to detect because the answers look right. This guide covers both failure modes and how to configure against them.

Step 1: Understand What "Stays on Curriculum" Actually Means

Before configuring anything, it helps to be clear about what you're trying to achieve. "Restrict to curriculum" means two things:

Knowledge scope: The agent answers from your uploaded materials — your lecture notes, your textbook selections, your readings — not from its broader training data about the subject. When a student asks a concept question, the answer comes from how your course teaches that concept.

Topic scope: The agent stays within the topics the course covers. When a student asks about something adjacent but not covered in your course, the agent declines to answer rather than expanding into territory you haven't taught.

Both are configured differently but accomplish the same goal: a focused, accurate tool that reinforces your curriculum rather than supplementing or contradicting it.

"On curriculum" isn't just a topic filter — it's a quality filter. An agent that answers every curriculum question using Wikipedia-level explanations rather than your specific course materials isn't on curriculum in the way that matters. The goal is an agent whose answers reflect your approach, your examples, your framing — not a generic treatment of the subject. That requires both a well-populated knowledge base and a retrieval instruction that enforces it.

Step 2: Build a Knowledge Base That Actually Covers Student Questions

The retrieval boundary only works if the knowledge base actually contains the answers to questions students will ask. A retrieval instruction that says "only answer from uploaded materials" combined with a sparse or poorly organized knowledge base produces an agent that just says "I don't have information about that" to everything — which is useless.

The coverage test: make a list of the 20 most common questions students ask about your course. For each one, verify that your knowledge base has a document that directly answers it. Not indirectly — directly. "What does opportunity cost mean?" should have an explicit answer in your materials, not just a lecture that happens to mention opportunity cost somewhere in 30 pages of notes.

The most effective knowledge base structure: focused Q&A documents alongside your lecture materials. A 3-page document of common questions with direct answers retrieves better than a 30-page lecture transcript that contains the same information somewhere.

Content to prioritize: lecture notes (organized by topic, not by date), a course-specific FAQ document, and textbook chapters for the sections currently being studied. Don't try to upload everything at once — start with the current unit and add as the semester progresses.

Expected outcome: A knowledge base where every question on your common-questions list has a document that directly addresses it.

A practical coverage check before deploying: take your actual common-questions list and try asking each one to the agent. If more than 3 of the 20 questions produce deferral responses ("I don't have information about that"), your knowledge base has meaningful gaps to fill before student deployment. The goal is a first-version agent that answers at least 80% of expected student questions accurately — with the remaining 20% improving over the first few weeks of real use.
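If you want to make this coverage check repeatable, the logic is simple enough to script. The sketch below is illustrative only: `ask_agent` is a hypothetical stand-in for however you query your deployed agent (Alysium itself is configured through its interface, not code), and the canned answers exist only so the example runs.

```python
# Illustrative coverage check for the 3-of-20 threshold described above.
# `ask_agent` is a hypothetical stub; replace it with a real query to your agent.

DEFERRAL_MARKER = "I don't have information about that"

def ask_agent(question: str) -> str:
    # Stub standing in for a real agent call.
    canned = {
        "What does opportunity cost mean?": "Opportunity cost is the value of ...",
    }
    return canned.get(question, DEFERRAL_MARKER + " in the course materials.")

def coverage_report(questions: list[str], max_deferrals: int = 3) -> dict:
    """Ask each common question and flag knowledge-base gaps (deferral responses)."""
    gaps = [q for q in questions if DEFERRAL_MARKER in ask_agent(q)]
    return {
        "asked": len(questions),
        "gaps": gaps,                        # questions the agent deferred on
        "ready": len(gaps) <= max_deferrals, # the "no more than 3 of 20" rule
    }
```

Running your real 20-question list through something like this before each deployment turns the coverage test from a one-time check into a habit.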

Step 3: Write the Retrieval Instruction

Alysium has a dedicated retrieval instruction field — separate from behavioral instructions — designed specifically to control how the agent uses its knowledge base. This is where you set the hard boundary.

The retrieval instruction to write:

"Only answer questions using information in your uploaded knowledge base. Do not use general knowledge, your training data about this subject, or external information. If a question is not covered in the uploaded materials, say so clearly: 'I don't have information about that in the course materials — I'd suggest checking the textbook or office hours.' Do not speculate or provide a plausible-sounding answer about topics not in your knowledge base."

That instruction creates a hard knowledge boundary. The agent won't generate answers from its general AI training about the subject — it will only retrieve from what you've uploaded. When it hits the edge of the knowledge base, it defers gracefully rather than fabricating.

Expected outcome: An agent that answers reliably within its knowledge base and fails gracefully at the edges.

One common misconfiguration to avoid: writing the retrieval instruction in the behavioral instruction field rather than the dedicated retrieval instruction field. Both fields exist in Alysium; they serve different functions. The behavioral instruction shapes how the agent communicates; the retrieval instruction shapes what it draws from. Keeping them separate gives you independent control over each — you can adjust tone without affecting knowledge scope, and tighten the knowledge boundary without changing the agent's pedagogical approach.

Step 4: Test the Boundaries Specifically

After writing the retrieval instruction, test it deliberately. Three types of boundary tests:

Coverage tests: Ask questions your knowledge base directly covers. The agent should answer accurately and in the course's vocabulary. If answers are thin or off-base, your knowledge base has gaps to fill.

Edge tests: Ask questions related to your subject but not covered in your materials — adjacent topics, deeper technical detail, historical context you haven't assigned. The agent should gracefully decline: "I don't have information about that in the course materials." If it generates a plausible-sounding answer instead, your retrieval instruction needs to be more explicit.

Out-of-scope tests: Ask something completely outside your subject area. A history course agent asked about chemistry should decline clearly. This confirms the knowledge boundary is functioning as a hard limit, not just a soft preference.

Document any failures. Each one tells you either that the knowledge base needs more content, or that the retrieval instruction needs to be more explicit.

Expected outcome: A clearly defined boundary between what the agent answers and what it defers.
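The three test types above can be kept as a small checklist and re-run whenever you change the knowledge base or instructions. A minimal sketch, assuming you pass in whatever function queries your agent and that deferrals contain the phrase from your retrieval instruction; the questions and categories are illustrative:

```python
# Boundary-test checklist: each entry pairs a question with the behavior
# the retrieval boundary should produce ("answer" or "defer").

BOUNDARY_TESTS = [
    # (category, question, expected behavior)
    ("coverage",     "What does opportunity cost mean?",                     "answer"),
    ("edge",         "How does behavioral economics treat opportunity cost?", "defer"),
    ("out_of_scope", "What is the molar mass of water?",                      "defer"),
]

def classify_response(text: str) -> str:
    """Treat the deferral phrase as the signal; anything else counts as an answer."""
    return "defer" if "I don't have information about that" in text else "answer"

def run_boundary_tests(ask, tests=BOUNDARY_TESTS):
    """Return the tests whose observed behavior differs from the expected one."""
    return [
        (category, question, expected, classify_response(ask(question)))
        for category, question, expected in tests
        if classify_response(ask(question)) != expected
    ]
```

An empty result means the boundary held on every test; any entry in the result is a documented failure pointing at either a knowledge-base gap or a retrieval instruction that needs tightening.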

Step 5: Handle the Edges Gracefully in Your Instructions

Graceful deferral is more important than it sounds. An agent that says "I don't know" abruptly feels unhelpful and frustrating. An agent that says "I don't have specific information about that in the course materials — for that question, I'd suggest checking the textbook, the course readings, or bringing it to office hours" is genuinely useful even when it can't answer.

The deferral language is part of your instruction set. Write it specifically:

"When a question is outside the course materials, respond: 'I don't have information about that in our course materials. For this question, I'd suggest [appropriate resource: textbook, lecture notes, office hours, course forum].'"

Different resources are appropriate for different types of questions — logistics questions go to the syllabus, concept questions go to lecture notes or textbook, personal situations go to the instructor. Write the routing logic into your deferral instruction.
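That routing logic amounts to a small lookup table. A sketch of what it encodes, with hypothetical question types and resource names (yours will differ):

```python
# Illustrative deferral routing: question type -> resource to point students at.
# Types and resource phrasing are assumptions; adapt them to your course.

DEFERRAL_ROUTES = {
    "logistics": "the syllabus",
    "concept":   "the lecture notes or textbook",
    "personal":  "your instructor during office hours",
}

def deferral_message(question_type: str) -> str:
    """Build the graceful-deferral response, falling back to a general pointer."""
    resource = DEFERRAL_ROUTES.get(question_type, "office hours or the course forum")
    return ("I don't have information about that in our course materials. "
            f"For this question, I'd suggest {resource}.")
```

In Alysium you express this routing in the deferral instruction's prose rather than in code, but writing it out this explicitly first makes the instruction easier to draft and audit.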

Step 6: Update as You Teach

A knowledge-bounded agent is only as good as its most recent upload. As the semester progresses, new topics get covered — and if you haven't added those topics to the knowledge base, students who ask about them get deferrals rather than answers.

The sustainable update rhythm: at the start of each new unit, spend 20 minutes reviewing your planned materials and adding anything that directly addresses questions students will ask. Alysium indexes new documents in 1–2 minutes, so updates are available almost immediately after upload.

End-of-unit updates are also valuable: after each exam or major assignment, add a document of the questions students asked that the agent handled poorly. That feedback loop is what turns a good first-version agent into an excellent semester-end agent.

Ready to build a curriculum-locked AI? Start free on Alysium.

For the full study buddy build guide, read How to Build an AI Study Buddy From Your Textbook. For academic integrity configuration, see AI in the Classroom Without Doing Students' Homework.

