
Bring Your Own AI Model

Most EdTech AI tools lock you into their AI provider. If that provider raises prices, changes their terms of service, or suffers a data breach, you have no alternative. Your student data flows through servers you do not control, in a jurisdiction you may not have chosen, under terms you cannot negotiate.

OpenEduCat does not bundle an AI provider. You plug in your own: OpenAI, Anthropic Claude, Google Gemini, Mistral, or a model running on a server in your own building. All 9 AI tools automatically use whichever provider you configure. Switch providers in two minutes. No code changes. No data migration. No downtime.

The Problem with Bundled AI

Pricing risk

Your vendor's AI partner raises their API prices. Your EdTech vendor passes that cost to you. You have no leverage because switching means losing all your AI workflows.

Data risk

Student essays, IEP data, behavioral notes, and grades flow through a third-party AI provider. You may not know which provider, which region their servers are in, or whether they train on your data.

Lock-in risk

A better, faster, cheaper model launches next month. You cannot use it because your vendor hardcoded their integration with one specific provider. You are stuck until they decide to update.

How BYOM Works

Four steps to full control over your institution's AI infrastructure.

1. Admin configures the AI provider

In the OpenEduCat admin panel, enter the provider name, API key, and endpoint URL. For OpenAI, Anthropic, Google, and Mistral, select the provider from a dropdown and paste your key. For self-hosted models, point to your server's endpoint. Setup takes under two minutes.
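As a rough sketch, the configuration amounts to capturing a handful of fields per provider. The field names below are illustrative, not OpenEduCat's actual schema:

```python
# Hypothetical sketch of the fields a BYOM provider entry captures.
# Names are illustrative, not OpenEduCat's actual configuration schema.
from dataclasses import dataclass

@dataclass
class ProviderConfig:
    name: str      # e.g. "anthropic" from the dropdown, or "self-hosted"
    api_key: str   # secret pasted by the admin (empty for local models)
    endpoint: str  # base URL; prefilled for the known cloud providers
    model: str     # default model identifier

# A cloud provider: pick from the dropdown and paste your key.
cloud = ProviderConfig("anthropic", "sk-ant-example", "https://api.anthropic.com", "claude-3-5-sonnet")

# A self-hosted model: point to your own server's endpoint.
local = ProviderConfig("self-hosted", "", "http://llm.internal:11434/v1", "llama3.1:8b")
```

Either way, the admin supplies the same three or four values, which is why setup stays under two minutes.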

2. All AI tools use your chosen provider

Once configured, every AI tool in OpenEduCat (worksheet generator, grading assistant, course builder, IEP writer, and the rest of the 9) routes requests through your provider. No per-tool configuration. Change it once, it applies everywhere.

3. Switch providers anytime

Unhappy with response quality? API costs too high? Provider changed their terms? Switch to a different provider in settings. No code changes, no data migration, no downtime. Your worksheets, grades, and generated content stay in OpenEduCat regardless of which model created them.

4. Run multiple providers simultaneously

Assign different providers to different tasks. Use GPT-4o for grading where accuracy matters most, a faster model like Gemini Flash for worksheet generation where speed is the priority, and a local Llama instance for the student chatbot where privacy is paramount. Each tool can have its own provider.
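Conceptually, this is a routing table with a default: each tool either has its own provider or falls back to the institution-wide one. A minimal sketch, with hypothetical names rather than OpenEduCat's internal API:

```python
# Illustrative per-tool provider routing: each tool can override the
# default provider. Tool and model names are examples, not a real schema.
DEFAULT_PROVIDER = "gpt-4o"

TOOL_PROVIDERS = {
    "grading": "gpt-4o",            # accuracy matters most
    "worksheets": "gemini-flash",   # speed is the priority
    "student_chatbot": "llama3.1",  # local model: privacy is paramount
}

def provider_for(tool: str) -> str:
    """Resolve which configured provider a given tool should use."""
    return TOOL_PROVIDERS.get(tool, DEFAULT_PROVIDER)

print(provider_for("grading"))         # gpt-4o
print(provider_for("course_builder"))  # no override, falls back to gpt-4o
```

Changing the global provider is a one-line edit to the default; per-tool overrides stay untouched.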

Supported Providers

Eight providers out of the box, plus any OpenAI-compatible endpoint.

Provider | Models | Notes
OpenAI | GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo | Most widely used. Strong general-purpose performance across all tool categories.
Anthropic | Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku | Excellent for long documents: essays, reports, IEPs. Strong safety alignment.
Google | Gemini Pro, Gemini Flash | Good multimodal support. Competitive pricing on high-volume usage.
Mistral | Mistral Large, Mistral Medium, Mistral Small | EU-hosted option for data residency requirements. Strong multilingual support.
Meta (Self-Hosted) | Llama 3.1, Llama 3, Llama 2 | Run on your own GPU servers via Ollama or vLLM. Zero data leaves your network.
Azure OpenAI Service | GPT-4o, GPT-4 via Azure | Enterprise compliance. Familiar to institutions already on Microsoft 365.
AWS Bedrock | Claude, Llama, Mistral via AWS | For institutions with existing AWS infrastructure and BAAs in place.
Custom Endpoints | Any OpenAI-compatible API | University research models, fine-tuned models, or any server with an OpenAI-compatible endpoint.
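The reason "any OpenAI-compatible endpoint" works is that all of these backends accept the same request shape at POST {base_url}/chat/completions. A minimal sketch of that request, with example URLs and model names (no network call is made here):

```python
# Sketch of the shared OpenAI-compatible request shape. The same builder
# targets a cloud API or a local Ollama/vLLM server; only the base URL,
# key, and model name change. URLs and keys below are placeholders.
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Return (url, headers, body) for an OpenAI-style chat completion."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key:  # self-hosted servers often need no key
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Same function, two very different backends:
url, headers, body = build_chat_request("https://api.openai.com/v1", "sk-example", "gpt-4o", "hi")
url2, headers2, _ = build_chat_request("http://localhost:11434/v1", "", "llama3.1", "hi")
```

Switching providers changes the arguments, never the application code, which is the whole point of BYOM.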

Why IT Teams Want BYOM

Five reasons this matters to the people who manage your infrastructure.

Data Sovereignty

Route AI requests through infrastructure you control. On-premise deployments keep all data within your physical network. Cloud deployments go directly to your chosen provider under your data processing agreement. OpenEduCat never acts as a middleman.

Cost Control

Compare providers side by side. Use cheaper models for simple tasks (worksheet generation, email drafting) and reserve expensive models for complex ones (essay grading, IEP goal writing). Monitor API spend per department in real time. Set usage caps before they become budget surprises.
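A usage cap is conceptually simple: track spend per department and refuse calls that would exceed the budget. A minimal sketch (illustrative only; OpenEduCat's actual budgeting controls may differ):

```python
# Minimal sketch of per-department API spend tracking with a hard cap.
# Class and method names are hypothetical, not OpenEduCat's real API.
class SpendTracker:
    def __init__(self, caps: dict[str, float]):
        self.caps = caps                       # department -> monthly USD budget
        self.spent = {d: 0.0 for d in caps}    # running totals

    def record(self, dept: str, cost: float) -> bool:
        """Record an AI call's cost; return False if it would exceed the cap."""
        if self.spent[dept] + cost > self.caps[dept]:
            return False                       # block before it busts the budget
        self.spent[dept] += cost
        return True

tracker = SpendTracker({"english": 200.0, "math": 150.0})
ok1 = tracker.record("english", 120.0)  # within budget
ok2 = tracker.record("english", 90.0)   # would exceed the $200 cap
```

The same ledger is what makes side-by-side provider cost comparisons possible: every call is attributed to a department and a model.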

Regulatory Compliance

Satisfy data residency requirements in the EU (GDPR), India (DPDP Act), Gulf states, and other jurisdictions that mandate where citizen data is processed. Use a provider with servers in the required region, or host the model yourself.

Academic Freedom

Research universities can use specialized models fine-tuned for their domain. A computer science department might run their own code-review model. A linguistics department might use a multilingual model that outperforms general-purpose options. Each department can optimize for their use case.

No Vendor Lock-In

When your AI provider raises prices by 40% mid-contract (it happens), you switch to another provider in two minutes. When a better model launches, you try it immediately. When a provider gets breached, you move your traffic elsewhere that afternoon. Your application code does not change.

Full On-Premise Control

Self-Hosted: Zero Data Leaves Your Network

Run Llama 3.1 or Mistral on your own GPU servers using Ollama or vLLM. Every AI request (worksheet generation, grading assistance, IEP drafting, student tutoring) is processed on hardware you own, in a room you control. No API calls leave your building. No cloud provider sees your data.

This is not a theoretical option. Districts running Llama 3.1 70B on a server with two NVIDIA A100 GPUs handle 50-100 concurrent AI requests comfortably. Smaller deployments with a single RTX 4090 run Llama 3.1 8B and handle 20-30 concurrent requests, enough for a single school's daily usage.

The initial hardware investment pays for itself within 6-12 months compared to cloud API costs at scale. And you never worry about a provider deprecating a model, changing their privacy policy, or raising prices.
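The payback claim is straightforward arithmetic. The figures below are illustrative assumptions, not quotes; plug in your own hardware price and current API bill:

```python
# Back-of-the-envelope payback calculation behind the "6-12 months" claim.
# All dollar figures are illustrative assumptions, not vendor pricing.
hardware_cost = 30_000           # e.g. a server with two A100-class GPUs
monthly_cloud_api_bill = 3_500   # what comparable volume might cost via cloud APIs
monthly_power_and_ops = 400      # electricity + maintenance for the local box

monthly_savings = monthly_cloud_api_bill - monthly_power_and_ops
payback_months = hardware_cost / monthly_savings
print(round(payback_months, 1))  # ≈ 9.7 months, inside the 6-12 month window
```

Higher API volume shortens the payback period; a single-school deployment on one RTX 4090 has a much smaller up-front cost and a correspondingly smaller monthly saving.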

Architecture Overview

Your Institution
  → OpenEduCat Instance (your server)
    → AI Abstraction Layer
      → Provider Router
        → Cloud: OpenAI / Anthropic / Google / Mistral
        → On-Prem: Ollama / vLLM (your GPU server)
        → Hybrid: mix of both per tool category

On-premise path: all traffic stays within your network perimeter.

Who Needs BYOM

Any institution where "we need to check with legal" is a regular part of software procurement.

Universities with data residency requirements

EU universities need GDPR-compliant AI processing. Use Mistral (EU-hosted) or self-host within your data center. Indian universities under the DPDP Act can route through domestic infrastructure.

Government-funded institutions

Public schools and state universities often have procurement rules that restrict which cloud providers can process student data. BYOM lets you use an approved provider from your existing vendor list.

Research universities with their own compute

If your university already has GPU clusters for research, run an AI model on that same infrastructure. Your computer science department can even fine-tune a model on domain-specific educational data.

Schools in regions with strict data protection laws

Gulf states, parts of Southeast Asia, and several Latin American countries have data localization requirements. Self-hosting ensures student data never crosses a border.

Districts that have been burned by vendor lock-in

If you have ever been stuck with a vendor who raised prices 50% at renewal because migrating was too painful, BYOM is specifically designed to prevent that scenario.

Frequently Asked Questions

Common questions about Bring Your Own Model.

Which AI providers are supported?

OpenAI (GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo), Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku), Google (Gemini Pro, Gemini Flash), Mistral (Large, Medium, Small), Meta Llama (via self-hosted Ollama or vLLM), Azure OpenAI Service, AWS Bedrock, and any server that exposes an OpenAI-compatible API endpoint. If a new provider launches tomorrow and offers an OpenAI-compatible API, it works without any update from us.

Ready to Transform Your AI Infrastructure?

See how OpenEduCat frees up time so every student gets the attention they deserve.

Try it free for 15 days. No credit card required.