One prompt format
BrainAPI is the execution layer for text, image, speech, and routing. Pair it with your own memory store, session database, or vector layer so your chatbot can keep context across sessions without hard-coding provider-specific AI logic.
Most chatbot projects do not fail because the model is weak. They fail because the app forgets user context, repeats onboarding questions, and mixes memory logic directly into prompt-building code. A usable chatbot stack separates concerns cleanly.
If you already keep context in Postgres, Redis, a vector database, or a graph store, BrainAPI gives you one stable place to send the final AI request. That means you can change providers, pricing mode, or multi-modal workflows without rewriting the memory handoff every time.
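The request example below calls a `loadUserContext` helper. A minimal sketch of what that helper can look like, with an in-memory Map standing in for Redis, Postgres, or a vector store — the function names and record shape are illustrative, not part of the BrainAPI API:

```javascript
// An in-memory Map stands in for Redis, Postgres, or a vector store;
// swap it for your real client without changing the call site.
const contextStore = new Map();

async function loadUserContext(userId) {
  const record = contextStore.get(userId);
  // Fall back to an empty summary for first-time users so the
  // prompt builder never has to special-case missing context.
  return record ?? { summary: "No prior context for this user." };
}

async function saveUserContext(userId, summary) {
  contextStore.set(userId, { summary, updatedAt: Date.now() });
}
```

Because the store is hidden behind these two functions, the prompt-building code never touches provider or database details — that is the separation of concerns the rest of this page relies on.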
Take retrieved memory from your app and send it through a single text endpoint shape.
Keep the memory architecture stable while BrainAPI handles provider choice and fallback.
Rate limits, token controls, onboarding, and billing stay in one place instead of spreading across scripts.
// Load stored context for this user from your own memory layer.
const memory = await loadUserContext(userId);

// Build one prompt string from system instructions, retrieved
// context, and the latest user message.
const input = [
  "You are the support assistant for BrainAPI.",
  "Relevant user context:",
  memory.summary,
  "Latest message:",
  userMessage
].join("\n\n");

const response = await fetch("https://api.brainapi.site/api/v1/ai", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-API-Key": process.env.BRAINAPI_KEY
  },
  body: JSON.stringify({
    type: "text",
    input,
    mode: "fast",
    max_output_tokens: 220
  })
});
This keeps memory retrieval inside your app while BrainAPI stays responsible for execution, fallback, and consistent response formatting.
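After the call returns, fold the exchange back into your memory layer so the next session can pick it up. One way to sketch that step — `appendTurn`, the 2000-character cap, and the `data.output` field read from the response are all illustrative assumptions, so check the actual response schema before relying on them:

```javascript
// Fold the latest exchange into the stored summary. Hypothetical
// helper: the turn format and character cap are illustrative.
function appendTurn(summary, userMessage, assistantReply, maxChars = 2000) {
  const turn = `User: ${userMessage}\nAssistant: ${assistantReply}`;
  const next = summary ? `${summary}\n\n${turn}` : turn;
  // Keep only the most recent characters so the stored summary
  // stays within a predictable prompt budget.
  return next.length > maxChars ? next.slice(next.length - maxChars) : next;
}

// Usage after the fetch call, assuming the generated text is on
// an `output` field of the JSON body (verify against the docs):
// const data = await response.json();
// const nextSummary = appendTurn(memory.summary, userMessage, data.output);
// await saveUserContext(userId, nextSummary);
```

A character cap is the simplest budget control; a summarization pass over old turns is a common upgrade once summaries start hitting the cap.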
If you want implementation details, read How to Store AI Context in Node.js or jump into the quickstart.