MemoryKit

Build a support chatbot

End-to-end guide: ingest your help docs, build a retrieval-powered support experience, and improve over time.

Build a customer support experience that retrieves answers from your help center, policies, and past tickets. Use MemoryKit search to find the most relevant passages, then feed them into your own LLM for answer generation.

Prerequisites

  • A MemoryKit API key
  • Node.js 18+ or Python 3.8+
  • Your help center content (articles, FAQs, policy documents)

Step 1: Ingest your knowledge base

Upload your support content as memories. MemoryKit auto-chunks, embeds, and indexes everything.

import { MemoryKit } from "memorykit";
const mk = new MemoryKit({ apiKey: process.env.MEMORYKIT_API_KEY! });
 
// Batch ingest help articles
const articles = await fetchHelpCenterArticles(); // your function
 
const result = await mk.memories.batchIngest({
  items: articles.map((article) => ({
    content: article.body,
    title: article.title,
    tags: ["support", article.category],
    metadata: {
      article_id: article.id,
      url: article.url,
      updated_at: article.updatedAt,
    },
  })),
});
 
console.log(`Ingested ${result.accepted} articles, ${result.rejected} failed`);

Batch ingest accepts up to 100 items per call. For larger datasets, split into batches.
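One way to split a larger dataset is a small chunking helper. This is a sketch: the 100-item limit comes from the note above, and the helper name is ours, not part of the MemoryKit SDK.

```typescript
// Split an array into batches of at most `size` items,
// matching the 100-item-per-call limit on batchIngest.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

You can then loop over `chunk(articles, 100)` and call batchIngest() once per batch, awaiting each call so you stay within rate limits.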

Step 2: Upload PDF policies

Got policy documents as PDFs? Upload them directly.

import fs from "fs";
 
const policyFiles = ["refund-policy.pdf", "shipping-policy.pdf", "privacy-policy.pdf"];
 
for (const file of policyFiles) {
  const buffer = fs.readFileSync(`./policies/${file}`);
  await mk.memories.upload({
    file: new Blob([buffer], { type: "application/pdf" }),
    title: file.replace(".pdf", "").replace(/-/g, " "),
    tags: ["support", "policy"],
  });
  console.log(`Uploaded ${file}`);
}
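The loop above derives a title from each filename inline. If you ingest many files, it can be cleaner to factor that into a helper; this name is illustrative, not part of the SDK.

```typescript
// Turn "refund-policy.pdf" into "refund policy", mirroring
// the inline replace() calls in the upload loop above.
function titleFromFilename(file: string): string {
  return file.replace(/\.pdf$/i, "").replace(/-/g, " ");
}
```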

Step 3: Build the support endpoint

Create an endpoint that retrieves relevant knowledge and returns it. You can then feed these results into your own LLM for answer generation.

// Express.js endpoint
import express from "express";
 
const app = express();
app.use(express.json());
 
app.post("/api/support", async (req, res) => {
  const { message, customerId } = req.body;
 
  // Retrieve relevant support content
  const results = await mk.memories.search({
    query: message,
    limit: 5,
    tags: "support",
    userId: customerId,
  });
 
  // Return ranked sources — feed these into your own LLM
  res.json({
    results: results.results.map((r) => ({
      content: r.content,
      score: r.score,
      memoryId: r.memory_id,
    })),
  });
});
 
app.listen(3000);
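The endpoint returns ranked passages; turning them into an LLM prompt is up to you. A minimal sketch, assuming a Passage shape matching the JSON above (the prompt template is illustrative, and the actual completion call depends on your LLM provider):

```typescript
interface Passage {
  content: string;
  score: number;
  memoryId: string;
}

// Assemble retrieved passages into a grounded prompt for your LLM.
// Numbering the passages lets the model cite them as [1], [2], ...
function buildPrompt(question: string, passages: Passage[]): string {
  const context = passages
    .map((p, i) => `[${i + 1}] ${p.content}`)
    .join("\n\n");
  return [
    "Answer the customer's question using only the passages below.",
    "If the answer is not in the passages, say you don't know.",
    "",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```

Pass the resulting string to your LLM of choice, and map any cited passage numbers back to the memoryId values for source links.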

Step 4: Improve over time

Add resolved tickets as new memories so the bot learns from real interactions.

// When a ticket is resolved, add the Q&A as a memory
async function onTicketResolved(ticket: Ticket) {
  await mk.memories.create({
    content: `Question: ${ticket.question}\nAnswer: ${ticket.resolution}`,
    title: `Resolved ticket #${ticket.id}`,
    tags: ["support", "resolved-ticket"],
    metadata: {
      ticket_id: ticket.id,
      agent: ticket.resolvedBy,
      category: ticket.category,
    },
  });
}
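Not every resolved ticket makes a useful memory. A simple quality gate before calling create() keeps noise out of the index; the thresholds here are illustrative assumptions, not MemoryKit rules.

```typescript
interface Ticket {
  id: string;
  question: string;
  resolution: string;
  resolvedBy: string;
  category: string;
}

// Skip tickets whose Q&A is too short to be worth indexing.
// Tune the minimum lengths to your own ticket data.
function shouldIngestTicket(ticket: Ticket): boolean {
  return (
    ticket.question.trim().length >= 10 &&
    ticket.resolution.trim().length >= 20
  );
}
```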

Step 5: Monitor with webhooks

Set up webhooks to track when memories finish processing (or fail).

const webhook = await mk.webhooks.create({
  url: "https://your-app.com/webhooks/memorykit",
  events: ["memory.completed", "memory.failed"],
});
console.log(`Webhook created: ${webhook.id}`);
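On the receiving side, your endpoint needs to react to each event type. A sketch of a handler, assuming a payload shape of { event, memory_id } to match the field names used elsewhere in this guide; check the webhook reference for the actual schema.

```typescript
// Classify an incoming webhook event into a log message.
// The payload shape here is an assumption for illustration.
type WebhookEvent = {
  event: "memory.completed" | "memory.failed";
  memory_id: string;
};

function describeEvent(e: WebhookEvent): string {
  return e.event === "memory.completed"
    ? `Memory ${e.memory_id} finished processing`
    : `Memory ${e.memory_id} failed; consider re-ingesting it`;
}
```

In the Express app from Step 3, you would mount this at POST /webhooks/memorykit, log or alert on the result, and respond with a 2xx status so the delivery is not retried.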

Summary

| What | How |
| --- | --- |
| Ingest content | batchIngest() + upload() |
| Retrieve answers | search() with filters, then feed into your LLM |
| Cite sources | results[].content with article metadata |
| Improve over time | Add resolved tickets as memories |
| Monitor | Webhooks for memory.completed / memory.failed |

Key features used: Batch ingest, File upload, Search, Webhooks
