Documentation

Add persistent memory to your AI applications with a single API call.


What is Memory Layer?

Memory Layer gives your AI applications a memory. Instead of treating each conversation as isolated, your AI remembers context from previous interactions and uses it to provide better, more relevant responses.

It's a simple API that sits between your application and your AI provider (OpenAI, Anthropic, etc.). You send messages to Memory Layer, and it automatically:

  • Saves the conversation for future reference
  • Finds relevant past conversations automatically
  • Includes that context when generating responses
  • Returns the AI's response along with any context used

Use Cases

  • Customer support bots that remember previous tickets
  • AI coding assistants that recall your project decisions
  • Personal assistants that learn your preferences over time
  • Tutoring systems that track student progress

Authentication

All API requests require an API key. You can create and manage your keys in the Dashboard.

Including your API key

Add your API key to the Authorization header of every request:

http
Authorization: Bearer YOUR_API_KEY
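
In Python, the header can be built once and reused for every request. A minimal sketch (the environment variable name MEMORY_LAYER_API_KEY is a suggested convention, not an official requirement):

```python
import os

def auth_headers(api_key):
    """Return the headers every Memory Layer request needs."""
    return {
        "Authorization": f"Bearer {api_key}",  # note the "Bearer " prefix
        "Content-Type": "application/json",
    }

# Read the key from the environment rather than hardcoding it.
headers = auth_headers(os.environ.get("MEMORY_LAYER_API_KEY", "YOUR_API_KEY"))
```

Keeping the key in an environment variable avoids the client-side exposure pitfall listed below.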

Common Mistakes

  • Forgetting the "Bearer " prefix before your key
  • Using a key that has been deleted or revoked
  • Exposing your key in client-side JavaScript

Chat Endpoint

POST
/v1/chat

Send a message and get a context-aware response. This endpoint automatically saves the conversation and finds relevant past context to include in the response.

Request Body

Field     Type     Required   Description
message   string   Yes        The message to send
user_id   string   Yes        Unique identifier for the user
top_k     number   No         How many past conversations to include (default: 5)

Response

Field        Type     Description
reply        string   The AI's response
memory_ids   array    IDs of past conversations that were used

json
{
  "reply": "Yesterday we discussed building a new dashboard feature. You wanted to focus on user analytics first, then add export functionality.",
  "memory_ids": ["abc-123", "def-456"]
}

Examples

bash
curl -X POST https://memory-layer-api.onrender.com/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What did we discuss yesterday?",
    "user_id": "user-123"
  }'
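
The same request in Python. This is a sketch assuming the third-party requests library; the actual POST is shown commented out so the payload construction stands on its own:

```python
API_URL = "https://memory-layer-api.onrender.com/v1/chat"

def build_chat_payload(message, user_id, top_k=None):
    """Build the JSON body for POST /v1/chat.

    top_k is optional; the server defaults to 5 past conversations.
    """
    payload = {"message": message, "user_id": user_id}
    if top_k is not None:
        payload["top_k"] = top_k
    return payload

payload = build_chat_payload("What did we discuss yesterday?", "user-123")
# resp = requests.post(
#     API_URL,
#     headers={"Authorization": "Bearer YOUR_API_KEY"},
#     json=payload,
# )
# reply = resp.json()["reply"]
```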

Model Context Protocol (MCP)

Simple Explanation

MCP (Model Context Protocol) is a standard that lets your development tools use Memory Layer automatically. Instead of calling our API by hand, your IDE can save and retrieve context as you work.

Without MCPs

Every conversation starts fresh. The AI doesn't know what you discussed in previous sessions unless you manually tell it.

With MCPs

The AI remembers your previous conversations automatically. Ask "What did we decide about the database?" and it knows the answer.

Setting Up MCP

The easiest way is to click "Connect IDE" in your dashboard. For manual setup, add this configuration to your IDE:

json
{
  "mcpServers": {
    "memory-layer": {
      "command": "npx",
      "args": ["@memory-layer/mcp-server"],
      "env": {
        "MEMORY_LAYER_API_KEY": "your-api-key-here"
      }
    }
  }
}

Error Handling

The API uses standard HTTP response codes. Here's what to expect:

Code   What it means       What to do
200    Success             Everything worked
400    Bad request         Check your request format
401    Unauthorized        Check your API key
429    Too many requests   Slow down or upgrade your plan
500    Server error        Try again later

Error Response Format

When something goes wrong, you'll get a response like this:

json
{
  "error": {
    "code": "invalid_api_key",
    "message": "The provided API key is invalid",
    "status": 401
  }
}
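
A small sketch of pulling the useful fields out of an error body in Python (the helper name parse_error is ours, not part of any SDK):

```python
import json

def parse_error(body):
    """Extract (code, message, status) from a Memory Layer error response body."""
    err = json.loads(body)["error"]
    return err["code"], err["message"], err["status"]

sample = (
    '{"error": {"code": "invalid_api_key", '
    '"message": "The provided API key is invalid", "status": 401}}'
)
code, message, status = parse_error(sample)
```

Branching on the machine-readable code field is more robust than matching on the human-readable message, which may change.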

Best Practices

Use consistent user IDs

Always use the same user_id for the same person. This ensures the AI remembers their history correctly.
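
One way to guarantee consistency is to derive the user_id deterministically from a value you already store, such as a normalized email address. This is only an illustrative sketch, not a required scheme:

```python
import hashlib

def stable_user_id(email):
    """Derive a deterministic user_id from an email address.

    Normalizing (strip + lowercase) means "A@x.com" and "a@x.com "
    map to the same memory history.
    """
    digest = hashlib.sha256(email.strip().lower().encode()).hexdigest()
    return "user-" + digest[:12]
```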

Keep your API key secure

Never expose your API key in client-side code. Use environment variables on your server.

Handle rate limits gracefully

If you get a 429 error, wait a moment before retrying. Consider implementing exponential backoff.
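
Exponential backoff can be sketched as follows: the delay doubles with each attempt, is capped, and gets random jitter so many clients don't retry in lockstep. The specific base and cap values here are illustrative, not documented limits:

```python
import random

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Delay in seconds before retry number `attempt` (0-based).

    Exponential growth capped at `cap`, multiplied by jitter in [0.5, 1.0).
    """
    return min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)

# Usage sketch: retry on 429, giving up after a few attempts.
# for attempt in range(5):
#     resp = requests.post(API_URL, headers=headers, json=payload)
#     if resp.status_code != 429:
#         break
#     time.sleep(backoff_delay(attempt))
```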

Monitor your usage

Check your dashboard regularly to understand your usage patterns and avoid hitting limits unexpectedly.