Add persistent memory to your AI applications with a single API call.
Memory Layer gives your AI applications a memory. Instead of treating each conversation as isolated, your AI remembers context from previous interactions and uses it to provide better, more relevant responses.
It's a simple API that sits between your application and your AI provider (OpenAI, Anthropic, etc.). You send messages to Memory Layer, and it automatically saves each conversation and retrieves relevant context from past conversations before responding.
All API requests require an API key. You can create and manage your keys in the Dashboard.
Add your API key to the Authorization header of every request:
Authorization: Bearer YOUR_API_KEY

POST /v1/chat

Send a message and get a context-aware response. This endpoint automatically saves the conversation and finds relevant past context to include in the response.
| Field | Type | Required | Description |
|---|---|---|---|
| message | string | Yes | The message to send |
| user_id | string | Yes | Unique identifier for the user |
| top_k | number | No | How many past conversations to include (default: 5) |
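For example, a request body that widens the search to the ten most relevant past conversations (values here are illustrative):

```json
{
  "message": "What did we decide about the database?",
  "user_id": "user-123",
  "top_k": 10
}
```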
| Field | Type | Description |
|---|---|---|
| reply | string | The AI's response |
| memory_ids | array | IDs of past conversations that were used |
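Putting the request fields together, here is a minimal Python sketch using only the standard library (the `build_chat_request` helper is hypothetical, not part of an official SDK):

```python
import json
import urllib.request

BASE_URL = "https://memory-layer-api.onrender.com"  # base URL from the curl example

def build_chat_request(message: str, user_id: str, api_key: str) -> urllib.request.Request:
    """Construct a POST /v1/chat request (hypothetical helper, not an official SDK)."""
    body = json.dumps({"message": message, "user_id": user_id}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/chat",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Sending the request (requires a real API key):
# with urllib.request.urlopen(build_chat_request("What did we discuss yesterday?", "user-123", "YOUR_API_KEY")) as resp:
#     print(json.load(resp)["reply"])
```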
{
"reply": "Yesterday we discussed building a new dashboard feature. You wanted to focus on user analytics first, then add export functionality.",
"memory_ids": ["abc-123", "def-456"]
}

curl -X POST https://memory-layer-api.onrender.com/v1/chat \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"message": "What did we discuss yesterday?",
"user_id": "user-123"
}'

MCP (Model Context Protocol) servers are a way for your development tools to automatically use Memory Layer. Instead of manually calling our API, your IDE can automatically save and retrieve context as you work.
Without Memory Layer: every conversation starts fresh. The AI doesn't know what you discussed in previous sessions unless you manually tell it.

With Memory Layer: the AI remembers your previous conversations automatically. Ask "What did we decide about the database?" and it knows the answer.
The easiest way is to click "Connect IDE" in your dashboard. For manual setup, add this configuration to your IDE:
{
"mcpServers": {
"memory-layer": {
"command": "npx",
"args": ["@memory-layer/mcp-server"],
"env": {
"MEMORY_LAYER_API_KEY": "your-api-key-here"
}
}
}
}

The API uses standard HTTP response codes. Here's what to expect:
| Code | What it means | What to do |
|---|---|---|
| 200 | Success | Everything worked |
| 400 | Bad request | Check your request format |
| 401 | Unauthorized | Check your API key |
| 429 | Too many requests | Slow down or upgrade your plan |
| 500 | Server error | Try again later |
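For 429 and 500 responses, a simple retry loop with exponential backoff is usually enough. A sketch, where `do_request` is a placeholder for whatever HTTP call your application makes:

```python
import time

def call_with_retry(do_request, max_retries=3, base_delay=1.0):
    """do_request() returns (status_code, body). Retries 429/500 with exponential backoff."""
    delay = base_delay
    for attempt in range(max_retries + 1):
        status, body = do_request()
        if status == 200:
            return body
        if status in (429, 500) and attempt < max_retries:
            time.sleep(delay)  # wait before retrying
            delay *= 2         # exponential backoff: 1s, 2s, 4s, ...
            continue
        raise RuntimeError(f"request failed with status {status}")
```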
When something goes wrong, you'll get a response like this:
{
"error": {
"code": "invalid_api_key",
"message": "The provided API key is invalid",
"status": 401
}
}

Always use the same user_id for the same person. This ensures the AI remembers their history correctly.
Never expose your API key in client-side code. Use environment variables on your server.
If you get a 429 error, wait a moment before retrying. Consider implementing exponential backoff.
Check your dashboard regularly to understand your usage patterns and avoid hitting limits unexpectedly.
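To keep the key out of client-side code, read it from the environment on your server. A minimal sketch (the variable name `MEMORY_LAYER_API_KEY` is borrowed from the MCP config above; adjust to your setup):

```python
import os

def get_api_key() -> str:
    """Read the Memory Layer API key from the server environment instead of hardcoding it."""
    key = os.environ.get("MEMORY_LAYER_API_KEY")
    if not key:
        raise RuntimeError("Set MEMORY_LAYER_API_KEY in your server environment")
    return key
```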