Best Database for a Chat App
You're building a chat app. Maybe it's a Slack clone, a customer support widget, or in-app messaging for your SaaS product. You need a database. You open a browser tab and immediately find 40 different opinions telling you to use 40 different databases.
Here's the short answer: PostgreSQL + Redis. That's the stack. The rest of this post explains why.
Contents
- What a Chat App Actually Needs
- The Stack: PostgreSQL + Redis
- Schema: Messages in PostgreSQL
- Real-Time Delivery: Redis Pub/Sub
- Why Not MongoDB?
- Why Not a Real-Time Database?
- Spin It Up with SpinDB
- Further Reading
What a Chat App Actually Needs
Before picking a database, think about what chat actually requires at the data layer:
- Messages. Ordered by timestamp, paginated for scrollback, tied to a conversation or channel. This is the core.
- Conversations and channels. Who's in them, metadata, permissions. Relational data with foreign keys.
- Read receipts. Which user has read up to which message in which conversation. A join table that gets hammered with updates.
- Presence. Who's online right now. This changes every few seconds and nobody cares about it five minutes later.
- Typing indicators. Ephemeral. "Alice is typing..." shows for three seconds and then vanishes. Storing this in a database is overkill.
- Real-time delivery. When a message is sent, everyone in the conversation needs to know immediately. Sub-second latency.
- Search. Users will want to find old messages by keyword.
Some of these need durable storage with strong consistency. Others are ephemeral and need raw speed. No single database is great at both. That's why you use two.
The Stack: PostgreSQL + Redis
PostgreSQL handles everything that needs to persist: messages, users, conversations, channel memberships, read receipts. It gives you ACID transactions, foreign keys, indexes, full-text search, and JOINs. Chat data is inherently relational (a message belongs to a sender in a conversation), and PostgreSQL is built for exactly this.
Redis (or Valkey, its open-source fork) handles everything ephemeral and real-time: presence tracking, typing indicators, and pub/sub for delivering messages to connected clients. Redis is in-memory, sub-millisecond, and has built-in pub/sub. It's purpose-built for the kinds of fast, throwaway operations that would be wasteful to run through a relational database.
This isn't a novel architecture. Slack, Discord, and most production chat systems use some variation of this pattern. The persistent store handles truth, and the in-memory store handles speed.
Set up both locally with SpinDB in two commands:
npm i -g spindb
spindb create chatdb --start
spindb create chatredis -e redis --start

PostgreSQL is the default engine, so chatdb is a PostgreSQL instance without needing the -e flag.
Schema: Messages in PostgreSQL
Here's a minimal schema for a chat app. Connect to your instance (spindb connect chatdb) and run this:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
username VARCHAR(50) UNIQUE NOT NULL,
display_name VARCHAR(100) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE conversations (
id SERIAL PRIMARY KEY,
name VARCHAR(200),
is_group BOOLEAN DEFAULT false,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE conversation_members (
conversation_id INT REFERENCES conversations(id),
user_id INT REFERENCES users(id),
joined_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (conversation_id, user_id)
);
CREATE TABLE messages (
id BIGSERIAL PRIMARY KEY,
conversation_id INT NOT NULL REFERENCES conversations(id),
sender_id INT NOT NULL REFERENCES users(id),
body TEXT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_messages_conversation_time
ON messages (conversation_id, created_at DESC);
CREATE TABLE read_receipts (
user_id INT REFERENCES users(id),
conversation_id INT REFERENCES conversations(id),
last_read_message_id BIGINT REFERENCES messages(id),
PRIMARY KEY (user_id, conversation_id)
);

A few things to notice:
- messages uses BIGSERIAL for the primary key. Chat apps generate a lot of messages. SERIAL (32-bit int) maxes out at about 2.1 billion rows. BIGSERIAL goes to 9.2 quintillion. Use it from the start so you never have to migrate.
- The index on (conversation_id, created_at DESC) is the single most important index in a chat app. Every time someone opens a conversation, you're querying recent messages in reverse chronological order. This index makes that query fast.
- read_receipts uses an upsert pattern. When a user reads new messages, you INSERT ... ON CONFLICT (user_id, conversation_id) DO UPDATE SET last_read_message_id = EXCLUDED.last_read_message_id. One row per user per conversation, always up to date.
- Foreign keys everywhere. A message always belongs to a conversation and a sender. A conversation member always references a real user and a real conversation. The database enforces this. You can't end up with orphaned messages pointing at deleted conversations.
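The read-receipt upsert described above can be written as a single statement. The literal IDs here are illustrative — in practice they come from the authenticated user and the message just displayed:

```sql
-- Mark conversation 42 as read up to message 123 for user 7
INSERT INTO read_receipts (user_id, conversation_id, last_read_message_id)
VALUES (7, 42, 123)
ON CONFLICT (user_id, conversation_id)
DO UPDATE SET last_read_message_id = EXCLUDED.last_read_message_id;
```

Run it every time the client reports a new scroll position; the row is created on first read and updated in place after that.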
Fetching the most recent 50 messages in a conversation:
SELECT m.id, u.username, u.display_name, m.body, m.created_at
FROM messages m
JOIN users u ON u.id = m.sender_id
WHERE m.conversation_id = 1
ORDER BY m.created_at DESC
LIMIT 50;

That JOIN is doing real work. You get the message content and the sender info in a single query. Try doing that without JOINs and you'll understand why relational databases exist.
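For scrollback past the first page, keyset pagination on the same index beats OFFSET, which gets slower the deeper you scroll. A sketch, where the cursor timestamp (illustrative value below) is the created_at of the oldest message the client already has:

```sql
SELECT m.id, u.username, u.display_name, m.body, m.created_at
FROM messages m
JOIN users u ON u.id = m.sender_id
WHERE m.conversation_id = 1
  AND m.created_at < '2025-01-15 09:30:00'  -- cursor: oldest loaded message's timestamp
ORDER BY m.created_at DESC
LIMIT 50;
```

Because the WHERE and ORDER BY match the (conversation_id, created_at DESC) index, each page is an index range scan regardless of how far back the user has scrolled.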
For search, PostgreSQL has built-in full-text search:
SELECT m.id, m.body, u.username, m.created_at
FROM messages m
JOIN users u ON u.id = m.sender_id
WHERE m.conversation_id = 1
AND to_tsvector('english', m.body) @@ plainto_tsquery('english', 'meeting notes')
ORDER BY m.created_at DESC
LIMIT 20;No Elasticsearch needed for most chat apps. PostgreSQL handles keyword search well enough until you're at serious scale.
Real-Time Delivery: Redis Pub/Sub
When someone sends a message, you need two things to happen: store it in PostgreSQL (permanent), and push it to all connected clients in that conversation (immediate). Redis pub/sub handles the second part.
The pattern is simple. Each conversation gets a Redis channel. When a message is sent, your server publishes it to that channel. Every connected client subscribed to that channel gets it instantly.
Here's what the publish side looks like in TypeScript:
import { createClient } from 'redis'
const redis = createClient({ url: 'redis://127.0.0.1:6379' })
await redis.connect()
// After saving a message to PostgreSQL, publish it to Redis
async function sendMessage(conversationId: number, senderId: number, body: string) {
// 1. Save to PostgreSQL (your ORM or query builder handles this)
const message = await db.query(
'INSERT INTO messages (conversation_id, sender_id, body) VALUES ($1, $2, $3) RETURNING *',
[conversationId, senderId, body]
)
// 2. Publish to Redis for real-time delivery
await redis.publish(
`chat:${conversationId}`,
JSON.stringify(message.rows[0])
)
}

And the subscribe side, running on each connected client's WebSocket handler:
const subscriber = createClient({ url: 'redis://127.0.0.1:6379' })
await subscriber.connect()
// When a user connects to a conversation
async function subscribeToConversation(conversationId: number, ws: WebSocket) {
await subscriber.subscribe(`chat:${conversationId}`, (rawMessage) => {
ws.send(rawMessage)
})
})

Redis pub/sub is fire-and-forget. If a client is offline when a message is published, they miss it. That's fine, because when they reconnect, they query PostgreSQL for messages newer than their last read receipt. PostgreSQL is the source of truth. Redis is just the fast lane.
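That reconnect path is a single query against the schema above, joining the read receipt to fetch everything the client missed. The literal IDs are illustrative, and this sketch assumes a read_receipts row already exists for the user (a first-time open falls back to the plain recent-messages query):

```sql
SELECT m.id, m.sender_id, m.body, m.created_at
FROM messages m
JOIN read_receipts r
  ON r.conversation_id = m.conversation_id
 AND r.user_id = 7                 -- the reconnecting user
WHERE m.conversation_id = 42
  AND m.id > r.last_read_message_id
ORDER BY m.created_at;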
Presence and Typing Indicators
Presence (who's online) and typing indicators are also Redis territory:
// User comes online: set a key with a TTL
await redis.set(`presence:${userId}`, 'online', { EX: 30 })
// Heartbeat every 20 seconds to keep the key alive
await redis.expire(`presence:${userId}`, 30)
// Check who's online in a conversation
async function getOnlineMembers(memberIds: number[]) {
const pipeline = redis.multi()
for (const id of memberIds) {
pipeline.get(`presence:${id}`)
}
const results = await pipeline.exec()
return memberIds.filter((_, i) => results[i] === 'online')
}
// Typing indicator: set a key that expires in 3 seconds
await redis.set(`typing:${conversationId}:${userId}`, '1', { EX: 3 })

Notice the pattern: everything here has a TTL. Presence keys expire after 30 seconds without a heartbeat. Typing indicators expire after 3 seconds. Redis handles the cleanup automatically. You don't need a background job sweeping stale presence data out of a SQL table.
Why Not MongoDB?
MongoDB works for chat. People have built chat apps on it and they function fine. But the document model doesn't give you much here.
Chat messages are uniform. Every message has a conversation ID, a sender, a body, and a timestamp. There's no deeply nested, variable-shape data that needs a flexible schema. A messages collection in MongoDB looks almost identical to a messages table in PostgreSQL, except you lose foreign keys and JOINs.
Without JOINs, fetching messages with sender info means either:
- Embedding sender data in every message. Now you have denormalized data. When a user changes their display name, you have to update every message they ever sent, or live with stale names.
- Making a second query for user data. More code, more round trips, more complexity.
- Using $lookup (MongoDB's JOIN equivalent). It works, but it's slower than a native SQL JOIN and the syntax is verbose.
MongoDB also doesn't have built-in pub/sub for real-time delivery, so you still need Redis (or something like it) on top. At that point, you have two databases anyway. PostgreSQL + Redis gives you a cleaner separation of concerns than MongoDB + Redis.
If your team already runs MongoDB in production and knows it well, it's a reasonable choice. But if you're starting fresh, PostgreSQL is the stronger foundation for relational chat data.
Why Not a Real-Time Database?
Databases like SurrealDB, Firebase Realtime Database, or RethinkDB are designed with real-time features built in. Subscriptions, live queries, push-based updates. The appeal for a chat app is obvious: one database that handles both persistence and real-time delivery.
For a hobby project or an MVP, these can work well. You skip the Redis layer entirely and get real-time updates from a single data source. Less infrastructure to manage.
The tradeoff is maturity and scale. PostgreSQL has been in production since 1996. Redis has been hammered by every major tech company on the planet. The PostgreSQL + Redis stack has a massive body of documentation, tooling, and battle scars from running at scale. If something goes wrong at 3 AM, you'll find a Stack Overflow answer or a blog post from someone who hit the same problem.
Newer real-time databases don't have that yet. SurrealDB is genuinely interesting, but it's young. Firebase works but locks you into Google Cloud. RethinkDB was discontinued and then revived by the community.
If you're building something you plan to scale, go with the boring stack. PostgreSQL + Redis is boring, and boring is a feature when your pager goes off.
Spin It Up with SpinDB
If you followed along, you already have both running. If not, here's the full setup with SpinDB (What is SpinDB?):
npm i -g spindb
spindb create chatdb --start
spindb create chatredis -e redis --start

Get connection strings for your app:
spindb url chatdb
spindb url chatredis

postgresql://127.0.0.1:5432/chatdb
redis://127.0.0.1:6379

Use those URLs in your application config and you're ready to build.
Manage your instances:
spindb stop chatdb chatredis # Stop both
spindb start chatdb chatredis # Start both
spindb list # See all your instances
spindb destroy chatdb chatredis # Remove when done

SpinDB supports 20+ database engines, so if you want to experiment with MongoDB, SurrealDB, or anything else mentioned in this post, it's one command away. Layerbase Desktop provides a GUI if you prefer that.
For managed instances with TLS, backups, and zero setup, both PostgreSQL and Redis are available on Layerbase Cloud.
Further Reading
- Getting Started with Redis builds a rate limiter and leaderboard in TypeScript
- Redis vs Valkey covers the licensing fork and why your Redis code works on both
- What's New in PostgreSQL 18 walks through the latest features with runnable examples
- Which Relational Database Should I Pick? compares PostgreSQL, MySQL, MariaDB, and CockroachDB
- Getting Started with SurrealDB if you want to try the real-time database approach
- Getting Started with MongoDB if your team prefers the document model