Getting Started with Redis
Redis is an in-memory data structure store with sub-millisecond access times. Most people think of it as a key-value cache, but that sells it short. Redis gives you specialized data structures (strings, lists, sets, sorted sets, hashes) that solve problems which are awkward or slow in a relational database: rate limiting, leaderboards, session storage, atomic counters, queues.
The difference isn't just speed. You can solve these problems in PostgreSQL, but you'll need tables, timestamps, window functions, row-level locking, and cleanup jobs. Redis gives you purpose-built primitives that handle all of that in a few commands.
We'll build two practical features in TypeScript: a fixed-window rate limiter and a real-time leaderboard. Run Redis locally, or point the same code at Valkey on Layerbase Cloud if you'd rather use a hosted Redis-compatible database.
Contents
- Create a Redis Instance
- Set Up the Project
- Connect to Redis
- Core Operations: Keys, Values, and TTL
- Feature 1: Rate Limiter
- Feature 2: Real-Time Leaderboard
- When to Reach for Redis
Create a Redis Instance
Local with SpinDB
SpinDB is the simplest way to get a local Redis server running. No Docker, no manual downloads. (What is SpinDB?)
Install SpinDB globally:
```sh
npm i -g spindb    # npm
pnpm add -g spindb # pnpm
```

Or run it directly without installing:

```sh
npx spindb create redis1 -e redis --start  # npm
pnpx spindb create redis1 -e redis --start # pnpm
```

If you installed globally, create and start a Redis instance:

```sh
spindb create redis1 -e redis --start
```

SpinDB downloads the Redis binary for your platform, configures it, and starts the server. Verify it's running:

```sh
spindb url redis1
```

```
redis://127.0.0.1:6379
```

Leave the server running. We'll connect to it from TypeScript in the next section.
Layerbase Cloud with Valkey
Don't want to run anything locally? Layerbase Cloud gives you managed Valkey, which speaks the Redis protocol and works with Redis client libraries. Create a Valkey instance and grab the connection URL and password from the Quick Connect panel.
Cloud instances use TLS, and Redis-compatible traffic on Layerbase Cloud is routed by the database hostname over a shared TLS port. For node-redis, pass the hostname as both socket.host and socket.servername:
```ts
import { createClient } from 'redis'

const client = createClient({
  username: 'default',
  password: 'YOUR_PASSWORD',
  socket: {
    host: 'YOUR_DATABASE_HOSTNAME',
    port: 6379,
    tls: true,
    servername: 'YOUR_DATABASE_HOSTNAME',
  },
})
```

Everything else in this guide works identically whether you're running local Redis or hosted Valkey. Keep the same commands and swap in your Layerbase Cloud hostname and password.
Set Up the Project
```sh
mkdir redis-features && cd redis-features
pnpm init
pnpm add redis
pnpm add -D tsx typescript
```

Create a file called features.ts. All the code in this post goes into that one file.
Connect to Redis
```ts
import { createClient } from 'redis'

const client = createClient({ url: 'redis://localhost:6379' })
await client.connect()
console.log('Connected to Redis')
```

The redis package (node-redis v4+) requires an explicit connect() call. If you send commands before connecting, the client throws an error.
For Layerbase Cloud, use the TLS socket form from the previous section rather than only swapping in a rediss:// URL. node-redis needs socket.servername set to your database hostname so TLS SNI reaches the correct Valkey instance.
Core Operations: Keys, Values, and TTL
Before building anything real, the fundamentals. Redis stores everything as keys mapping to typed values. The simplest type is a string:
```ts
// SET and GET
await client.set('greeting', 'hello')
const value = await client.get('greeting')
console.log(value) // "hello"

// SET with TTL (expires after 60 seconds)
await client.set('session:abc123', 'user_42', { EX: 60 })

// Check remaining TTL
const ttl = await client.ttl('session:abc123')
console.log(`Expires in ${ttl} seconds`)

// Atomic counter
await client.set('page:views', '0')
await client.incr('page:views') // 1
await client.incr('page:views') // 2
await client.incrBy('page:views', 10) // 12
const views = await client.get('page:views')
console.log(`Page views: ${views}`) // "12"
```

Two things worth calling out. First, INCR is atomic. If 100 requests hit it simultaneously, you get exactly 100 increments. No race conditions, no locks. PostgreSQL's `UPDATE counters SET value = value + 1` is also atomic, but it carries the overhead of a full disk-backed transaction. Redis does this entirely in memory. Second, TTL-based expiration is built into the storage engine. Keys clean themselves up. No cron jobs, no `DELETE FROM sessions WHERE expires_at < NOW()`.
Feature 1: Rate Limiter
Rate limiting is a textbook Redis use case, and one of my favorite examples of how much simpler the right data store makes things. The goal: limit each user to 10 requests per 60-second window. Exceed the limit, get rejected.
The PostgreSQL Version (For Comparison)
In a relational database, you'd need a table, an insert per request, a count query with a time window, and a cleanup job. Already feeling heavy:
```sql
CREATE TABLE rate_limits (
  id SERIAL PRIMARY KEY,
  user_id TEXT NOT NULL,
  requested_at TIMESTAMP DEFAULT NOW()
);

-- Check rate limit
SELECT COUNT(*) FROM rate_limits
WHERE user_id = 'user_42'
  AND requested_at > NOW() - INTERVAL '60 seconds';

-- Clean up old records (needs a scheduled job)
DELETE FROM rate_limits
WHERE requested_at < NOW() - INTERVAL '60 seconds';
```

A table, an index, a count query with a time filter, and a background cleanup process. Every request hits disk.
The Redis Version
```ts
async function checkRateLimit(
  userId: string,
  limit: number,
  windowSeconds: number,
): Promise<{ allowed: boolean; remaining: number }> {
  const key = `rate:${userId}`
  const current = await client.incr(key)

  // Set expiration only on the first request in this window
  if (current === 1) {
    await client.expire(key, windowSeconds)
  }

  return {
    allowed: current <= limit,
    remaining: Math.max(0, limit - current),
  }
}
```

That's it. The whole thing. INCR creates the key if it doesn't exist (starting at 1) and increments atomically if it does. EXPIRE sets the window duration on the first request. When the key expires, the window resets automatically. One caveat: this is a fixed window, so a client can burst up to twice the limit across a window boundary. A sorted-set-based sliding window smooths that out, but the fixed window is the right trade-off for most APIs.
Let's test it:
```ts
console.log('\n--- Rate Limiter ---')

const USER = 'user_42'
const LIMIT = 10
const WINDOW = 60

// Simulate 12 requests
for (let i = 1; i <= 12; i++) {
  const { allowed, remaining } = await checkRateLimit(USER, LIMIT, WINDOW)
  const status = allowed ? 'ALLOWED' : 'BLOCKED'
  console.log(`Request ${i}: ${status} (${remaining} remaining)`)
}
```

Run it:

```sh
npx tsx features.ts
```

```
Connected to Redis

--- Rate Limiter ---
Request 1: ALLOWED (9 remaining)
Request 2: ALLOWED (8 remaining)
Request 3: ALLOWED (7 remaining)
Request 4: ALLOWED (6 remaining)
Request 5: ALLOWED (5 remaining)
Request 6: ALLOWED (4 remaining)
Request 7: ALLOWED (3 remaining)
Request 8: ALLOWED (2 remaining)
Request 9: ALLOWED (1 remaining)
Request 10: ALLOWED (0 remaining)
Request 11: BLOCKED (0 remaining)
Request 12: BLOCKED (0 remaining)
```

Ten requests go through. The rest are blocked. After 60 seconds, the key expires and the user can make requests again. No cleanup job needed.
Feature 2: Real-Time Leaderboard
Sorted sets might be my favorite Redis data structure. Every member has a score, and Redis keeps them sorted automatically. Adding a member, updating a score, fetching the top N: all O(log N).
Why Not PostgreSQL?
You could build a leaderboard in PostgreSQL:
```sql
CREATE TABLE leaderboard (
  player TEXT PRIMARY KEY,
  score INTEGER NOT NULL
);

-- Update score (needs INSERT ... ON CONFLICT for upsert)
INSERT INTO leaderboard (player, score) VALUES ('alice', 100)
ON CONFLICT (player) DO UPDATE SET score = leaderboard.score + 100;

-- Get top 10
SELECT player, score FROM leaderboard ORDER BY score DESC LIMIT 10;

-- Get a player's rank
SELECT COUNT(*) + 1 AS rank FROM leaderboard
WHERE score > (SELECT score FROM leaderboard WHERE player = 'alice');
```

This works, but every "get top 10" query re-sorts the table (unless you maintain an index, which slows writes). Getting a player's rank requires a subquery. Concurrent score updates need row-level locking. Under high traffic, these operations compete for the same rows.
The Redis Version
With sorted sets, the data stays sorted at all times. Reads and writes are both O(log N):
```ts
const LEADERBOARD = 'game:leaderboard'

// Clean up from previous runs
await client.del(LEADERBOARD)

// Add players with initial scores using ZADD
await client.zAdd(LEADERBOARD, [
  { score: 2500, value: 'alice' },
  { score: 1800, value: 'bob' },
  { score: 3200, value: 'charlie' },
  { score: 2100, value: 'diana' },
  { score: 2900, value: 'eve' },
  { score: 1500, value: 'frank' },
  { score: 3400, value: 'grace' },
  { score: 2700, value: 'henry' },
])

console.log('\n--- Leaderboard ---')
```

Update scores atomically with ZINCRBY. No read-modify-write cycle, no locking:

```ts
// Alice gets a 500-point bonus
await client.zIncrBy(LEADERBOARD, 500, 'alice')

// Bob goes on a streak
await client.zIncrBy(LEADERBOARD, 1200, 'bob')
```

Get the top 5 players. ZRANGE with REV returns highest scores first, and WITHSCORES includes the scores:
```ts
const top5 = await client.zRangeWithScores(LEADERBOARD, 0, 4, { REV: true })

console.log('\nTop 5:')
for (let i = 0; i < top5.length; i++) {
  const entry = top5[i]
  console.log(`  #${i + 1} ${entry.value}: ${entry.score}`)
}
```

Get a specific player's rank and score. ZREVRANK returns the 0-based position in descending order:
```ts
const aliceRank = await client.zRevRank(LEADERBOARD, 'alice')
const aliceScore = await client.zScore(LEADERBOARD, 'alice')
console.log(`\nAlice: rank #${(aliceRank ?? 0) + 1}, score ${aliceScore}`)

const bobRank = await client.zRevRank(LEADERBOARD, 'bob')
const bobScore = await client.zScore(LEADERBOARD, 'bob')
console.log(`Bob: rank #${(bobRank ?? 0) + 1}, score ${bobScore}`)
```

Get the total number of players:

```ts
const totalPlayers = await client.zCard(LEADERBOARD)
console.log(`\nTotal players: ${totalPlayers}`)
```

Finally, disconnect when done:

```ts
await client.close()
```

Run the complete script:
```sh
npx tsx features.ts
```

```
Connected to Redis

--- Rate Limiter ---
Request 1: ALLOWED (9 remaining)
Request 2: ALLOWED (8 remaining)
...
Request 10: ALLOWED (0 remaining)
Request 11: BLOCKED (0 remaining)
Request 12: BLOCKED (0 remaining)

--- Leaderboard ---

Top 5:
  #1 grace: 3400
  #2 charlie: 3200
  #3 bob: 3000
  #4 alice: 3000
  #5 eve: 2900

Alice: rank #4, score 3000
Bob: rank #3, score 3000

Total players: 8
```

Alice went from 2500 to 3000 after her bonus, and Bob jumped from 1800 to 3000 after his streak. The sorted set handled both updates atomically and kept the ordering correct. No explicit sorting anywhere.
When to Reach for Redis
The rate limiter and leaderboard are two examples, but the pattern applies broadly. Reach for Redis when you need:
- Caching with TTL-based expiration: store computed results or API responses with automatic cleanup
- Rate limiting: atomic counters with expiring windows, no cleanup jobs
- Session storage: fast reads, automatic expiration, no table scans
- Real-time leaderboards and counters: sorted sets keep data ordered at write time, so reads are always fast
- Job queues: lists with `LPUSH`/`BRPOP` give you a simple blocking FIFO queue
- Pub/sub messaging: broadcast events to multiple subscribers without polling
These all boil down to the same thing: high-frequency operations on simple data structures where sub-millisecond latency matters. That's Redis territory.
Wrapping Up
The full script is under 80 lines of real code. You built a rate limiter and a real-time leaderboard, both using data structures that would require significantly more code and infrastructure in a relational database. These patterns scale to millions of operations per second.
The Redis documentation covers additional data structures (streams, HyperLogLog, bitmaps), persistence options, replication, and clustering.
To manage your local Redis instance:
```sh
spindb stop redis1   # Stop the server
spindb start redis1  # Start it again
spindb list          # See all your database instances
```

SpinDB handles 20+ engines, so Redis can sit next to PostgreSQL and MongoDB with one CLI managing all of them. Layerbase Desktop wraps the same thing in a GUI on macOS.