Getting Started with Valkey
Valkey is the open-source fork of Redis. After Redis changed its license in 2024, the Linux Foundation launched Valkey to keep the project fully open under BSD 3-Clause. The result: a drop-in replacement. Every Redis client, library, and CLI tool works with Valkey unchanged because Valkey speaks the exact same protocol.
If you're starting a new project and want an in-memory data store without licensing headaches, Valkey is the community's pick. Same sub-millisecond performance, same data structures, same ecosystem.
We'll build three patterns in a single TypeScript file: a cache-aside layer, a session store using hashes, and pub/sub messaging. Run it locally or against a Layerbase Cloud instance.
Contents
- Create a Valkey Instance
- Set Up the Project
- Connect to Valkey
- Core Operations
- Cache-Aside Pattern
- Session Store with Hashes
- Pub/Sub Notifications
- When to Use Valkey
Create a Valkey Instance
Local with SpinDB
One command gets you a local Valkey server with SpinDB. No Docker, no manual binary wrangling. (What is SpinDB?)
Install SpinDB globally:
npm i -g spindb # npm
pnpm add -g spindb # pnpm

Or run it directly without installing:

npx spindb create valkey1 -e valkey --start # npm
pnpx spindb create valkey1 -e valkey --start # pnpm

If you installed globally, create and start a Valkey instance:

spindb create valkey1 -e valkey --start

SpinDB downloads the Valkey binary for your platform, configures it, and starts the server. Verify it's running:

spindb url valkey1
redis://127.0.0.1:6379

Leave the server running. We'll connect to it from TypeScript in the next section.
Layerbase Cloud
For a hosted option, Layerbase Cloud spins up a managed Valkey instance for you. Create one and copy the connection URL and password from the Quick Connect panel.
Cloud instances use TLS, and Valkey on Layerbase Cloud is routed by the database hostname over a shared TLS port. For node-redis, pass the hostname as both socket.host and socket.servername:
const client = createClient({
  username: 'default',
  password: 'YOUR_PASSWORD',
  socket: {
    host: 'YOUR_DATABASE_HOSTNAME',
    port: 6379,
    tls: true,
    servername: 'YOUR_DATABASE_HOSTNAME',
  },
})

Everything else in this guide works identically whether you're running locally or on Layerbase Cloud. Just keep the same commands and swap in your Layerbase Cloud hostname and password.
Set Up the Project
mkdir valkey-patterns && cd valkey-patterns
pnpm init
pnpm add redis
pnpm add -D tsx typescript

Because Valkey is fully Redis-compatible at the protocol level, you use the standard redis npm package (node-redis v4+). No special client needed. Same package, same API, same everything. I really like this about Valkey: zero migration cost.
Create a file called valkey.ts. All the code in this post goes into that one file.
Connect to Valkey
import { createClient } from 'redis'
const client = createClient({ url: 'redis://localhost:6379' })
client.on('error', (error) => console.error('Valkey client error:', error))
await client.connect()
console.log('Connected to Valkey')

The createClient function takes a URL for local SpinDB instances (redis://localhost:6379). For Layerbase Cloud, use the TLS socket form from the previous section so node-redis sends the correct SNI hostname on port 6379.
Core Operations
The fundamentals first. SET stores a value, GET retrieves it, and EX sets a TTL (time to live) in seconds:
// Basic SET/GET with TTL
await client.set('greeting', 'Hello from Valkey', { EX: 60 })
const greeting = await client.get('greeting')
console.log(greeting) // "Hello from Valkey"
// Check remaining TTL
const ttl = await client.ttl('greeting')
console.log(`TTL: ${ttl} seconds`)

For batch operations, MSET and MGET handle multiple keys in a single round trip:
// Batch SET/GET
await client.mSet([
  ['config:theme', 'dark'],
  ['config:lang', 'en'],
  ['config:timezone', 'UTC'],
])
const values = await client.mGet([
  'config:theme',
  'config:lang',
  'config:timezone',
])
console.log(values) // ["dark", "en", "UTC"]

Batching matters. Three individual GET calls make three round trips. MGET makes one. Over a network connection, that difference adds up fast.
Cache-Aside Pattern
Cache-aside (also called lazy-loading) is the most common caching strategy in web applications. Check the cache first. If the data is there, return it. If not, fetch from the primary database, store the result in the cache, then return it:
async function simulateDatabaseQuery(
  userId: string,
): Promise<Record<string, string>> {
  // Simulate a slow database query (50ms)
  await new Promise((resolve) => setTimeout(resolve, 50))
  return {
    id: userId,
    name: 'Alice',
    email: 'alice@example.com',
    plan: 'pro',
  }
}

async function getUserWithCache(
  userId: string,
): Promise<Record<string, string>> {
  const cacheKey = `user:${userId}`
  const CACHE_TTL = 300 // 5 minutes

  // Check the cache first
  const cached = await client.get(cacheKey)
  if (cached) {
    console.log(` Cache HIT for ${cacheKey}`)
    return JSON.parse(cached)
  }

  // Cache miss: fetch from database
  console.log(` Cache MISS for ${cacheKey}`)
  const user = await simulateDatabaseQuery(userId)

  // Store in cache with TTL
  await client.set(cacheKey, JSON.stringify(user), { EX: CACHE_TTL })
  return user
}

The performance difference is dramatic:
console.log('\n--- Cache-Aside Pattern ---')
const start1 = performance.now()
const user1 = await getUserWithCache('42')
const time1 = (performance.now() - start1).toFixed(2)
console.log(` Result: ${user1.name} (${time1}ms)`)
const start2 = performance.now()
const user2 = await getUserWithCache('42')
const time2 = (performance.now() - start2).toFixed(2)
console.log(` Result: ${user2.name} (${time2}ms)`)

--- Cache-Aside Pattern ---
Cache MISS for user:42
Result: Alice (51.23ms)
Cache HIT for user:42
Result: Alice (0.34ms)

First call: ~50ms (hits the database). Second call: under a millisecond (served from memory). In production, this pattern slashes load on your primary database. A PostgreSQL query might take 5-50ms. Valkey typically responds in under 1ms.
The TTL matters. Without it, stale data lives in the cache forever. After 5 minutes the key expires, the next request triggers a fresh query, and the cycle starts over.
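TTL expiry is passive: stale data can linger for up to five minutes after a write. When your application knows the underlying row changed, it can also evict eagerly. A minimal sketch against the same user:<id> key scheme; `CacheClient` is a structural type I'm introducing here that just names the one command used, so the `client` from earlier sections satisfies it:

```typescript
// Structural type covering the single command this helper needs;
// the node-redis client from the connection section satisfies it.
type CacheClient = { del: (key: string) => Promise<number> }

// Evict a cached user so the next getUserWithCache call misses and
// re-fetches fresh data from the primary database.
async function invalidateUserCache(
  cache: CacheClient,
  userId: string,
): Promise<boolean> {
  const removed = await cache.del(`user:${userId}`)
  return removed > 0 // true if a cached copy actually existed
}
```

Call this right after any write that changes the user row. Combined with the TTL you get both eager invalidation on known writes and a passive backstop for everything else.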
Session Store with Hashes
Most web frameworks store sessions in a database table with a fixed schema. Valkey hashes are lighter: each session is a key with named fields, no schema to manage, automatic expiration built in.
HSET stores fields in a hash. HGETALL retrieves all fields. EXPIRE sets a TTL on the entire hash:
console.log('\n--- Session Store ---')
async function createSession(
  sessionId: string,
  data: Record<string, string>,
): Promise<void> {
  const key = `session:${sessionId}`
  await client.hSet(key, data)
  await client.expire(key, 1800) // 30 minutes
  console.log(` Created session ${sessionId}`)
}

async function getSession(
  sessionId: string,
): Promise<Record<string, string> | null> {
  const key = `session:${sessionId}`
  const data = await client.hGetAll(key)
  if (Object.keys(data).length === 0) return null
  return data
}

async function updateSessionField(
  sessionId: string,
  field: string,
  value: string,
): Promise<void> {
  const key = `session:${sessionId}`
  await client.hSet(key, field, value)
}

// Create a session
await createSession('abc-123', {
  userId: '42',
  email: 'alice@example.com',
  role: 'admin',
  lastActive: new Date().toISOString(),
})

// Read it back
const session = await getSession('abc-123')
console.log(' Session data:', session)

// Update a single field without rewriting the whole session
await updateSessionField(
  'abc-123',
  'lastActive',
  new Date().toISOString(),
)
console.log(' Updated lastActive field')

// Check TTL
const sessionTtl = await client.ttl('session:abc-123')
console.log(` Session expires in ${sessionTtl} seconds`)

--- Session Store ---
Created session abc-123
Session data: { userId: '42', email: 'alice@example.com', role: 'admin', lastActive: '2026-03-14T12:00:00.000Z' }
Updated lastActive field
Session expires in 1799 seconds

Compare this to a PostgreSQL sessions table: no CREATE TABLE, no migrations, no cleanup cron to delete expired rows. Set a TTL, Valkey handles expiration. Reads are sub-millisecond. And because HSET updates individual fields, you can touch lastActive on every request without rewriting the entire session.
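Because EXPIRE can be reissued at any time, the fixed 30-minute TTL above can become a sliding window: refresh the countdown on every request that touches the session, so sessions only die after 30 minutes of inactivity. A sketch under the same session:<id> key scheme; `SessionClient` is a structural type I'm introducing so the `client` from earlier sections (or any stand-in) satisfies it:

```typescript
// Structural type for the two commands this helper uses (node-redis v4 names).
type SessionClient = {
  hSet: (key: string, field: string, value: string) => Promise<number>
  expire: (key: string, seconds: number) => Promise<boolean>
}

const SESSION_TTL = 1800 // 30 minutes, matching createSession above

// Mark the session as active and restart its expiry countdown.
async function touchSession(
  store: SessionClient,
  sessionId: string,
): Promise<void> {
  const key = `session:${sessionId}`
  await store.hSet(key, 'lastActive', new Date().toISOString())
  await store.expire(key, SESSION_TTL)
}
```

Wire this into your request middleware and the TTL becomes an idle timeout rather than a hard cap on session length.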
Pub/Sub Notifications
Pub/sub lets clients send and receive messages through named channels. Publish to a channel, every subscriber gets the message instantly. Great for cache invalidation, real-time notifications, and event broadcasting between services.
One quirk: pub/sub requires a separate client connection because a subscribed client enters a special mode where it can only listen:
console.log('\n--- Pub/Sub ---')
// Create a separate client for subscribing
const subscriber = client.duplicate()
await subscriber.connect()
// Subscribe to a channel
const messages: string[] = []
await subscriber.subscribe('events', (message) => {
  console.log(` Received: ${message}`)
  messages.push(message)
})
// Publish messages from the main client
await client.publish('events', 'user:42 logged in')
await client.publish('events', 'cache:user:42 invalidated')
await client.publish('events', 'deployment started')
// Give messages a moment to arrive
await new Promise((resolve) => setTimeout(resolve, 100))
console.log(` Total messages received: ${messages.length}`)
// Clean up
await subscriber.unsubscribe('events')
await subscriber.disconnect()

--- Pub/Sub ---
Received: user:42 logged in
Received: cache:user:42 invalidated
Received: deployment started
Total messages received: 3

Messages are fire-and-forget: if no one is subscribed when a message is published, it's lost. Pub/sub is not a message queue. If you need guaranteed delivery, look at Valkey Streams instead. But for real-time event broadcasting where the occasional missed message is acceptable, pub/sub is simple and fast.
Common uses: broadcasting cache invalidation across app servers, pushing real-time notifications to WebSocket clients, and signaling between microservices.
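For contrast with fire-and-forget pub/sub, here is roughly what the durable alternative looks like: with Streams, XADD appends an entry that persists until trimmed, and XRANGE reads entries back even if the consumer connects after they were written. A hedged sketch using the node-redis v4 xAdd/xRange methods; `StreamClient` is a structural type introduced here so the `client` from earlier satisfies it, and `events:stream` is just an illustrative key name:

```typescript
// Structural types for the stream commands used (node-redis v4 method names).
type StreamClient = {
  xAdd: (key: string, id: string, fields: Record<string, string>) => Promise<string>
  xRange: (
    key: string,
    start: string,
    end: string,
  ) => Promise<Array<{ id: string; message: Record<string, string> }>>
}

// Append an event; '*' asks the server to assign the next stream ID.
async function recordEvent(stream: StreamClient, event: string): Promise<string> {
  return stream.xAdd('events:stream', '*', { event })
}

// Read everything from the beginning ('-') to the end ('+') of the stream.
// Unlike pub/sub, entries written before the reader connected are still here.
async function replayEvents(stream: StreamClient): Promise<string[]> {
  const entries = await stream.xRange('events:stream', '-', '+')
  return entries.map((entry) => entry.message.event)
}
```

Production consumers would typically use consumer groups (XREADGROUP/XACK) rather than replaying the whole stream, but this shows the core difference: the messages survive.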
Wrapping Up
// Clean up
await client.quit()
console.log('\nDisconnected')

Run the complete script:

npx tsx valkey.ts

When to Use Valkey
Valkey fits well in these scenarios:
- Application caching: cache database queries, API responses, or computed results with automatic expiration
- Session management: store user sessions with per-field updates and built-in TTL
- Pub/sub messaging: broadcast events between services or to connected clients in real time
- Feature flags: store and check feature flags with sub-millisecond reads
- Rate limiting: track request counts per key with TTL-based sliding windows
- Distributed locks: coordinate access to shared resources across multiple servers
In all of these, Valkey sits alongside your primary database, not replacing it. It handles the fast, ephemeral stuff so your main store doesn't have to.
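The rate-limiting bullet above is nearly a one-command pattern: INCR a per-window counter and set a TTL on its first use. A minimal fixed-window sketch (a simplification of the sliding-window variant mentioned in the list); `CounterClient` is a structural type introduced here so the `client` from earlier sections satisfies it:

```typescript
// Structural type for the commands used (node-redis v4 method names).
type CounterClient = {
  incr: (key: string) => Promise<number>
  expire: (key: string, seconds: number) => Promise<boolean>
}

// Fixed-window limiter: at most `limit` hits per `windowSec` seconds per id.
// The key embeds the current window number, so counters roll over naturally
// and the TTL garbage-collects old windows.
async function allowRequest(
  counter: CounterClient,
  id: string,
  limit: number,
  windowSec: number,
  nowMs: number = Date.now(),
): Promise<boolean> {
  const window = Math.floor(nowMs / 1000 / windowSec)
  const key = `rate:${id}:${window}`
  const count = await counter.incr(key)
  if (count === 1) await counter.expire(key, windowSec) // first hit starts the clock
  return count <= limit
}
```

Note the INCR and EXPIRE here are two separate commands; a production limiter would usually wrap them in a Lua script or MULTI block so a crash between the two can't leave a counter without a TTL.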
The Valkey documentation covers advanced topics like clustering, replication, persistence, Lua scripting, and Streams.
To manage your local Valkey instance:
spindb stop valkey1 # Stop the server
spindb start valkey1 # Start it again
spindb list # See all your database instances

SpinDB supports 20+ engines, so Valkey can run alongside CockroachDB, SurrealDB, or whatever else your project needs. Layerbase Desktop wraps the same thing in a GUI on macOS.