QuestDB vs InfluxDB
Your app is generating timestamped data and you need a database that's built for it. Maybe it's IoT sensor readings coming in every few seconds. Maybe it's server metrics, financial ticks, or user analytics events. PostgreSQL can store timestamps, sure, but once you're doing time-bucketed aggregations over millions of rows, you want something purpose-built.
QuestDB and InfluxDB both target this space, but they couldn't be more different in how they want you to work with your data. QuestDB extends SQL with time-series primitives like SAMPLE BY, so your team's existing SQL knowledge transfers directly. InfluxDB (this comparison targets 2.x) introduces its own data model, its own query language (Flux), and its own way of thinking about measurements. One feels like a natural extension of what you already know. The other asks you to learn a new paradigm.
Below, we'll run the same sensor pipeline in both so you can see exactly how different the developer experience is.
Contents
- Quick Comparison
- Set Up Both Databases with SpinDB
- The Same Task in Both
- Key Differences
- When to Pick QuestDB
- When to Pick InfluxDB
- Run Both on Layerbase Cloud
Quick Comparison
| | QuestDB | InfluxDB |
|---|---|---|
| Query language | SQL with extensions (SAMPLE BY, LATEST ON) | Flux (functional pipeline) |
| Data model | Tables, columns, rows | Measurements, tags, fields |
| Wire protocol | PostgreSQL (port 8812) | HTTP REST API (port 8086) |
| Write format | SQL INSERT or InfluxDB Line Protocol | InfluxDB Line Protocol or Point builder |
| Client library | Any PostgreSQL client (pg, psycopg2) | Official SDK (@influxdata/influxdb-client) |
| Retention | Manual (DROP PARTITION) or detach | Built-in per-bucket retention policies |
| Learning curve | Low if you know SQL | Steeper (new query language, new data model) |
| Sweet spot | Teams that want SQL, high-throughput ingestion | Telegraf/Grafana ecosystem, pipeline-style queries |
Set Up Both Databases with SpinDB
We'll run both locally with SpinDB. No Docker, no manual config. (What is SpinDB?)
Install SpinDB globally:
npm i -g spindb # npm
pnpm add -g spindb # pnpm

Create and start both instances:
spindb create quest1 -e questdb --start
spindb create influx1 -e influxdb --start

Check their URLs:
spindb url quest1
postgresql://localhost:8812/qdb

spindb url influx1
http://127.0.0.1:8086

QuestDB gives you a PostgreSQL connection string. InfluxDB gives you an HTTP URL. That tells you a lot about what comes next.
When SpinDB starts InfluxDB, it handles initial setup (creating a default organization, bucket, and API token). Copy the token from the output. You'll need it shortly.
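If you'd rather not paste the token into source files, one option is to export it as an environment variable and read it in the scripts below. A small sketch (the INFLUX_TOKEN variable name is our convention here, not something SpinDB sets for you):

// Read the API token from the environment instead of hard-coding it, e.g.
// INFLUX_TOKEN=<token from SpinDB> npx tsx influxdb-sensors.ts
const INFLUX_TOKEN = process.env.INFLUX_TOKEN
if (!INFLUX_TOKEN) {
  throw new Error('Set INFLUX_TOKEN to the token SpinDB printed during setup')
}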
The Same Task in Both
The task: insert 288 sensor readings (3 sensors, 96 readings each, spanning 24 hours), then query average temperature per 15-minute bucket. Same data, same question, very different implementations.
Set up a project with both client libraries:
mkdir tsdb-compare && cd tsdb-compare
pnpm init
pnpm add pg @influxdata/influxdb-client
pnpm add -D tsx typescript @types/pg

Both scripts use the same generated data:
type Reading = {
sensorId: string
temperature: number
humidity: number
ts: Date
}
function generateReadings(): Reading[] {
const sensors = ['sensor_a', 'sensor_b', 'sensor_c']
const readings: Reading[] = []
const now = new Date()
const twentyFourHoursAgo = new Date(now.getTime() - 24 * 60 * 60 * 1000)
for (const sensorId of sensors) {
let baseTemp = 20 + Math.random() * 10
let baseHumidity = 40 + Math.random() * 20
for (let i = 0; i < 96; i++) {
const ts = new Date(
twentyFourHoursAgo.getTime() +
i * 15 * 60 * 1000 +
Math.random() * 60 * 1000,
)
baseTemp += (Math.random() - 0.5) * 2
baseHumidity += (Math.random() - 0.5) * 3
readings.push({
sensorId,
temperature: Math.round(baseTemp * 100) / 100,
humidity:
Math.round(Math.max(0, Math.min(100, baseHumidity)) * 100) / 100,
ts,
})
}
}
return readings.sort((a, b) => a.ts.getTime() - b.ts.getTime())
}
const readings = generateReadings()

Now let's see how each database handles it differently.
QuestDB: SQL All the Way
Create a file called questdb-sensors.ts:
import pg from 'pg'
// ... paste generateReadings() above ...
const client = new pg.Client({
host: 'localhost',
port: 8812,
database: 'qdb',
})
await client.connect()
console.log('Connected to QuestDB')

Create the table. QuestDB's SYMBOL type interns string values for fast filtering, and timestamp(ts) designates the time column:
await client.query(`
CREATE TABLE IF NOT EXISTS sensors (
sensor_id SYMBOL,
temperature DOUBLE,
humidity DOUBLE,
ts TIMESTAMP
) timestamp(ts) PARTITION BY HOUR WAL;
`)

Insert the data with parameterized SQL:
for (const r of readings) {
await client.query(
'INSERT INTO sensors (sensor_id, temperature, humidity, ts) VALUES ($1, $2, $3, $4)',
[r.sensorId, r.temperature, r.humidity, r.ts],
)
}
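// A note on throughput: this is one network round trip per row, which is
// fine for 288 rows. For larger volumes you'd batch, e.g. one multi-row
// INSERT (VALUES ($1,$2,$3,$4), ($5,$6,$7,$8), ...) with a flattened
// parameter array, or QuestDB's InfluxDB Line Protocol ingestion path
// (see the Ecosystem section below).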
console.log(`Inserted ${readings.length} rows`)

Query average temperature per 15-minute bucket:
const sampled = await client.query(`
SELECT
sensor_id,
avg(temperature) as avg_temp
FROM sensors
SAMPLE BY 15m
ALIGN TO CALENDAR
`)
console.log('\n15-minute averages (first 10 buckets):')
console.log('sensor_id | avg_temp')
console.log('-----------|----------')
for (const row of sampled.rows.slice(0, 10)) {
console.log(
`${row.sensor_id.padEnd(10)} | ${Number(row.avg_temp).toFixed(2).padStart(8)}`,
)
}
console.log(`... (${sampled.rows.length} total buckets)`)
await client.end()

15-minute averages (first 10 buckets):
sensor_id | avg_temp
-----------|----------
sensor_a | 23.41
sensor_b | 27.08
sensor_c | 21.83
sensor_a | 24.12
sensor_b | 26.55
sensor_c | 22.31
sensor_a | 23.78
sensor_b | 27.44
sensor_c | 22.09
sensor_a | 24.63
... (288 total buckets)

That's the whole thing. SAMPLE BY 15m. If you can write a SELECT, you can write a QuestDB time-bucketed aggregation.
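SAMPLE BY isn't the only extension worth knowing. LATEST ON, the other clause from the comparison table, answers the second classic time-series question: what's the most recent reading per sensor? A quick sketch with the same pg client (run it before the client.end() call):

// Most recent row per sensor_id. LATEST ON scans backward from the newest
// partition and returns exactly one row per sensor.
const latest = await client.query(`
  SELECT * FROM sensors
  LATEST ON ts PARTITION BY sensor_id
`)
console.log(latest.rows) // three rows, one per sensor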
InfluxDB: Points, Tags, and Flux
Create a file called influxdb-sensors.ts:
import { InfluxDB, Point } from '@influxdata/influxdb-client'
// ... paste generateReadings() above ...
const INFLUX_URL = 'http://localhost:8086'
const INFLUX_TOKEN = 'YOUR_TOKEN' // paste the token SpinDB printed (or read it from process.env as sketched earlier)
const INFLUX_ORG = 'default'
const INFLUX_BUCKET = 'default'
const influxDB = new InfluxDB({ url: INFLUX_URL, token: INFLUX_TOKEN })
console.log('Connected to InfluxDB')

Write the data. No INSERT statements here. You build Point objects and classify each value as a tag (indexed, for filtering) or a field (not indexed, for aggregation):
const writeApi = influxDB.getWriteApi(INFLUX_ORG, INFLUX_BUCKET, 'ms')
for (const r of readings) {
const point = new Point('sensors')
.tag('sensor_id', r.sensorId)
.floatField('temperature', r.temperature)
.floatField('humidity', r.humidity)
.timestamp(r.ts)
writeApi.writePoint(point)
}
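// writePoint() buffers client-side; with default batching options nothing
// is sent until a flush happens. close() flushes anything still pending and
// releases resources; long-running writers can call writeApi.flush()
// periodically instead.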
await writeApi.close()
console.log(`Wrote ${readings.length} points to InfluxDB`)

Query average temperature per 15-minute bucket using Flux:
const queryApi = influxDB.getQueryApi(INFLUX_ORG)
const fluxQuery = `
from(bucket: "${INFLUX_BUCKET}")
|> range(start: -24h)
|> filter(fn: (r) => r._measurement == "sensors")
|> filter(fn: (r) => r._field == "temperature")
|> aggregateWindow(every: 15m, fn: mean, createEmpty: false)
|> sort(columns: ["_time"])
`
const rows: Record<string, unknown>[] = []
for await (const { values, tableMeta } of queryApi.iterateRows(fluxQuery)) {
rows.push(tableMeta.toObject(values))
}
console.log('\n15-minute averages (first 10 windows):')
console.log('sensor_id | avg_temp')
console.log('-----------|----------')
for (const row of rows.slice(0, 10)) {
const sensorId = String(row.sensor_id).padEnd(10)
const avgTemp = Number(row._value).toFixed(2).padStart(8)
console.log(`${sensorId} | ${avgTemp}`)
}
console.log(`... (${rows.length} total windows)`)

15-minute averages (first 10 windows):
sensor_id | avg_temp
-----------|----------
sensor_a | 23.41
sensor_b | 27.08
sensor_c | 21.83
sensor_a | 24.12
sensor_b | 26.55
sensor_c | 22.31
sensor_a | 23.78
sensor_b | 27.44
sensor_c | 22.09
sensor_a | 24.63
... (288 total windows)

Same numbers, very different path. You need to understand measurements, tags vs fields, the |> pipe operator, range(), filter(), and aggregateWindow(). None of it is unreasonable, but it's all new vocabulary if you're coming from SQL.
Key Differences
SQL vs Flux
This is the biggest decision point. QuestDB is SQL with a handful of extensions. If your team writes SQL every day, QuestDB is immediately productive. SAMPLE BY 15m reads like pseudocode.
Flux is a different language entirely. Functional, pipeline-oriented, and genuinely expressive once you learn it. Chaining |> filter() |> aggregateWindow() |> group() is elegant for complex transformations. But every developer on your team needs to learn it.
Same query, side by side:
QuestDB:
SELECT sensor_id, avg(temperature) FROM sensors SAMPLE BY 15m ALIGN TO CALENDAR

InfluxDB (Flux):
from(bucket: "default")
|> range(start: -24h)
|> filter(fn: (r) => r._measurement == "sensors")
|> filter(fn: (r) => r._field == "temperature")
  |> aggregateWindow(every: 15m, fn: mean, createEmpty: false)

The QuestDB version is one line. The Flux version is five. Both are readable, but they demand different kinds of expertise.
PostgreSQL Wire Protocol vs HTTP API
QuestDB speaks the PostgreSQL wire protocol. Connect with pg, psycopg2, JDBC, or any other PG client. Your existing database tooling, ORMs, connection poolers, and monitoring tools work out of the box.
InfluxDB exposes an HTTP API. You need the official SDK (@influxdata/influxdb-client for JavaScript, influxdb-client-python for Python) or raw HTTP requests. Fine for applications, but psql, TablePlus, and DBeaver don't work directly.
For quick ad-hoc queries, QuestDB's PG protocol is a real advantage. spindb connect quest1 and start running SQL immediately. With InfluxDB, you use the built-in web UI at http://localhost:8086 or write code.
Data Retention
InfluxDB has retention policies built into the bucket model. Set how long data lives when you create a bucket. Old data gets deleted automatically. You can also set up downsampling tasks that aggregate high-resolution data into summaries before the raw data expires.
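To make that concrete, here's a sketch of creating a bucket whose data expires after 30 days, using the companion @influxdata/influxdb-client-apis package (pnpm add @influxdata/influxdb-client-apis). The bucket name metrics_30d is arbitrary; URL, token, and org match the setup above:

import { InfluxDB } from '@influxdata/influxdb-client'
import { BucketsAPI, OrgsAPI } from '@influxdata/influxdb-client-apis'

const influxDB = new InfluxDB({ url: 'http://localhost:8086', token: 'YOUR_TOKEN' })

// Look up the organization ID; buckets are created against an org ID, not a name.
const orgsAPI = new OrgsAPI(influxDB)
const orgs = await orgsAPI.getOrgs({ org: 'default' })
const orgID = orgs.orgs?.[0]?.id
if (!orgID) throw new Error('organization "default" not found')

// Create the bucket with a 30-day expiry rule.
const bucketsAPI = new BucketsAPI(influxDB)
await bucketsAPI.postBuckets({
  body: {
    orgID,
    name: 'metrics_30d',
    retentionRules: [{ type: 'expire', everySeconds: 30 * 24 * 60 * 60 }],
  },
})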
QuestDB manages retention manually. You detach or drop partitions to remove old data. No built-in "delete everything older than 30 days" setting. Some teams hate this. Others prefer the explicit control.
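The QuestDB equivalent is an explicit statement you run (or schedule) yourself. A sketch against the hourly-partitioned sensors table from earlier, with an assumed 30-day cutoff:

// Drop every partition older than 30 days. Because the table is partitioned
// by HOUR on ts, whole hours of data are removed at once.
await client.query(`
  ALTER TABLE sensors DROP PARTITION
  WHERE ts < dateadd('d', -30, now())
`)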
If automatic data lifecycle management matters to you, InfluxDB handles it natively. If you want full control over what gets deleted and when, QuestDB gives you that.
Ecosystem
InfluxDB has a larger ecosystem for monitoring and observability. Telegraf is a widely-deployed metrics collection agent with hundreds of input plugins. The TIG stack (Telegraf + InfluxDB + Grafana) is a well-established pattern for infrastructure monitoring.
QuestDB is leaner. It focuses on being a fast, SQL-compatible time-series database. It supports the InfluxDB Line Protocol for ingestion (so Telegraf can write to QuestDB too), has a built-in web console, and works with Grafana via its PostgreSQL-compatible interface. But the surrounding ecosystem is smaller.
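To make the ILP point concrete, here's a minimal sketch that appends the readings array from earlier to the same sensors table using nothing but Node's net module, assuming QuestDB's default ILP TCP port 9009 (a real pipeline would use Telegraf or an official ILP client instead):

import net from 'node:net'

// One ILP line per reading: measurement,tags fields timestamp-in-nanoseconds.
const socket = net.createConnection({ host: 'localhost', port: 9009 }, () => {
  for (const r of readings) {
    const ns = BigInt(r.ts.getTime()) * 1_000_000n // ms -> ns
    socket.write(
      `sensors,sensor_id=${r.sensorId} temperature=${r.temperature},humidity=${r.humidity} ${ns}\n`,
    )
  }
  socket.end() // flush remaining lines and close
})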
When to Pick QuestDB
Pick QuestDB if:
- Your team already knows SQL. Strongest argument. Zero new query language to learn. SAMPLE BY and LATEST ON are intuitive extensions, not a paradigm shift.
- You want existing PostgreSQL tooling. Connection poolers, ORMs, CLI tools, TablePlus, DBeaver, monitoring dashboards that speak PG wire protocol. All of it just works.
- Your queries are time-bucketed aggregations and "latest value" lookups. These two patterns cover the majority of time-series use cases, and QuestDB handles both in one clause each.
- You value simplicity. QuestDB does time-series storage and querying. It doesn't try to be a metrics platform, a dashboarding tool, or a task scheduler.
When to Pick InfluxDB
Pick InfluxDB if:
- You need the Telegraf ecosystem. Collecting metrics from dozens of sources (servers, containers, cloud services, network devices)? Telegraf's plugin library is unmatched. InfluxDB is its native backend.
- Built-in retention and downsampling matter. Storing high-frequency data that needs automatic lifecycle management? InfluxDB's bucket retention and scheduled tasks handle it without external tooling.
- You like the pipeline query model. Flux's |> chaining is genuinely powerful for multi-step transformations. If your queries involve filtering, grouping, joining, and aggregating across multiple measurements, the pipeline reads more naturally than nested SQL.
- You're building a monitoring stack. The TIG stack is battle-tested for infrastructure monitoring. If that's your use case, you're swimming with the current.
Run Both on Layerbase Cloud
Want to skip local setup entirely? Layerbase Cloud provisions either engine. Pick QuestDB or InfluxDB on the create page and grab your connection details from the Quick Connect panel.
To manage your local instances:
spindb stop quest1 # Stop QuestDB
spindb stop influx1 # Stop InfluxDB
spindb start quest1 # Start QuestDB
spindb start influx1 # Start InfluxDB
spindb list # See all your database instances

SpinDB runs 20+ database engines from one CLI. Running QuestDB and InfluxDB side by side is the fastest way to decide which one fits. If you prefer a GUI, Layerbase Desktop is available for macOS.