Best Database for IoT
IoT data is time-series data. Thousands of devices sending temperature readings, GPS coordinates, battery levels, or vibration measurements every few seconds. You need to store all of it, query it by time range, aggregate it into summaries, and eventually age out the raw data so your storage costs don't spiral.
PostgreSQL can do this. You create a table with a timestamp column, insert rows, and write GROUP BY date_trunc(...) queries. It works at small scale. But once you're ingesting millions of rows per day from a fleet of sensors and running real-time dashboards on top, you start fighting the database instead of working with it. Indexes bloat, time-bucketed queries get verbose, and there's no built-in way to expire old data.
Purpose-built time-series databases exist because this workload is common enough to deserve specialized storage engines, query syntax, and retention policies. Two stand out for IoT: QuestDB and InfluxDB. They take very different approaches to the same problem.
Contents
- What Makes IoT Data Different
- The Options
- Quick Comparison
- Try Both Locally with SpinDB
- So Which One?
- Further Reading
What Makes IoT Data Different
IoT workloads share a few characteristics that general-purpose databases aren't optimized for:
- Write-heavy, append-only. Sensors don't update old readings. They just keep writing new ones. You need sustained high write throughput, not fast random updates.
- Time-range queries. Almost every query filters by time. "Show me the last 4 hours" or "compare this week to last week." Scanning the entire table for a time range is wasteful when the data is naturally ordered by time.
- Downsampling and aggregation. Raw readings every 5 seconds are useful for debugging. For dashboards and trend analysis, you want 15-minute or hourly averages. The database should make this easy, not require a 10-line SQL query every time.
- Data retention. Sensor data loses value as it ages. You want 30 days of raw data, 6 months of hourly summaries, and everything older than that deleted automatically. In PostgreSQL, this means writing cron jobs and manual partition management.
- High cardinality. Thousands of device IDs, each with its own stream of readings. Queries like "latest reading per device" need to be fast even with tens of thousands of groups.
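To make the last two points concrete, here is the "latest reading per device" pattern in plain PostgreSQL — a sketch assuming a `readings(sensor_id, temperature, ts)` table:

```sql
-- Latest reading per device in standard PostgreSQL.
-- DISTINCT ON keeps the first row per sensor_id after sorting,
-- so ordering by ts DESC yields the most recent reading.
SELECT DISTINCT ON (sensor_id) sensor_id, temperature, ts
FROM readings
ORDER BY sensor_id, ts DESC;
```

It works, but with tens of thousands of devices it needs careful indexing (a composite index on `(sensor_id, ts DESC)`) to stay fast. Time-series databases treat this query as a first-class operation.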
If your IoT project is 10 sensors reporting once a minute, PostgreSQL is fine. If you're building a platform with thousands of devices reporting every few seconds, a time-series database pays for itself in both performance and developer time.
The Options
QuestDB: SQL with Time-Series Extensions
QuestDB is a time-series database that speaks SQL. Not a SQL-like language, actual SQL extended with a few time-series primitives. If your team writes SQL every day, QuestDB is productive on day one.
The two features that matter most for IoT:
SAMPLE BY handles time-bucketed aggregation in one clause. Instead of GROUP BY date_trunc('hour', ts), you write SAMPLE BY 1h. Change it to 15m, 1d, or any interval. The engine handles bucketing, alignment, and per-group aggregation internally.
```sql
SELECT sensor_id, avg(temperature), max(temperature)
FROM readings
SAMPLE BY 15m
ALIGN TO CALENDAR;
```

LATEST ON gives you the most recent reading per device in one line. No window functions, no correlated subqueries, no self-joins.
```sql
SELECT * FROM readings
LATEST ON ts PARTITION BY sensor_id;
```

QuestDB connects via the PostgreSQL wire protocol. Your existing pg driver, TablePlus, DBeaver, and Grafana's PostgreSQL data source all work out of the box. The SYMBOL column type interns high-cardinality string values (like device IDs) for fast filtering. Data is physically partitioned by time, so range queries only scan relevant partitions.
For ingestion at scale, QuestDB also supports the InfluxDB Line Protocol over TCP and HTTP, which means Telegraf agents can write directly to it.
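As a rough sketch of what raw line-protocol ingestion looks like without a client library (the `toILP` helper is hypothetical; the endpoint assumes QuestDB's default HTTP port 9000):

```typescript
// Format one reading as an InfluxDB Line Protocol record:
//   measurement,tag=value field=value timestamp(ns)
function toILP(
  measurement: string,
  tags: Record<string, string>,
  fields: Record<string, number>,
  tsNanos: bigint,
): string {
  const tagStr = Object.entries(tags).map(([k, v]) => `${k}=${v}`).join(',')
  const fieldStr = Object.entries(fields).map(([k, v]) => `${k}=${v}`).join(',')
  return `${measurement},${tagStr} ${fieldStr} ${tsNanos}`
}

const line = toILP(
  'readings',
  { device_id: 'thermostat_01' },
  { temperature: 22.4 },
  1767985200000000000n,
)
// line === 'readings,device_id=thermostat_01 temperature=22.4 1767985200000000000'

// POST a batch of lines to a local QuestDB over HTTP (assumed default port 9000)
async function send(lines: string[]): Promise<void> {
  await fetch('http://localhost:9000/write', { method: 'POST', body: lines.join('\n') })
}
```

In practice you would batch many lines per request; a Telegraf agent or an official client handles batching and retries for you.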
InfluxDB: Purpose-Built from the Ground Up
InfluxDB 2.x doesn't extend SQL. It replaces it with an entirely different data model and query language. Data is organized into measurements, tags (indexed metadata like device ID), and fields (the actual values). Queries use Flux, a functional pipeline language.
The same 15-minute average query in Flux:
```flux
from(bucket: "iot")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "readings")
  |> filter(fn: (r) => r._field == "temperature")
  |> aggregateWindow(every: 15m, fn: mean, createEmpty: false)
```

More verbose than QuestDB's SQL, but powerful once you learn it. The |> pipeline model is genuinely expressive for multi-step transformations.
Where InfluxDB stands out for IoT:
- Built-in retention policies. Set a retention period on a bucket and old data is deleted automatically. No cron jobs, no manual partition management.
- Downsampling tasks. Write a Flux task that aggregates high-resolution data into summaries on a schedule. Raw 5-second readings get compressed into hourly averages, and the raw data expires after 30 days.
- Telegraf agent ecosystem. Telegraf is a metrics collection agent with hundreds of input plugins for MQTT, Modbus, SNMP, OPC-UA, and basically every IoT protocol you've heard of. InfluxDB is its native backend. If your devices publish to an MQTT broker, a Telegraf instance subscribes and writes to InfluxDB with zero custom code.
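A downsampling task in Flux might look like this — a sketch assuming a raw `iot` bucket and a longer-retention `iot_hourly` bucket (both names are placeholders):

```flux
// Runs every hour; compresses raw readings into hourly means
option task = {name: "downsample-hourly", every: 1h}

from(bucket: "iot")
  |> range(start: -task.every)
  |> filter(fn: (r) => r._measurement == "readings")
  |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
  |> to(bucket: "iot_hourly")
```

Pair this with a 30-day retention period on the raw bucket and you get the "raw data expires, summaries persist" lifecycle with no external tooling.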
The tradeoff is a steeper learning curve. Tags vs fields, measurements vs tables, Flux vs SQL. Every developer on your team needs to learn a new vocabulary.
TimescaleDB: PostgreSQL Extension
TimescaleDB is worth mentioning because it's a legitimate option. It's a PostgreSQL extension that adds automatic time-based partitioning (hypertables), continuous aggregates, and data retention policies, all while staying 100% PostgreSQL-compatible.
We don't cover it in depth here because SpinDB doesn't run TimescaleDB (it's a PostgreSQL extension, not a standalone engine). If your organization is already running managed PostgreSQL and doesn't want to introduce a new database, TimescaleDB is a strong choice. You keep your existing tooling and get time-series optimizations on top.
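As a rough sketch of what that looks like (the function names come from TimescaleDB's API; the table name and intervals are placeholders):

```sql
-- Convert a regular table into a time-partitioned hypertable
SELECT create_hypertable('readings', 'ts');

-- Automatically drop chunks older than 30 days
SELECT add_retention_policy('readings', INTERVAL '30 days');
```

Everything else — inserts, joins, drivers, ORMs — stays plain PostgreSQL.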
PostgreSQL with Partitioning: The "Good Enough" Option
If you have fewer than a few thousand devices and moderate write volume, plain PostgreSQL with native table partitioning might be all you need:
```sql
CREATE TABLE readings (
  sensor_id   TEXT NOT NULL,
  temperature DOUBLE PRECISION,
  ts          TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (ts);

CREATE TABLE readings_2026_03 PARTITION OF readings
  FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');
```

You manage partitions manually (or with a tool like pg_partman), write date_trunc queries for aggregation, and drop old partitions for retention. It works. It's just more manual than what QuestDB or InfluxDB give you out of the box.
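For the aggregation side, a 15-minute bucket in stock PostgreSQL is noticeably more verbose than SAMPLE BY — a sketch assuming the readings table above (date_trunc can't do sub-hour buckets, so this uses date_bin, which requires PostgreSQL 14+):

```sql
-- 15-minute average temperature per sensor
SELECT sensor_id,
       date_bin('15 minutes', ts, TIMESTAMPTZ '2026-01-01') AS bucket,
       avg(temperature) AS avg_temp
FROM readings
GROUP BY sensor_id, bucket
ORDER BY bucket;

-- Retention is a manual (or cron-driven) partition drop:
DROP TABLE readings_2026_03;
```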
For a small IoT project or a team that already operates PostgreSQL well, this is a reasonable starting point. You can always migrate to a dedicated time-series database later when the volume justifies it.
Quick Comparison
| | QuestDB | InfluxDB | PostgreSQL (partitioned) |
|---|---|---|---|
| Query language | SQL + extensions | Flux (pipeline) | SQL |
| Time bucketing | SAMPLE BY 15m | aggregateWindow(every: 15m, ...) | GROUP BY date_trunc(...) |
| Latest per device | LATEST ON ts PARTITION BY sensor_id | Flux last() | DISTINCT ON or window function |
| Retention | Manual partition drop/detach | Built-in per-bucket | Manual partition drop |
| Ingestion protocol | PG wire + InfluxDB Line Protocol | HTTP API + Line Protocol | PG wire |
| Telegraf support | Yes (via Line Protocol) | Native | Via plugin |
| Learning curve | Low (SQL) | Higher (new data model + Flux) | None (you already know PG) |
| Sweet spot | SQL teams, high-throughput ingestion | Telegraf ecosystem, automatic retention | Small scale, existing PG infrastructure |
Try Both Locally with SpinDB
The fastest way to evaluate is to run both on your machine. SpinDB handles the download and setup. No Docker, no manual config. (What is SpinDB?)
Install SpinDB globally:
```shell
npm i -g spindb     # npm
pnpm add -g spindb  # pnpm
```

Create both instances:
```shell
spindb create quest1 -e questdb --start
spindb create influx1 -e influxdb --start
```

QuestDB: Sensor Table in 30 Seconds
Connect with any PostgreSQL client:
```shell
spindb connect quest1
```

Create a table and insert some test data:
```sql
CREATE TABLE IF NOT EXISTS readings (
  device_id SYMBOL,
  temperature DOUBLE,
  humidity DOUBLE,
  battery DOUBLE,
  ts TIMESTAMP
) timestamp(ts) PARTITION BY DAY WAL;

INSERT INTO readings VALUES ('thermostat_01', 22.4, 55.1, 98.2, '2026-03-14T10:00:00Z');
INSERT INTO readings VALUES ('thermostat_01', 22.6, 54.8, 98.1, '2026-03-14T10:05:00Z');
INSERT INTO readings VALUES ('thermostat_01', 22.9, 54.3, 98.0, '2026-03-14T10:10:00Z');
INSERT INTO readings VALUES ('thermostat_02', 19.1, 61.2, 72.5, '2026-03-14T10:00:00Z');
INSERT INTO readings VALUES ('thermostat_02', 19.3, 60.8, 72.4, '2026-03-14T10:05:00Z');
INSERT INTO readings VALUES ('thermostat_02', 19.0, 61.5, 72.3, '2026-03-14T10:10:00Z');
```

Average temperature per 15-minute bucket:
```sql
SELECT device_id, avg(temperature) AS avg_temp
FROM readings
SAMPLE BY 15m
ALIGN TO CALENDAR;
```

Latest reading per device:
```sql
SELECT * FROM readings
LATEST ON ts PARTITION BY device_id;
```

That's it. If you can write a SELECT, you can query time-series data in QuestDB.
InfluxDB: Same Data, Different Model
InfluxDB uses an HTTP API, so connect via the web UI at http://localhost:8086 or use the CLI/SDK. When SpinDB starts InfluxDB, it creates a default organization, bucket, and API token. Grab the token from the startup output.
Write data using the JavaScript SDK:
```typescript
import { InfluxDB, Point } from '@influxdata/influxdb-client'

const influxDB = new InfluxDB({
  url: 'http://localhost:8086',
  token: 'YOUR_TOKEN',
})

const writeApi = influxDB.getWriteApi('default', 'default', 'ms')

const devices = ['thermostat_01', 'thermostat_02']
for (const deviceId of devices) {
  const point = new Point('readings')
    .tag('device_id', deviceId)
    .floatField('temperature', 22.4 + Math.random() * 3)
    .floatField('humidity', 55 + Math.random() * 5)
    .floatField('battery', 70 + Math.random() * 28)
  writeApi.writePoint(point)
}

await writeApi.close()
```

Query the 15-minute average in Flux:
```flux
from(bucket: "default")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "readings")
  |> filter(fn: (r) => r._field == "temperature")
  |> aggregateWindow(every: 15m, fn: mean, createEmpty: false)
```

Both give you the same answer. The question is which developer experience your team prefers.
So Which One?
Pick QuestDB if your team knows SQL. This is the strongest signal. QuestDB's SAMPLE BY and LATEST ON cover the two most common IoT query patterns, and they read like pseudocode. You connect with a PostgreSQL driver, so your existing tools and ORMs work immediately. No new query language to learn, no new data model to internalize.
Pick InfluxDB if you need the Telegraf agent ecosystem or automatic retention. If your devices publish to MQTT and you want a Telegraf instance to handle ingestion with zero custom code, InfluxDB is the natural backend. If automatic data lifecycle management is important (delete raw data after 30 days, keep hourly summaries for a year), InfluxDB handles it natively without cron jobs or external tooling.
Stick with PostgreSQL if you're at small scale and already run it. Tens of devices, moderate write volume, a team that knows PostgreSQL inside out. Add partitioning, use date_trunc for bucketing, and drop old partitions when needed. You can always migrate later when the volume justifies a dedicated engine.
My default recommendation for a new IoT project is QuestDB. SQL is the path of least resistance for most teams, the ingestion throughput is excellent, and the query ergonomics for time-series work are the best I've used. But if your architecture already has Telegraf and Grafana in the mix, InfluxDB slots in naturally.
Further Reading
Deeper dives on each database:
- Getting Started with QuestDB builds a full sensor pipeline in TypeScript with SAMPLE BY and LATEST ON
- Getting Started with InfluxDB builds an APM pipeline with the Flux query language and retention policies
- QuestDB vs InfluxDB runs the same sensor workload in both databases side by side, with identical TypeScript code
To manage your local instances:
```shell
spindb stop quest1    # Stop QuestDB
spindb stop influx1   # Stop InfluxDB
spindb start quest1   # Start again
spindb list           # See all your database instances
```

SpinDB runs 20+ database engines from one CLI. Spin up QuestDB for time-series data alongside PostgreSQL for your app and Redis for caching. If you prefer a GUI, Layerbase Desktop is available for macOS.
Want managed hosting instead? Layerbase Cloud provisions both QuestDB and InfluxDB. Pick an engine on the create page and grab your connection details from Quick Connect.