Getting Started with InfluxDB
Every database can store rows with a timestamp column. PostgreSQL, MySQL, SQLite: they all let you INSERT a row with created_at. But time-series workloads have a distinct shape: massive write volumes, queries that always filter by time range, and data that loses value as it ages. General-purpose databases can handle this. They just weren't designed for it.
QuestDB tackles time-series by extending SQL with dedicated clauses like SAMPLE BY. This guide targets InfluxDB 2.x, which takes a fundamentally different approach. Instead of tables and rows, it has its own data model: measurements, tags, and fields. Instead of SQL, it gives you Flux, a functional query language for transforming and aggregating time-stamped data through a pipeline. Writes use the line protocol, a compact text format optimized for high-throughput ingestion.
This makes InfluxDB a natural fit for server metrics, application telemetry, IoT sensor readings, and financial tick data. It handles millions of writes per second, manages data retention automatically, and provides built-in downsampling so you don't drown in stale high-resolution data.
We'll build an application performance monitoring pipeline in one TypeScript file. Everything below works against a local instance, but you can also point it at Layerbase Cloud if you'd rather not install anything.
Contents
- Create an InfluxDB Instance
- Set Up the Project
- The Data Model: Measurements, Tags, and Fields
- Generate Metrics Data
- Connect and Write Data
- Query with Flux
- Aggregation: Average Response Time per Window
- Grouping: Response Times by Endpoint
- Retention Policies and Downsampling
- When to Reach for InfluxDB
- Wrapping Up
Create an InfluxDB Instance
Local with SpinDB
SpinDB handles the download, setup, and initial configuration in one command. No Docker, no manual binary management. (What is SpinDB?)
Install SpinDB globally:
npm i -g spindb # npm
pnpm add -g spindb # pnpm

Or run it directly without installing:
npx spindb create influx1 -e influxdb --start # npm
pnpx spindb create influx1 -e influxdb --start # pnpm

If you installed globally, create and start an InfluxDB instance:
spindb create influx1 -e influxdb --start

SpinDB downloads the InfluxDB binary and runs the initial setup InfluxDB 2.x requires (a default organization, a default bucket, and an API token), then starts the server. Verify it's running:

spindb url influx1

http://127.0.0.1:8086

The setup values appear in the output when the instance starts. Copy the token; you'll need it shortly.
Layerbase Cloud
Skip the local install entirely if you prefer. Layerbase Cloud provisions a managed InfluxDB instance and hands you a connection URL and API token through the Quick Connect panel.
Cloud instances use TLS, so the connection code uses https://:
const influxDB = new InfluxDB({
url: 'https://cloud.layerbase.dev:11010',
token: 'YOUR_TOKEN',
})

Everything else in this guide works identically whether you're running locally or on Layerbase Cloud. Just swap in your connection details.
Set Up the Project
mkdir influxdb-metrics && cd influxdb-metrics
pnpm init
pnpm add @influxdata/influxdb-client
pnpm add -D tsx typescript

Create a file called metrics.ts. All the code in this post goes into that one file.
The Data Model: Measurements, Tags, and Fields
Before writing any code, it helps to understand how InfluxDB thinks about data, because the model is genuinely different from relational databases.
A measurement is like a table name. It groups related data points. Ours will be http_requests.
Tags are indexed key-value pairs. They describe what the data point is about: which endpoint, which HTTP method, which status code. Because they're indexed, filtering and grouping by tags is fast. Tags are always strings.
Fields are the actual values you're measuring: response time in milliseconds, bytes sent. Fields are not indexed, so you typically aggregate them (mean, max, sum) rather than filter on them; an equality filter on a field forces a scan.
Every data point also has a timestamp. InfluxDB stores and queries everything relative to time.
Here's how our metrics map to this model:
| Concept | InfluxDB term | Example |
|---|---|---|
| Table name | Measurement | http_requests |
| Indexed metadata | Tags | method=GET, endpoint=/api/users, status_code=200 |
| Measured values | Fields | response_time_ms=42.5, bytes_sent=1024 |
| When | Timestamp | 2026-03-14T10:30:00Z |
If you've used SQL databases, think of tags as columns you'd put in a WHERE clause and fields as columns you'd wrap in AVG() or SUM().
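To see how the pieces fit together on the wire, here's how one point from the table above would look in line protocol, the text format InfluxDB ingests (illustrative values; the trailing number is a millisecond timestamp):

```
http_requests,method=GET,endpoint=/api/users,status_code=200 response_time_ms=42.5,bytes_sent=1024i 1773484200000
```

Tags follow the measurement name, fields come after the first space, and the `i` suffix marks an integer field.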
Generate Metrics Data
We'll simulate HTTP request metrics for a web app. Each data point represents one request with its method, endpoint, status code, response time, and bytes sent:
import { InfluxDB, Point } from '@influxdata/influxdb-client'
type Metric = {
method: string
endpoint: string
statusCode: string
responseTimeMs: number
bytesSent: number
timestamp: Date
}
function generateMetrics(): Metric[] {
const endpoints = [
{ method: 'GET', endpoint: '/api/users', avgMs: 45, avgBytes: 2048 },
{ method: 'GET', endpoint: '/api/products', avgMs: 62, avgBytes: 4096 },
{ method: 'POST', endpoint: '/api/orders', avgMs: 120, avgBytes: 512 },
{ method: 'GET', endpoint: '/api/health', avgMs: 5, avgBytes: 128 },
{ method: 'PUT', endpoint: '/api/users', avgMs: 85, avgBytes: 256 },
]
const metrics: Metric[] = []
const now = new Date()
const oneHourAgo = new Date(now.getTime() - 60 * 60 * 1000)
for (let i = 0; i < 50; i++) {
const ep = endpoints[Math.floor(Math.random() * endpoints.length)]
const ts = new Date(
oneHourAgo.getTime() + Math.random() * 60 * 60 * 1000,
)
const jitter = 0.5 + Math.random() * 1.5
const responseTimeMs = Math.round(ep.avgMs * jitter * 100) / 100
const bytesSent = Math.round(ep.avgBytes * (0.8 + Math.random() * 0.4))
const statusCode =
Math.random() > 0.9 ? '500' : Math.random() > 0.85 ? '404' : '200'
metrics.push({
method: ep.method,
endpoint: ep.endpoint,
statusCode,
responseTimeMs,
bytesSent,
timestamp: ts,
})
}
return metrics.sort((a, b) => a.timestamp.getTime() - b.timestamp.getTime())
}
const metrics = generateMetrics()
console.log(`Generated ${metrics.length} request metrics`)

Each endpoint has a realistic average response time and payload size. The jitter multiplier creates natural variance, roughly a quarter of requests get error status codes (about 10% 500s plus 404s among the rest), and the data spans one hour, sorted by timestamp.
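Before involving InfluxDB at all, you can sanity-check the generator's spread with a tiny percentile helper. This is a throwaway sketch (the `percentile` function is not part of the client library):

```typescript
// Hypothetical helper: nearest-rank percentile over a sorted copy of the values.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b)
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length))
  return sorted[idx]
}

const sample = [5, 10, 20, 40, 80, 160, 320, 640, 1280, 2560]
console.log(percentile(sample, 50)) // 160
console.log(percentile(sample, 95)) // 2560
```

Running it over `metrics.map((m) => m.responseTimeMs)` gives a quick feel for the tail the jitter produces.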
Connect and Write Data
Now connect to InfluxDB and write the metrics using the Point builder:
const INFLUX_URL = 'http://localhost:8086'
const INFLUX_TOKEN = 'YOUR_TOKEN'
const INFLUX_ORG = 'default'
const INFLUX_BUCKET = 'default'
const influxDB = new InfluxDB({ url: INFLUX_URL, token: INFLUX_TOKEN })
const writeApi = influxDB.getWriteApi(INFLUX_ORG, INFLUX_BUCKET, 'ms')
for (const m of metrics) {
const point = new Point('http_requests')
.tag('method', m.method)
.tag('endpoint', m.endpoint)
.tag('status_code', m.statusCode)
.floatField('response_time_ms', m.responseTimeMs)
.intField('bytes_sent', m.bytesSent)
.timestamp(m.timestamp)
writeApi.writePoint(point)
}
await writeApi.close()
console.log(`Wrote ${metrics.length} points to InfluxDB`)

Replace YOUR_TOKEN with the API token SpinDB printed when it created your instance (or the token from your Layerbase Cloud Quick Connect panel).
A few things worth noting:
- `getWriteApi` takes the organization, bucket, and timestamp precision. `'ms'` tells InfluxDB to interpret timestamps as milliseconds.
- `Point` is a builder. Chain `.tag()` for indexed metadata and `.floatField()`/`.intField()` for measured values. This maps directly to the line protocol InfluxDB uses internally.
- Tags vs. fields: `method`, `endpoint`, and `status_code` are tags because we'll filter and group by them. `response_time_ms` and `bytes_sent` are fields because we'll aggregate them.
- `writeApi.close()` flushes buffered points and closes the connection. Skip this and some points silently vanish.
Query with Flux
InfluxDB 2.x uses Flux for queries. Flux reads like a pipeline: start with a data source, then pipe it through transformations. Here's how to fetch all requests from the last hour:
const queryApi = influxDB.getQueryApi(INFLUX_ORG)
const allRequestsQuery = `
from(bucket: "${INFLUX_BUCKET}")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "http_requests")
|> filter(fn: (r) => r._field == "response_time_ms")
|> sort(columns: ["_time"])
|> limit(n: 10)
`
console.log('\nRecent requests (first 10):')
console.log('time | method | endpoint | status | ms')
console.log('-------------------------|--------|-----------------|--------|------')
const rows: Record<string, unknown>[] = []
for await (const { values, tableMeta } of queryApi.iterateRows(
allRequestsQuery,
)) {
const row = tableMeta.toObject(values)
rows.push(row)
}
for (const row of rows) {
const time = new Date(row._time as string).toISOString().slice(0, 23)
const method = String(row.method).padEnd(6)
const endpoint = String(row.endpoint).padEnd(15)
const status = String(row.status_code).padEnd(6)
const ms = Number(row._value).toFixed(1).padStart(6)
console.log(`${time} | ${method} | ${endpoint} | ${status} | ${ms}`)
}

Recent requests (first 10):
time | method | endpoint | status | ms
-------------------------|--------|-----------------|--------|------
2026-03-14T11:30:12.451 | GET | /api/health | 200 | 3.2
2026-03-14T11:31:05.892 | POST | /api/orders | 200 | 145.8
2026-03-14T11:32:44.103 | GET | /api/users | 200 | 38.7
2026-03-14T11:34:22.567 | GET | /api/products | 404 | 71.2
2026-03-14T11:36:01.234 | PUT | /api/users | 200 | 92.4
2026-03-14T11:37:48.891 | GET | /api/health | 200 | 6.1
2026-03-14T11:39:15.445 | GET | /api/users | 200 | 52.3
2026-03-14T11:41:33.678 | POST | /api/orders | 500 | 178.2
2026-03-14T11:43:02.112 | GET | /api/products | 200 | 55.9
2026-03-14T11:44:50.334 | GET | /api/health | 200 | 4.8

The pipeline reads top to bottom: start from the bucket, filter to the last hour, keep only http_requests with the response_time_ms field, sort by time, take 10. Each |> passes its output to the next function.
Aggregation: Average Response Time per Window
This is where InfluxDB really shines. Mean response time in 5-minute windows:
const aggregationQuery = `
from(bucket: "${INFLUX_BUCKET}")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "http_requests")
|> filter(fn: (r) => r._field == "response_time_ms")
|> aggregateWindow(every: 5m, fn: mean, createEmpty: false)
|> sort(columns: ["_time"])
`
const aggRows: Record<string, unknown>[] = []
for await (const { values, tableMeta } of queryApi.iterateRows(
aggregationQuery,
)) {
aggRows.push(tableMeta.toObject(values))
}
console.log('\nAverage response time per 5-minute window:')
console.log('window | avg_ms')
console.log('---------------------|--------')
for (const row of aggRows.slice(0, 12)) {
const time = new Date(row._time as string).toISOString().slice(11, 16)
const avgMs = Number(row._value).toFixed(1).padStart(7)
console.log(`${time} | ${avgMs}`)
}
console.log(`... (${aggRows.length} total windows)`)

Average response time per 5-minute window:
window | avg_ms
---------------------|--------
11:30 | 62.4
11:35 | 48.7
11:40 | 91.3
11:45 | 55.2
11:50 | 73.8
11:55 | 41.9
12:00 | 67.5
12:05 | 88.1
12:10 | 44.3
12:15 | 59.6
12:20 | 76.2
12:25 | 52.8
... (12 total windows)

aggregateWindow(every: 5m, fn: mean) is InfluxDB's equivalent of QuestDB's SAMPLE BY 5m. Swap 5m for 1h or 1d. Swap mean for max, min, sum, or count. createEmpty: false skips windows with no data.
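Beyond the built-in aggregates, aggregateWindow also accepts a custom function. Here's a sketch of a p95-per-window query using Flux's documented `fn: (tables=<-, column) => ...` form; the bucket name "default" is an assumption, so adapt it to yours:

```flux
from(bucket: "default")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "http_requests")
|> filter(fn: (r) => r._field == "response_time_ms")
|> aggregateWindow(
    every: 5m,
    fn: (tables=<-, column) => tables |> quantile(q: 0.95, column: column),
    createEmpty: false,
  )
```

Percentiles like p95 are usually more honest than means for latency, since a few slow requests can hide behind a healthy average.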
Grouping: Response Times by Endpoint
Compare how different endpoints perform by grouping on the endpoint tag:
const byEndpointQuery = `
from(bucket: "${INFLUX_BUCKET}")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "http_requests")
|> filter(fn: (r) => r._field == "response_time_ms")
|> group(columns: ["endpoint"])
|> mean()
|> sort(columns: ["_value"], desc: true)
`
const endpointRows: Record<string, unknown>[] = []
for await (const { values, tableMeta } of queryApi.iterateRows(
byEndpointQuery,
)) {
endpointRows.push(tableMeta.toObject(values))
}
console.log('\nAverage response time by endpoint:')
console.log('endpoint | avg_ms')
console.log('------------------|--------')
for (const row of endpointRows) {
const endpoint = String(row.endpoint).padEnd(17)
const avgMs = Number(row._value).toFixed(1).padStart(7)
console.log(`${endpoint} | ${avgMs}`)
}

Average response time by endpoint:
endpoint | avg_ms
------------------|--------
/api/orders | 118.4
/api/users | 63.5
/api/products | 59.8
/api/health | 5.1

group(columns: ["endpoint"]) regroups the data by endpoint, then mean() calculates the average within each group. Same idea as GROUP BY endpoint in SQL, just expressed as a pipeline.
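The same trick works across multiple tags. For example, a sketch that counts requests per endpoint and status code, one way to spot error-prone routes (same bucket and measurement assumed as above):

```flux
from(bucket: "default")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "http_requests")
|> filter(fn: (r) => r._field == "response_time_ms")
|> group(columns: ["endpoint", "status_code"])
|> count()
```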
You can combine grouping with windowed aggregation too:
const endpointOverTimeQuery = `
from(bucket: "${INFLUX_BUCKET}")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "http_requests")
|> filter(fn: (r) => r._field == "response_time_ms")
|> aggregateWindow(every: 15m, fn: mean, createEmpty: false)
|> group(columns: ["endpoint"])
`

That gives you a time series of average response times broken down by endpoint, exactly what you'd feed into a monitoring dashboard.
Retention Policies and Downsampling
InfluxDB can automatically delete data older than a configured threshold. When you create a bucket, you specify how long data lives:
- Default bucket: data is kept forever (retention = 0)
- 7-day retention: InfluxDB automatically drops data older than 7 days
- 30-day retention: keeps a month of data, then discards it
Configure retention when creating buckets through the InfluxDB UI at http://localhost:8086 or through the API. SpinDB creates a default bucket with no expiration, which is fine for development.
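If you have the influx CLI available, creating a bucket with a retention period looks roughly like this (a sketch using InfluxDB 2.x CLI flags; the org name and token are placeholders):

```sh
influx bucket create \
  --name metrics_7d \
  --retention 7d \
  --org default \
  --token YOUR_TOKEN
```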
In production, the common pattern is downsampling: keep high-resolution data briefly and aggregated summaries longer. For example:
- Raw data bucket (7-day retention): every individual request metric
- Hourly summaries bucket (90-day retention): mean, p95, max response times per hour
- Daily summaries bucket (forever): daily aggregates for long-term trend analysis
InfluxDB tasks run on a schedule to aggregate data from one bucket into another. A task that downsamples raw points into hourly averages looks like this:
option task = {name: "downsample_hourly", every: 1h}
from(bucket: "metrics")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "http_requests")
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
|> to(bucket: "metrics_hourly")

Every hour, this aggregates raw data into hourly means and writes the results to a separate bucket with longer retention. Raw data eventually expires; summaries persist. Full detail for recent data, compact summaries for historical analysis, no manual cleanup.
When to Reach for InfluxDB
I'd reach for InfluxDB in these situations:
- Server and infrastructure monitoring: CPU, memory, disk I/O, network throughput. InfluxDB is the storage backend for Telegraf, one of the most widely deployed metrics agents.
- Application performance metrics: request latency, error rates, throughput per endpoint. The tag-based model maps naturally to the dimensions you care about (service, endpoint, region, status code).
- IoT sensor data at scale: thousands of devices reporting temperature, pressure, GPS, battery level. Tags identify the device, fields carry the readings, retention policies keep storage bounded.
- Financial tick data: high-frequency price updates, trade volumes, order book snapshots. Millions of points per second, with rolling averages, VWAP, or volatility over configurable windows.
- Any workload with high-frequency timestamped writes and time-windowed queries: if your data arrives continuously, your queries always filter by time range, and you need automatic data lifecycle management, this is exactly what InfluxDB was built for.
The common thread: append-heavy writes, time as the primary query axis, and a need for built-in retention and windowed aggregation.
Wrapping Up
Run the full script:
npx tsx metrics.ts

Roughly a hundred lines of real code. You generated application metrics, wrote them with the Point builder, queried with Flux pipelines, ran windowed aggregations, and compared response times across endpoints. That same pattern scales from 50 data points to billions.
The InfluxDB documentation covers Telegraf integration, InfluxQL for SQL-like queries, template variables, dashboard building in the built-in UI, and the full Flux language reference.
To manage your local InfluxDB instance:
spindb stop influx1 # Stop the server
spindb start influx1 # Start it again
spindb list # See all your database instances

SpinDB supports 20+ engines, so you can run InfluxDB for metrics alongside PostgreSQL for your app and Redis for sessions, all from one CLI. Prefer a GUI? Layerbase Desktop is available for macOS.