What's New in PostgreSQL 18
PostgreSQL 18 was released in September 2025, and it is one of the most feature-rich major releases in years. Rather than listing every change from the release notes, this post focuses on the features that have the most practical impact for application developers, with SQL examples you can run yourself.
Set Up PostgreSQL 18 Locally
Every example in this post runs on a stock PostgreSQL 18 install with no extensions. The fastest way to get there is with SpinDB, a CLI tool for running and managing databases locally. (What is SpinDB?)
# Install SpinDB
npm i -g spindb
# Create a PostgreSQL instance named "pg18", start it, and connect
spindb create pg18 --start --connect
That's it. No Homebrew formulas, no Docker containers, no manual configuration. SpinDB downloads the correct PostgreSQL 18 binary for your platform, initializes a data directory, starts the server, and drops you into an interactive psql session. You're ready to paste the examples below.
If you get disconnected or close your terminal, reconnect any time:
spindb connect pg18
Native UUIDv7
This is the headliner. PostgreSQL 18 adds a built-in uuidv7() function that generates UUID version 7 values. UUIDv7 embeds a timestamp in the first 48 bits, which means these IDs are globally unique and chronologically sortable.
SELECT uuidv7();
-- 01936f6e-8b50-7cc2-a174-1dd6b0f37b8f
Use it as a default primary key:
CREATE TABLE orders (
id UUID PRIMARY KEY DEFAULT uuidv7(),
customer_name TEXT NOT NULL,
total DECIMAL(10, 2) NOT NULL,
created_at TIMESTAMPTZ DEFAULT now()
);
INSERT INTO orders (customer_name, total) VALUES ('Alice', 99.99);
INSERT INTO orders (customer_name, total) VALUES ('Bob', 149.50);
INSERT INTO orders (customer_name, total) VALUES ('Charlie', 75.00);
SELECT id, customer_name, total FROM orders ORDER BY id;
Because UUIDv7 values sort by creation time, ORDER BY id returns rows in insertion order without needing a separate timestamp column. More importantly, B-tree indexes stay efficient because new values always cluster at the end of the index, avoiding the page splits that random UUIDv4 values cause.
PostgreSQL has had gen_random_uuid() for random UUIDv4 values since version 13, and uuid-ossp offers time-based UUIDv1, but neither produces time-sorted, globally unique IDs the way v7 does. Before v18, UUIDv7 required application-level generation. Native uuidv7() gives you the best of both worlds: the global uniqueness of UUIDs with the sequential index performance of auto-incrementing integers.
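For comparison, here is a rough sketch of what that application-level generation looked like before v18 — a hypothetical Python helper (not from any particular library) that packs a 48-bit Unix millisecond timestamp into the leading bytes, following the UUIDv7 layout:

```python
import os
import time
import uuid

def uuidv7() -> uuid.UUID:
    """Sketch of a UUIDv7 generator: 48-bit ms timestamp, then version,
    variant, and random bits, per the v7 layout described above."""
    ts_ms = time.time_ns() // 1_000_000        # milliseconds since the Unix epoch
    b = bytearray(ts_ms.to_bytes(6, "big") + os.urandom(10))
    b[6] = (b[6] & 0x0F) | 0x70                # set the version nibble to 7
    b[8] = (b[8] & 0x3F) | 0x80                # set the RFC 4122 variant bits
    return uuid.UUID(bytes=bytes(b))
```

Because the timestamp occupies the most significant bytes, two IDs generated a few milliseconds apart compare in creation order — the same property the native `uuidv7()` function gives you, but now without any application-side code.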
Virtual Generated Columns
PostgreSQL has supported generated columns since version 12, but they were always STORED, meaning the computed value was written to disk on every insert and update. PostgreSQL 18 adds VIRTUAL generated columns and makes them the default.
Virtual columns compute their value at read time. No disk space, no write overhead.
CREATE TABLE line_items (
id UUID PRIMARY KEY DEFAULT uuidv7(),
product_name TEXT NOT NULL,
quantity INT NOT NULL,
unit_price DECIMAL(10, 2) NOT NULL,
total DECIMAL(10, 2) GENERATED ALWAYS AS (quantity * unit_price)
);
INSERT INTO line_items (product_name, quantity, unit_price) VALUES
('Widget', 5, 12.99),
('Gadget', 2, 49.99),
('Sprocket', 10, 3.50);
SELECT product_name, quantity, unit_price, total FROM line_items;
 product_name | quantity | unit_price | total
--------------+----------+------------+--------
Widget | 5 | 12.99 | 64.95
Gadget | 2 | 49.99 | 99.98
Sprocket | 10 | 3.50 | 35.00
The total column is never stored on disk. It is computed from quantity * unit_price every time a row is read. You can still use STORED explicitly if you want the old behavior, but for most computed columns, virtual is the better default.
OLD/NEW in RETURNING
This one eliminates a common pattern of running a separate SELECT before or after a mutation to capture the previous values. PostgreSQL 18 lets you reference both OLD (pre-mutation) and NEW (post-mutation) values directly in the RETURNING clause.
CREATE TABLE products (
id UUID PRIMARY KEY DEFAULT uuidv7(),
name TEXT UNIQUE NOT NULL,
price DECIMAL(10, 2) NOT NULL
);
INSERT INTO products (name, price) VALUES
('Keyboard', 79.99),
('Mouse', 29.99),
('Monitor', 349.99);
-- Apply a 10% price increase and see old vs. new in one statement
UPDATE products
SET price = price * 1.10
RETURNING
name,
OLD.price AS old_price,
NEW.price AS new_price;
 name | old_price | new_price
----------+-----------+-----------
Keyboard | 79.99 | 87.99
Mouse | 29.99 | 32.99
Monitor | 349.99 | 384.99
This also works with INSERT ... ON CONFLICT (upserts), where OLD values are NULL for newly inserted rows:
INSERT INTO products (name, price) VALUES ('Keyboard', 89.99)
ON CONFLICT (name)
DO UPDATE SET price = EXCLUDED.price
RETURNING
name,
OLD.price AS previous_price,
NEW.price AS current_price,
(OLD.price IS NULL) AS was_inserted;
No more CTEs or before/after queries. Audit logging, change data capture, and undo features all get simpler.
Temporal Constraints with WITHOUT OVERLAPS
If you've ever built a booking system, scheduling app, or any model where two records shouldn't overlap in time for the same entity, you know the pain of enforcing that in application code or with custom trigger logic. PostgreSQL 18 makes this a declarative constraint.
-- Temporal keys mixing a scalar column with a range need the btree_gist
-- contrib extension (shipped with PostgreSQL, but not enabled by default)
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE room_bookings (
room_id INT,
booking_period TSTZRANGE NOT NULL,
guest_name TEXT NOT NULL,
PRIMARY KEY (room_id, booking_period WITHOUT OVERLAPS)
);
-- Alice books room 101 for March 1-5
INSERT INTO room_bookings VALUES
(101, '[2026-03-01, 2026-03-05)', 'Alice');
-- Bob books room 102 for the same dates (different room, no conflict)
INSERT INTO room_bookings VALUES
(102, '[2026-03-01, 2026-03-05)', 'Bob');
-- Charlie tries to book room 101 for March 3-7 (overlaps Alice)
INSERT INTO room_bookings VALUES
(101, '[2026-03-03, 2026-03-07)', 'Charlie');
-- ERROR: conflicting key value violates exclusion constraintThe database enforces non-overlapping ranges per room using a GiST index under the hood. No triggers, no application-side validation, no race conditions. This works with any range type: TSTZRANGE, DATERANGE, INT4RANGE, etc.
Zero-Downtime NOT NULL with NOT VALID
Adding a NOT NULL constraint to an existing column on a large production table used to require a full table scan under an AccessExclusiveLock, blocking all reads and writes. PostgreSQL 18 lets you split this into two steps:
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT uuidv7(),
name TEXT NOT NULL,
email TEXT
);
INSERT INTO users (name) VALUES ('Legacy User');
-- Step 1: Add the constraint instantly (no table scan, brief lock)
ALTER TABLE users
ADD CONSTRAINT users_email_not_null
NOT NULL email NOT VALID;
-- New inserts must satisfy the constraint immediately
INSERT INTO users (name) VALUES ('Another User');
-- ERROR: null value in column "email" violates not-null constraint
-- But existing rows with NULL are not checked yet.
-- Step 2: Clean up existing NULLs at your own pace
UPDATE users SET email = name || '@unknown.com' WHERE email IS NULL;
-- Step 3: Validate the constraint (table scan, but with a weaker lock)
ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null;
This is the same two-phase pattern that has existed for CHECK and FOREIGN KEY constraints, but NOT NULL didn't support it until now. If you run zero-downtime deployments, this removes a significant pain point.
B-tree Skip Scan
A subtle but impactful optimization. Previously, a composite B-tree index was only useful if your query filtered on the leading column(s). PostgreSQL 18 can now "skip" through distinct values of the leading column to reach the columns you are filtering on.
CREATE TABLE sales (
id UUID PRIMARY KEY DEFAULT uuidv7(),
region TEXT NOT NULL,
category TEXT NOT NULL,
sale_date DATE NOT NULL,
amount DECIMAL(10, 2) NOT NULL
);
CREATE INDEX idx_sales_region_cat_date
ON sales (region, category, sale_date);
-- In PG 17, this query could NOT use the index (no filter on 'region')
-- In PG 18, skip scan makes this efficient:
SELECT * FROM sales
WHERE category = 'electronics'
AND sale_date > '2026-01-01';
The skip scan is most effective when the leading column has low cardinality (few distinct values). In practice, this means you can maintain fewer indexes: a single composite index now serves more query patterns than it used to.
Enhanced EXPLAIN
A small quality-of-life improvement: EXPLAIN ANALYZE now includes buffer usage statistics by default. Previously you had to remember to write EXPLAIN (ANALYZE, BUFFERS) to see how many pages were read from shared buffers vs. disk.
-- PG 17: EXPLAIN ANALYZE omits buffer stats unless you add BUFFERS
-- PG 18: buffer stats are included by default
EXPLAIN ANALYZE SELECT * FROM sales WHERE category = 'electronics';
The output now includes lines like:
Buffers: shared hit=128 read=45
I/O Timings: shared read=12.34
Buffer usage is one of the most useful metrics for understanding query performance. Whether a query reads from shared buffers (fast) or from disk (slow) often tells you more about real-world behavior than row counts or cost estimates do.
Asynchronous I/O
PostgreSQL 18 introduces an asynchronous I/O subsystem that allows the database to queue multiple read requests instead of waiting for each one to complete sequentially. This can deliver up to 3x speedups for I/O-bound workloads like large sequential scans, bitmap heap scans, and VACUUM.
-- Check the I/O method
SHOW io_method;
-- sync (traditional), worker (I/O worker processes, the default), io_uring (Linux)
On Linux, the io_uring method generally provides the best performance. On other platforms, the default worker method offloads I/O to a pool of dedicated worker processes. No application changes are needed to benefit from this.
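If you want to experiment with the different methods, the setting can only be changed with a server restart. A sketch (io_uring is Linux-only; on other platforms, stick with worker):

```sql
-- io_method is a restart-only setting
ALTER SYSTEM SET io_method = 'io_uring';
-- then restart, e.g.: spindb stop pg18 && spindb start pg18
SHOW io_method;  -- confirm the active method after the restart
```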
What Else Changed
A few more notable additions that didn't need full code examples:
- OAuth 2.0 authentication: native OAUTHBEARER support in pg_hba.conf, so clients can authenticate with bearer tokens from identity providers like Okta or Azure AD instead of passwords
- Logical replication improvements: parallel streaming is now the default for new subscriptions, generated columns can be replicated, and conflict logging gives visibility into replication issues
- pg_upgrade --swap and --jobs: parallel consistency checks and directory swapping significantly reduce upgrade downtime
- pg_stat_io reports in bytes: I/O statistics now include byte counts in addition to page counts, with per-backend granularity
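To give a flavor of the OAuth change, a hypothetical pg_hba.conf entry might look like the following (the issuer URL and scope are placeholders, and a token validator module must also be configured via the oauth_validator_libraries setting):

```
# pg_hba.conf (sketch): accept bearer tokens from an identity provider
host  all  all  samehost  oauth  issuer="https://auth.example.com" scope="openid"
```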
Connect From Your Application
Once you've tried the examples, you can connect your application to the same PostgreSQL instance. SpinDB gives you a connection string:
spindb url pg18
# postgresql://postgres:postgres@localhost:5432/postgres
Use that URL with any PostgreSQL client library: pg for Node.js, psycopg for Python, pgx for Go, or any ORM that supports PostgreSQL.
When you're done for the day, stop the instance. When you need it again, start it back up:
spindb stop pg18
spindb start pg18
SpinDB manages PostgreSQL and 19 other database engines with the same commands. If your next project needs Redis, MongoDB, or DuckDB alongside PostgreSQL, you don't need to learn a different tool.
For the full list of PostgreSQL 18 changes, see the official release notes.