BLUF (Bottom Line Up Front): Adding an index to a large table with the standard add_index helper takes a table lock that blocks every write, stalling your application and triggering Rack::Timeout errors. The solution is to disable the DDL transaction and use algorithm: :concurrently, which lets PostgreSQL build the index in the background without blocking writes.
Phase 1: The Table Lock Crisis
When optimizing legacy database queries, adding an index to a table with millions of rows is necessary but highly dangerous if executed improperly.
Synthetic Engineering Context: The Locked Application
# The Dangerous Migration
class AddIndexToUsersEmail < ActiveRecord::Migration[6.1]
  def change
    add_index :users, :email
  end
end
When this migration runs against a 20GB users table, PostgreSQL acquires a SHARE lock on the table, which blocks every INSERT, UPDATE, and DELETE (reads still succeed) until the index build completes.
# Database Logs
LOG: process 1450 acquired ShareLock on relation "users"
FATAL: canceling statement due to lock timeout
CONTEXT: while updating tuple (124, 45) in relation "users"
Every User.create or User.update from the Rails app hangs until the index finishes building, effectively taking the application offline.
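While the migration is running, you can watch the pile-up from a psql console. This is a sketch built on pg_blocking_pids() (available since PostgreSQL 9.6); the column aliases are illustrative:

```sql
-- Show sessions stuck behind a lock and the session holding it up
SELECT blocked.pid   AS blocked_pid,
       blocked.query AS blocked_query,
       blocking.pid  AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid))
WHERE cardinality(pg_blocking_pids(blocked.pid)) > 0;
```

During the incident above, every blocked row points back at the migration's backend as the blocking_pid.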
Phase 2: The Concurrent Approach
PostgreSQL supports building indexes concurrently. However, Rails wraps migrations in a transaction by default. Since CREATE INDEX CONCURRENTLY cannot run inside a transaction block, you must explicitly disable it.
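If you forget to disable the wrapper, the migration fails immediately. It typically surfaces in a Rails app (via the pg gem) as:

```
PG::ActiveSqlTransaction: ERROR:  CREATE INDEX CONCURRENTLY cannot run inside a transaction block
```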
Execution: Safe Index Creation
Use disable_ddl_transaction! at the class level and pass algorithm: :concurrently to add_index.
# The Safe Migration
class AddIndexToUsersEmailConcurrently < ActiveRecord::Migration[6.1]
  # 1. Disable the default transaction wrapper
  disable_ddl_transaction!

  def change
    # 2. Instruct Postgres to build the index in the background
    add_index :users, :email, algorithm: :concurrently
  end
end
Handling Invalid Indexes
If the concurrent index build fails (e.g., due to a unique constraint violation or a server crash), PostgreSQL leaves behind an INVALID index. It will not be used for querying, but it will consume disk space and slow down writes. You must manually drop it and retry.
-- Finding invalid indexes in Postgres
SELECT indexrelid::regclass AS index_name, indisready, indcheckxmin, indisvalid
FROM pg_index
WHERE indisvalid = false;
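The retry can itself be a migration. A sketch, assuming Rails 6.1+ (the if_exists: option on remove_index was added in 6.1; on older versions, drop the index manually first); the class name is illustrative:

```ruby
class RebuildUsersEmailIndex < ActiveRecord::Migration[6.1]
  disable_ddl_transaction!

  def up
    # Drop the leftover INVALID index if the previous attempt failed,
    # then rebuild it concurrently.
    remove_index :users, :email, algorithm: :concurrently, if_exists: true
    add_index :users, :email, algorithm: :concurrently
  end

  def down
    remove_index :users, :email, algorithm: :concurrently, if_exists: true
  end
end
```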
Phase 3: Next Steps & Risk Mitigation
Building an index concurrently performs two full table scans and must wait out every transaction that could modify the table, so it takes considerably longer than a standard build and adds sustained CPU and I/O load on the database server. Monitor database I/O closely when running these migrations on production.
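On PostgreSQL 12 and later you can also track the build itself through the pg_stat_progress_create_index view; the percentage expression here is an illustrative sketch:

```sql
-- Progress of any in-flight index builds (PostgreSQL 12+)
SELECT pid, phase,
       round(100.0 * blocks_done / NULLIF(blocks_total, 0), 1) AS pct_blocks_done
FROM pg_stat_progress_create_index;
```

The phase column walks through stages such as the table scans and the final validation, which makes long concurrent builds much less of a black box.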
Need Help Stabilizing Your Legacy App? Database tuning without downtime requires deep PostgreSQL expertise. Our team at USEO helps scale legacy Rails applications by safely restructuring indexes, queries, and background migrations.