Tests & Security | High severity

Sequential ID Exposure

Publicly exposing the auto-incrementing database primary key in URLs and API responses enables Broken Object Level Authorization (BOLA / IDOR) attacks, where an attacker iterates /invoices/1, /invoices/2, /invoices/3 to access resources they do not own.

Before / After

Problematic Pattern
# config/routes.rb
resources :invoices
# URLs: /invoices/1, /invoices/2, /invoices/3...

class InvoicesController < ApplicationController
  def show
    @invoice = Invoice.find(params[:id])
    # Forgot authorization check.
    # Attacker iterates IDs, downloads every invoice.
  end
end
Target Architecture
# STANDARD: UUID (or ULID) as the native primary key.
# Generated by PostgreSQL, no app-layer mapping.

# db/migrate
enable_extension 'pgcrypto' unless extension_enabled?('pgcrypto')

create_table :invoices, id: :uuid do |t|
  # ...
end

# Routes, associations, find() all work unchanged.
# URLs become /invoices/7c9e6679-... (122 random
# bits in a v4 UUID, non-iterable).

# CRITICAL: opaque IDs are NOT authorization.
# Always scope by the authenticated user:
class InvoicesController < ApplicationController
  def show
    @invoice = current_user.invoices.find(params[:id])
    # Raises ActiveRecord::RecordNotFound if not owned.
  end
end

# LEGACY FALLBACK ONLY: hashid-rails for existing
# integer-PK tables where UUID migration is not
# feasible. This is technical debt, not a standard.
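Beyond the single migration above, Rails can be told to default every subsequently generated table to a UUID primary key. A minimal sketch, assuming a standard Rails 6+ application layout (the file location and surrounding config are the conventional ones, not taken from this codebase):

```ruby
# config/application.rb
# All generated migrations now use UUID primary keys by default.
config.generators do |g|
  g.orm :active_record, primary_key_type: :uuid
end

# In migrations, foreign keys pointing at UUID tables must also be
# typed explicitly, since the default reference type is bigint:
#   t.references :user, type: :uuid, foreign_key: true
```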

Why this hurts

Sequential integer IDs communicate two things to an attacker: the total count of records (via the highest visible ID) and a trivial iteration target. The OWASP API Security Top 10 lists BOLA (Broken Object Level Authorization) as the number-one API security risk precisely because it requires no exploitation skill, just a loop from 1 to N. A short script with a valid session cookie can scrape an entire resource in minutes, and the requests look like normal traffic in logs, so detection is after-the-fact at best.
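To make the "loop from 1 to N" concrete, here is a hedged sketch of the enumeration in plain Ruby. The host, cookie value, and ID range are all placeholders; the point is only how little code the attack takes:

```ruby
require 'net/http'
require 'uri'

BASE = 'https://app.example.com' # hypothetical target

# Build the candidate URLs for a sequential ID range.
def candidate_urls(first, last)
  (first..last).map { |id| "#{BASE}/invoices/#{id}" }
end

# Fetch every candidate with a valid session cookie; keep the hits.
def scrape(first, last, cookie)
  candidate_urls(first, last).each do |url|
    uri = URI(url)
    req = Net::HTTP::Get.new(uri)
    req['Cookie'] = cookie
    res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
    File.write("invoice_#{url.split('/').last}.html", res.body) if res.code == '200'
  end
end
```

Each request is an ordinary authenticated GET, which is why the traffic blends into normal logs.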

The real attack vector is compound. An attacker notes the ID /users/4532 on their own profile URL, then iterates /invoices/1..10000 hoping some are accessible because the authorization check is missing. Even when authorization is correctly scoped per-user, the presence of sequential IDs enables timing attacks: a request for an existing-but-unauthorized ID may return 403 faster than a request for a non-existent ID, which leaks the existence of records belonging to other users.
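The existence leak can be seen in a toy model (plain Ruby, an in-memory hash standing in for the database): an unscoped lookup distinguishes "forbidden" from "not found", while a scoped lookup collapses both into the same response, which is what current_user.invoices.find achieves in Rails.

```ruby
# Toy data store: invoice id => record.
INVOICES = { 1 => { owner: 'alice', total: 100 },
             2 => { owner: 'bob',   total: 250 } }.freeze

# Unscoped: checks existence first, then ownership.
# A :forbidden response proves the record exists.
def unscoped_find(id, user)
  invoice = INVOICES[id]
  return :not_found unless invoice
  invoice[:owner] == user ? invoice : :forbidden
end

# Scoped: "not yours" and "does not exist" are indistinguishable.
def scoped_find(id, user)
  invoice = INVOICES[id]
  invoice && invoice[:owner] == user ? invoice : :not_found
end
```

With this model, unscoped_find(2, 'alice') returns :forbidden and leaks that invoice 2 exists, while scoped_find(2, 'alice') and scoped_find(99, 'alice') are identical from the attacker's point of view.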

Native UUID primary keys (128-bit identifiers, 122 of which are random in version 4) eliminate the iteration attack entirely. The search space is computationally infeasible to brute-force, and IDs reveal nothing about total record count. ULIDs trade some of that randomness (80 random bits behind a 48-bit timestamp) for chronological sorting, which preserves index locality, at the cost of revealing each record's creation time.
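The size of that search space is easy to check with Ruby's standard library; SecureRandom.uuid emits RFC 4122 version-4 UUIDs, carrying 122 random bits out of 128:

```ruby
require 'securerandom'

uuid = SecureRandom.uuid
# Version-4 format: third group starts with '4',
# fourth group starts with 8, 9, a, or b.
puts uuid

# 122 random bits vs. a sequential keyspace:
uuid_space       = 2**122   # ~5.3e36 candidate IDs
sequential_space = 10_000   # e.g. invoices 1..10_000

# Expected requests to hit one of 10_000 live UUID records
# by blind guessing: uuid_space / (2 * 10_000).
expected_guesses = uuid_space / (2 * sequential_space)
```

Even at millions of requests per second, the expected guess count is far beyond any practical attack, whereas the sequential keyspace is exhausted in minutes.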

hashid-rails is a legacy fallback, not a design standard. It obscures the integer PK but keeps the underlying iteration risk and depends on a reversible algorithm. If the salt leaks, the mapping back to integers is trivial. Modern systems should adopt UUIDs from the first migration; use hashids only for legacy tables where primary key rotation is not feasible. Authorization remains the primary control: always scope queries through the authenticated user (e.g., current_user.invoices.find(id)) to ensure that even if an ID is guessed, access is denied.
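The reversibility risk is easy to demonstrate with a toy hashid-style scheme (an illustrative simplification, not the real hashids algorithm): the encoding is just a base conversion over a salt-shuffled alphabet, so anyone holding the salt decodes every obscured ID instantly.

```ruby
require 'digest'

ALPHABET = ('a'..'z').to_a + ('0'..'9').to_a

# Deterministically shuffle the alphabet from the salt.
def shuffled(salt)
  seed = Digest::SHA256.digest(salt).unpack1('Q>')
  ALPHABET.shuffle(random: Random.new(seed))
end

# Base-36 conversion over the shuffled alphabet.
def encode(n, salt)
  chars = shuffled(salt)
  s = ''
  begin
    s = chars[n % chars.size] + s
    n /= chars.size
  end while n > 0
  s
end

# With the salt, decoding back to the integer PK is trivial.
def decode(str, salt)
  chars = shuffled(salt)
  str.chars.reduce(0) { |acc, c| acc * chars.size + chars.index(c) }
end
```

Obscurity collapses to the secrecy of one string: decode(encode(4532, salt), salt) recovers 4532, so a leaked salt re-exposes the full sequential keyspace.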

Get Expert Help

Inheriting a legacy Rails codebase with this problem? Request a Technical Debt Audit.