Infrastructure · Medium severity

Asset Pipeline Memory Spikes

rake assets:precompile crashes with “JavaScript heap out of memory” or kills the CI runner because Sprockets and Webpacker load all JavaScript, CSS, and image pipelines into memory at once.

Before / After

Problematic Pattern
# CI runner: 2 GB memory
# .gitlab-ci.yml or Dockerfile
RUN bundle exec rake assets:precompile

# Webpacker boots Node.js with default heap = 1.5 GB
# Sprockets holds every asset in memory.
# OOM killed on any non-trivial frontend.
# CI fails intermittently, devs rerun "until it passes".
Target Architecture
# Option A: raise Node heap for the precompile step.
ENV NODE_OPTIONS="--max-old-space-size=4096"
RUN bundle exec rake assets:precompile

# Option B: Rails 7+ migration away from Sprockets.
# Replace Sprockets with Propshaft (just fingerprints,
# no compile step). JS via esbuild/jsbundling-rails.
# CSS via cssbundling-rails (Tailwind, Sass, PostCSS).
# Typical precompile memory: ~200 MB.
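A sketch of the Option B migration on a Rails 7+ app, using the standard Propshaft / jsbundling-rails / cssbundling-rails installer tasks (verify the exact task names against your Rails version):

```shell
# Swap Sprockets for Propshaft (fingerprinting only, no compile step)
bundle remove sprockets-rails
bundle add propshaft

# JS bundling via esbuild; generates app/assets/builds and a build script
bundle add jsbundling-rails
./bin/rails javascript:install:esbuild

# CSS bundling (Tailwind shown; sass and postcss installers also exist)
bundle add cssbundling-rails
./bin/rails css:install:tailwind

# Precompile now just runs the build scripts and fingerprints the output
bundle exec rake assets:precompile
```

After the switch, esbuild owns the heavy lifting in a single short-lived Node process, and the Ruby side only copies and fingerprints files.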

Why this hurts

Webpacker boots a Node.js process with a default V8 old-space heap of roughly 1.5 GB, then asks it to build the full production bundle. Tree-shaking, minification, and source map generation each hold the complete module graph in memory simultaneously. On a non-trivial SPA with React, Vue, or Stimulus components, the resident set size peaks at 3-5 GB for seconds at a time. CI runners provisioned at 2 GB die during that peak: either the kernel OOM-kills the runner outright, or V8 hits its own heap ceiling first and emits the confusing “JavaScript heap out of memory” error rather than a clean failure.

Sprockets runs in the same process as the Rails application, so precompile also boots the full Rails environment: every initializer runs, every class loads, every connection pool initializes even though the task makes no database queries. On a large codebase this adds 500 MB to 1 GB of Ruby RSS before any asset work begins. The combined memory pressure of Sprockets (Ruby) and Webpacker (Node) running simultaneously is the actual cause of most CI precompile failures.
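To confirm which half is blowing the budget, measure the peak RSS of each side separately. A diagnostic sketch, assuming GNU time (`/usr/bin/time -v`, present on most Linux CI images) and a Webpacker binstub at `./bin/webpack`:

```shell
# Ruby side: boot the full Rails environment without doing any asset work
/usr/bin/time -v bundle exec rails runner 'puts "booted"' 2>&1 \
  | grep 'Maximum resident set size'

# Node side: run the Webpacker build directly, bypassing the rake task
/usr/bin/time -v ./bin/webpack 2>&1 \
  | grep 'Maximum resident set size'
```

If the two peaks summed exceed the runner's limit, the fix is either more memory for the precompile step or decoupling the two processes, as above.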

Intermittent OOM kills train developers to retry failed CI runs rather than investigate root cause, which masks the underlying memory growth. Each added JS library compounds the problem silently until a specific pull request tips over the edge. The retry-until-passing culture also hides the slowness: precompile that should take 30 seconds takes 3 minutes including retries, and developers avoid pushing during the 3-minute window, artificially reducing deployment cadence.

Docker layer cache invalidation amplifies the waste. Any change to package.json or Gemfile.lock invalidates the install layer, forcing a full bundle install + yarn install + assets:precompile on every deploy. Build cost scales linearly with deployment frequency. Migrating to Propshaft (pure fingerprinting) plus jsbundling-rails + cssbundling-rails (direct esbuild/Rollup integration) cuts precompile memory by an order of magnitude.
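Independent of pipeline choice, ordering the Dockerfile so dependency layers are keyed only on the lockfiles keeps `bundle install` and `yarn install` cached across ordinary code changes. A sketch (paths and the heap setting from Option A are illustrative; adjust to your app):

```dockerfile
# Copy only the lockfiles first so the install layers are reused
# whenever application code changes but dependencies do not
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Application code changes invalidate only the layers below this line
COPY . .
ENV NODE_OPTIONS="--max-old-space-size=4096"
RUN bundle exec rake assets:precompile
```

With this ordering, a deploy that touches only app code pays for precompile alone, not the full dependency install.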

See also: Rails Asset Pipeline Memory Bloat.
