Architecture

Migrating delayed_job to Sidekiq Without Data Loss

BLUF (Bottom Line Up Front): delayed_job relies on an SQL database, causing severe deadlocks and high queue latency at scale. Sidekiq uses Redis, offering massive concurrency. To migrate safely, you must run both processors simultaneously during the transition and use a script to re-enqueue stranded delayed_job payloads into Sidekiq.

Phase 1: The Relational Bottleneck

delayed_job claims work through row-level locks in PostgreSQL or MySQL: every worker issues an UPDATE to stamp a row with locked_at and locked_by. When hundreds of workers poll concurrently, the database spends more time arbitrating those locks than executing useful queries.

Synthetic Engineering Context: High Queue Latency

In your APM (like New Relic), you notice background jobs are taking 15 minutes to start, even though the actual execution time is 200ms.

# PostgreSQL log (log_lock_waits enabled) showing workers contending for the same rows
LOG:  process 8123 still waiting for ShareLock on transaction 722301 after 1000.057 ms
STATEMENT:  UPDATE "delayed_jobs" SET locked_at = '2026-04-23 10:00:00', locked_by = 'host:worker-1' WHERE id = 15432
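The 200ms runtime is a red herring; the metric that matters is how long a job waits before any worker picks it up. A pure-Ruby sketch of that distinction (hypothetical helper, not part of delayed_job):

```ruby
# Hypothetical helper: queue latency is how long the oldest runnable
# job has been waiting to start, independent of its execution time.
def queue_latency(oldest_run_at, now: Time.now)
  waited = now - oldest_run_at
  waited.positive? ? waited : 0
end

# A job that was due 15 minutes ago has 900 seconds of queue latency
t = Time.now
queue_latency(t - 900, now: t)  # => 900.0
```

In production you would feed this the minimum run_at of unlocked, unfailed rows in delayed_jobs and alert when it climbs.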

Phase 2: The Migration Strategy

A hard cutover strands every job still sitting in the delayed_jobs table, including ones scheduled for the future. You must adopt a dual-boot approach: run both processors in parallel during the transition.

Execution: Step 1 - Dual Configuration

Configure ActiveJob to push new jobs to Sidekiq, but keep the delayed_job worker running to drain the old queue.

# config/application.rb
# Direct all NEW jobs to Sidekiq
config.active_job.queue_adapter = :sidekiq
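The adapter switch alone does not tell Sidekiq where Redis lives. A minimal initializer sketch, assuming a REDIS_URL environment variable (adjust to your infrastructure):

```ruby
# config/initializers/sidekiq.rb
redis_config = { url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0") }

Sidekiq.configure_server do |config|
  config.redis = redis_config
end

Sidekiq.configure_client do |config|
  config.redis = redis_config
end
```

Keep the delayed_job worker process running alongside the Sidekiq process until the old queue is empty.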

Execution: Step 2 - The Re-enqueue Script (PoC)

If you have jobs scheduled weeks in the future (e.g., subscription reminders), you cannot wait for the delayed_job queue to drain naturally. You must extract them from the database and push them into Sidekiq's scheduled set in Redis.

# lib/tasks/migrate_jobs.rake
namespace :jobs do
  desc "Migrate scheduled delayed_jobs to Sidekiq"
  task delayed_to_sidekiq: :environment do
    # Skip rows a delayed_job worker is currently holding
    Delayed::Job.where(locked_at: nil).find_each do |dj|
      # Parse the YAML payload generated by delayed_job; Psych 4+
      # refuses custom class tags under YAML.load, so use unsafe_load
      payload = YAML.unsafe_load(dj.handler)

      # Rebuild the original ActiveJob from its serialized data so the
      # arguments (including GlobalID references) deserialize correctly
      job = ActiveJob::Base.deserialize(payload.job_data)

      if dj.run_at > Time.current
        # Schedule it in Sidekiq's scheduled set
        job.enqueue(wait_until: dj.run_at)
      else
        # Enqueue immediately
        job.enqueue
      end

      # Delete the migrated record only after the push succeeds
      dj.destroy!
    end
  end
end
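For reference, the handler column stores a YAML-tagged wrapper object. A self-contained sketch, using a stub JobWrapper class standing in for ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper and a hypothetical ReminderJob, shows what the script is actually parsing:

```ruby
require "yaml"

# Stub standing in for ActiveJob's DelayedJobAdapter::JobWrapper
class JobWrapper
  attr_accessor :job_data
end

# What a delayed_jobs.handler column roughly looks like for an ActiveJob
handler = <<~YAML
  --- !ruby/object:JobWrapper
  job_data:
    job_class: ReminderJob
    arguments:
    - 42
YAML

# Psych 4+ raises Psych::DisallowedClass on the !ruby/object tag under
# YAML.load; unsafe_load reconstructs the wrapper and its job_data hash
payload = YAML.unsafe_load(handler)
payload.job_data["job_class"]  # => "ReminderJob"
payload.job_data["arguments"]  # => [42]
```

Run the migration during a low-traffic window, and only after the dual configuration from Step 1 is deployed.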

Phase 3: Next Steps & Risk Mitigation

Redis is an in-memory data store. If you do not configure Redis persistence (AOF or RDB snapshots), a server restart will wipe out every scheduled Sidekiq job. You must ensure your Redis infrastructure is hardened before running the migration script.
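As a sketch, the relevant redis.conf directives look like this (verify the defaults and trade-offs for your Redis version before relying on them):

```
# redis.conf
appendonly yes          # enable the append-only file (AOF)
appendfsync everysec    # fsync the AOF roughly once per second
```

AOF with everysec bounds data loss to about one second of enqueues after a crash, which is usually acceptable; appendfsync always is safer but materially slower.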

Need Help Stabilizing Your Legacy App? Background job migrations carry a high risk of dropping critical business events (like billing emails). Our team at USEO executes zero-downtime infrastructure migrations.

Contact us for a Technical Debt Audit