Infrastructure · Medium severity

ActiveRecord Object Serialization

This pattern passes full ActiveRecord instances as arguments to native Sidekiq::Job workers. Sidekiq cannot faithfully serialize complex Ruby objects to JSON: in strict mode it raises ArgumentError, and in legacy mode it silently coerces the object into a lossy representation that produces bloated payloads and stale state at perform time.

Before / After

Problematic Pattern
# Native Sidekiq API, no GlobalID protection.
# Passing a full object causes Sidekiq to attempt
# a lossy coercion or fail with ArgumentError
# in strict mode.

class SendWelcomeEmailJob
  include Sidekiq::Job

  def perform(user)
    # user is NOT a fresh record; it's a stale,
    # potentially bloated Ruby object.
    UserMailer.welcome(user).deliver_now
  end
end

# CRITICAL: Passing the whole object instead of the ID.
SendWelcomeEmailJob.perform_async(@user)
Target Architecture
class SendWelcomeEmailJob
  include Sidekiq::Job

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome(user).deliver_now
    user.update!(onboarded_at: Time.current)
  end
end

SendWelcomeEmailJob.perform_async(@user.id)

# ApplicationJob (ActiveJob) handles this natively
# via GlobalID - it serializes a URI like
# gid://app/User/1 and re-fetches at perform time.
# Use ActiveJob if you want that abstraction.

Why this hurts

Native Sidekiq workers serialize their arguments to JSON before storing them in Redis. When you pass a full ActiveRecord instance, Sidekiq in strict mode (opt-in via Sidekiq.strict_args! since 6.4, the default in Sidekiq 7) raises ArgumentError because complex Ruby objects are not JSON-native. In legacy configurations it silently coerces the object instead (via to_s or as_json), producing either an opaque string that cannot be deserialized back into a record or a snapshot hash of every attribute. Compared with a single integer ID, these payloads are bloated and inflate Redis memory usage significantly; at scale, a deep queue of such jobs can trigger Redis OOM (Out of Memory) errors, crashing the entire background processing layer.
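The lossy legacy behavior is easy to see with plain Ruby's json library, which stringifies unknown objects via to_s. The Struct below is a hypothetical stand-in for an ActiveRecord model; no Rails is involved:

```ruby
require "json"

# Illustrative stand-in for an ActiveRecord model (hypothetical).
User = Struct.new(:id, :email)
user = User.new(1, "a@example.com")

# What a naive enqueue stores: the json gem falls back to to_s
# for objects it does not know how to serialize, so the argument
# degenerates into an opaque string that can never be turned
# back into a User.
bad_payload = JSON.generate([user])

# A primitive ID round-trips losslessly.
good_payload = JSON.generate([user.id])
recovered_id = JSON.parse(good_payload).first
```

The worker on the other end of bad_payload receives a string dump, not a record; recovered_id, by contrast, is the same integer that went in.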

The fragmentation dynamics compound the problem. Redis's jemalloc allocator rounds each allocation up to a size class, so a payload that lands just above a class boundary wastes kilobytes of RAM per job on top of its raw size; multiplied across hundreds of thousands of in-flight jobs, that overhead alone can reach hundreds of megabytes. INFO memory then reports high used_memory_rss with lower used_memory, the signature of allocator fragmentation rather than a genuine leak, and operators spend time chasing a leak that does not exist.
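The diagnostic is the ratio of those two INFO memory fields, which Redis itself exposes as mem_fragmentation_ratio. A sketch of the check against a captured INFO dump (the byte counts below are illustrative sample values, not measurements):

```ruby
# Parse the two relevant fields out of a captured
# `redis-cli info memory` dump (sample values, not measured).
info = <<~INFO
  used_memory:1073741824
  used_memory_rss:1610612736
INFO

stats = info.scan(/^(\w+):(\d+)$/).to_h { |k, v| [k, v.to_i] }
ratio = stats["used_memory_rss"].fdiv(stats["used_memory"])

# A ratio well above ~1.5 while used_memory stays flat points at
# allocator fragmentation/overhead, not a growing data set.
puts format("fragmentation ratio: %.2f", ratio)  # → 1.50
```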

The semantic risk is equally high. The worker receives a snapshot of the record as it existed at the moment of enqueuing. If the record is updated in the database before the job starts, the worker remains blind to those changes, potentially operating on obsolete email addresses or invalid state. A reconciliation job that writes user.update!(attrs_from_snapshot) resurrects stale values for every column the snapshot contained, silently overwriting fresher data. In high-throughput systems with minutes of queue lag, the window for stale writes is large enough to cause user-visible inconsistencies.
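The stale-write mechanism can be simulated without Rails. Below, a hash stands in for the users table and an array for the Redis queue; both names are illustrative:

```ruby
# In-memory stand-ins for the database row and the job queue.
db = { 1 => { email: "old@example.com" } }

# Snapshot-style enqueue: the argument captures the row as it
# exists at enqueue time.
queue = [{ snapshot: db[1].dup }]

# The row changes while the job sits in the queue.
db[1][:email] = "new@example.com"

# A worker that writes the snapshot back clobbers the newer value.
db[1].merge!(queue.first[:snapshot])
stale_result = db[1][:email]   # back to "old@example.com"

# ID-style enqueue re-reads at perform time and sees the fresh row.
db[1][:email] = "new@example.com"
queue = [{ user_id: 1 }]
fresh_result = db[queue.first[:user_id]][:email]  # "new@example.com"
```

The snapshot worker silently resurrects the old email; the ID worker cannot, because it never carried a copy of the data in the first place.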

ActiveJob sidesteps this by using GlobalID to serialize only a lightweight URI (e.g., gid://app/User/1), but raw Sidekiq requires developers to manually pass primitive IDs and re-fetch the record to ensure data integrity and infrastructure stability.
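For raw Sidekiq, the discipline can be enforced mechanically rather than by code review. Sidekiq 6.4+ ships an opt-in guard that turns a non-JSON-native argument into an immediate ArgumentError at enqueue time; the initializer path below is the conventional Rails location:

```ruby
# config/initializers/sidekiq.rb
# Opt-in on Sidekiq 6.4+; this is the default behavior in Sidekiq 7.
Sidekiq.strict_args!
```

With this enabled, SendWelcomeEmailJob.perform_async(@user) fails loudly in development and test instead of shipping a bloated, stale payload to production.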

Get Expert Help

Inheriting a legacy Rails codebase with this problem? Request a Technical Debt Audit.