Background Jobs vs Cron Jobs — Which One Belongs in Your Stack
by Eric Hanson, Backend Developer at Clean Systems Consulting
The cron job that runs three times
Your cron job kicks off a billing calculation at 2:00 AM. The calculation usually takes 90 minutes. One slow month, with 10% more records than usual, it is still running at 2:00 AM the next night when cron starts a second instance. Now two processes are writing to the same billing records. Your accounting is wrong. Nobody knows until the month-end report.
This is the canonical cron job failure mode: no concurrency control, no visibility into whether the last run completed, no retry on failure, no alerting when the job is skipped. And it is avoidable with the right tool for the job.
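A minimal sketch of what the raw cron script was missing, assuming a plain Ruby script invoked by cron (the lock path and method name here are illustrative, not from any particular library): a non-blocking flock makes an overlapping second run exit instead of double-writing records.

```ruby
require "tmpdir"

# Illustrative lock path; a real deployment would pick a fixed location
LOCK_PATH = File.join(Dir.tmpdir, "billing_calculation.lock")

# Take an exclusive, non-blocking lock. Returns the open file on success
# (the lock is held as long as the process keeps it open), or false if
# another run already holds it.
def acquire_lock(path)
  file = File.open(path, File::RDWR | File::CREAT)
  return file if file.flock(File::LOCK_EX | File::LOCK_NB)
  file.close
  false
end

if (lock = acquire_lock(LOCK_PATH))
  # the billing calculation itself would run here
  lock.flock(File::LOCK_UN)
  lock.close
else
  warn "billing calculation already running; exiting without touching records"
end
```

This is the guard a job queue gives you for free; bolting it onto every cron script by hand is exactly the kind of repeated plumbing the rest of this article argues against.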
What cron is actually for
Cron is a time-based trigger. Use it when: the work needs to happen on a schedule regardless of application state, the work is idempotent (running it twice produces the same result as running it once), and you do not need retry, concurrency control, or observability at the job level.
Database backups: run pg_dump every night at 3 AM. If it fails, an alert fires. Running it again would just create another backup file — safe to run multiple times.
Cleanup tasks: delete soft-deleted records older than 90 days, purge expired sessions, rotate log files. Idempotent, schedule-driven, low consequence if it occasionally runs twice.
Report generation to object storage: generate and upload a daily summary. The output is a dated file; two runs produce two files or one overwrites the other with the same content. Safe.
# crontab — appropriate for true schedule-driven, idempotent work
0 3 * * * /app/bin/backup_database.sh >> /var/log/db_backup.log 2>&1
0 2 * * * /app/bin/cleanup_expired_sessions.sh
0 4 1 * * /app/bin/generate_monthly_report.sh
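The idempotency property that makes these jobs cron-safe can be shown in miniature. This is a toy in-memory version (a real session purge would be a single DELETE ... WHERE expires_at < now() against the database): running it twice leaves the same state as running it once.

```ruby
# Toy in-memory session store; real code would hit the database
SESSIONS = [
  { id: 1, expires_at: Time.now - 3600 }, # already expired
  { id: 2, expires_at: Time.now + 3600 }, # still live
]

# Deleting "everything expired as of now" is idempotent: a second run
# finds nothing left to delete and changes nothing.
def purge_expired!(sessions, now: Time.now)
  sessions.reject! { |s| s[:expires_at] < now }
  sessions
end

purge_expired!(SESSIONS)
purge_expired!(SESSIONS) # no-op: same end state, safe if cron double-fires
```

Contrast this with the billing calculation from the opening anecdote, which mutates records incrementally: running it twice does not produce the same result as running it once, which is why it fails the cron test.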
The problems begin when cron is used for work that is triggered by application events, that must not run concurrently, that needs retry on failure, or where you need to know if it succeeded. This is where a background job queue belongs.
What background job queues solve
A background job queue (Sidekiq backed by Redis, Celery backed by RabbitMQ or Redis, Spring Batch with JDBC job repository) provides: at-least-once delivery with configurable retry, concurrency control through worker pool sizing, dead letter queues for failed jobs, visibility into queue depth and worker lag, and event-driven triggering from application code.
# Sidekiq — triggered by application events, with retry and observability
class ProcessPaymentJob
  include Sidekiq::Job

  sidekiq_options retry: 3, dead: true, queue: :critical

  def perform(order_id)
    order = Order.find(order_id)
    result = PaymentService.charge(order)

    if result.success?
      order.update!(status: :paid, charge_id: result.charge_id)
      OrderMailer.confirmation(order).deliver_later
    else
      raise PaymentFailedError, result.error_message
    end
  end
end

# Enqueued from the controller — triggered by event, not schedule
ProcessPaymentJob.perform_async(order.id)
Sidekiq retries failed jobs with exponential backoff, so a transient payment gateway failure retries automatically: the first retry after roughly 15 seconds, with delays growing into minutes and then hours over the default retry schedule. If the job exhausts its retries, it lands in the dead set, where you can inspect it and manually retry. With cron, a failure is a silent missed execution.
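Sidekiq's wiki documents the default delay as approximately (retry_count ** 4) + 15 seconds plus random jitter; treat the exact formula as version-dependent. Ignoring the jitter, the deterministic part looks like this:

```ruby
# Deterministic part of Sidekiq's documented default retry delay,
# (retry_count ** 4) + 15 seconds. The real scheduler adds random
# jitter, and the formula can change between Sidekiq versions.
def approx_retry_delay(retry_count)
  (retry_count**4) + 15
end

(0..4).map { |n| approx_retry_delay(n) }
# => [15, 16, 31, 96, 271]  seconds before retries 1 through 5
```

The quartic growth is the point: early retries are nearly immediate to absorb blips, while late retries back off to hours so a hard outage does not hammer the gateway.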
The hybrid pattern: scheduled jobs via the queue
The pattern that eliminates most cron problems: keep cron as a minimal trigger that enqueues a background job, and let the job queue handle execution, retry, and observability.
# Sidekiq-Cron configuration — schedule drives the queue, queue handles execution
Sidekiq::Cron::Job.create(
  name:  'Daily Billing Calculation',
  cron:  '0 2 * * *',
  class: 'BillingCalculationJob'
)

# The job itself runs in the queue — has retry, concurrency control, observability
class BillingCalculationJob
  include Sidekiq::Job

  sidekiq_options unique: :until_executed # prevents concurrent runs

  def perform
    # Lock prevents two instances running simultaneously
    BillingService.calculate_for_period(Date.yesterday)
  end
end
sidekiq-cron (or the whenever gem plus Sidekiq for simpler setups) schedules the job with cron syntax but enqueues it onto the Sidekiq queue. If the job is already running, unique: :until_executed (provided by the sidekiq-unique-jobs gem) prevents a duplicate from starting. If it fails, Sidekiq retries it. The Sidekiq Web UI shows you whether the last run succeeded and how long it took.
This is materially better than a raw cron entry for any job that has business importance.
The jobs that belong in each bucket
Use cron (raw) for:
- Infrastructure maintenance: backups, log rotation, certificate renewal (certbot renew)
- Jobs that run outside the application process entirely (shell scripts with no application state dependency)
- Monitoring and alerting checks
Use background job queue for:
- Any work triggered by an application event (user action, webhook, state change)
- Jobs that must not run concurrently or that need distributed locking
- Jobs that require retry on failure with alerting on exhaustion
- Long-running processes with progress reporting
- Fan-out patterns: one trigger spawning many parallel worker jobs
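The fan-out item deserves a sketch. The job names here are hypothetical, and the array stands in for the Redis-backed queue; in real Sidekiq the parent's perform would call SendDigestJob.perform_async for each id.

```ruby
# Stand-in for the Redis-backed queue
ENQUEUED = []

# Hypothetical child job: one small unit of work per user
class SendDigestJob
  def self.perform_async(user_id)
    ENQUEUED << [name, user_id]
  end
end

# Hypothetical parent job: one scheduled trigger fans out into many
# independent jobs, so a bad user record fails one job instead of the
# whole batch, and the worker pool processes users in parallel.
class DigestFanOutJob
  def self.perform(user_ids)
    user_ids.each { |id| SendDigestJob.perform_async(id) }
  end
end

DigestFanOutJob.perform([101, 102, 103])
ENQUEUED.length # => 3
```

This shape is impossible to express in cron at all: cron can start one process on a schedule, but it has no notion of dynamically spawning a variable number of retryable units of work.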
Use the hybrid (cron schedule → job queue) for:
- Any scheduled work that runs inside your application and has business significance
- The billing run, the nightly report, the weekly digest email, the subscription renewal check
The moment you care whether a scheduled job succeeded, move it out of raw cron and into the job queue where you can observe it.