Why Queues?
Any operation that isn't needed for the immediate HTTP response should be queued: sending emails, processing images, generating reports, syncing with third-party APIs, sending webhooks. The user doesn't need to wait for these.
A request that sends an email synchronously can take 2-5 seconds, most of it spent waiting on the mail server. Queue the email instead, and the response returns immediately.
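The shape of the change is small. A minimal sketch, where `SendOrderConfirmation` and `OrderConfirmationMail` are hypothetical class names:

```php
// Before: the user waits while the mail server responds
Mail::to($user)->send(new OrderConfirmationMail($order));

// After: the job is pushed to the queue and a worker sends the mail
SendOrderConfirmation::dispatch($order->id);

// Or queue the mailable directly, without a dedicated job class
Mail::to($user)->queue(new OrderConfirmationMail($order));
```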
Designing Good Jobs
A well-designed job is small, idempotent, and serializable.
// Good: small, focused, idempotent
class SendInvoiceEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        public readonly int $invoiceId
    ) {}

    public function handle(Mailer $mailer): void
    {
        $invoice = Invoice::with('customer')->findOrFail($this->invoiceId);

        // Idempotency check: don't send twice
        if ($invoice->email_sent_at) {
            return;
        }

        $mailer->to($invoice->customer->email)->send(
            new InvoiceMail($invoice)
        );

        $invoice->update(['email_sent_at' => now()]);
    }
}
// Bad: stores the entire Eloquent model (serialization issues)
class SendInvoiceEmail implements ShouldQueue
{
    public function __construct(
        public readonly Invoice $invoice // Don't do this for complex models
    ) {}
}
Rule: Pass IDs to jobs, not full objects. Fetching the model fresh inside handle() avoids serializing large object graphs and means the job always works with the latest data, even if the record changed between dispatch and execution.
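Dispatching then passes only the primary key. A sketch using the `SendInvoiceEmail` job above (the delayed variant is optional):

```php
// Pass the primary key, not the model instance
SendInvoiceEmail::dispatch($invoice->id);

// Delayed dispatch works the same way
SendInvoiceEmail::dispatch($invoice->id)->delay(now()->addMinutes(10));
```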
Retry Strategies
Network calls fail. APIs go down. Databases have momentary blips. Your retry strategy determines how gracefully your system handles these hiccups.
class SyncOrderToErp implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 5;
    public int $maxExceptions = 3;

    public function __construct(
        public readonly int $orderId
    ) {}

    // Exponential backoff: 10s, 30s, 90s, 270s, 810s
    public function backoff(): array
    {
        return [10, 30, 90, 270, 810];
    }

    public function handle(ErpClient $erp): void
    {
        $order = Order::findOrFail($this->orderId);
        $erp->syncOrder($order->toErpFormat());
    }

    // Called when all retries are exhausted
    public function failed(Throwable $exception): void
    {
        Log::critical('ERP sync permanently failed', [
            'order_id' => $this->orderId,
            'error' => $exception->getMessage(),
        ]);

        Notification::route('slack', config('services.slack.alerts'))
            ->notify(new ErpSyncFailed($this->orderId, $exception));
    }
}
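Not every failure deserves a retry. When the error is permanent (say, the ERP rejects the payload as invalid), failing fast avoids burning all five attempts on a lost cause. A sketch, assuming the client throws a hypothetical `ErpValidationException` for such cases:

```php
public function handle(ErpClient $erp): void
{
    $order = Order::findOrFail($this->orderId);

    try {
        $erp->syncOrder($order->toErpFormat());
    } catch (ErpValidationException $e) {
        // Permanent error: retrying won't help, so fail immediately
        $this->fail($e);
    }
    // Transient errors (timeouts, 5xx) bubble up and trigger the backoff schedule
}
```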
Rate Limiting
When calling external APIs, you need to respect their rate limits:
class CallExternalApi implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function middleware(): array
    {
        return [
            new RateLimited('external-api'),
        ];
    }

    public function handle(): void
    {
        // Your API call
    }
}

// In AppServiceProvider
RateLimiter::for('external-api', function (object $job) {
    return Limit::perMinute(30); // Max 30 API calls per minute
});
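By default, a rate-limited job is released back onto the queue to be retried later. The middleware lets you tune that behavior; a sketch of the two common variants:

```php
public function middleware(): array
{
    // Release back to the queue after 60 seconds when rate limited
    return [(new RateLimited('external-api'))->releaseAfter(60)];

    // Or: delete the job instead of retrying when the limit is hit
    // return [(new RateLimited('external-api'))->dontRelease()];
}
```

Note that each release counts against the job's attempt limit, so a frequently rate-limited job may need a higher `$tries` value or a `retryUntil()` deadline.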
Job Batching
When you need to process many items and track overall progress:
// Dispatch a batch of jobs
$batch = Bus::batch([
    new ProcessRow($rows[0]),
    new ProcessRow($rows[1]),
    new ProcessRow($rows[2]),
    // ... hundreds of jobs
])
    ->then(function (Batch $batch) use ($user) {
        // All jobs completed successfully
        Notification::send($user, new ImportComplete($batch->id));
    })
    ->catch(function (Batch $batch, Throwable $e) {
        // Called on the first job failure
        Log::error("Batch {$batch->id} had a failure: {$e->getMessage()}");
    })
    ->finally(function (Batch $batch) {
        // Batch finished, regardless of success or failure
        ImportJob::where('batch_id', $batch->id)->update(['status' => 'finished']);
    })
    ->name('CSV Import')
    ->allowFailures()
    ->dispatch();

// Check progress later
$batch = Bus::findBatch($batchId);
echo $batch->progress(); // Percentage complete, e.g. 67
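Because `Bus::findBatch()` works anywhere, batch state can be polled from a route to drive a progress bar. A sketch (the route path is illustrative):

```php
Route::get('/imports/{batchId}/progress', function (string $batchId) {
    $batch = Bus::findBatch($batchId);

    abort_if($batch === null, 404);

    return response()->json([
        'progress'  => $batch->progress(),      // 0-100
        'processed' => $batch->processedJobs(), // completed job count
        'failed'    => $batch->failedJobs,      // failed job count
        'finished'  => $batch->finished(),      // all jobs processed?
        'cancelled' => $batch->cancelled(),
    ]);
});
```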
Job Chaining
When jobs must run in sequence:
Bus::chain([
    new ExtractDataFromCsv($uploadId),
    new ValidateExtractedData($uploadId),
    new ImportToDatabase($uploadId),
    new SendImportReport($uploadId),
])->dispatch();
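If any link in the chain throws, the remaining links do not run, and the chain supports a failure callback of its own. A sketch (the `imports` queue name is illustrative):

```php
Bus::chain([
    new ExtractDataFromCsv($uploadId),
    new ValidateExtractedData($uploadId),
    new ImportToDatabase($uploadId),
    new SendImportReport($uploadId),
])
    ->catch(function (Throwable $e) use ($uploadId) {
        // One link failed; the rest of the chain is abandoned
        Log::error("Import chain failed for upload {$uploadId}: {$e->getMessage()}");
    })
    ->onQueue('imports') // optional: run the whole chain on a named queue
    ->dispatch();
```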
Queue Selection
Not all jobs are equally important. Use multiple queues with different priorities:
// Critical: payment webhooks
ProcessPaymentWebhook::dispatch($event)->onQueue('critical');
// Default: most jobs
SendWelcomeEmail::dispatch($user)->onQueue('default');
// Low priority: analytics, reports
GenerateMonthlyReport::dispatch($month)->onQueue('low');
// Run workers with priority
// php artisan queue:work --queue=critical,default,low
Production Monitoring
Queues fail silently if you're not watching. Essential monitoring:
- Queue depth: How many jobs are waiting? If the number keeps growing, your workers can't keep up.
- Failed jobs: Check the failed_jobs table daily. Set up alerts.
- Job duration: Monitor how long jobs take. A sudden spike means something changed.
- Worker health: Use a process manager like Supervisor to restart crashed workers.
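For queue depth, Laravel ships a `queue:monitor` command that fires a `QueueBusy` event when a queue exceeds a threshold; you can schedule it and listen for the event to send alerts. A sketch (connection names and the threshold are illustrative):

```php
// In routes/console.php: check queue depth every minute and flag
// any queue holding more than 100 waiting jobs
Schedule::command('queue:monitor redis:critical,redis:default --max=100')
    ->everyMinute();
```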
; Supervisor config for production
[program:laravel-worker-critical]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --queue=critical --sleep=1 --tries=3
autostart=true
autorestart=true
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/storage/logs/worker-critical.log
[program:laravel-worker-default]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --queue=default,low --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/storage/logs/worker-default.log