Mutexes and Synchronization in Ruby
When you have multiple threads reading and writing the same data, things go wrong in ways that are hard to predict and harder to reproduce. One thread reads a value before another finishes writing it. Two threads increment a counter at the same time and one of the updates disappears. This class of bug is a race condition, and the way you prevent it in Ruby is with a mutex.
The Problem: Race Conditions
Ruby threads share memory by default. That means every instance variable, every element in a shared array, every hash entry is accessible from every thread simultaneously. The OS can interrupt a thread at any point — even in the middle of @counter += 1. Here’s a concrete example of that going wrong:
counter = 0
threads = 5.times.map do
  Thread.new { 1000.times { counter += 1 } }
end
threads.each(&:join)
puts counter # => likely less than 5000
Each thread is running counter += 1 one thousand times. You’d expect the final result to be 5000. In practice, you’ll get something lower because the read-modify-write sequence is not atomic. Thread A reads counter as 42. Thread B also reads 42. Both write 43. One of the increments vanished.
This is a classic lost-update race condition. The code looks correct in isolation but fails under concurrent access.
Enter the Mutex
Thread::Mutex is built into Ruby's core; you will usually see it referenced by its top-level name, Mutex. It implements mutual exclusion — only one thread can hold a mutex at a time. Other threads that try to acquire it will block until the lock is released.
mutex = Mutex.new
The safest way to use a mutex is with synchronize. It acquires the lock, runs your block, and releases the lock — even if an exception is raised inside the block:
mutex = Mutex.new
counter = 0
threads = 5.times.map do
  Thread.new do
    1000.times { mutex.synchronize { counter += 1 } }
  end
end
threads.each(&:join)
puts counter # => 5000
Now only one thread can be inside that block at a time. The increment is atomic. The result is correct every time.
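In larger programs it helps to wrap the mutex and the data it guards in a single object, so callers cannot touch the data without going through the lock. A minimal sketch (the Counter class here is illustrative, not a standard API):

```ruby
# A counter that owns both its value and the mutex guarding it.
class Counter
  def initialize
    @mutex = Mutex.new
    @value = 0
  end

  # Every mutation goes through the mutex, so callers cannot race.
  def increment
    @mutex.synchronize { @value += 1 }
  end

  def value
    @mutex.synchronize { @value }
  end
end

counter = Counter.new
threads = 5.times.map do
  Thread.new { 1000.times { counter.increment } }
end
threads.each(&:join)
puts counter.value # => 5000
```

Keeping the mutex private to the object means there is exactly one place to audit for missing synchronization.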
Lock, Try Lock, and Unlock
synchronize is the idiomatic approach, but it helps to know the underlying primitives.
- mutex.lock acquires the lock and blocks until it succeeds
- mutex.try_lock attempts to acquire the lock without blocking; it returns true if it got the lock, false otherwise
- mutex.unlock releases the lock
You can use these directly, but you must pair them carefully. A common pattern with try_lock is a timeout or skip behavior:
if mutex.try_lock
  begin
    @inventory[item] -= 1
  ensure
    mutex.unlock
  end
else
  puts "Could not acquire lock"
end
The ensure is critical here. Without it, an exception inside the block would leave the mutex permanently locked and every other thread waiting for it would hang forever.
synchronize does this automatically, which is why it’s preferred. You get the guarantee that the lock is always released when the block exits, whether normally or via an exception.
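A quick, self-contained way to convince yourself of that guarantee:

```ruby
mutex = Mutex.new

begin
  mutex.synchronize do
    raise "something went wrong"
  end
rescue RuntimeError => e
  puts "Rescued: #{e.message}"
end

# The lock was released on the way out of the block, despite the exception:
puts mutex.locked? # => false
```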
Thread-Local State
Not everything needs a mutex. Thread.current gives each thread its own isolated storage that no other thread can touch:
require "securerandom"

Thread.current[:request_id] = SecureRandom.uuid
Thread.current[:buffer] = []
thread1 = Thread.new do
  Thread.current[:counter] = 0
  sleep 0.1
  puts Thread.current[:counter] # => 0
end
thread2 = Thread.new do
  Thread.current[:counter] = 100
  sleep 0.1
  puts Thread.current[:counter] # => 100
end
thread1.join
thread2.join
Each thread has its own independent copy of :counter. There’s no shared state, so no mutex is needed. Only data that crosses thread boundaries requires synchronization.
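One caveat worth knowing: Thread.current[] is actually fiber-local storage, not strictly thread-local. If your code (or a library you use) runs Fibers, values set with []= are invisible from other fibers on the same thread; Thread#thread_variable_set and Thread#thread_variable_get give you storage scoped to the whole thread. A small sketch:

```ruby
results = {}

t = Thread.new do
  Thread.current[:fiber_local] = "set in root fiber"
  Thread.current.thread_variable_set(:thread_local, "set on thread")

  # A new Fiber on the same thread gets fresh fiber-local storage,
  # but shares the thread-level variables.
  Fiber.new do
    results[:fiber_view]  = Thread.current[:fiber_local]               # nil
    results[:thread_view] = Thread.current.thread_variable_get(:thread_local)
  end.resume
end
t.join

puts results.inspect
```

For per-request state in a server that uses Fibers, thread_variable_get/set is the safer choice.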
Thread::Queue — Producer-Consumer Without the Headache
Implementing a producer-consumer pattern with a bare mutex is tedious work. You need a shared array, a mutex to protect it, and a condition variable to make consumers wait when the queue is empty. Ruby gives you all of that in one class: Thread::Queue.
queue = Queue.new
producer = Thread.new do
  5.times do |i|
    queue << i
    sleep 0.1
  end
  queue.close
end
consumer = Thread.new do
  while (item = queue.pop) # pop returns nil once the queue is closed and empty
    puts "Processed: #{item}"
  end
end
producer.join
consumer.join
# => Processed: 0
# => Processed: 1
# => Processed: 2
# => Processed: 3
# => Processed: 4
Queue handles all the internal synchronization. << (also called push or enq) appends an item; on a plain Queue it never blocks, while Thread::SizedQueue makes it block once a maximum size is reached. pop blocks when the queue is empty, and returns nil once the queue is closed and drained. close signals that no more items will arrive, so consumers can exit their loops cleanly.
Without Queue, the equivalent code would need a mutex, a condition variable, and careful management of the closed state — easy to get wrong.
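When you also need backpressure, meaning a fast producer that must not outrun a slow consumer, Thread::SizedQueue behaves like Queue but its push blocks once the maximum size is reached. A sketch:

```ruby
# SizedQueue adds backpressure: push blocks while the queue is full,
# so a fast producer can't pile up unbounded work.
queue = SizedQueue.new(2)

producer = Thread.new do
  5.times do |i|
    queue.push(i) # blocks whenever the queue already holds 2 items
  end
  queue.close
end

consumed = []
consumer = Thread.new do
  while (item = queue.pop) # nil once the queue is closed and drained
    consumed << item
    sleep 0.01 # simulate slow processing
  end
end

[producer, consumer].each(&:join)
puts consumed.inspect # => [0, 1, 2, 3, 4]
```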
ConditionVariable — Waiting for a Change
Sometimes you need threads to wait for a specific condition before proceeding. Thread::ConditionVariable pairs with a mutex to let threads sleep and be woken by other threads.
A common use case is a resource that gets produced and consumed on demand:
mutex = Mutex.new
condition = ConditionVariable.new
resource_available = false
# Consumer waits
worker = Thread.new do
  mutex.synchronize do
    until resource_available
      condition.wait(mutex)
    end
    puts "Worker: got the resource"
    resource_available = false
  end
end
# Producer
Thread.new do
  sleep 0.5
  mutex.synchronize do
    resource_available = true
    condition.signal
  end
end
worker.join
# Worker: got the resource
wait inside synchronize does something clever: it atomically releases the mutex and puts the thread to sleep. When another thread calls signal, the sleeping thread wakes up, re-acquires the mutex, and continues. This avoids the CPU waste of a busy loop.
Two things to keep in mind with condition variables. First, re-check the predicate in a loop after wait returns, because spurious wake-ups can happen. Second, signal wakes one waiting thread and broadcast wakes all of them.
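broadcast is the right tool when several threads wait behind the same one-shot gate. A sketch (the sleep only makes the demo's timing visible; correctness comes from re-checking ready):

```ruby
mutex = Mutex.new
condition = ConditionVariable.new
ready = false
started = Queue.new # thread-safe collector for results

workers = 3.times.map do |i|
  Thread.new do
    mutex.synchronize do
      condition.wait(mutex) until ready # re-check the predicate in a loop
    end
    started << i
  end
end

sleep 0.1 # give the workers time to reach wait (demo only)
mutex.synchronize do
  ready = true
  condition.broadcast # signal would wake only one of the three
end

workers.each(&:join)
puts started.size # => 3
```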
Avoiding Deadlocks
Deadlocks happen when threads end up waiting on each other forever. There are a few patterns that cause this more than others.
Non-reentrant mutexes. Ruby's Mutex does not support reentrant locking. If you already hold a mutex and try to lock it again, CRuby detects the self-deadlock and raises a ThreadError instead of letting your thread hang:
mutex = Mutex.new
mutex.synchronize do
  mutex.synchronize do # raises ThreadError: deadlock; recursive locking
    puts "never reached"
  end
end
This surprises people who are used to other languages where this works. The fix is simple: never try to re-acquire a mutex you already hold. Restructure your code so each lock acquisition happens at a consistent call-stack depth.
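If you genuinely need nested locking, the standard library's monitor provides a reentrant alternative: a Monitor remembers which thread owns it and lets that thread enter synchronize again. A minimal sketch:

```ruby
require "monitor"

# Monitor is reentrant: the owning thread may re-enter synchronize
# without deadlocking or raising ThreadError.
lock = Monitor.new
reached = false

lock.synchronize do
  lock.synchronize do # fine: Monitor tracks the owning thread
    reached = true
  end
end

puts reached # => true
```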
Lock ordering across multiple mutexes. If thread A holds mutex1 and wants mutex2, while thread B holds mutex2 and wants mutex1, you have a classic deadlock. The solution is to always acquire multiple mutexes in the same global order everywhere in your codebase. Pick an ordering (mutex1 before mutex2) and follow it consistently.
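One way to make the ordering mechanical rather than a convention people must remember is to sort the mutexes by a stable key before locking. The with_both helper below is an illustrative sketch, not a standard API:

```ruby
# Acquire two mutexes in a globally consistent order by sorting on
# object_id, so callers can pass them in either order safely.
def with_both(m1, m2)
  first, second = [m1, m2].sort_by(&:object_id)
  first.synchronize do
    second.synchronize do
      yield
    end
  end
end

mutex_a = Mutex.new
mutex_b = Mutex.new
count = 0

# Each thread passes the mutexes in the opposite order; the helper
# still locks them in the same sequence, so no deadlock is possible.
t1 = Thread.new { 100.times { with_both(mutex_a, mutex_b) { count += 1 } } }
t2 = Thread.new { 100.times { with_both(mutex_b, mutex_a) { count += 1 } } }
[t1, t2].each(&:join)
puts count # => 200
```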
Holding locks while doing blocking operations. If you hold a mutex and then call something that might block — file I/O, network requests, sleep — you’ve frozen all other threads that need that mutex. Keep the locked section as short as possible. Do your slow work outside the lock.
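A common shape for this is snapshot-then-process: copy what you need under the lock, release it, then do the slow work on the copy. A sketch:

```ruby
mutex = Mutex.new
pending = ["a", "b", "c"]

# Take a snapshot quickly under the lock and clear the shared list,
# so other threads are blocked only for the duration of the copy.
batch = mutex.synchronize { pending.dup.tap { pending.clear } }

# The slow work happens outside the lock, on data no one else can see.
processed = batch.map do |item|
  sleep 0.01 # stands in for slow I/O
  item.upcase
end

puts processed.inspect # => ["A", "B", "C"]
```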
Error Handling in Threaded Code
synchronize guarantees that locks are released when a Ruby exception is raised inside the block. That's reliable. What it cannot fully protect you from is a thread being killed externally:
thread = Thread.new do
  mutex.synchronize { sleep 10 }
end
sleep 0.1
Thread.kill(thread) # interrupts the thread at an arbitrary point
On current CRuby, kill still runs ensure clauses, so the lock itself is normally released. But the thread was cut off in the middle of its critical section, so the data the mutex was protecting may be left half-updated, and a kill that lands at an unlucky moment inside cleanup code can still leave other threads stuck.
This is a design-level concern. If you use Thread.kill, you need a process-level recovery strategy (restarting the process, or detecting stuck threads). In most applications, you avoid killing threads and instead signal them to stop gracefully — for example, by closing a Queue so consumer threads can exit their loops and finish cleanly.
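The graceful-stop pattern from the Queue section looks like this in practice (a sketch):

```ruby
# Graceful shutdown: close the queue instead of killing the workers.
queue = Queue.new
results = Queue.new

workers = 2.times.map do
  Thread.new do
    while (job = queue.pop) # nil once the queue is closed and drained
      results << job * 2
    end
  end
end

10.times { |i| queue << i }
queue.close # workers finish the remaining jobs, then exit their loops
workers.each(&:join)

puts results.size # => 10
```

Every worker drains its remaining work and exits its loop on its own, so no lock is ever abandoned mid-update.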
See Also
- Ruby Thread Basics — Creating threads, joining, and thread lifecycle
- Ruby Error Handling — Exceptions and cleanup in threaded code
- Procs and Lambdas — Callable objects used throughout threading examples