Threads Basics in Ruby
Ruby has a built-in threading system that lets you run multiple tasks concurrently within a single process. Threads are lighter than spawning separate OS processes, and they share memory by default, making them a practical choice for I/O-bound work like handling network requests or reading files in parallel.
This guide covers the core Thread API, synchronization primitives to protect shared state, and the common pitfalls that trip people up.
Creating Threads with Thread.new
Thread.new spawns a new thread of execution. The block you pass to it runs concurrently with the rest of your program.
t = Thread.new do
  sleep 1
  puts "Thread finished"
end
puts "Main thread continues..."
t.join # blocks until the thread completes
puts "Done"
Thread.new returns immediately without blocking the caller. The thread starts running as soon as the scheduler gives it CPU time. Without calling join, the main thread can reach the end of the program and exit before your spawned threads finish.
To capture a thread’s return value, use join or value:
t = Thread.new { sleep 1; 42 }
puts t.value # => 42 (blocks for ~1 second)
You can pass arguments into the thread block:
Thread.new(3, 5) do |a, b|
  puts a + b # => 8
end.join
Inspecting Threads
Thread.current returns the thread object that is currently executing the call. You can use it to store thread-local data in a hash-like syntax:
Thread.current[:request_id] = SecureRandom.uuid
Thread.current[:buffer] = []
Each thread gets its own isolated storage. Variables defined outside the thread block are shared across all threads, but thread-local keys live only in the thread that set them.
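A quick sketch of that isolation (the :name key here is just an illustrative choice):

```ruby
Thread.current[:name] = "main"

t = Thread.new do
  before = Thread.current[:name]  # nil: the worker starts with empty storage
  Thread.current[:name] = "worker"
  [before, Thread.current[:name]]
end

puts t.value.inspect        # => [nil, "worker"]
puts Thread.current[:name]  # => "main" (untouched by the worker)
```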
Thread.list returns a collection of every live Thread object in your program. This is useful for debugging:
Thread.list.each do |t|
  puts "Thread #{t.object_id}: #{t.status}"
end
Thread status values include "run" (actively executing), "sleep" (sleeping or blocked on I/O), "aborting" (in the middle of dying), false (terminated normally), and nil (terminated with an exception).
Thread.join and Thread.value
Thread#join blocks the calling thread until the receiver thread terminates. If the thread has already finished, join returns right away.
threads = 3.times.map do |i|
  Thread.new(i) do |n|
    sleep rand
    n * 2
  end
end
threads.each do |t|
  puts t.value # blocks and prints the return value
end
Thread#value calls join internally and then returns the block’s return value, whereas join returns the Thread object itself. If the thread died with an exception, value re-raises it in the caller.
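The difference is easy to see side by side:

```ruby
t = Thread.new { 21 * 2 }

puts t.join.class  # => Thread (join returns the thread itself)
puts t.value       # => 42 (value returns what the block returned)
```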
Exception Handling in Threads
By default, an unhandled exception in a thread does not crash your program. It kills that thread and leaves the others running (on modern Rubies a report is printed to stderr; see Thread.report_on_exception below). You can change this behavior with Thread.abort_on_exception:
Thread.abort_on_exception = true
Thread.new { raise "boom" } # re-raised in the main thread; the process exits
Ruby 2.4 introduced Thread.report_on_exception. When set to true, unhandled exceptions print a report to stderr but the process keeps running. The default changed to true in Ruby 2.5, so dying threads are no longer silent unless you opt out.
You can also set these per-thread:
t = Thread.new { raise "oops" }
t.abort_on_exception = true
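A third option, often the most practical one: Thread#join and Thread#value re-raise the thread’s exception in the calling thread, where an ordinary rescue works. A minimal sketch:

```ruby
t = Thread.new do
  Thread.current.report_on_exception = false  # keep stderr quiet for this demo
  raise ArgumentError, "bad input"
end

begin
  t.value  # value (and join) re-raise the thread's exception here
rescue ArgumentError => e
  puts "Caught: #{e.message}"  # => Caught: bad input
end
```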
Synchronization with Mutex
Threads share memory. That sounds convenient, but it creates a problem: if two threads read and write the same variable at the same time, the outcome becomes unpredictable. This is a race condition.
A Mutex (mutual exclusion lock) ensures only one thread can execute a section of code at a time. You wrap the critical section inside mutex.synchronize:
counter = 0
mutex = Mutex.new
threads = 5.times.map do
  Thread.new { 1000.times { mutex.synchronize { counter += 1 } } }
end
threads.each(&:join)
puts counter # => 5000
Without the mutex, running this code can produce an unpredictable result below 5000. With it, the increment is effectively atomic: no other thread can read or write counter while one thread is mid-way through counter += 1.
A few Mutex rules worth remembering: always release the lock before performing I/O or any operation that might block for a long time (holding a lock across an HTTP request is a good way to starve your thread pool), and note that Mutex is not reentrant: locking a mutex your thread already holds raises ThreadError. Mutex#lock takes no timeout argument; for a non-blocking attempt, use mutex.try_lock, which returns false immediately if the lock is held.
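Mutex#try_lock attempts the lock without blocking, returning false if it is already held and true if it was acquired. A small sketch:

```ruby
mutex = Mutex.new

mutex.lock
puts mutex.try_lock  # => false: already held, but we return instead of blocking
mutex.unlock

if mutex.try_lock    # true: we acquired the lock and must release it ourselves
  begin
    puts "doing guarded work"
  ensure
    mutex.unlock
  end
end
```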
Thread-Safe Queues with Queue
Queue is a thread-safe, blocking queue built into Ruby, designed exactly for producer-consumer patterns. Producers push items, consumers pop them, and the queue handles all the locking internally. (On modern Rubies it is always available; the old require 'thread' is unnecessary.)
queue = Queue.new
producer = Thread.new do
  5.times { |i| queue << "item #{i}" }
  queue << :done
end
consumer = Thread.new do
  loop do
    item = queue.pop
    break if item == :done
    puts "Got: #{item}"
  end
end
producer.join
consumer.join
pop blocks when the queue is empty, so the consumer waits patiently until something arrives. Once the producer sends :done, the consumer breaks out of the loop.
SizedQueue works the same way but lets you set a maximum capacity. If the queue fills up, pushing threads sleep until there’s room.
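A minimal SizedQueue sketch (the capacity and item count are arbitrary choices):

```ruby
q = SizedQueue.new(2)  # at most 2 items buffered at once

producer = Thread.new do
  5.times { |i| q << i }  # << blocks whenever the queue is full
  q << :done
end

results = []
until (item = q.pop) == :done
  results << item
end
producer.join
puts results.inspect  # => [0, 1, 2, 3, 4]
```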
Condition Variables for Wait/Signal Patterns
Sometimes a thread needs to wait until some condition becomes true. ConditionVariable paired with a Mutex lets you implement this cleanly:
mutex = Mutex.new
cv = ConditionVariable.new
ready = false
consumer = Thread.new do
  mutex.synchronize do
    cv.wait(mutex) until ready
    puts "Ready to consume"
  end
end
producer = Thread.new do
  sleep 1
  mutex.synchronize do
    ready = true
    cv.signal
  end
end
consumer.join
producer.join
cv.wait(mutex) atomically releases the mutex and sleeps. When another thread calls cv.signal, the sleeping thread wakes up, re-acquires the mutex, and continues. cv.broadcast wakes all waiting threads instead of just one.
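A sketch of broadcast releasing several waiters at once (the worker count is arbitrary):

```ruby
mutex = Mutex.new
cv = ConditionVariable.new
go = false

waiters = 3.times.map do |i|
  Thread.new do
    mutex.synchronize { cv.wait(mutex) until go }
    puts "worker #{i} released"
  end
end

sleep 0.1  # give the waiters time to block on cv.wait
mutex.synchronize do
  go = true
  cv.broadcast  # signal would wake only one waiter; broadcast wakes all three
end
waiters.each(&:join)
```

The `until go` guard also covers a waiter that arrives late: it sees go already true and never waits.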
Deadlocks
A deadlock happens when two or more threads are each waiting for a lock held by the other. Neither can proceed.
mutex1 = Mutex.new
mutex2 = Mutex.new
a = Thread.new do
  mutex1.lock
  sleep 0.1
  mutex2.lock # waits forever for mutex2
end
b = Thread.new do
  mutex2.lock
  sleep 0.1
  mutex1.lock # waits forever for mutex1
end
a.join # deadlock: both threads are stuck
b.join
Prevention strategies:
- Always acquire multiple locks in a consistent global order
- Use higher-level abstractions like Queue that handle locking internally
- Keep critical sections as short as possible
- Use try_lock for a non-blocking attempt, then back off and retry (or raise) instead of waiting forever
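Applying the first rule to the deadlock example: if both threads take mutex1 before mutex2, neither can end up holding the lock the other needs. A sketch:

```ruby
mutex1 = Mutex.new
mutex2 = Mutex.new

a = Thread.new do
  mutex1.synchronize do
    sleep 0.1
    mutex2.synchronize { puts "a has both locks" }
  end
end

b = Thread.new do
  mutex1.synchronize do  # same order as thread a: mutex1 first, then mutex2
    sleep 0.1
    mutex2.synchronize { puts "b has both locks" }
  end
end

a.join
b.join  # both finish: no cycle of waiting is possible
```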
The GIL and What It Means for Parallelism
Ruby’s standard interpreter (MRI) has a Global Interpreter Lock (GIL), known in MRI as the GVL (Global VM Lock). This lock ensures only one thread executes Ruby bytecode at a time.
That sounds like it defeats threading entirely, but it doesn’t — the distinction matters between CPU-bound and I/O-bound work:
- CPU-bound tasks like image processing or mathematical computation cannot run in true parallel with threads; only one thread holds the GVL at a time. For CPU parallelism, use Process.fork or Ruby's Ractor (Ruby 3.0+).
- I/O-bound tasks like network requests or file reads release the GVL while waiting. This means threads can run concurrently during I/O, making standard threads a good fit for web servers, API clients, and background job workers.
Ruby’s own stdlib synchronization primitives (Mutex, Queue, ConditionVariable) all release the GVL while waiting, so they don’t block other threads unnecessarily.
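You can see the I/O-bound benefit with sleep, which releases the GVL just like real I/O. A rough timing sketch using the stdlib benchmark library:

```ruby
require 'benchmark'

sequential = Benchmark.realtime { 5.times { sleep 0.2 } }

threaded = Benchmark.realtime do
  5.times.map { Thread.new { sleep 0.2 } }.each(&:join)
end

puts format("sequential: %.2fs, threaded: %.2fs", sequential, threaded)
# sequential is roughly 1s; threaded is roughly 0.2s, because the
# sleeping threads release the GVL and the five waits overlap
```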
Race Conditions
Race conditions are the most common threading bug. The core problem: operations that look atomic in your code (like counter += 1) are actually made up of multiple steps — read, modify, write — and another thread can interleave between any of them.
counter = 0
5.times.map { Thread.new { 1000.times { counter += 1 } } }.each(&:join)
puts counter # can be less than 5000: interleaved increments get lost
The fix is always the same: protect every write to shared mutable state with a Mutex. Alternatively, use immutable data structures or thread-safe collections from the concurrent-ruby gem.
See Also
- Procs and Lambdas in Ruby — Closures and first-class functions that work naturally with threads.
- Error Handling in Ruby — Handling exceptions in threaded and concurrent code.
- Control Flow in Ruby — Flow control structures that affect how threads execute blocks and loops.