The concurrent-ruby Library
Ruby ships with Thread and Mutex in the standard library, and that gets you surprisingly far. But once you need thread-safe collections, futures, actors, or coordination primitives like semaphores and barriers, you're reinventing wheels. The concurrent-ruby gem fills that gap with battle-tested, high-level abstractions used in production by Rails (Active Support depends on it) and countless other projects.
This guide covers the parts of concurrent-ruby that come up most often: thread-safe data structures, the promises/futures API, synchronization primitives, and the actor model.
Thread-Safe Collections
Ruby’s built-in Array and Hash are not thread-safe. If two threads push to the same array simultaneously, you can lose writes. The same goes for hash key assignments. You can protect them with a Mutex, but that’s verbose and easy to get wrong.
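For comparison, the manual lock-per-operation pattern that the rest of this section replaces might look like this minimal sketch (all names here are illustrative):

```ruby
mutex = Mutex.new
arr = []

threads = 10.times.map do
  Thread.new do
    # Every access to the shared array must go through the mutex
    100.times { mutex.synchronize { arr << 1 } }
  end
end
threads.each(&:join)

puts arr.size # => 1000
```

It works, but every call site has to remember the `synchronize` wrapper, and forgetting it once silently reintroduces the race.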
concurrent-ruby gives you drop-in replacements:
require 'concurrent'

arr = Concurrent::Array.new
threads = 10.times.map do
  Thread.new do
    100.times { arr << Thread.current.object_id }
  end
end
threads.each(&:join)

puts arr.size # => 1000
On JRuby and TruffleRuby, every method on Concurrent::Array is synchronized; on CRuby, the global VM lock already makes each individual method call atomic, so the class can skip extra locking. Either way, single operations are safe, and the same guarantee holds for Concurrent::Hash and Concurrent::Set. No mutex boilerplate required.
h = Concurrent::Hash.new
threads = 5.times.map do |i|
  Thread.new { h[:key] = i }
end
threads.each(&:join)

puts h[:key] # Safely written, one of the values
One caveat: thread safety applies to individual method calls. A compound read-modify-write such as arr[0] += 1 expands into several separate calls, and another thread can interleave between them. Multi-step updates still need a Mutex or an atomic wrapper.
Async Computation with Promises
The Concurrent::Promises module gives you futures — objects representing values that may not exist yet. This is the most powerful part of the library for async work.
Futures
Concurrent::Promises.future runs a block on a thread pool immediately and returns a Future:
require 'concurrent'

future = Concurrent::Promises.future do
  sleep 0.5
  42
end

puts future.resolved? # => false (still running)
result = future.value # Blocks until done, returns 42
puts future.resolved? # => true
The Future object gives you several ways to interact with it:
| Method | Behavior |
|---|---|
| .value | Blocks and returns the result; returns nil if the block raised |
| .value! | Blocks and returns the result; re-raises the block's exception |
| .value(1) | Blocks with a timeout in seconds (positional argument); returns nil on timeout |
| .resolved? | Non-blocking check |
| .reason | The exception if the block failed, otherwise nil |
| .wait | Blocks until resolved without unwrapping a value |
Delays
A delay does not run until something asks for its value:
delay = Concurrent::Promises.delay { expensive_computation }
# Nothing has run yet
result = delay.value # Now it runs — and memoizes the result
This is useful when you want to defer expensive work until the result is actually needed.
Composing Futures
Concurrent::Promises.zip combines multiple futures into one that resolves when all complete:
f1 = Concurrent::Promises.future { sleep 0.3; 1 }
f2 = Concurrent::Promises.future { sleep 0.1; 2 }
f3 = Concurrent::Promises.future { sleep 0.2; 3 }
combined = Concurrent::Promises.zip(f1, f2, f3)
results = combined.value # => [1, 2, 3]
If you need the first result rather than all results, use any_fulfilled_future. For racing on any settlement (success or failure), use any_resolved_future.
Synchronization Primitives
Sometimes you need to coordinate threads beyond what a mutex can do. concurrent-ruby provides several primitives for this.
Semaphore
A semaphore manages access to a shared resource with a fixed number of permits. Each acquire blocks when no permits are available, and each release gives one back. This is useful for limiting parallelism, such as throttling API calls:
require 'concurrent'

semaphore = Concurrent::Semaphore.new(3) # Max 3 concurrent accesses

10.times.map do |i|
  Thread.new do
    semaphore.acquire
    puts "Thread #{i} acquired"
    sleep 1
    puts "Thread #{i} releasing"
    semaphore.release
  end
end.each(&:join)
CountDownLatch
A latch is a one-time barrier. Threads call wait and block until the count reaches zero, at which point all waiting threads are released:
require 'concurrent'

latch = Concurrent::CountDownLatch.new(3)

threads = 3.times.map do |i|
  Thread.new do
    sleep rand
    puts "Thread #{i} calling count_down"
    latch.count_down
  end
end

puts "Main thread waiting..."
latch.wait # Blocks until all 3 count_down calls
puts "All threads done!"
Once a latch reaches zero it stays at zero — it’s one-way.
CyclicBarrier
Unlike a latch, a barrier is reusable. N threads all call wait, and when the Nth thread arrives, all are released simultaneously and the barrier resets:
require 'concurrent'

barrier = Concurrent::CyclicBarrier.new(3)

3.times.map do |i|
  Thread.new do
    puts "Thread #{i} waiting at barrier"
    barrier.wait
    puts "Thread #{i} passed!"
  end
end.each(&:join)
# All three print "waiting" then all three print "passed!" together
This is useful when you need threads to rendezvous at a checkpoint before proceeding.
ReadWriteLock
A read-write lock allows multiple concurrent readers but a single exclusive writer:
require 'concurrent'

rwlock = Concurrent::ReadWriteLock.new
shared_data = 0

readers = 5.times.map do
  Thread.new { rwlock.with_read_lock { shared_data } }
end
writer = Thread.new do
  rwlock.with_write_lock { shared_data = 42 }
end

readers.each(&:join)
writer.join
The with_read_lock and with_write_lock helpers acquire and release the lock automatically.
Actor Model
The companion gem concurrent-ruby-edge includes an Erlang-inspired actor implementation. Each actor runs on its own thread, processes messages from its mailbox sequentially, and maintains its own internal state. Because only one message is processed at a time, there are no race conditions inside the actor:
require 'concurrent-edge' # Actors ship in the concurrent-ruby-edge gem

class Counter < Concurrent::Actor::Context
  def initialize(initial)
    @count = initial
  end

  def on_message(message)
    case message
    when :inc then @count += 1
    when :dec then @count -= 1
    when :get then @count
    end
  end
end

counter = Counter.spawn(:counter, 0)
counter << :inc         # Fire-and-forget tell
counter << :inc
puts counter.ask!(:get) # => 2, blocks for the reply
The << operator (tell) drops a message in the actor's mailbox and returns immediately; ask! sends a message and blocks until the actor replies with the return value of on_message. Actors are especially useful for modeling stateful services where you want thread safety without mutex management.
Supervised actors (automatically restarting on failure) are also available, but bear in mind that the edge APIs are less stable than the core library.
Scheduled Execution
Concurrent::ScheduledTask runs a block after a given delay and returns a future-like object you can query and block on:
require 'concurrent'

task = Concurrent::ScheduledTask.execute(2) { 42 }

puts task.fulfilled? # => false
sleep 3
puts task.fulfilled? # => true
puts task.value      # => 42
This is a handy building block for one-shot delayed work. For recurring execution, the related Concurrent::TimerTask runs a block repeatedly at a fixed interval.
Error Handling
When a future’s block raises an exception, the exception is captured and stored in the future’s .reason:
future = Concurrent::Promises.future { raise "oops" }
future.wait
puts future.reason # => #<RuntimeError: oops>
Always check .reason, or call .value! (which re-raises the stored exception) inside a begin/rescue, if your futures might fail. A rejected future never raises on its own, and plain .value just returns nil, so the error is there but you have to look for it.
How concurrent-ruby Compares to Alternatives
stdlib Thread gives you raw OS threads. You're responsible for protecting shared state, avoiding deadlocks, and managing thread lifecycle. concurrent-ruby's higher-level abstractions take most of that bookkeeping off your hands.
The async gem uses fibers instead of threads. Fibers are lightweight coroutines that cooperate by explicitly yielding control; they don't run in parallel on multiple CPU cores, but they cost far less memory per unit of concurrency. If you have thousands of concurrent I/O-bound connections, async wins. If you need true parallelism for CPU-bound work (on JRuby or TruffleRuby, where there is no global VM lock), concurrent-ruby wins.
concurrent-ruby runs on CRuby, JRuby, and TruffleRuby. The thread pool executors auto-size based on the runtime environment, so your code doesn’t need to change between implementations.
See Also
- Thread Basics in Ruby — stdlib Thread, Mutex, and thread lifecycle
- Fiber Basics in Ruby — cooperative concurrency with fibers
- Error Handling in Ruby — rescue, raise, and exception best practices