Ractor-Based Concurrency

· 5 min read · Updated April 4, 2026 · advanced
ractors concurrency ruby-3 parallelism actor-model

Ruby threads can’t run Ruby code in parallel — the Global VM Lock (GVL) ensures only one thread runs Ruby code at a time. This matters when you want to use multiple CPU cores for actual parallel work. Ractors solve this by giving each concurrent execution context its own isolated memory space and thread, with communication happening purely through message passing.

If you haven’t worked with Ractors before, start with the Ractor Introduction tutorial first. This guide focuses on inter-Ractor communication patterns and practical deployment scenarios.

How Message Passing Works in Ractors

Ractors implement the actor model: each Ractor has an incoming port (for receiving messages) and an outgoing port (for replying). The public API reflects this:

  • ractor.send(message) — sends a message to that Ractor’s incoming port. Non-blocking.
  • Ractor.receive — blocks until a message arrives at the current Ractor’s incoming port.
  • ractor << message — syntactic sugar for ractor.send(message).
  • Ractor.yield(value) — sends a value from inside the current Ractor to whoever calls take on it.
  • ractor.take — blocks until that Ractor yields a value (or until its block returns one).

Here’s the simplest possible conversation between two Ractors:

worker = Ractor.new do
  msg = Ractor.receive
  Ractor.yield(msg.upcase)
end

worker << "hello"  # deliver a message to the worker’s incoming port
worker.take        # => "HELLO"

# Ractor.select waits until at least one Ractor has a value ready and
# returns [ractor, value] for whichever produced a value first.
winner, result = Ractor.select(Ractor.new { :fast }, Ractor.new { sleep(0.1); :slow })
# result == :fast
Ractor.select is useful when you want to wait on multiple Ractors at once. It has been part of the API since Ractors shipped in Ruby 3.0, and it saves you from polling each Ractor sequentially.

Shareable vs Non-Shareable Objects

Each Ractor has its own heap. Objects don’t cross Ractor boundaries unless they’re shareable. Ruby’s rule: an object is shareable only if it is deeply immutable (frozen, with everything it references frozen too), plus a handful of special cases such as classes, modules, and Ractor objects themselves.

Ractor.shareable?(42)          # => true
Ractor.shareable?(true)        # => true
Ractor.shareable?("hello")      # => false (strings are mutable)
Ractor.shareable?("hello".freeze)  # => true

# A frozen hash of immutable values is shareable
config = { timeout: 30 }.freeze
Ractor.shareable?(config)  # => true (symbol keys and integer values are already immutable)

Non-shareable objects get deep-cloned when passed. This is safe but can be slow for large structures. If you need to share a complex object, freeze it first:

data = { cache: {} }
Ractor.make_shareable(data)
# data and everything it refers to is now frozen

Ractor.make_shareable(obj) recursively freezes an object. Use it when you need to pass a large constant table to a Ractor without paying the copy cost on every message.
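A quick way to verify that behavior (the table hash below is just an illustrative stand-in for a large constant structure):

```ruby
# Ractor.make_shareable freezes the container and everything inside it,
# so the whole object graph becomes shareable by reference.
table = { limits: { max: 100 }, name: +"lookup" }  # +"" forces a mutable string
Ractor.make_shareable(table)

table.frozen?             # => true
table[:limits].frozen?    # => true
table[:name].frozen?      # => true
Ractor.shareable?(table)  # => true
```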

Building a Channel

Ractors don’t have built-in channels like Go does, but you can build one trivially. A channel is just a Ractor that loops forever, receiving messages and yielding them back to consumers:

channel = Ractor.new do
  loop { Ractor.yield Ractor.receive }
end

Producers send to it, consumers take from it:

# Producer
channel.send("job:1")
channel.send("job:2")

# Consumer — blocks until a value is available. Note that take never
# returns nil; once the channel is closed it raises Ractor::ClosedError.
while job = channel.take
  process(job)
end

This channel design is non-blocking for producers (the incoming queue is unbounded, so send returns immediately) and blocking for consumers (take waits until something is available).
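Putting the two halves together, here is a minimal round trip through such a channel (the job strings are placeholders):

```ruby
# A Ractor acting as a FIFO channel: producers send, consumers take,
# and messages come back out in arrival order.
channel = Ractor.new do
  loop { Ractor.yield Ractor.receive }
end

3.times { |i| channel << "job:#{i}" }
results = 3.times.map { channel.take }
# results == ["job:0", "job:1", "job:2"]
```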

Worker Pool Pattern

A common setup is a fixed pool of workers sharing a job queue. Each worker runs truly in parallel for CPU-bound work, with no GVL contention between them.

def worker_pool(num_workers, &block)
  job_queue = Ractor.new do
    loop { Ractor.yield Ractor.receive }
  end

  num_workers.times do |i|
    Ractor.new(job_queue, name: "worker-#{i}", &block)
  end

  job_queue
end

# Jobs carry plain data rather than Procs — Procs can’t be copied
# across Ractor boundaries, so we send the input and let the worker
# decide what to compute.
pool = worker_pool(4) do |jobs|
  while job = jobs.take
    job[:reply] << job[:n] ** 2
  end
end

# Dispatch all jobs first, then collect, so the workers actually run
# in parallel instead of being serialized by an immediate take.
ports = 8.times.map do |i|
  reply_port = Ractor.new { Ractor.receive }
  pool.send({ n: i, reply: reply_port })
  reply_port
end

ports.each_with_index do |port, i|
  puts "Job #{i} => #{port.take}"
end

Each worker is a Ractor with a name for easier debugging. The reply_port pattern (a tiny Ractor just to receive one result) is a clean way to get values back to the caller.
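When completion order doesn’t matter, reply ports pair naturally with Ractor.select. A standalone sketch, with each Ractor standing in for one reply port:

```ruby
# Four "reply ports", each finishing after a different delay;
# Ractor.select drains whichever is ready first.
ports = 4.times.map do |i|
  Ractor.new(i) do |n|
    sleep((3 - n) * 0.01)  # simulate variable-length work
    [n, n * n]
  end
end

results = {}
until ports.empty?
  port, (n, squared) = Ractor.select(*ports)
  results[n] = squared
  ports.delete(port)
end
# results now maps each n to n * n, gathered in completion order
```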

Parallel Computation

For CPU-bound work that can be divided into independent chunks, Ractors give you genuine parallelism:

def parallel_sum(range, num_chunks)
  chunk_size = range.size / num_chunks
  chunks = num_chunks.times.map do |i|
    start = range.begin + i * chunk_size
    stop = (i == num_chunks - 1) ? range.end : start + chunk_size - 1
    Range.new(start, stop)
  end

  ractors = chunks.map do |chunk|
    Ractor.new(chunk) { |r| r.sum }
  end

  ractors.map { |r| r.take }.sum
end

puts parallel_sum(1..1_000_000, 8)

With 8 Ractors on an 8-core machine, this divides the sum into 8 chunks and computes them in parallel. The final .map { |r| r.take }.sum blocks until all Ractors have returned their partial sums.
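One caveat for this particular example: Ruby evaluates Range#sum over integer ranges with a closed-form formula, so each chunk finishes in constant time regardless of size. To see a real speedup, give each element some work to do — a sketch (parallel_sum_of_squares is an illustrative name, not a standard API):

```ruby
# Same chunking as parallel_sum, but each chunk sums squares, which
# forces a real per-element loop inside every Ractor.
def parallel_sum_of_squares(range, num_chunks)
  chunk_size = range.size / num_chunks
  ractors = num_chunks.times.map do |i|
    start = range.begin + i * chunk_size
    stop  = (i == num_chunks - 1) ? range.end : start + chunk_size - 1
    Ractor.new(start..stop) { |r| r.sum { |n| n * n } }
  end
  ractors.map { |r| r.take }.sum
end

parallel_sum_of_squares(1..10_000, 4)  # => 333383335000
```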

Cancelling and Closing Ractors

A Ractor runs until its block returns or you explicitly close it:

r = Ractor.new do
  loop do
    break if Ractor.receive == :stop
  end
end

r << :stop  # the block checks for the sentinel and breaks out of its loop

# Or close the incoming port — the blocked Ractor.receive then raises
# Ractor::ClosedError inside the Ractor
r.close_incoming

As of Ruby 3.3 there is no public API for forcibly terminating a Ractor from the outside. Shut one down cooperatively — send a sentinel message or close its ports — so the block can unwind and clean up.
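A cooperative-shutdown sketch: the worker rescues Ractor::ClosedError and turns it into a clean return value:

```ruby
r = Ractor.new do
  Ractor.receive      # blocks waiting for a message
  :got_message
rescue Ractor::ClosedError
  :shut_down          # becomes the Ractor's return value
end

r.close_incoming      # wakes the blocked receive with ClosedError
result = r.take       # => :shut_down
```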

What Ractors Don’t Solve

Ractors are not a silver bullet. A few things to keep in mind:

Each Ractor still has a lock of its own. Threads created inside a single Ractor contend on that Ractor’s lock, so one Ractor can’t split its CPU-bound work across cores. That’s why you need multiple Ractors for parallelism — each one holds its own lock. If you only spawn one Ractor for CPU work, it won’t be faster than a thread.

Debugging is harder. Stack traces from Ractors are isolated. Named Ractors help (Ractor.new(name: "my-worker") { ... }) so you can identify them in logs, but you’ll want to lean on unit tests for Ractor-heavy code.

Thousands of Ractors is not the answer. Each Ractor has real memory overhead and startup cost. For I/O-bound concurrency (many network requests), threads or the async gem are lighter. Save Ractors for CPU parallelism or when you genuinely need actor-style isolation.

See Also