The Async Gem in Ruby

· 5 min read · Updated March 29, 2026 · intermediate
ruby async fibers concurrency io

When your Ruby program spends most of its time waiting on network requests, database queries, or file operations, traditional synchronous code leaves CPUs idle. Threads solve the problem but bring heavyweight overhead. The async gem takes a different approach: it runs thousands of concurrent operations in a single thread using fibers, giving you the throughput of threads without the memory cost.

Installing the Gem

Add it to your Gemfile with bundle add async, or install directly:

gem install async

That’s all you need. The gem works with Ruby 3.0 and later, since it hooks into Ruby’s Fiber Scheduler interface.

The Reactor

A top-level Async { } block creates an Async::Reactor under the hood (nested Async blocks reuse the reactor that is already running). The reactor is an event loop that watches for I/O readiness and wakes up fibers when their blocking operations are ready to continue.

You rarely create a reactor directly. The Async block does it for you:

require 'async'

Async do |task|
  # The reactor is running here
end
# Reactor stops when the block finishes

Inside the block, blocking operations like sleep, read, and write are intercepted by the scheduler. When one of these yields control, the reactor picks another ready fiber to run. From the outside it looks like everything runs at once; internally, the reactor jumps between fibers cooperatively.

Async Tasks

The block argument task is an Async::Task — a fiber that can wait for other fibers, spawn children, and propagate results.

Waiting for Results

Call task.wait to suspend the current fiber until the target task finishes and retrieve its return value:

require 'async'

task = Async do
  sleep 0.5
  "hello"
end

result = task.wait
puts result  # prints "hello"

This is non-blocking from the reactor’s perspective. While this fiber is suspended, the reactor can execute other fibers.

Spawning Child Tasks

Call task.async to create a child task that shares the parent’s reactor:

require 'async'

Async do |parent|
  child = parent.async do
    sleep 0.2
    "child done"
  end

  parent.async do
    sleep 0.1
    "parent's other child"
  end

  puts child.wait  # prints "child done"
end

Child tasks inherit the reactor’s scheduler, so all of them cooperate within the same event loop. There’s no need for locks or mutexes between fibers — a fiber only yields at blocking calls, so there’s no pre-emptive scheduling to worry about.

Concurrent HTTP Requests

One of the most practical uses for the async gem is running many HTTP requests simultaneously. The async-http gem provides an async-aware HTTP client built on top of the reactor:

require 'async'
require 'async/http/internet'

urls = [
  "https://example.com",
  "https://example.org",
  "https://example.net",
]

Async do
  internet = Async::HTTP::Internet.new

  tasks = urls.map do |url|
    Async { internet.get(url).read }
  end

  responses = tasks.map(&:wait)
  puts responses.map { |r| r[0..80] }.join("\n")
ensure
  internet&.close
end

Each Async { } block runs its HTTP request concurrently. The reactor switches between them as they wait for data, so you get concurrency without threads. Closing the client in an ensure block releases its persistent connections.

For a single request, the Sync helper is simpler:

require 'async'
require 'async/http/internet'

Sync do
  internet = Async::HTTP::Internet.new
  body = internet.get("https://example.com").read
  puts body[0..100]
ensure
  internet&.close
end

Sync runs in the current reactor if one exists, or creates one if not. It’s lightweight and fits well in scripts or tests where you just need one-shot async behavior.

Limiting Concurrency with Semaphore

Sometimes you want concurrent execution but need to cap how many tasks run at the same time — for example, to avoid overwhelming a downstream API. Async::Semaphore does exactly that:

require 'async'
require 'async/http/internet'
require 'async/semaphore'

semaphore = Async::Semaphore.new(3)

urls = 10.times.map { |i| "https://example.com/?id=#{i}" }

Async do
  internet = Async::HTTP::Internet.new

  urls.each do |url|
    semaphore.async do
      # At most 3 of these run simultaneously
      internet.get(url).read
    end
  end
end

When the semaphore is at its limit, semaphore.async suspends the calling fiber and yields control back to the reactor until a slot frees up. This gives you a clean way to rate-limit without managing a thread pool manually.

Async Queues for Producer-Consumer Patterns

Async::Queue is a fiber-safe, reactor-aware queue. Producers push items and consumers pop them — consumers suspend when the queue is empty instead of polling:

require 'async'
require 'async/queue'

queue = Async::Queue.new

producer = Async do
  5.times do |i|
    queue.push(i)
    sleep 0.1
  end
  queue.close
end

consumer = Async do
  while (item = queue.pop)
    puts "got: #{item}"
  end
end

The consumer suspends at queue.pop when there’s nothing to read, freeing the reactor to run the producer. When the producer closes the queue, pop returns nil and the loop exits.

Timeouts That Respect the Scheduler

Ruby’s standard library Timeout.timeout uses a separate thread to raise an exception, which can interrupt the wrong fiber unpredictably when you’re using the Fiber Scheduler. The async gem provides Task#with_timeout as a safe alternative:

require 'async'

Sync do |task|
  task.with_timeout(2) do
    sleep 10
  end
rescue Async::TimeoutError
  puts "timed out"  # reached after 2 seconds
end

with_timeout respects the fiber scheduler: it interrupts only the current task at a safe yield point, raising Async::TimeoutError. This is significantly more reliable than Timeout in async Ruby code.

Why Fibers Over Threads?

The async gem is built on fibers rather than threads for good reasons. A fiber starts with only a few kilobytes of stack, while each thread typically reserves on the order of a megabyte, so you can keep thousands of fibers in memory at once. Fibers are cooperatively scheduled: a fiber yields only at blocking calls, so two fibers sharing the reactor can never pre-empt each other mid-operation. Threads, by contrast, are pre-emptively scheduled by the OS, requiring mutexes and careful synchronization around any shared state.

The tradeoff is that fibers only help with I/O-bound concurrency. If you need true parallel execution to saturate multiple CPU cores, threads are still the answer (and Ruby’s GVL limits even that). But for I/O-bound workloads — HTTP requests, database queries, file operations — async fibers give you far more concurrency per thread than threads ever could.

See Also