Fiber Scheduler and Async IO

· 4 min read · Updated March 16, 2026 · intermediate
fibers async io concurrency ruby-3

Ruby 3.0 introduced the Fiber Scheduler interface, a powerful mechanism for writing asynchronous code without restructuring it around callbacks. If you’ve ever wanted to handle thousands of concurrent connections or make your IO-bound code more efficient, the Fiber Scheduler is worth knowing.

What is a Fiber Scheduler?

Before diving in, understand the building blocks. Fibers are lightweight concurrency primitives that you explicitly pause and resume. Unlike threads, fibers are cooperative—they yield control only when you tell them to, avoiding the synchronization headaches that come with threads.
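A minimal illustration of that cooperative hand-off, using only the core Fiber class:

```ruby
# A plain Fiber pauses only where you say so: Fiber.yield hands control
# back to the caller, and resume continues from that exact point.
fiber = Fiber.new do
  puts "step 1"
  Fiber.yield
  puts "step 2"
end

fiber.resume # runs until Fiber.yield, printing "step 1"
puts "caller runs in between"
fiber.resume # continues after the yield, printing "step 2"
```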

The Fiber Scheduler extends this idea to IO operations. When a fiber performs a blocking IO operation (reading from a socket, waiting for a file, sleeping), the scheduler can suspend that fiber and run other fibers instead. This lets you handle many concurrent operations in a single thread.

Think of it as a single-threaded event loop, similar to what Node.js does, but built into Ruby and using fibers instead of callbacks.

Enabling the Scheduler

Ruby defines the scheduler as an interface rather than shipping a concrete implementation. Any object that implements hooks such as io_wait, kernel_sleep, block, and unblock can be installed with Fiber.set_scheduler; once one is installed, Fiber.schedule runs blocks as non-blocking fibers under it. In practice you will usually install an existing implementation, such as the one from the async gem:

require 'async' # gem install async; provides a ready-made scheduler

Fiber.set_scheduler(Async::Scheduler.new)

Fiber.schedule do
  # Non-blocking operations here go through the scheduler
end

The async gem also provides an Async do ... end wrapper that installs a scheduler for the duration of the block and waits for every fiber scheduled inside it.

Basic Example: Non-blocking Sleep

The simplest example shows how the scheduler improves efficiency. Regular sleep blocks the entire thread, but with the scheduler, other fibers can run during the wait:

require 'async'

Async do
  5.times do |i|
    Fiber.schedule do
      puts "Fiber #{i} starting"
      sleep 1 # non-blocking under the scheduler
      puts "Fiber #{i} finished"
    end
  end
end

puts "All fibers done!"

Without the scheduler, this would take 5 seconds. With it, all five fibers run concurrently, taking only about 1 second.

Non-blocking IO Operations

The scheduler works with any IO operation that supports non-blocking mode. Here’s a simple fetcher that requests several URLs concurrently:

require 'net/http'
require 'async'

urls = [
  'https://httpbin.org/delay/1',
  'https://httpbin.org/delay/1',
  'https://httpbin.org/delay/1'
]

Async do
  urls.each do |url|
    Fiber.schedule do
      response = Net::HTTP.get_response(URI(url))
      puts "Got response: #{response.code}"
    end
  end
end

puts "All requests complete"

Each HTTP request runs concurrently. The total time is roughly the time of the slowest request, not the sum of all requests.

Custom Scheduler Implementation

You can create your own scheduler to add logging, metrics, or custom behavior. There is no Fiber::Scheduler base class to inherit from: a scheduler is any object that implements the hook methods (fiber, io_wait, kernel_sleep, block, unblock, close). The simplest route is to subclass an existing implementation and wrap the hooks you care about. A sketch on top of the async gem’s scheduler:

require 'async'

class LoggingScheduler < Async::Scheduler
  # `fiber` is the hook Fiber.schedule invokes to create each new fiber
  def fiber(*args, &block)
    super(*args) do
      start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      puts "Starting fiber #{Fiber.current.object_id}"
      block.call
    ensure
      elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
      puts "Fiber #{Fiber.current.object_id} done in #{elapsed.round(2)}s"
    end
  end

  def close
    puts "Scheduler closed"
    super
  end
end

Fiber.set_scheduler(LoggingScheduler.new)
Fiber.schedule { sleep 0.5; puts "Done!" }
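To see what those hooks actually do, here is a deliberately tiny scheduler written from scratch. It is a sketch that handles only sleep (a real scheduler must also multiplex IO in io_wait), but it shows the shape of the interface Fiber.set_scheduler expects:

```ruby
class MiniScheduler
  def initialize
    @waiting = {} # fiber => monotonic wake-up time
  end

  # Called by Fiber.schedule: create a non-blocking fiber and start it
  def fiber(&block)
    f = Fiber.new(blocking: false, &block)
    f.resume
    f
  end

  # Called when a non-blocking fiber invokes Kernel#sleep
  def kernel_sleep(duration = nil)
    @waiting[Fiber.current] = now + (duration || 0)
    Fiber.yield # suspend; close resumes us once the deadline passes
  end

  # Required by the interface; unused in this sleep-only sketch
  def io_wait(io, events, timeout)
    events
  end

  def block(blocker, timeout = nil)
    Fiber.yield
  end

  def unblock(blocker, fiber)
  end

  # Called when the scheduler is unset or its thread exits: drain fibers
  def close
    until @waiting.empty?
      due = @waiting.select { |_f, time| time <= now }.keys
      if due.empty?
        sleep(@waiting.values.min - now) # main fiber sleeps natively
      else
        due.each { |f| @waiting.delete(f); f.resume }
      end
    end
  end

  private

  def now
    Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end
end

scheduler = MiniScheduler.new
Fiber.set_scheduler(scheduler)
3.times { |i| Fiber.schedule { sleep 0.1; puts "fiber #{i} woke up" } }
scheduler.close # resumes each sleeper once its deadline passes
Fiber.set_scheduler(nil)
```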

Handling Errors

The interface defines no dedicated error hook. An exception that escapes a scheduled fiber is left to the scheduler implementation; the async gem, for example, logs it and marks the corresponding task as failed. The dependable approach is to rescue inside the scheduled block:

require 'async'

Async do
  Fiber.schedule do
    raise "Something went wrong!"
  rescue => e
    puts "Caught error: #{e.message}"
  end
end

To keep a record of failures across many fibers, collect them into a shared structure. Everything runs on one thread, so appending is safe:

errors = []

Async do
  3.times do |i|
    Fiber.schedule do
      raise "failure #{i}" if i.odd?
    rescue => e
      errors << e
    end
  end
end

puts "#{errors.size} error(s) collected"

Use Cases

Concurrent Web Requests

Process multiple API calls simultaneously without threads:

def fetch_all(urls)
  results = []
  Async do
    urls.each do |url|
      Fiber.schedule do
        results << fetch_one(url)
      end
    end
  end
  results
end

Real-time Feeds

Handle multiple data streams concurrently:

Async do
  Fiber.schedule { listen_to_feed(:orders) }
  Fiber.schedule { listen_to_feed(:inventory) }
  Fiber.schedule { listen_to_feed(:notifications) }
end

Connection Pooling

Manage database or connection limits while maximizing throughput. Rather than overriding scheduler hooks, bound the number of in-flight fibers with a fiber-aware semaphore such as the async gem’s:

require 'async'
require 'async/semaphore'

def fetch_limited(urls, max_connections)
  Async do
    semaphore = Async::Semaphore.new(max_connections)

    urls.each do |url|
      semaphore.async do
        # At most max_connections requests are in flight at once
        fetch_one(url)
      end
    end
  end
end

Limitations

The Fiber Scheduler isn’t a silver bullet. Keep these in mind:

  • Only non-blocking IO: Operations that truly block (CPU-intensive work, certain C extensions) won’t yield to other fibers
  • Single-threaded: You won’t get true parallelism for CPU-bound tasks—use threads or Ractor for that
  • Library support: The scheduler only helps if the IO operations support it. Ruby’s stdlib does, but some gems may still block

Summary

  • The Fiber Scheduler interface (Ruby 3.0+) enables efficient concurrent IO in a single thread
  • Ruby ships the interface, not an implementation: install one with Fiber.set_scheduler (the async gem provides a mature scheduler)
  • Use Fiber.schedule to run blocks concurrently
  • The scheduler transparently suspends fibers during non-blocking IO operations
  • Subclass or wrap an existing scheduler for logging, metrics, or specialized behavior
  • Best for IO-bound workloads with many concurrent operations

This approach sits between threads (true parallelism, but synchronization complexity) and callback-based event loops (efficient, but harder to follow). If you need high IO concurrency without thread overhead, the Fiber Scheduler is your answer.

See Also