
Rebus — Complete Guide Drawing Parallels To Your Custom Wrapper and MassTransit


What Rebus Is And Its Philosophy

Rebus is a lean service bus library. Where MassTransit leans heavily on convention and auto-magic — creating topology, wiring consumers, managing everything automatically — Rebus leans toward explicitness. You see more of what is happening. Less is hidden. More is your responsibility but also more is in your control.

The single biggest philosophical difference:

MassTransit:  one queue per consumer per message type  fine grained, auto-created
Rebus:        one queue per service  everything that service handles goes through one queue
              Rebus dispatches internally to the correct handler

This simplicity is Rebus's core design decision. One service, one queue, one place to look.


Installation

dotnet add package Rebus
dotnet add package Rebus.RabbitMq
dotnet add package Rebus.ServiceProvider

Basic Setup

// Program.cs
services.AddRebus(configure => configure
    .Transport(t => t.UseRabbitMq(
        connectionString: "amqp://guest:guest@localhost:5672",
        inputQueueName:   "notification-service"   // one queue, you name it explicitly
    ))
    .Options(o => o
        .RetryStrategy(maxDeliveryAttempts: 5)
        .SetNumberOfWorkers(3)
        .SetMaxParallelism(10)
    )
    .Routing(r => r.TypeBased()
        .Map<ProcessInvoiceCommand>("invoice-service")  // command routing
    )
);

// Register handlers
services.AddRebusHandler<OrderCreatedHandler>();
services.AddRebusHandler<PaymentProcessedHandler>();
services.AddRebusHandler<ProcessInvoiceHandler>();

Compare to your wrapper where you had three BackgroundServices registered and three consumers each declaring their own queue — Rebus does all of this from one configuration block. One queue declared. Multiple handlers registered. Rebus dispatches internally.
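
For reference, a sketch of roughly what that wrapper-side registration looked like (type names are illustrative):

// One BackgroundService per consumer, one handler registration each
services.AddHostedService<OrderCreatedConsumerService>();
services.AddHostedService<PaymentProcessedConsumerService>();
services.AddHostedService<ProcessInvoiceConsumerService>();

services.AddScoped<IMessageHandler<OrderCreatedEvent>, OrderCreatedHandler>();
services.AddScoped<IMessageHandler<PaymentProcessedEvent>, PaymentProcessedHandler>();
services.AddScoped<IMessageHandler<ProcessInvoiceCommand>, ProcessInvoiceHandler>();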


Connection and Channel Management

Your Wrapper

public class RabbitMqConnection
{
    private readonly IConnection _connection;

    public RabbitMqConnection(IConfiguration config)
    {
        var connectionString = config.GetConnectionString("RabbitMq");  // config key illustrative
        var factory = new ConnectionFactory { Uri = new Uri(connectionString) };
        _connection = factory.CreateConnection();
    }
    }

    public IModel CreateChannel() => _connection.CreateModel();
}

Rebus

Manages connection and channels entirely internally. You never see a connection object or a channel. The transport configuration is all you provide:

.Transport(t => t.UseRabbitMq("amqp://guest:guest@localhost:5672", "notification-service"))

Internally Rebus:

One persistent TCP connection per host           same as your singleton
Channels created per worker thread               same as your per-consumer channel
Reconnection handled automatically               same as MassTransit
Channel recovery after disconnect                handled
Re-subscription after reconnect                  handled

The difference from your wrapper is that you no longer build any of this yourself. Rebus does what MassTransit does — owns the connection lifecycle entirely.
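
One lifecycle detail worth remembering: clean shutdown was also yours to implement in the wrapper. A minimal sketch (assuming the singleton above); Rebus performs the equivalent internally when the host stops:

public class RabbitMqConnection : IDisposable
{
    // ... constructor as above ...

    public void Dispose()
    {
        // Close cleanly so in-flight, unacked messages are requeued promptly
        if (_connection.IsOpen) _connection.Close();
        _connection.Dispose();
    }
}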


Topology — The Key Difference From MassTransit

This is where Rebus diverges most clearly from both your wrapper and MassTransit.

Your Wrapper Topology

domain.events exchange   (Topic)
domain.commands exchange (Direct)
one queue per message type per consumer

MassTransit Topology

one exchange per message type (Fanout)
one queue per consumer per message type

Rebus Topology

RebusTopics exchange    (Topic, one global exchange for all events)
notification-service    (one queue for entire service, all message types)
notification-service_error (DLQ, auto created)

Rebus creates two global exchanges regardless of how many message types you handle:

RebusTopics     Topic exchange, handles all published events
                 your notification-service queue binds here for each subscribed type

RebusDirect     Direct exchange, handles all sent commands
                 routing key = destination queue name

This maps closest to your wrapper which also used two global exchanges — domain.events and domain.commands. Rebus just names them differently and manages the bindings automatically.
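
In raw client terms, what Rebus's transport sets up on startup is roughly the following (a sketch, not the actual Rebus internals):

// Two global exchanges plus the service's input and error queues
channel.ExchangeDeclare("RebusTopics", ExchangeType.Topic,  durable: true);
channel.ExchangeDeclare("RebusDirect", ExchangeType.Direct, durable: true);

channel.QueueDeclare("notification-service",       durable: true, exclusive: false, autoDelete: false);
channel.QueueDeclare("notification-service_error", durable: true, exclusive: false, autoDelete: false);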


Subscribing To Events

This is the most important operational difference between Rebus and MassTransit. MassTransit auto-subscribes based on registered consumers. Rebus requires explicit subscription on startup:

// Must call these explicitly on startup — Rebus creates bindings here
var bus = app.Services.GetRequiredService<IBus>();

await bus.Subscribe<OrderCreatedEvent>();
await bus.Subscribe<PaymentProcessedEvent>();
await bus.Subscribe<PeppolValidationFailedEvent>();

What Subscribe does under the hood:

// Rebus internally calls something equivalent to:
_channel.QueueBind(
    queue:      "notification-service",
    exchange:   "RebusTopics",
    routingKey: "OrderCreatedEvent"  // or full type name
);

Miss a Subscribe call and messages of that type never reach your queue: no binding exists, so the broker drops them silently. No error. This is the most common Rebus mistake.

Your wrapper equivalent:

// In ExecuteAsync for each consumer
_channel.QueueBind(QueueName, Exchange, RoutingKey);

Same concept — Rebus just centralises it rather than spreading it across multiple BackgroundServices.
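
One practical defence against the silent-miss problem is to keep every subscription in a single extension method, so a new event type has exactly one place it must be added (a sketch; names are illustrative):

public static class BusSubscriptions
{
    // The single authoritative list of events this service consumes.
    // A missing Subscribe becomes a code review problem, not a production mystery.
    public static async Task SubscribeAllAsync(this IBus bus)
    {
        await bus.Subscribe<OrderCreatedEvent>();
        await bus.Subscribe<PaymentProcessedEvent>();
        await bus.Subscribe<PeppolValidationFailedEvent>();
        await bus.Subscribe<ZipReadyEvent>();
    }
}

// Startup becomes a single call:
await app.Services.GetRequiredService<IBus>().SubscribeAllAsync();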


Handlers — Writing Business Logic

Your Wrapper

public interface IMessageHandler<T>
{
    Task HandleAsync(T message);
}

public class OrderCreatedHandler : IMessageHandler<OrderCreatedEvent>
{
    private readonly ILogger<OrderCreatedHandler> _logger;

    public OrderCreatedHandler(ILogger<OrderCreatedHandler> logger)
        => _logger = logger;

    public Task HandleAsync(OrderCreatedEvent message)
    {
        _logger.LogInformation("Order received {OrderId}", message.OrderId);
        return Task.CompletedTask;
    }
}

MassTransit

public class OrderCreatedConsumer : IConsumer<OrderCreatedEvent>
{
    private readonly ILogger<OrderCreatedConsumer> _logger;

    public OrderCreatedConsumer(ILogger<OrderCreatedConsumer> logger)
        => _logger = logger;

    public Task Consume(ConsumeContext<OrderCreatedEvent> context)
    {
        var message = context.Message;
        _logger.LogInformation("Order received {OrderId}", message.OrderId);
        return Task.CompletedTask;
    }
}

Rebus

public class OrderCreatedHandler : IHandleMessages<OrderCreatedEvent>
{
    private readonly ILogger<OrderCreatedHandler> _logger;

    public OrderCreatedHandler(ILogger<OrderCreatedHandler> logger)
        => _logger = logger;

    public Task Handle(OrderCreatedEvent message)
    {
        _logger.LogInformation("Order received {OrderId}", message.OrderId);
        // Just business logic — no ACK, no NACK, no retry routing
        // Throw on failure — Rebus handles the rest
        return Task.CompletedTask;
    }
}

The interface name differs — IHandleMessages<T> instead of IConsumer<T> — but the concept is identical. Pure business logic, no infrastructure concern. ACK on success, retry on exception, all handled by Rebus.
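
When a handler does need infrastructure details such as the message id or headers, Rebus exposes them through an ambient message context instead of a context parameter. A sketch using MessageContext.Current from Rebus.Pipeline (class name illustrative):

using Rebus.Messages;
using Rebus.Pipeline;

public class OrderCreatedContextAwareHandler : IHandleMessages<OrderCreatedEvent>
{
    private readonly ILogger<OrderCreatedContextAwareHandler> _logger;

    public OrderCreatedContextAwareHandler(ILogger<OrderCreatedContextAwareHandler> logger)
        => _logger = logger;

    public Task Handle(OrderCreatedEvent message)
    {
        // Ambient context for the message currently being handled
        var context   = MessageContext.Current;
        var messageId = context.Headers[Headers.MessageId];

        _logger.LogInformation("Handling message {MessageId}", messageId);
        return Task.CompletedTask;
    }
}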

One important difference — one service can have multiple handlers for the same message type in Rebus:

public class OrderCreatedAuditHandler : IHandleMessages<OrderCreatedEvent>
{
    private readonly IAuditLog _auditLog;

    public OrderCreatedAuditHandler(IAuditLog auditLog) => _auditLog = auditLog;

    public async Task Handle(OrderCreatedEvent message)
    {
        await _auditLog.WriteAsync(message);
    }
}

public class OrderCreatedNotificationHandler : IHandleMessages<OrderCreatedEvent>
{
    private readonly IEmailService _emailService;

    public OrderCreatedNotificationHandler(IEmailService emailService)
        => _emailService = emailService;

    public async Task Handle(OrderCreatedEvent message)
    {
        await _emailService.SendConfirmationAsync(message);
    }
}

// Both registered — both called when OrderCreatedEvent arrives
services.AddRebusHandler<OrderCreatedAuditHandler>();
services.AddRebusHandler<OrderCreatedNotificationHandler>();

One message arrival — Rebus calls both handlers sequentially. Your wrapper would have required two separate consumers on two separate queues. MassTransit would also require two separate consumers. Rebus dispatches internally.
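
The inverse also works: a single handler class can implement IHandleMessages<T> for several message types, which fits the one-queue model naturally (sketch):

public class OrderLifecycleHandler :
    IHandleMessages<OrderCreatedEvent>,
    IHandleMessages<PaymentProcessedEvent>
{
    public Task Handle(OrderCreatedEvent message)
    {
        // react to order creation
        return Task.CompletedTask;
    }

    public Task Handle(PaymentProcessedEvent message)
    {
        // react to payment completion
        return Task.CompletedTask;
    }
}

services.AddRebusHandler<OrderLifecycleHandler>();  // one registration covers both types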


Publishing Events

Your Wrapper

await _bus.PublishAsync(new OrderCreatedEvent { OrderId = orderId, ... });

// Under the hood:
_channel.BasicPublish(
    exchange:   "domain.events",
    routingKey: "OrderCreatedEvent",
    props, body
);

Rebus

// Inject IBus
public class OrdersController : ControllerBase
{
    private readonly IBus _bus;

    public OrdersController(IBus bus) => _bus = bus;

    [HttpPost]
    public async Task<IActionResult> CreateOrder([FromBody] CreateOrderRequest req)
    {
        var orderId = Guid.NewGuid();

        await _bus.Publish(new OrderCreatedEvent
        {
            OrderId      = orderId,
            CustomerName = req.CustomerName,
            Total        = req.Total
        });

        return Ok(new { OrderId = orderId });
    }
}

Rebus publishes to RebusTopics exchange with a routing key derived from the message type. All queues that subscribed to OrderCreatedEvent receive a copy. Identical to your wrapper conceptually — different exchange name, same mechanism.
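
Under the hood this maps to roughly the same raw publish your wrapper performed. A sketch (not actual Rebus internals; the routing key Rebus derives is typically the type's short assembly-qualified name rather than the bare class name):

// Approximately what Rebus's Publish does
_channel.BasicPublish(
    exchange:   "RebusTopics",
    routingKey: "OrderCreatedEvent",   // derived from the message type
    basicProperties: props,            // carries rbs2-* headers: message id, type, content type
    body: body
);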


Sending Commands

Your Wrapper

await _bus.SendAsync(
    new ProcessInvoiceCommand(orderId, total, customerName),
    destination: "invoice-service"
);

// Under the hood:
_channel.BasicPublish(
    exchange:   "domain.commands",
    routingKey: "invoice-service",
    props, body
);

Rebus

await _bus.Send(new ProcessInvoiceCommand
{
    OrderId      = orderId,
    Total        = total,
    CustomerName = customerName
});

Rebus knows where to send it via the routing configuration you declared on startup:

.Routing(r => r.TypeBased()
    .Map<ProcessInvoiceCommand>("invoice-service")  // type → destination
)

Under the hood Rebus publishes to RebusDirect exchange with routing key invoice-service. Invoice service queue is bound to RebusDirect with that routing key. Identical to your wrapper — different exchange name, same mechanism.
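
If you need to override the static routing for a single send, the equivalent of your wrapper's explicit destination parameter, Rebus's advanced API accepts a destination address directly:

// Explicit destination, bypassing the TypeBased routing map
await _bus.Advanced.Routing.Send("invoice-service", new ProcessInvoiceCommand
{
    OrderId      = orderId,
    Total        = total,
    CustomerName = customerName
});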


ACK and NACK

Your Wrapper

consumer.Received += async (_, ea) =>
{
    try
    {
        var message = Deserialize(ea.Body);              // your deserialisation helper
        await _handler.HandleAsync(message);
        _channel.BasicAck(ea.DeliveryTag, false);        // explicit ACK
    }
    catch (Exception ex)
    {
        _channel.BasicPublish(...);                       // explicit DLQ routing
        _channel.BasicAck(ea.DeliveryTag, false);        // explicit ACK original
    }
};

Rebus

You never write ACK or NACK code. Rebus wraps every handler call:

Handler returns normally    →  BasicAck automatically
Handler throws exception    →  retry policy applies
                               on max retries exceeded → move to _error queue
                               BasicAck after routing to error

Identical to MassTransit in this regard. The delivery tag management, the ACK after routing to DLQ, the always-ACK-original pattern you built — Rebus owns all of it.


Retry

This is where Rebus and MassTransit are similar but Rebus has one extra concept.

Your Wrapper

// TTL queues — one per delay tier
_channel.QueueDeclare("order-created.retry", durable: true, arguments: {
    "x-message-ttl": RetryDelayMs,
    "x-dead-letter-exchange": Exchange,
    "x-dead-letter-routing-key": RoutingKey
});

// Header tracking
var retryCount = GetRetryCount(ea.BasicProperties);
if (retryCount < MaxRetries)
    _channel.BasicPublish("", RetryQueue, props, ea.Body);

Rebus — First Level Retry

In memory — same as MassTransit, no extra queues:

.Options(o => o.RetryStrategy(maxDeliveryAttempts: 5))

Five attempts in memory. If all fail — message moves to _error queue. Simple. No broker round trip. No TTL queues. No header management.

Rebus — Second Level Retry

This is where Rebus stands out. MassTransit offers fault consumers as an alternative, but Rebus makes second level retry a first class concept:

.Options(o => o.RetryStrategy(
    maxDeliveryAttempts:      3,    // first level — in memory, fast
    secondLevelRetriesEnabled: true // second level — deferred, different handler
))

With second level retries enabled:

First level:
  attempt 1  fail
  attempt 2  fail
  attempt 3  fail   max first level attempts
  message deferred to second level

Second level:
  different handler type IHandleMessages<IFailed<OrderCreatedEvent>>
  can do something different  compensate, alert, store, escalate
  if second level also fails  _error queue

// Second level handler — receives failed message with exception info
public class OrderCreatedFailedHandler : IHandleMessages<IFailed<OrderCreatedEvent>>
{
    private readonly ILogger<OrderCreatedFailedHandler> _logger;
    private readonly IAlertService _alertService;

    public OrderCreatedFailedHandler(
        ILogger<OrderCreatedFailedHandler> logger, IAlertService alertService)
        => (_logger, _alertService) = (logger, alertService);

    public async Task Handle(IFailed<OrderCreatedEvent> failedMessage)
    {
        var original   = failedMessage.Message;
        var exceptions = failedMessage.Exceptions;

        _logger.LogError(
            "OrderCreatedEvent failed all retries OrderId={OrderId} Error={Error}",
            original.OrderId, exceptions.Last().Message);

        await _alertService.NotifyOpsTeamAsync(original, exceptions);
    }
}

services.AddRebusHandler<OrderCreatedFailedHandler>();

Your wrapper equivalent was the explicit DLQ publish in the catch block after max retries. MassTransit equivalent is the Fault consumer. Rebus makes it a typed handler with the full exception context built in.
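
A common second-level pattern is delayed re-delivery: defer the failed message so it arrives again later and gets a fresh round of first-level retries. A sketch (handler name is illustrative; Defer needs a timeout capability such as .Timeouts(t => t.StoreInMemory()) or a transport with native delay, and production code should cap the number of deferrals):

public class OrderCreatedRetryLaterHandler : IHandleMessages<IFailed<OrderCreatedEvent>>
{
    private readonly IBus _bus;

    public OrderCreatedRetryLaterHandler(IBus bus) => _bus = bus;

    public async Task Handle(IFailed<OrderCreatedEvent> failedMessage)
    {
        // Defer the current transport message 30 seconds — it is re-delivered
        // afterwards and runs through first-level retries again
        await _bus.Advanced.TransportMessage.Defer(TimeSpan.FromSeconds(30));
    }
}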


Dead Letter Queue

Your Wrapper

// Three things manually declared
_channel.ExchangeDeclare("dlx.exchange", ExchangeType.Direct, durable: true);
_channel.QueueDeclare($"{QueueName}.dlq", durable: true);
_channel.QueueBind($"{QueueName}.dlq", "dlx.exchange", $"{QueueName}.dlq");

// Explicit routing in catch block
_channel.BasicPublish("dlx.exchange", DlqQueue, props, ea.Body);

Rebus

Zero configuration. One _error queue per service — not per message type:

notification-service         main queue, all message types
notification-service_error   one DLQ for entire service

After max retries — message moves to notification-service_error automatically. Contains original message plus full exception chain.

The difference from MassTransit:

MassTransit:  _error queue per consumer endpoint (fine grained)
Rebus:        _error queue per service (one queue, all failures)

For a small service this is fine. For a large service handling many message types, it becomes harder to tell which type is failing from one combined error queue (see the diagnostic sketch below).
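
Failed messages do still carry their type in Rebus's rbs2-* headers, so a quick diagnostic pass over the combined error queue can break failures down by type. A raw-client sketch (the rbs2-msg-type header name is assumed from Rebus's header convention; connection is an open RabbitMQ.Client IConnection, and System.Text is imported for Encoding):

// Peek everything in the shared error queue and count failures per message type.
// BasicGet with autoAck:false leaves messages unacked, so the loop terminates;
// the final BasicNack(multiple: true) requeues them all untouched.
using var channel = connection.CreateModel();
var counts = new Dictionary<string, int>();
ulong lastTag = 0;

BasicGetResult? result;
while ((result = channel.BasicGet("notification-service_error", autoAck: false)) != null)
{
    lastTag = result.DeliveryTag;
    var type = result.BasicProperties.Headers.TryGetValue("rbs2-msg-type", out var raw)
        ? Encoding.UTF8.GetString((byte[])raw)
        : "(unknown)";
    counts[type] = counts.GetValueOrDefault(type) + 1;
}

if (lastTag > 0)
    channel.BasicNack(lastTag, multiple: true, requeue: true);

foreach (var (type, count) in counts)
    Console.WriteLine($"{type}: {count}");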


Scaling and Parallel Consumers

Your Wrapper

// One consumer, one channel, prefetchCount: 1
_channel.BasicQos(0, prefetchCount: 1, global: false);

// Multiple instances = competing consumers
// Just run more instances, broker round robins

Rebus

Two settings — workers and parallelism:

.Options(o => o
    .SetNumberOfWorkers(3)      // number of worker threads pulling from queue
    .SetMaxParallelism(10)      // max concurrent handlers across all workers
)

Workers:      how many threads pulling messages from the queue
              each worker has its own channel — same as your per-consumer channel model
              3 workers = 3 channels = 3 concurrent pulls from broker

Parallelism:  how many handlers can run at the same time across all workers
              MaxParallelism: 10 means at most 10 handlers running simultaneously
              IO bound work — set higher than workers
              CPU bound work — set equal to workers or lower

With SetNumberOfWorkers(3) and SetMaxParallelism(10):

Worker 1 → pulls message → handler 1 starts (IO, awaits)
                         → handler 2 starts (IO, awaits)
                         → handler 3 starts (IO, awaits)
Worker 2 → pulls message → handler 4 starts
Worker 3 → pulls message → handler 5 starts
... up to 10 concurrent handlers

Competing consumers still work the same as your wrapper — run more instances, broker distributes. Workers and parallelism control concurrency within one instance.

One queue per service means all message types compete for the same workers:

10 OrderCreatedEvent messages arrive
3 PeppolValidationFailedEvent messages arrive

All 13 compete for the same worker pool
A slow OrderCreatedEvent handler can delay PeppolValidationFailedEvent processing

Your wrapper avoided this — separate queues, separate consumers, separate throughput. MassTransit also avoids this. Rebus accepts this tradeoff for simplicity.


The Blocking Risk In Practice

Worth understanding concretely since your system has this exact scenario:

notification-service handles:
  PeppolValidationFailedEvent  → HTTP call to external notification API (slow, 2s)
  ZipReadyEvent                → quick DB write (fast, 10ms)

Without enough workers:
  5 PeppolValidationFailedEvent messages arrive, filling all workers
  ZipReadyEvent waits behind them
  customer ZIP notification delayed unnecessarily

Solution — enough workers and parallelism that IO bound tasks do not block:

.Options(o => o
    .SetNumberOfWorkers(5)
    .SetMaxParallelism(20)   // generous parallelism for IO bound handlers
)

Or — accept this limitation and put high volume or slow message types on separate services with separate queues. This is actually the natural boundary driver for Rebus service design.


Full Setup Showing Everything Together

// Program.cs
services.AddRebus((configure, provider) => configure
    .Transport(t => t.UseRabbitMq(
        "amqp://guest:guest@localhost:5672",
        "notification-service"
    ))
    .Options(o => o
        .RetryStrategy(
            maxDeliveryAttempts:      3,
            secondLevelRetriesEnabled: true
        )
        .SetNumberOfWorkers(5)
        .SetMaxParallelism(15)
    )
    .Routing(r => r.TypeBased()
        .Map<ProcessInvoiceCommand>("invoice-service")
        .Map<SendPeppolCommand>("peppol-service")
    )
    .Logging(l => l.MicrosoftExtensionsLogging(provider.GetRequiredService<ILoggerFactory>()))
);

// Register all handlers
services.AddRebusHandler<OrderCreatedHandler>();
services.AddRebusHandler<OrderCreatedAuditHandler>();         // second handler, same event
services.AddRebusHandler<PaymentProcessedHandler>();
services.AddRebusHandler<PeppolValidationFailedHandler>();
services.AddRebusHandler<OrderCreatedFailedHandler>();        // second level retry handler
services.AddRebusHandler<PaymentProcessedFailedHandler>();

// Startup — explicit subscriptions required
var app = builder.Build();

using (var scope = app.Services.CreateScope())
{
    var bus = scope.ServiceProvider.GetRequiredService<IBus>();

    await bus.Subscribe<OrderCreatedEvent>();
    await bus.Subscribe<PaymentProcessedEvent>();
    await bus.Subscribe<PeppolValidationFailedEvent>();
    await bus.Subscribe<ZipReadyEvent>();
    // Miss any of these — messages never arrive
}

What Rebus Creates In RabbitMQ

Exchanges:
  RebusTopics   (Topic)    all published events
  RebusDirect   (Direct)   all sent commands

Queues:
  notification-service           one queue, all message types
  notification-service_error     one DLQ, all failures

Bindings (created by Subscribe calls):
  RebusTopics  notification-service  routing key: OrderCreatedEvent
  RebusTopics  notification-service  routing key: PaymentProcessedEvent
  RebusTopics  notification-service  routing key: PeppolValidationFailedEvent
  RebusTopics  notification-service  routing key: ZipReadyEvent
  RebusDirect  invoice-service       routing key: invoice-service
  RebusDirect  peppol-service        routing key: peppol-service

Compare to your wrapper — same two global exchanges, same binding pattern. Rebus just automates the binding creation via Subscribe calls instead of your manual QueueBind calls in ExecuteAsync.


The Complete Three-Way Comparison

Concern               Your Wrapper                 MassTransit               Rebus
─────────────────────────────────────────────────────────────────────────────────────────────────
Exchanges             2 global (yours named)       1 per message type        2 global (RebusTopics/Direct)
Queues                1 per msg type per consumer  1 per consumer/msg type   1 per service
Queue declaration     Manual in ExecuteAsync       ConfigureEndpoints()      UseRabbitMq() + Subscribe()
Subscription          Manual QueueBind             Automatic                 Explicit Subscribe() required
Handlers              IMessageHandler<T>           IConsumer<T>              IHandleMessages<T>
Multiple handlers     Separate consumers           Separate consumers        Multiple on same queue
ACK                   Explicit BasicAck            Automatic                 Automatic
Retry                 TTL queues + headers         In memory, policy         In memory, first + second level
DLQ                   Manual, per queue            Auto _error per endpoint  Auto _error per service
Fault handling        Manual in catch block        Fault<T> consumer         IFailed<T> handler
Workers               1 per BackgroundService      ConcurrentMessageLimit    SetNumberOfWorkers
Parallelism           prefetchCount                PrefetchCount             SetMaxParallelism
Blocking risk         None (separate queues)       None (separate queues)    Yes (shared queue)
License               N/A (yours)                  Free v8, paid v9          MIT, free forever
Topology visibility   Full (you see everything)    Hidden, convention        Partial (you see exchanges)

When To Choose Rebus Over MassTransit

Choose Rebus when:
  MIT license matters                 permanent free, no commercial risk
  Simple topology preferred           one queue per service is easier to reason about
  Multiple handlers per event         built in, no extra queues
  Second level retry needed           first class concept, cleaner than fault consumers
  Small services, few message types   one queue is fine, blocking risk minimal
  You want explicit control           Subscribe calls make bindings visible

Choose MassTransit v8 when:
  Complex saga orchestration          MassTransit state machines are excellent
  Fine grained isolation needed       separate queue per message type
  Richer ecosystem wanted             more middleware, more integrations
  Already familiar with it            team knowledge is real value
  v8 support until end of 2026        reassess when support ends

Stick with your wrapper when:
  Full control required               no framework constraints
  Learning the fundamentals           which you have now done
  Minimal dependencies preferred      no external package surface area
  Custom topology needed              framework cannot express it

The Core Insight

Your wrapper, MassTransit, and Rebus all solve identical problems. They sit at different points on the same spectrum:

Raw RabbitMQ client       maximum control, maximum ceremony
Your wrapper              reduced ceremony, your conventions, still explicit
Rebus                     reduced further, explicit where it matters, simple topology
MassTransit               minimum ceremony, maximum convention, maximum automation

Understanding your wrapper is understanding all three. When MassTransit calls ConfigureEndpoints — it is running your QueueDeclare and QueueBind code. When Rebus calls Subscribe — it is running your QueueBind code. When either library catches an exception and retries — it is running your catch block retry logic. The difference is they have been doing it for years across thousands of production systems and have handled every edge case you would have discovered painfully over time.