AverAzure — Session Context

What we built

A single .NET 8 Web API project called AverAzure that demonstrates:

  • Azure Blob Storage via IDocumentStore abstraction
  • RabbitMQ messaging via a custom wrapper
  • Structured logging via Serilog → Seq
  • Scalar UI (Swashbuckle generates spec, Scalar renders UI)
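
A minimal sketch of how Program.cs might wire these together (the package names are the usual ones for this stack: Serilog.AspNetCore, Serilog.Sinks.Seq, Swashbuckle.AspNetCore, Scalar.AspNetCore; the real file may differ):

using Scalar.AspNetCore;
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Structured logging: console locally, Seq for querying.
builder.Host.UseSerilog((ctx, cfg) => cfg
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .WriteTo.Seq(ctx.Configuration["Seq:ServerUrl"] ?? "http://localhost:5341"));

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();        // Swashbuckle generates the spec

var app = builder.Build();

app.UseSwagger();                        // serves /swagger/v1/swagger.json
app.MapScalarApiReference(o =>           // Scalar renders the UI at /scalar/v1,
    o.OpenApiRoutePattern =              // pointed at the Swashbuckle spec
        "/swagger/{documentName}/swagger.json");
app.MapControllers();
app.MapGet("/health", () => Results.Ok());

app.Run();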

Project structure

AverAzure/
├── Abstractions/
│   ├── IDocumentStore.cs
│   ├── IInvoiceStore.cs
│   ├── IImageStore.cs
│   ├── IMessageBus.cs
│   └── MessagingContracts.cs        # IIntegrationEvent, ICommand, base records
├── Messaging/
│   ├── Events/
│   │   └── InvoiceUploadedEvent.cs
│   ├── MessageEnvelope.cs
│   ├── RabbitMqConnection.cs
│   ├── RabbitMqConsumer.cs          # Base BackgroundService with retry + DLQ
│   └── RabbitMqMessageBus.cs
├── Consumers/
│   └── InvoiceUploadedConsumer.cs   # Handler + Consumer for InvoiceUploadedEvent
├── Storage/
│   ├── DocumentStoreOptions.cs
│   ├── DocumentStoreFactory.cs
│   ├── AzureBlobDocumentStore.cs
│   ├── InvoiceStore.cs              # Wraps IDocumentStore for invoices container
│   └── ImageStore.cs                # Wraps IDocumentStore for originals/thumbnails/web
├── Extensions/
│   ├── ConfigurationBuilderExtensions.cs   # AddSecretsProvider()
│   └── ServiceCollectionExtensions.cs      # AddDocumentStores(), AddMessaging()
├── Controllers/
│   ├── InvoicesController.cs
│   └── ImagesController.cs
├── Program.cs
├── appsettings.json
├── appsettings.Development.json
├── Dockerfile
├── docker-compose.yml
├── .env.example
└── .gitignore
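
MessagingContracts.cs and the event are summarized in the tree but not shown; a minimal sketch of the likely shapes, where EventId and OccurredAt are assumed member names (the event's payload fields match the structured log fields listed under the event flow below):

public interface IIntegrationEvent
{
    Guid EventId { get; }
    DateTimeOffset OccurredAt { get; }
}

// Published on invoice upload (see the event flow section below).
public record InvoiceUploadedEvent(
    string InvoiceName,
    long FileSizeBytes,
    DateTimeOffset UploadedAt) : IIntegrationEvent
{
    public Guid EventId { get; init; } = Guid.NewGuid();
    public DateTimeOffset OccurredAt { get; init; } = DateTimeOffset.UtcNow;
}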

API endpoints

  • GET /health
  • POST /api/invoices — uploads a file to the invoices container and publishes InvoiceUploadedEvent (sketched after this list)
  • GET /api/invoices — list all invoices
  • GET /api/invoices/{name} — download invoice by name
  • POST /api/images — uploads to the originals container; an Azure Function then processes it
  • GET /api/images — list all originals
  • GET /api/images/{name}/thumbnail — fetch 150x150 thumbnail (generated by Azure Function)
  • GET /api/images/{name}/web — fetch 800x800 web version (generated by Azure Function)
  • GET /scalar/v1 — Scalar UI
  • GET /swagger/v1/swagger.json — OpenAPI spec
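
The upload endpoint is the one worth sketching; a minimal version, where SaveAsync and PublishAsync are assumed method names on IInvoiceStore and IMessageBus:

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/invoices")]
public class InvoicesController : ControllerBase
{
    private readonly IInvoiceStore _store;
    private readonly IMessageBus _bus;

    public InvoicesController(IInvoiceStore store, IMessageBus bus)
        => (_store, _bus) = (store, bus);

    // POST /api/invoices: store the blob first, then publish the event,
    // so a consumer never sees an event for a file that isn't there yet.
    [HttpPost]
    public async Task<IActionResult> Upload(IFormFile file, CancellationToken ct)
    {
        await using var stream = file.OpenReadStream();
        await _store.SaveAsync(file.FileName, stream, ct);     // assumed name

        await _bus.PublishAsync(
            new InvoiceUploadedEvent(file.FileName, file.Length, DateTimeOffset.UtcNow), ct);

        return Accepted(new { file.FileName, file.Length });
    }
}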

RabbitMQ topology

  • Exchange: domain.events (Topic) — for events
  • Exchange: domain.commands (Direct) — for commands
  • Queue: aver.invoice-uploaded.queue — bound to domain.events, routing key InvoiceUploadedEvent
  • Retry queue: aver.invoice-uploaded.queue.retry — 30s TTL, dead letters back to main queue
  • DLQ: aver.invoice-uploaded.queue.dlq — after 3 failed attempts
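
A sketch of how this topology could be declared with the RabbitMQ.Client 6.x API; the exchange, queue, and routing-key names come from the list above, while the 3-attempt counting is assumed to live in the consumer rather than the broker:

using RabbitMQ.Client;

static class Topology
{
    public static void Declare(IModel channel)
    {
        channel.ExchangeDeclare("domain.events", ExchangeType.Topic, durable: true);
        channel.ExchangeDeclare("domain.commands", ExchangeType.Direct, durable: true);

        // Main queue: rejected messages dead-letter into the retry queue
        // via the default exchange (routing key = target queue name).
        channel.QueueDeclare("aver.invoice-uploaded.queue", durable: true,
            exclusive: false, autoDelete: false,
            arguments: new Dictionary<string, object>
            {
                ["x-dead-letter-exchange"] = "",
                ["x-dead-letter-routing-key"] = "aver.invoice-uploaded.queue.retry"
            });
        channel.QueueBind("aver.invoice-uploaded.queue", "domain.events",
            routingKey: "InvoiceUploadedEvent");

        // Retry queue: no consumer; messages sit for 30s, then dead-letter
        // back to the main queue for another attempt.
        channel.QueueDeclare("aver.invoice-uploaded.queue.retry", durable: true,
            exclusive: false, autoDelete: false,
            arguments: new Dictionary<string, object>
            {
                ["x-message-ttl"] = 30_000,
                ["x-dead-letter-exchange"] = "",
                ["x-dead-letter-routing-key"] = "aver.invoice-uploaded.queue"
            });

        // DLQ: the consumer publishes here once a message has failed 3 times.
        channel.QueueDeclare("aver.invoice-uploaded.queue.dlq", durable: true,
            exclusive: false, autoDelete: false, arguments: null);
    }
}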

Event flow on invoice upload

  1. POST /api/invoices — file uploaded to Azure Blob Storage (invoices container)
  2. InvoiceUploadedEvent published to domain.events exchange
  3. InvoiceUploadedConsumer receives it from aver.invoice-uploaded.queue
  4. InvoiceUploadedHandler logs structured fields (InvoiceName, FileSizeBytes, UploadedAt)
  5. Message ACKed and removed from queue
  6. All log entries visible in Seq at http://localhost:8081
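
Step 4 is where the structured fields come from; a sketch of the handler (the method shape is assumed, the property names are the ones listed above):

using Microsoft.Extensions.Logging;

public class InvoiceUploadedHandler
{
    private readonly ILogger<InvoiceUploadedHandler> _logger;

    public InvoiceUploadedHandler(ILogger<InvoiceUploadedHandler> logger)
        => _logger = logger;

    public Task HandleAsync(InvoiceUploadedEvent evt, CancellationToken ct)
    {
        // Message-template placeholders become first-class, queryable
        // properties in Seq rather than flat text.
        _logger.LogInformation(
            "Invoice {InvoiceName} uploaded ({FileSizeBytes} bytes) at {UploadedAt}",
            evt.InvoiceName, evt.FileSizeBytes, evt.UploadedAt);
        return Task.CompletedTask;
    }
}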

Image flow

  1. POST /api/images — file uploaded to Azure Blob Storage (originals container)
  2. Azure Function (image-processor) triggered via Blob trigger on originals/{name}
  3. Function generates 150x150 thumbnail → thumbnails container
  4. Function generates 800x800 web version → web container
  5. GET /api/images/{name}/thumbnail fetches from thumbnails container
  6. GET /api/images/{name}/web fetches from web container
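
The function lives outside this repo (image-processor); a sketch of its probable shape, assuming the isolated worker model and ImageSharp for resizing; only the trigger path and the two output containers come from the flow above:

using Microsoft.Azure.Functions.Worker;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

// Return type carrying one output binding per derived size.
public class ProcessedImages
{
    [BlobOutput("thumbnails/{name}")] public byte[]? Thumbnail { get; set; }
    [BlobOutput("web/{name}")]        public byte[]? Web { get; set; }
}

public class ImageProcessor
{
    [Function("ProcessImage")]
    public ProcessedImages Run(
        [BlobTrigger("originals/{name}")] byte[] input, string name)
    {
        return new ProcessedImages
        {
            Thumbnail = ResizeTo(input, 150),
            Web = ResizeTo(input, 800)
        };
    }

    // ResizeMode.Max fits within a max x max box, preserving aspect ratio;
    // the real function may crop to an exact square instead.
    private static byte[] ResizeTo(byte[] source, int max)
    {
        using var image = Image.Load(source);
        image.Mutate(x => x.Resize(new ResizeOptions
        {
            Mode = ResizeMode.Max,
            Size = new Size(max, max)
        }));
        using var ms = new MemoryStream();
        image.SaveAsPng(ms);
        return ms.ToArray();
    }
}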

Storage abstraction

  • IDocumentStore — low-level: UploadAsync, DownloadAsync, DeleteAsync, ListAsync
  • IInvoiceStore — wraps IDocumentStore, intent-named methods for invoices container
  • IImageStore — wraps three IDocumentStore instances (originals, thumbnails, web)
  • DocumentStoreFactory — switches on the Provider config value ("Azure" implemented; "S3" and "Minio" are stubs)
  • AzureBlobDocumentStore — uses DefaultAzureCredential
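
Sketches of the two load-bearing pieces; the four method names come from the list above, the parameter shapes and the options record are assumed:

// Low-level store: one instance per blob container.
public interface IDocumentStore
{
    Task UploadAsync(string name, Stream content, CancellationToken ct = default);
    Task<Stream> DownloadAsync(string name, CancellationToken ct = default);
    Task DeleteAsync(string name, CancellationToken ct = default);
    Task<IReadOnlyList<string>> ListAsync(CancellationToken ct = default);
}

public sealed record DocumentStoreOptions(
    string Provider, string AccountUrl, string ContainerName);

// Factory switches on the Provider config value; only "Azure" is live.
public static class DocumentStoreFactory
{
    public static IDocumentStore Create(DocumentStoreOptions options) =>
        options.Provider switch
        {
            "Azure" => new AzureBlobDocumentStore(options),
            "S3" or "Minio" => throw new NotImplementedException(
                $"{options.Provider} is a stub"),
            _ => throw new InvalidOperationException(
                $"Unknown provider '{options.Provider}'")
        };
}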

Config structure (appsettings.json)

{
  "Secrets": { "Provider": "None" },
  "DocumentStore": {
    "Invoices": { "Provider": "Azure", "AccountUrl": "...", "ContainerName": "invoices" },
    "Images": {
      "Originals":  { "Provider": "Azure", "AccountUrl": "...", "ContainerName": "originals" },
      "Thumbnails": { "Provider": "Azure", "AccountUrl": "...", "ContainerName": "thumbnails" },
      "Web":        { "Provider": "Azure", "AccountUrl": "...", "ContainerName": "web" }
    }
  },
  "RabbitMq": { "ConnectionString": "amqp://guest:guest@rabbitmq:5672" },
  "Seq": { "ServerUrl": "http://seq:5341" }
}
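
A sketch of how AddDocumentStores() might bind these sections; only the section paths come from the JSON above, the internals are assumed (DocumentStoreOptions as sketched earlier):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddDocumentStores(
        this IServiceCollection services, IConfiguration config)
    {
        // One bound options instance per container, one store per options.
        var invoices = config.GetSection("DocumentStore:Invoices")
                             .Get<DocumentStoreOptions>()!;
        services.AddSingleton<IInvoiceStore>(
            _ => new InvoiceStore(DocumentStoreFactory.Create(invoices)));

        // IImageStore follows the same pattern with the three sections
        // DocumentStore:Images:Originals / :Thumbnails / :Web.
        return services;
    }
}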

Credentials and identity

  • DefaultAzureCredential used throughout — no connection strings in code
  • On Azure infra (VM, Function App) → Managed Identity resolves automatically
  • In Docker containers (local or VPS) → Service Principal via env vars:
    • AZURE_TENANT_ID
    • AZURE_CLIENT_ID
    • AZURE_CLIENT_SECRET
  • SP created with the Storage Blob Data Contributor role, scoped to the learn_week_1 resource group
  • Key Vault wiring is in place (AddSecretsProvider()) but Secrets:Provider is set to "None" for now — switching to "AzureKeyVault" is a single config change
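
In code this reduces to a single constructor call; DefaultAzureCredential walks its chain (environment variables first, then managed identity), so the same line works everywhere. The account URL below is the standard form for averblobstore, assumed here:

using Azure.Identity;
using Azure.Storage.Blobs;

// On a VM or Function App the chain resolves to managed identity;
// in Docker, AZURE_TENANT_ID / AZURE_CLIENT_ID / AZURE_CLIENT_SECRET
// make it resolve to the service principal. No secret in code either way.
var client = new BlobServiceClient(
    new Uri("https://averblobstore.blob.core.windows.net"),
    new DefaultAzureCredential());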

Docker compose stack

Three services:

  • api — .NET 8 API, port 8080:8080, depends on the rabbitmq healthcheck
  • rabbitmq — rabbitmq:3.13-management, port 15672:15672 (management UI only; 5672 stays internal)
  • seq — datalust/seq, ports 5341:5341 (ingestion) and 8081:80 (UI)

Named volumes are used for RabbitMQ and Seq data; they are required for rootless Podman permission handling. SEQ_FIRSTRUN_NOAUTHENTICATION: true skips the password requirement for local dev.
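
The relevant fragments, sketched from the description above (the healthcheck command and volume mount paths are assumptions; images, ports, and the Seq flag come from this section):

services:
  api:
    build: .
    ports: ["8080:8080"]
    depends_on:
      rabbitmq:
        condition: service_healthy
  rabbitmq:
    image: rabbitmq:3.13-management
    ports: ["15672:15672"]            # 5672 stays internal to the network
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 10s
      retries: 5
    volumes: ["rabbitmq-data:/var/lib/rabbitmq"]
  seq:
    image: datalust/seq
    environment:
      ACCEPT_EULA: "Y"
      SEQ_FIRSTRUN_NOAUTHENTICATION: "true"
    ports: ["5341:5341", "8081:80"]
    volumes: ["seq-data:/data"]

volumes:
  rabbitmq-data:
  seq-data: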

Azure resources in use

  • Subscription: b089d18c-cba7-4719-af2f-ff9ab3f8b56e
  • Resource Group: learn_week_1
  • Storage Account: averblobstore (West India)
    • Containers: invoices, originals, thumbnails, web
  • Key Vault: peppol-vault (West India) — not active yet, wiring in place
  • Function App: image-processor (East US, Consumption Windows)
    • Blob trigger on originals/{name}
    • Managed Identity: aa7657b1-b467-4c45-8c4f-7684396b9bd1
    • Roles: Storage Blob Data Contributor, Queue Data Contributor, Table Data Contributor on averblobstore

Proved working

  • Local (Podman on Fedora) — all containers up, full invoice flow, image upload and thumbnail fetch
  • VPS (real Docker alongside Coolify) — cloned via SSH key, full flow proved via curl

What's next

  • docker compose down on VPS
  • docker swarm init
  • Convert docker-compose.yml to docker-stack.yml
  • docker stack deploy
  • Then CI/CD + interview prep
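
For the conversion step, the main delta is that swarm ignores build: and the conditional form of depends_on:, so the API needs a prebuilt image and a deploy: section; a sketch, with the image name as a placeholder:

# docker-stack.yml (fragment) - deploy keys replace compose-only ones
services:
  api:
    image: registry.example.com/averazure:latest   # placeholder registry
    ports: ["8080:8080"]
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

# Then, on the VPS:
#   docker compose down
#   docker swarm init
#   docker stack deploy -c docker-stack.yml averazure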