Azure Functions — Complete Reference¶
What it is¶
A serverless compute platform. You write a method, decorate it with a trigger attribute, and Azure handles everything else — hosting, scaling, retries, and infrastructure. You pay per execution, not per idle second. Scales to zero when not in use.
The mental model¶
A Function is just a message consumer where the broker is managed by Azure:
RabbitMQ consumer → you manage the broker, always-on process
Azure Function → Azure manages the broker, runs on demand
AWS Lambda → AWS manages the broker, runs on demand
Same pattern at every cloud provider. The trigger mechanism is the abstraction over the queue. [BlobTrigger] is Azure saying "subscribe to this queue of blob events" without you managing the queue.
Under the hood — Blob trigger on Consumption plan literally uses Azure Queue Storage to pass blob event notifications to your Function. That's what AzureWebJobsStorage is used for internally.
Hosting plans¶
Five plans exist. All are summarised below, but the first two are the ones that usually matter:
Consumption (Windows/Linux) Pay per execution. Scales to zero. Cold starts possible. No virtual networking. Max 200 instances. Classic serverless model. Most battle-tested. Use this unless you have a specific reason not to.
Flex Consumption Newer model. Pay per execution. Scales to zero. Virtual networking supported. Up to 1000 instances. Faster cold starts. However — it does not support the standard Blob trigger and requires the Event Grid-based blob trigger instead. This limitation is easy to miss and causes silent trigger failures with no error in Application Insights.
Functions Premium Always-on minimum 1 instance. No cold starts. Virtual networking. More expensive — you pay for the always-on instance. Use when cold starts are unacceptable but you still want event-driven scaling.
App Service Runs on an existing App Service Plan. Metrics-based scaling. Makes sense if you already have an App Service Plan. Not truly serverless.
Container Apps environment Runs your Function as a container. KEDA-based scaling. Only relevant if you're already using Container Apps.
Rule of thumb for your stack:
Sporadic, stateless, no latency requirement → Consumption (Windows)
Cold starts unacceptable → Worker Service in Swarm, not Functions
.NET worker models¶
Two models exist for .NET Functions:
In-process — your code runs inside the Functions host process. Shares the host's dependencies. Limited to the .NET versions the host supports. Being retired — support ends alongside .NET 8.
Isolated worker — your code runs as a separate process. Full control over .NET version. Doesn't share host dependencies. Supports .NET 8 properly. Use this always.
The isolated worker model is a .NET Worker Service under the hood — same IHost, same DI, same IConfiguration, same ILogger. The FunctionsApplication builder is just HostBuilder with Functions-specific extensions.
Triggers¶
The trigger is defined by the attribute on the method parameter. The method name is irrelevant — Run is convention only. The [Function] attribute on the method and the trigger attribute on the parameter are what matter.
// BlobTrigger — fires when blob lands in container
[Function(nameof(ProcessImage))]
public async Task Run(
[BlobTrigger("originals/{name}", Connection = "BlobStorageConnection")] Stream stream,
string name) { }
// TimerTrigger — fires on cron schedule
[Function("DailyCleanup")]
public async Task Run([TimerTrigger("0 0 0 * * *")] TimerInfo timer) { }
// HttpTrigger — fires on HTTP request
[Function("Webhook")]
public async Task<HttpResponseData> Run(
[HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req) { }
// QueueTrigger — fires on Azure Queue Storage message
[Function("ProcessMessage")]
public async Task Run(
[QueueTrigger("my-queue", Connection = "...")] string message) { }
// ServiceBusTrigger — fires on Service Bus message
[Function("ProcessOrder")]
public async Task Run(
[ServiceBusTrigger("orders", Connection = "...")] string message) { }
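The TimerTrigger expression above is NCRONTAB: six fields with a leading seconds field — {second} {minute} {hour} {day} {month} {day-of-week}. A few illustrative schedules (function names are made up for the sketch):

```csharp
using Microsoft.Azure.Functions.Worker;

public class TimerExamples
{
    // 02:30 every day
    [Function("Nightly")]
    public void Nightly([TimerTrigger("0 30 2 * * *")] TimerInfo timer) { }

    // every 5 minutes
    [Function("EveryFiveMinutes")]
    public void EveryFiveMinutes([TimerTrigger("0 */5 * * * *")] TimerInfo timer) { }

    // 09:00 on weekdays (Mon-Fri)
    [Function("WeekdayMorning")]
    public void WeekdayMorning([TimerTrigger("0 0 9 * * 1-5")] TimerInfo timer) { }
}
```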
Binding expressions¶
The {name} in [BlobTrigger("originals/{name}")] is a binding expression. Azure extracts the value from the blob path and injects it as a method parameter automatically:
// Blob path: originals/invoice_001.xml
[BlobTrigger("originals/{name}")] Stream stream,
string name // → "invoice_001.xml"
Sub-path extraction also works:
// Blob path: originals/2026/april/invoice.xml
[BlobTrigger("originals/{year}/{month}/{fileName}")]
string year, // → "2026"
string month, // → "april"
string fileName // → "invoice.xml"
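Binding expressions also work on output bindings. A sketch (the [BlobOutput] attribute comes from the isolated worker storage extensions; the resize logic is elided) in which the same {name} is reused so the output keeps the original file name:

```csharp
using Microsoft.Azure.Functions.Worker;

public class ResizeImage
{
    [Function("ResizeImage")]
    [BlobOutput("thumbnails/{name}", Connection = "BlobStorageConnection")]
    public byte[] Run(
        [BlobTrigger("originals/{name}", Connection = "BlobStorageConnection")] byte[] original,
        string name)
    {
        // Real resize logic would go here; pass-through keeps the sketch short.
        return original;
    }
}
```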
Multiple functions in one app¶
One Function App is a deployment unit containing multiple functions. Each function is a class with a [Function]-decorated method:
Function App (image-processor)
├── ProcessImage → BlobTrigger — fires on blob upload
├── DailyCleanup → TimerTrigger — fires at midnight
└── ResizeOnDemand → HttpTrigger — fires on HTTP POST
Same project, same Program.cs, same deployment. All share the same app settings, Managed Identity, and scaling behaviour.
Program.cs and DI¶
Same pattern as ASP.NET Core — builder, services, build, run:
var builder = FunctionsApplication.CreateBuilder(args);
builder.ConfigureFunctionsWebApplication();
// Register services exactly like Web API
builder.Services.AddSingleton<IImageProcessor, ImageResizeProcessor>();
builder.Services.AddSingleton<IDocumentStore>(sp =>
DocumentStoreFactory.Create(
sp.GetRequiredService<IConfiguration>()
.GetSection("DocumentStore")
.Get<DocumentStoreOptions>()
)
);
builder.Services
.AddApplicationInsightsTelemetryWorkerService()
.ConfigureFunctionsApplicationInsights();
builder.Build().Run();
Constructor injection works exactly as in Web API:
public class ProcessImage
{
private readonly IImageProcessor _processor;
public ProcessImage(IImageProcessor processor)
{
_processor = processor;
}
[Function(nameof(ProcessImage))]
public async Task Run(
[BlobTrigger("originals/{name}", Connection = "BlobStorageConnection")] Stream stream,
string name)
{
await _processor.ProcessAsync(stream, name);
}
}
The three-line Function rule¶
A Function should contain no business logic. It is an infrastructure adapter — a thin trigger wrapper that delegates to your domain:
[Function(nameof(ProcessImage))]
public async Task Run(
[BlobTrigger("originals/{name}", Connection = "BlobStorageConnection")] Stream stream,
string name)
{
await _imageProcessor.ProcessAsync(stream, name); // one line
}
IImageProcessor contains the actual logic and has no Azure dependencies. If you move to AWS Lambda tomorrow, you rewrite this three-line adapter. Nothing else changes.
Azure Function (BlobTrigger) → thin adapter, Azure-specific, 3 lines
AWS Lambda (S3 trigger) → thin adapter, AWS-specific, 3 lines
Both adapters delegate to:
IImageProcessor → portable, no cloud dependencies
IDocumentStore → portable, swappable storage
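The portable seam might look like this — a sketch, with the interface shape assumed from the calls shown earlier:

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;

// No Azure types anywhere: a stream comes in, work gets done,
// and nothing here knows which cloud triggered it.
public interface IImageProcessor
{
    Task ProcessAsync(Stream original, string name, CancellationToken ct = default);
}
```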
Storage accounts — two separate concerns¶
A Functions app needs two storage connections, serving completely different purposes:
AzureWebJobsStorage — the Functions runtime's own internal storage. Used for blob trigger checkpointing (tracking which blobs have already been processed), distributed locks (preventing duplicate processing), and function state and metadata. All of this happens before your code runs, before DI is wired up — the runtime needs direct access.
Your application storage (BlobStorageConnection) — where your actual blobs live. Used by your code to read inputs and write outputs.
These should be separate storage accounts. Using the same account for both works but complicates access control and creates noise in your application containers.
Connection options for AzureWebJobsStorage¶
Connection string (simplest)
AzureWebJobsStorage = DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...
Managed Identity (most secure)
AzureWebJobsStorage__accountName = averimagestore
AzureWebJobsStorage__credential = managedidentity
Managed Identity for AzureWebJobsStorage requires these roles on the storage account:
Storage Blob Data Contributor
Storage Queue Data Contributor ← trigger checkpoint mechanism
Storage Table Data Contributor ← trigger state storage
A missing Queue or Table role causes the trigger to silently not fire — no error appears in Application Insights because the failure happens before the host initialises logging.
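The role assignments can be scripted. A sketch with the Azure CLI — the resource group name my-rg is an assumption; the app and storage account names come from the examples in this document:

```shell
principalId=$(az functionapp identity show \
  --name image-processor --resource-group my-rg \
  --query principalId -o tsv)
scope=$(az storage account show \
  --name averimagestore --resource-group my-rg \
  --query id -o tsv)

# Grant all three roles the blob trigger needs on the AzureWebJobsStorage account.
for role in "Storage Blob Data Contributor" \
            "Storage Queue Data Contributor" \
            "Storage Table Data Contributor"; do
  az role assignment create --assignee "$principalId" --role "$role" --scope "$scope"
done
```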
Connection options for your application storage¶
Connection string
BlobStorageConnection = DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...
Store in Key Vault, reference via:
BlobStorageConnection = @Microsoft.KeyVault(VaultName=peppol-vault;SecretName=BlobStorageConnection)
Managed Identity
BlobStorageConnection__accountName = averblobstore
BlobStorageConnection__credential = managedidentity
Requires Storage Blob Data Contributor on your application storage account.
In your code — if using Managed Identity for the trigger connection, your SDK calls should also use DefaultAzureCredential:
var serviceClient = new BlobServiceClient(
new Uri("https://averblobstore.blob.core.windows.net"),
new DefaultAzureCredential()
);
Never read BlobStorageConnection as a connection string via Environment.GetEnvironmentVariable once you've switched to an identity-based connection — the setting no longer exists as a connection string, so the call returns null.
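One defensive pattern (an assumption, not the only way) is to branch on which settings exist. Note that the double underscore in app setting names surfaces as a colon in IConfiguration:

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Microsoft.Extensions.Configuration;

static BlobServiceClient CreateBlobClient(IConfiguration config)
{
    // Classic connection string, if present.
    var connectionString = config["BlobStorageConnection"];
    if (!string.IsNullOrEmpty(connectionString))
        return new BlobServiceClient(connectionString);

    // Otherwise fall back to the identity-based settings
    // (BlobStorageConnection__accountName → "BlobStorageConnection:accountName").
    var accountName = config["BlobStorageConnection:accountName"];
    return new BlobServiceClient(
        new Uri($"https://{accountName}.blob.core.windows.net"),
        new DefaultAzureCredential());
}
```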
Cold starts¶
When a Function hasn't been invoked for a while, Azure scales the app to zero. The next invocation pays a startup cost:
Spin up container
→ Start .NET runtime
→ Load your assembly
→ Initialise DI container
→ Execute your function
For a .NET 8 isolated worker on the Consumption plan, expect a cold start of roughly 2-5 seconds.
When it matters:
Image processing (async, user already has response) → irrelevant
HTTP webhook (caller waiting for 200) → unacceptable
Invoice pipeline (compliance SLAs) → unacceptable
Mitigation options:
Premium plan → always-on instance, no cold starts, costs more
Flex Consumption → "Always Ready" instances, pay for warm instances
Keep-warm timer → timer trigger every 5 mins, hacky but free
Accept it → right answer for genuinely sporadic workloads
For image processing — accept cold starts. For anything latency-sensitive — Worker Service in Swarm.
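The keep-warm option from the list above fits in one method. A sketch (the function name is illustrative):

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class KeepWarm
{
    // Fires every 5 minutes so the instance is rarely recycled.
    // Hacky: you burn executions to approximate an always-on instance.
    [Function("KeepWarm")]
    public void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        FunctionContext context)
    {
        context.GetLogger("KeepWarm").LogDebug("Keep-warm ping");
    }
}
```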
Why Functions for images, not invoices¶
Image processing
→ Sporadic — merchant uploads product image occasionally
→ Stateless — read original, resize, write output, done
→ No strict latency — 5 second cold start nobody notices
→ No complex guarantees — simple blob in, blobs out
→ Consumption plan, pay per execution ✅
Invoice pipeline
→ Steady throughput — 7K/day, predictable
→ Stateful — correlation IDs, retry state, reconciliation
→ Latency matters — compliance webhooks can't wait
→ Complex guarantees — retry, DLQ, ordering
→ RabbitMQ + Worker Service in Swarm ✅
This contrast is the interview answer for "why did you use Functions for images but not invoices."
Application Insights and logging¶
Application Insights integration is included in the scaffolded template:
builder.Services
.AddApplicationInsightsTelemetryWorkerService()
.ConfigureFunctionsApplicationInsights();
By default, LogInformation is filtered out — only Warning and above flow to Application Insights. Fix via host.json:
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": false
}
},
"logLevel": {
"Default": "Warning",
"Function": "Information",
"Function.ProcessImage.User": "Information"
}
}
}
Function.ProcessImage.User is the specific log category for your custom ILogger calls in isolated worker. Without this category explicitly set to Information, your _logger.LogInformation calls are silently dropped.
KQL query to see execution traces:
union traces, exceptions
| order by timestamp desc
| take 50
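A follow-up query that is often useful — hedged sketch; the column names are the standard Application Insights requests schema each invocation writes to:

```kusto
requests
| where timestamp > ago(1d)
| summarize invocations = count(), failures = countif(success == false) by name
| order by invocations desc
```

This gives per-function invocation and failure counts over the last day, which is usually the first thing to check when a trigger "stopped working".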
Deployment¶
Manual (learning/dev)
func azure functionapp publish image-processor
Builds locally, zips output, uploads to Azure. Fine for learning. Not for production.
CI/CD via GitHub Actions (production)
name: Deploy Function App
on:
push:
branches: [main]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup .NET
uses: actions/setup-dotnet@v3
with:
dotnet-version: '8.0.x'
- name: Build
run: dotnet build --configuration Release
- name: Deploy
uses: Azure/functions-action@v1
with:
app-name: image-processor
package: .
publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
Get publish profile from portal — image-processor → Get publish profile → store as GitHub secret.
local.settings.json¶
Local development configuration file. Never goes to source control — add to .gitignore immediately.
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=averimagestore;AccountKey=...;EndpointSuffix=core.windows.net",
"FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
"BlobStorageConnection": "DefaultEndpointsProtocol=https;AccountName=averblobstore;AccountKey=...;EndpointSuffix=core.windows.net"
}
}
This file is local only. In Azure these values live in App Settings (Environment variables) — set via portal, CLI, or Key Vault references.
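Setting the same values in Azure can also be done from the CLI. A sketch — the resource group name my-rg is an assumption, and the placeholder must be replaced with the real connection string:

```shell
az functionapp config appsettings set \
  --name image-processor --resource-group my-rg \
  --settings "FUNCTIONS_WORKER_RUNTIME=dotnet-isolated" \
             "BlobStorageConnection=<connection-string>"
```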
Gotchas¶
Flex Consumption does not support the standard Blob trigger — it requires the Event Grid-based blob trigger. A standard [BlobTrigger] silently never fires; the error is only visible via the Host Status endpoint, not Application Insights. Use Consumption (Windows) for standard blob triggers.
Missing Queue/Table roles for Managed Identity — trigger silently doesn't fire. No error. Blob trigger needs Storage Blob Data Contributor, Storage Queue Data Contributor, and Storage Table Data Contributor on AzureWebJobsStorage storage account.
Environment.GetEnvironmentVariable returns null after switching to identity-based connection — if you switch BlobStorageConnection to BlobStorageConnection__accountName + BlobStorageConnection__credential, the variable BlobStorageConnection no longer exists. Your code reading it as a connection string gets null and throws ArgumentNullException. Switch your SDK calls to DefaultAzureCredential when using identity-based connections.
LogInformation silently dropped — isolated worker filters out Information level by default. Set Function.ProcessImage.User: Information in host.json logLevel section.
App Settings not updated without restart — changing App Settings in portal takes effect after restart. Key Vault secret changes take effect after restart and cache expiry (up to 24 hours).
Test/Run panel returns 404 for Blob triggers — the portal's Test/Run is designed for HTTP triggers. 404 on a Blob trigger is expected, not an error.
Blob trigger delay on Consumption plan — up to 10 minutes delay in low-traffic scenarios due to polling. Fine for image processing, unacceptable for anything time-sensitive. Use Event Grid trigger if near-real-time reaction is needed.