As government agencies move more mission-critical workloads into containers on Azure Kubernetes Service (AKS), observability becomes non-negotiable. You need to know when a citizen-facing application is slow, why a background job is failing, and where latency is hiding across distributed microservices. Historically, that meant manually instrumenting every service with SDKs, which is tedious for large portfolios and error-prone across polyglot codebases.
Microsoft has addressed this gap with AKS autoinstrumentation for Azure Monitor Application Insights, now in public preview. This feature injects the Azure Monitor OpenTelemetry Distro into your application pods automatically, generating distributed traces, metrics, and logs without requiring any code changes. In this post, we will walk through the architecture, step-by-step setup, and configuration patterns that government IT teams should know.
What Is OpenTelemetry and Why Does It Matter?
OpenTelemetry (OTel) is a vendor-neutral, open-source observability framework backed by the Cloud Native Computing Foundation (CNCF). It standardizes how applications emit traces, metrics, and logs so that you are not locked into any single monitoring vendor. Azure Monitor’s OpenTelemetry Distro builds on this foundation, adding Azure-specific exporters and configuration that route telemetry directly into Application Insights.
For government organizations, this matters because:
- Vendor neutrality reduces procurement risk. OpenTelemetry instrumentation works with multiple backends.
- Standardized telemetry means consistent observability across Java, Node.js, .NET, and Python services regardless of which team built them.
- Reduced code changes lower the risk of introducing bugs when adding monitoring to production workloads.
Architecture Overview
The autoinstrumentation feature works by deploying a mutating admission webhook into your AKS cluster. When a pod starts (or restarts), the webhook intercepts the pod spec and injects the Azure Monitor OpenTelemetry Distro agent for the language you configure. The injected agent runs alongside your application code, collects telemetry, and exports it to your Application Insights resource via its connection string.
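As a mental model, the mutation for a Java pod might look roughly like the sketch below. Every name here is an illustrative assumption, not the exact value the webhook injects (though JAVA_TOOL_OPTIONS and APPLICATIONINSIGHTS_CONNECTION_STRING are the standard mechanisms the Java agent uses):

```yaml
# Hypothetical sketch of what the admission webhook adds to a Java pod
# spec. Container, volume, and image names are illustrative only.
spec:
  initContainers:
    - name: agent-init                 # copies the Java agent jar
      image: <distro-agent-image>
      volumeMounts:
        - name: agent
          mountPath: /agent
  containers:
    - name: app
      env:
        - name: JAVA_TOOL_OPTIONS      # attaches the agent at JVM start
          value: "-javaagent:/agent/applicationinsights-agent.jar"
        - name: APPLICATIONINSIGHTS_CONNECTION_STRING
          value: "<your-connection-string>"
      volumeMounts:
        - name: agent
          mountPath: /agent
  volumes:
    - name: agent
      emptyDir: {}
```

Because the mutation happens at pod admission, no container images or manifests in your source repos need to change.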
Here is how the pieces fit together within the broader AKS monitoring stack:
- Application-level telemetry (traces, dependencies, exceptions, logs) flows from the OTel Distro into Application Insights
- Infrastructure metrics (CPU, memory, pod counts) flow from Azure Monitor managed service for Prometheus into an Azure Monitor workspace
- Container logs (stdout/stderr) flow from the Azure Monitor Agent into a Log Analytics workspace via Container insights
- Visualization ties it all together through Azure Managed Grafana dashboards
This layered approach gives you full-stack visibility: from the Kubernetes control plane down to individual HTTP requests inside your application code.
Prerequisites
Before you begin, ensure you have:
- An AKS cluster running Java or Node.js workloads deployed as Kubernetes Deployments (the two languages supported in this preview)
- A workspace-based Application Insights resource
- Azure CLI version 2.60.0 or later
- At least Contributor access to the cluster
Important: This preview is currently incompatible with Windows node pools and Linux Arm64 node pools. Plan your node pool architecture accordingly.
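Before onboarding, you can check each node pool's OS type and VM size with the Azure CLI (resource group and cluster names below are placeholders):

```shell
# List node pool OS and VM size; Windows pools and Arm64-based VM
# sizes are not supported by this preview.
az aks nodepool list \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --query "[].{name:name, os:osType, size:vmSize}" \
  --output table
```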
Step-by-Step Setup
1. Install the AKS Preview Extension
The autoinstrumentation feature requires the aks-preview CLI extension:
```shell
# Install the aks-preview extension, or update it if already installed
az extension add --name aks-preview
az extension update --name aks-preview
```
2. Register the Feature Flag
Register the AzureMonitorAppMonitoringPreview feature flag in your subscription. Note that registration can take up to several hours to propagate:
```shell
# Register the preview feature flag
az feature register --namespace Microsoft.ContainerService \
  --name AzureMonitorAppMonitoringPreview

# Poll until the state shows "Registered"
az feature show --namespace Microsoft.ContainerService \
  --name AzureMonitorAppMonitoringPreview \
  --query "properties.state"

# Propagate the registration to the resource provider
az provider register --namespace Microsoft.ContainerService
```
3. Prepare the Cluster
You can enable application monitoring during cluster creation or on an existing cluster.
During cluster creation:
```shell
# The flag name below comes from the aks-preview extension; verify it
# against the current preview docs, since preview flags can change.
az aks create \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --enable-azure-monitor-app-monitoring \
  --generate-ssh-keys
```
On an existing cluster via the Azure portal:
- Navigate to your AKS cluster in the Azure portal
- Select the Monitor pane
- Check the Enable application monitoring box
- Select Review + enable
4. Onboard Your Deployments
You have two onboarding approaches: namespace-wide (instrument everything in a namespace) or per-deployment (selective instrumentation with different Application Insights resources).
Namespace-Wide Onboarding
This is the simplest path. From the Azure portal, navigate to the Namespaces pane, select a namespace, choose Application Monitoring, pick your languages (Java, Node.js), and select Configure.
Per-Deployment Onboarding with Custom Resources
For more granular control, create an Instrumentation custom resource (CR) for each configuration scenario:
```yaml
# Field names follow the preview CRD; confirm the exact schema against
# the current documentation. Fill in your own connection string.
apiVersion: monitor.azure.com/v1
kind: Instrumentation
metadata:
  name: default
  namespace: mynamespace
spec:
  settings:
    autoInstrumentationPlatforms:
      - Java
  destination:
    applicationInsightsConnectionString: "<your-connection-string>"
```
Then annotate each deployment to associate it with the CR:
```yaml
# In the deployment's pod template. Per the preview docs, "true" uses
# the namespace's default Instrumentation CR; a CR name selects a
# specific one.
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
```
For Node.js services, use instrumentation.opentelemetry.io/inject-nodejs instead.
5. Restart Deployments
Autoinstrumentation takes effect only after a pod restart:
```shell
# Trigger a rolling restart so the webhook can mutate the new pods
kubectl rollout restart deployment <deployment-name> -n <namespace>
```
After the restart, generate some traffic against your application and navigate to your Application Insights resource. Within a few minutes, you should see distributed traces, dependency maps, and performance metrics populating the portal.
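One quick way to confirm telemetry is arriving is a Logs (KQL) query against the Application Insights resource, for example:

```kusto
// Recent server requests captured by the injected agent
requests
| where timestamp > ago(15m)
| summarize count() by name, resultCode
```

If this returns rows shortly after you generate traffic, the agent is exporting successfully.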
Enabling Application Logs in Application Insights
By default, container stdout/stderr logs go to Container Insights. You can optionally route application logs into Application Insights as well, which provides correlated logs alongside distributed traces. This is especially valuable for microservices that use structured logging frameworks rather than simple console output.
Add an annotation to your deployment's pod template. The annotation key shown below is illustrative only; confirm the real key against the current preview documentation:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Hypothetical key for illustration; the preview docs define
        # the actual annotation for routing logs to App Insights.
        monitor.azure.com/enable-application-logs: "true"
```
Cost consideration: Enabling logs in both Container Insights and Application Insights creates duplication. Evaluate whether you need both, or if one source can serve your teams. You can filter container log collection with ConfigMap to reduce overlap.
Completing the Observability Stack
Autoinstrumentation handles application-level telemetry, but a production AKS cluster needs the full monitoring stack. Here is how to enable the other layers alongside autoinstrumentation:
```shell
# Container insights (stdout/stderr logs into Log Analytics)
az aks enable-addons --addons monitoring \
  --resource-group <resource-group> --name <cluster-name>

# Managed Prometheus metrics, optionally wired to Managed Grafana
az aks update \
  --resource-group <resource-group> --name <cluster-name> \
  --enable-azure-monitor-metrics \
  --grafana-resource-id <grafana-resource-id>
```
With all three layers enabled, you get:
| Layer | Tool | Data |
|---|---|---|
| Application | Application Insights (OTel Distro) | Traces, dependencies, exceptions, custom metrics |
| Container | Container Insights (Azure Monitor Agent) | stdout/stderr logs, pod inventory, node metrics |
| Infrastructure | Managed Prometheus + Grafana | Kubernetes metrics, GPU metrics, custom Prometheus targets |
Observability Best Practices for Government Workloads
Use separate Application Insights resources per environment. Create distinct resources for dev, staging, and production. Per-deployment onboarding makes this straightforward by pointing each deployment’s Instrumentation CR at a different connection string.
Restart deployments weekly. The autoinstrumentation agent version is updated when pods restart. Regular restarts ensure you are running the latest version with the most recent security patches.
Set up alerting early. Use Application Insights smart detection and Prometheus alert rules to catch anomalies before they impact citizens. Alert on response time degradation, failure rate spikes, and dependency failures.
Control log volume with namespace filtering. Use the namespaceFilteringMode setting in your Container Insights data collection configuration to limit log ingestion to namespaces you care about. This reduces costs and noise:
```json
"dataCollectionSettings": {
  "interval": "1m",
  "namespaceFilteringMode": "Include",
  "namespaces": ["citizen-portal", "payments"]
}
```

This fragment belongs in the Container insights data collection configuration; the namespace names above are examples.
Tag workloads with cloud role names. If multiple services report to the same Application Insights resource, set cloud role names so the Application Map accurately reflects your architecture.
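With the OpenTelemetry Distro, the cloud role name is derived from the standard OTel service name, so one way to set it is an environment variable on each deployment (the service name below is an example):

```yaml
# Sets the OTel resource attribute service.name, which Application
# Insights surfaces as the cloud role name on the Application Map.
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: OTEL_SERVICE_NAME
              value: "permit-renewal-api"   # example role name
```

Each service then appears as its own node on the Application Map instead of blending into a single unnamed role.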
Why This Matters for Government
Government agencies running containerized workloads on AKS face unique pressures: strict uptime requirements for citizen-facing services, compliance mandates that require audit trails and visibility into application behavior, and lean IT teams that cannot afford to spend weeks manually instrumenting every microservice.
AKS autoinstrumentation directly addresses these challenges:
- Faster time-to-value: Government development teams can add full Application Insights monitoring to existing Java and Node.js deployments without modifying a single line of application code. This is critical for agencies that have inherited legacy codebases or rely on vendor-developed applications where source access may be limited.
- Compliance-ready observability: Distributed traces and correlated logs create an auditable record of request flows across services, supporting compliance requirements around system monitoring and incident response.
- Reduced operational burden: With autoinstrumentation, a single platform team can enable observability across dozens of microservices via namespace-wide onboarding, rather than coordinating code changes across multiple development teams.
- Open standards alignment: OpenTelemetry’s vendor-neutral approach aligns with government procurement best practices that favor open standards and avoid vendor lock-in.
Azure Government note: This preview is currently available in Azure public cloud only. Government customers using Azure commercial subscriptions can use it today. If your AKS workloads run in Azure Government regions, monitor the Azure Government services availability page for updates on when this feature will be supported. In the meantime, the Azure Monitor OpenTelemetry Distro can be added manually to workloads running in Azure Government.
