Skip to content

Security, Networking, Governance (Expert)

Section S1: Identity, Access, and Secretless Patterns

QS1.1: You need to let an AKS workload call Azure AI Search and Azure OpenAI without storing any secrets. What’s the best-practice auth approach?

Answer: Use workload identity (federated) or managed identity (where applicable) + Azure AD auth with RBAC.

Clarifications (exam traps):

  • “Put keys in Kubernetes secrets” violates the requirement.
  • If Azure AD auth isn’t supported for a specific call path, use Key Vault and fetch secrets via workload identity.

QS1.2: Your team wants “least privilege” for an app that only needs to query one Azure AI Search index. Where should you enforce it?

Answer: Use Azure AD auth to Search and enforce query permissions in the app layer; isolate by separate Search service if you need hard multi-tenant separation.

Clarifications (exam traps):

  • Search RBAC covers service operations; fine-grained index-per-tenant authorization is usually an application concern.

QS1.3: How do you prevent client apps (browser/mobile) from calling Azure AI services directly?

Answer: Put calls behind your backend (optionally behind APIM) and never ship AI credentials to clients.

Clarifications (exam traps):

  • “Obfuscate keys” is not security.

QS1.4: You must rotate API keys across multiple services (Search, Vision, Language) with minimal downtime. What pattern scales best?

Answer: Centralize secrets in Key Vault, use app config references (or runtime fetch + cache), rotate using dual keys where available.

Clarifications (exam traps):

  • Keys should not be embedded in CI/CD variables long-term.

QS2.1: After enabling Private Endpoint for an Azure AI service, calls from your VNet fail with name resolution errors. What’s the likely missing piece?

Answer: Private DNS (private DNS zone + link to VNet) or DNS forwarding in hybrid setups.

Clarifications (exam traps):

  • Private Endpoint without DNS alignment commonly breaks clients.

QS2.2: You must allow only a single subnet to call Azure OpenAI, and all other networks must be blocked. What should you configure?

Answer: Private Endpoint + disable public network access; place callers in the allowed subnet and control via NSGs/route design.

Clarifications (exam traps):

  • “Selected networks” alone can still involve public routing semantics.

QS2.3: What’s the difference between Private Endpoint and Service Endpoint in “public internet must not be used” requirements?

Answer: Private Endpoint gives a private IP in your VNet and keeps traffic on private link; Service Endpoints still target a public endpoint.

Clarifications (exam traps):

  • For strict isolation, the exam typically expects Private Endpoint.

Section S3: Governance, Logging, Compliance

QS3.1: You need to capture request volume/latency/error rates for AI services and alert on anomalies. What’s the standard Azure approach?

Answer: Azure Monitor metrics + alerts, plus diagnostic settings to Log Analytics where available.

Clarifications (exam traps):

  • Resource diagnostics help, but you still need application-level logs for prompt/tool correlation.

QS3.2: Your org forbids storing prompts containing PII. How do you keep debuggability?

Answer: Implement PII redaction/tokenization before logging, store minimal metadata, and enforce retention + access controls.

Clarifications (exam traps):

  • “Turn off logs” is not acceptable for production; implement privacy-aware telemetry.

QS3.3: You need to enforce per-user quotas on a chat endpoint to prevent abuse and cost spikes. What’s the most practical enforcement point?

Answer: Your API gateway/backend (APIM policies or app middleware) using authenticated user identity.

Clarifications (exam traps):

  • Service-level quotas don’t give per-user fairness by themselves.

QS3.4: A regulator asks you to prove who accessed model outputs for a sensitive workflow. What do you need?

Answer: Authenticated access + audit logs (API access logs, correlation IDs, and storage access logs) with controlled retention.

Clarifications (exam traps):

  • “We trust the app” is not an audit trail.

Section S4: Threat Modeling for RAG/Agents

QS4.1: In RAG, what’s the correct security stance toward retrieved documents?

Answer: Treat retrieved text as untrusted input.

Clarifications (exam traps):

  • Don’t let retrieved docs override system instructions.
  • Keep tools behind allowlists + authorization.

QS4.2: A tool call can delete data. What’s the correct design to avoid accidental destructive actions?

Answer: Require explicit authorization + implement “dry-run/confirm” flows + enforce server-side policy checks.

Clarifications (exam traps):

  • The model should never be the final authority for destructive operations.

QS4.3: You need to prevent data exfiltration via prompt injection (“send secrets to user”). What’s the real control?

Answer: Don’t put secrets in the model context, and constrain tool outputs + apply DLP/redaction before returning anything.

Clarifications (exam traps):

  • Content filters don’t prevent accidental secret inclusion if you already fed the secret to the model.

Released under the MIT License.