Implement and Manage Storage - Q&A
This document contains comprehensive questions and answers for the Implement and Manage Storage domain of the AZ-104 exam (15-20% weight).
📚 Reference Links
- Azure Storage Documentation
- Azure Blob Storage Documentation
- Azure Files Documentation
- AZ-104 Study Guide
Section 1: Storage Accounts
Q1.1: What are the different types of storage accounts?
Answer: Azure offers several storage account types optimized for different scenarios:
Storage Account Types:
| Type | Supported Services | Performance | Use Case |
|---|---|---|---|
| Standard general-purpose v2 | Blob, File, Queue, Table | Standard | Most scenarios, recommended default |
| Premium block blobs | Block blobs only | Premium | High transaction rates, low latency |
| Premium file shares | Files only | Premium | Enterprise file shares, high IOPS |
| Premium page blobs | Page blobs only | Premium | VM disks, databases |
Performance Tiers:
Standard:
- HDD-based storage
- Lower cost
- Higher latency
- Good for backup, archive, infrequent access
Premium:
- SSD-based storage
- Higher cost
- Low latency, high IOPS
- Good for databases, high-performance workloads
Creating Storage Accounts:
```bash
# Standard general-purpose v2
az storage account create \
  --name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --location "eastus" \
  --sku Standard_LRS \
  --kind StorageV2

# Premium block blobs
az storage account create \
  --name "mypremiumblobs" \
  --resource-group "Storage-RG" \
  --location "eastus" \
  --sku Premium_LRS \
  --kind BlockBlobStorage

# Premium file shares
az storage account create \
  --name "mypremiumfiles" \
  --resource-group "Storage-RG" \
  --location "eastus" \
  --sku Premium_LRS \
  --kind FileStorage
```
Q1.2: What are the storage redundancy options?
Answer: Azure Storage provides multiple redundancy options to protect your data:
Primary Region Redundancy:
| Option | Copies | Description |
|---|---|---|
| LRS (Locally Redundant) | 3 | Three copies in single datacenter |
| ZRS (Zone Redundant) | 3 | Three copies across availability zones |
Secondary Region Redundancy:
| Option | Copies | Description |
|---|---|---|
| GRS (Geo-Redundant) | 6 | LRS + async copy to secondary region |
| GZRS (Geo-Zone Redundant) | 6 | ZRS + async copy to secondary region |
| RA-GRS (Read-Access GRS) | 6 | GRS + read access to secondary |
| RA-GZRS (Read-Access GZRS) | 6 | GZRS + read access to secondary |
Durability and Availability:
| Redundancy | Durability | Availability |
|---|---|---|
| LRS | 11 nines (99.999999999%) | 99.9% |
| ZRS | 12 nines | 99.9% |
| GRS/RA-GRS | 16 nines | 99.9% (99.99% RA-GRS read) |
| GZRS/RA-GZRS | 16 nines | 99.9% (99.99% RA-GZRS read) |
When to Use Each:
- LRS: Cost-sensitive, data can be recreated, single region compliance
- ZRS: High availability within region, zone failure protection
- GRS: Disaster recovery, regional outage protection
- GZRS: Maximum protection, zone + regional redundancy
- RA-GRS/RA-GZRS: Read access during regional outage
Changing Redundancy:
```bash
# Change to GRS
az storage account update \
  --name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --sku Standard_GRS
```
Important Notes:
- Cannot change from ZRS to LRS or GRS directly
- Premium storage only supports LRS and ZRS
- Secondary region is determined by Azure (paired regions)
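Before changing redundancy, it can help to check the current SKU and, for geo-redundant accounts, the last sync time to the secondary. A minimal sketch (the `--expand geoReplicationStats` option is assumed to be available in your CLI version, and only returns data for GRS/GZRS accounts):

```bash
# Current redundancy SKU
az storage account show \
  --name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --query "sku.name"

# Geo-replication status (GRS/GZRS accounts only)
az storage account show \
  --name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --expand geoReplicationStats \
  --query "geoReplicationStats.lastSyncTime"
```

The `lastSyncTime` indicates how far behind the secondary region may be, which matters when evaluating potential data loss before a failover.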
Q1.3: How do you configure storage account networking and security?
Answer: Storage accounts can be secured using multiple network and access controls:
Network Access Options:
1. Public Access (Default):
- Accessible from any network
- Use SAS tokens or keys for authentication
2. Selected Networks:
- Allow specific VNets and IP addresses
- Block all other traffic
3. Private Endpoint:
- Private IP address in your VNet
- Traffic stays on Microsoft backbone
Configuring Network Rules:
```bash
# Deny all public access
az storage account update \
  --name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --default-action Deny

# Allow specific VNet
az storage account network-rule add \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --vnet-name "MyVNet" \
  --subnet "StorageSubnet"

# Allow specific IP
az storage account network-rule add \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --ip-address "203.0.113.0/24"
```
Service Endpoints vs Private Endpoints:
| Feature | Service Endpoint | Private Endpoint |
|---|---|---|
| Traffic path | Microsoft backbone | Private IP in VNet |
| Public IP | Still has public IP | No public IP needed |
| On-premises access | Requires VPN/ExpressRoute | Requires VPN/ExpressRoute |
| DNS | Public DNS | Private DNS zone |
| Cost | Free | Per hour + data processing |
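Before a VNet rule on the storage account takes effect, the subnet itself needs the `Microsoft.Storage` service endpoint enabled. A short sketch (VNet and subnet names follow the examples above):

```bash
# Enable the Microsoft.Storage service endpoint on the subnet
az network vnet subnet update \
  --resource-group "Storage-RG" \
  --vnet-name "MyVNet" \
  --name "StorageSubnet" \
  --service-endpoints Microsoft.Storage
```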
Creating Private Endpoint:
```bash
# Create private endpoint
az network private-endpoint create \
  --name "storage-pe" \
  --resource-group "Storage-RG" \
  --vnet-name "MyVNet" \
  --subnet "PrivateEndpointSubnet" \
  --private-connection-resource-id "/subscriptions/.../storageAccounts/mystorageaccount" \
  --group-id "blob" \
  --connection-name "storage-connection"
```
Additional Security Features:
- Require secure transfer (HTTPS only)
- Minimum TLS version (1.2 recommended)
- Allow/disallow blob public access
- Storage account keys rotation
- Microsoft Entra authentication
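The hardening settings above can be applied together with `az storage account update`, and account keys can be rotated with `keys renew`. A sketch, assuming the account names used throughout this section:

```bash
# Enforce HTTPS, TLS 1.2, and disable anonymous blob access
az storage account update \
  --name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --https-only true \
  --min-tls-version TLS1_2 \
  --allow-blob-public-access false

# Rotate (regenerate) the primary account key
az storage account keys renew \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --key primary
```

Note that regenerating a key immediately invalidates any clients and service SAS tokens signed with it, so rotate keys one at a time.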
Section 2: Azure Blob Storage
Q2.1: What are the different blob types and when should you use each?
Answer: Azure Blob Storage supports three types of blobs:
1. Block Blobs:
- Composed of blocks (up to 50,000 blocks)
- Maximum size: 190.7 TiB
- Optimized for upload/download
- Best for: Documents, images, videos, backups
2. Append Blobs:
- Optimized for append operations
- Cannot modify existing blocks
- Maximum size: 195 GiB
- Best for: Logging, audit trails, streaming data
3. Page Blobs:
- Optimized for random read/write
- 512-byte pages
- Maximum size: 8 TiB
- Best for: VM disks (VHDs), databases
Blob Type Comparison:
| Feature | Block Blob | Append Blob | Page Blob |
|---|---|---|---|
| Max size | 190.7 TiB | 195 GiB | 8 TiB |
| Modification | Replace blocks | Append only | Random access |
| Use case | Files, media | Logs | VHDs |
| Access tiers | Yes | No | No |
Uploading Blobs:
```bash
# Upload block blob
az storage blob upload \
  --account-name "mystorageaccount" \
  --container-name "documents" \
  --name "report.pdf" \
  --file "./report.pdf" \
  --type block

# Upload with specific tier
az storage blob upload \
  --account-name "mystorageaccount" \
  --container-name "archive" \
  --name "backup.zip" \
  --file "./backup.zip" \
  --tier Cool
```
Q2.2: What are the blob access tiers and how do they work?
Answer: Azure Blob Storage offers access tiers to optimize costs based on data access patterns:
Access Tiers:
| Tier | Storage Cost | Access Cost | Use Case |
|---|---|---|---|
| Hot | Highest | Lowest | Frequently accessed data |
| Cool | Lower | Higher | Infrequently accessed (30+ days) |
| Cold | Lower than Cool | Higher than Cool | Rarely accessed (90+ days) |
| Archive | Lowest | Highest | Long-term retention (180+ days) |
Tier Characteristics:
Hot Tier:
- Default tier for new storage accounts
- Optimized for frequent access
- Highest storage cost, lowest access cost
Cool Tier:
- Minimum 30-day retention (early deletion fee)
- Lower storage cost than Hot
- Higher access cost than Hot
Cold Tier:
- Minimum 90-day retention
- Lower storage cost than Cool
- Higher access cost than Cool
Archive Tier:
- Minimum 180-day retention
- Offline storage (must rehydrate to access)
- Lowest storage cost, highest access cost
- Rehydration can take hours
Setting and Changing Tiers:
```bash
# Set tier during upload
az storage blob upload \
  --account-name "mystorageaccount" \
  --container-name "data" \
  --name "file.txt" \
  --file "./file.txt" \
  --tier Cool

# Change existing blob tier
az storage blob set-tier \
  --account-name "mystorageaccount" \
  --container-name "data" \
  --name "file.txt" \
  --tier Archive

# Rehydrate from Archive
az storage blob set-tier \
  --account-name "mystorageaccount" \
  --container-name "data" \
  --name "file.txt" \
  --tier Hot \
  --rehydrate-priority High
```
Rehydration Options:
- Standard: Up to 15 hours
- High Priority: Under 1 hour for objects < 10 GB
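While rehydration is in progress, the blob's properties report its status. A sketch of how you might poll this (the `archiveStatus` property is assumed to return a value such as `rehydrate-pending-to-hot` while the copy is pending):

```bash
# Check whether an archived blob is still rehydrating
az storage blob show \
  --account-name "mystorageaccount" \
  --container-name "data" \
  --name "file.txt" \
  --query "properties.archiveStatus"
```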
Q2.3: How do you configure lifecycle management policies?
Answer: Lifecycle management policies automate blob tier transitions and deletion based on rules.
Policy Components:
1. Filters:
- Blob types (blockBlob, appendBlob)
- Prefix match (folder paths)
- Blob index tags
2. Actions:
- tierToCool
- tierToCold
- tierToArchive
- delete
- enableAutoTierToHotFromCool
Lifecycle Policy Example:
```json
{
  "rules": [
    {
      "enabled": true,
      "name": "move-to-cool",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 30
            }
          }
        },
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["logs/"]
        }
      }
    },
    {
      "enabled": true,
      "name": "archive-old-data",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 90
            },
            "delete": {
              "daysAfterModificationGreaterThan": 365
            }
          },
          "snapshot": {
            "delete": {
              "daysAfterCreationGreaterThan": 90
            }
          }
        },
        "filters": {
          "blobTypes": ["blockBlob"]
        }
      }
    }
  ]
}
```
Creating Policy via CLI:
```bash
# Create lifecycle policy
az storage account management-policy create \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --policy @lifecycle-policy.json

# View existing policy
az storage account management-policy show \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG"
```
Available Conditions:
- `daysAfterModificationGreaterThan`
- `daysAfterCreationGreaterThan`
- `daysAfterLastAccessTimeGreaterThan` (requires last access time tracking)
- `daysAfterLastTierChangeGreaterThan`
Best Practices:
- Start with longer retention periods and adjust
- Test policies with a subset of data first
- Consider early deletion fees when setting thresholds
- Use prefix filters to target specific data
Q2.4: What is blob versioning and soft delete?
Answer: Blob versioning and soft delete provide data protection against accidental deletion or modification.
Blob Versioning:
- Automatically maintains previous versions of blobs
- Each modification creates a new version
- Access previous versions by version ID
- Versions are immutable
Enabling Versioning:
```bash
az storage account blob-service-properties update \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --enable-versioning true
```
Soft Delete for Blobs:
- Retains deleted blobs for specified period
- Can recover deleted blobs and snapshots
- Retention period: 1-365 days
Enabling Soft Delete:
```bash
# Enable blob soft delete
az storage account blob-service-properties update \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --enable-delete-retention true \
  --delete-retention-days 14
```
Soft Delete for Containers:
- Retains deleted containers
- Separate setting from blob soft delete
- Retention period: 1-365 days
```bash
# Enable container soft delete
az storage account blob-service-properties update \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --enable-container-delete-retention true \
  --container-delete-retention-days 7
```
Recovering Deleted Data:
```bash
# List deleted blobs
az storage blob list \
  --account-name "mystorageaccount" \
  --container-name "data" \
  --include d

# Undelete blob
az storage blob undelete \
  --account-name "mystorageaccount" \
  --container-name "data" \
  --name "deleted-file.txt"
```
Point-in-Time Restore:
- Restore block blobs to a previous state
- Requires versioning and change feed
- Restore entire container or blob prefix
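The prerequisites for point-in-time restore can be enabled in one call. A sketch, assuming the `--enable-restore-policy` and `--restore-days` parameters of `az storage account blob-service-properties update` (verify against your CLI version):

```bash
# Enable versioning, change feed, and a 7-day restore window
az storage account blob-service-properties update \
  --account-name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --enable-versioning true \
  --enable-change-feed true \
  --enable-restore-policy true \
  --restore-days 7
```

The actual restore is then performed with `az storage blob restore`, specifying a point in time within the restore window.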
Section 3: Azure Files
Q3.1: What is Azure Files and what protocols does it support?
Answer: Azure Files provides fully managed file shares in the cloud accessible via industry-standard protocols.
Supported Protocols:
SMB (Server Message Block):
- Versions: SMB 2.1, 3.0, 3.1.1
- Port: 445
- Windows, Linux, macOS support
- AD authentication support
NFS (Network File System):
- Version: NFS 4.1
- Premium file shares only
- Linux support
- No Windows support
REST API:
- HTTP/HTTPS access
- Programmatic access
- Azure SDKs support
File Share Tiers:
| Tier | Performance | Use Case |
|---|---|---|
| Premium | SSD, provisioned IOPS | Databases, high-performance |
| Transaction optimized | HDD, optimized for transactions | General purpose |
| Hot | HDD, optimized for access | Team shares |
| Cool | HDD, optimized for storage | Archive, backup |
Creating File Shares:
```bash
# Create standard file share
az storage share create \
  --account-name "mystorageaccount" \
  --name "fileshare01" \
  --quota 100

# Create premium file share (requires premium storage account)
az storage share create \
  --account-name "mypremiumfiles" \
  --name "premiumshare" \
  --quota 1024
```
Mounting File Shares:
Windows:
```powershell
# Mount as drive
net use Z: \\mystorageaccount.file.core.windows.net\fileshare01 /user:Azure\mystorageaccount <storage-key>

# Or persist credentials with cmdkey, then map the drive in PowerShell
cmdkey /add:mystorageaccount.file.core.windows.net /user:Azure\mystorageaccount /pass:<storage-key>
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\mystorageaccount.file.core.windows.net\fileshare01" -Persist
```
Linux:
```bash
# Mount SMB share
sudo mount -t cifs //mystorageaccount.file.core.windows.net/fileshare01 /mnt/fileshare \
  -o vers=3.0,username=mystorageaccount,password=<storage-key>,dir_mode=0777,file_mode=0777

# Add to /etc/fstab for persistent mount
//mystorageaccount.file.core.windows.net/fileshare01 /mnt/fileshare cifs vers=3.0,username=mystorageaccount,password=<storage-key>,dir_mode=0777,file_mode=0777 0 0
```
Q3.2: What is Azure File Sync?
Answer: Azure File Sync enables caching Azure file shares on Windows Servers for local access with cloud tiering.
Key Components:
1. Storage Sync Service:
- Azure resource managing sync
- Contains sync groups
2. Sync Group:
- Defines sync topology
- Contains cloud endpoint + server endpoints
3. Cloud Endpoint:
- Azure file share
- One per sync group
4. Server Endpoint:
- Path on Windows Server
- Multiple per sync group
- Cloud tiering optional
5. Registered Server:
- Windows Server with Azure File Sync agent
- Trust relationship with Storage Sync Service
Cloud Tiering:
- Keeps frequently accessed files local
- Tiers infrequently accessed files to cloud
- Files appear local (rehydrate on access)
- Configurable by volume free space or date policy
Setting Up Azure File Sync:
1. Create Storage Sync Service:

```bash
az storagesync create \
  --resource-group "Storage-RG" \
  --name "MySyncService" \
  --location "eastus"
```

2. Create Sync Group:

```bash
az storagesync sync-group create \
  --resource-group "Storage-RG" \
  --storage-sync-service "MySyncService" \
  --name "MySyncGroup"
```

3. Add Cloud Endpoint:

```bash
az storagesync sync-group cloud-endpoint create \
  --resource-group "Storage-RG" \
  --storage-sync-service "MySyncService" \
  --sync-group-name "MySyncGroup" \
  --name "CloudEndpoint" \
  --storage-account-resource-id "<storage-account-id>" \
  --azure-file-share-name "fileshare01"
```

Install Agent on Windows Server:
- Download from Microsoft
- Register server with Storage Sync Service
Add Server Endpoint:
- Configure through Azure portal
- Specify local path
- Enable/configure cloud tiering
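Server endpoints can also be added from PowerShell. A sketch using the `Az.StorageSync` module; the cmdlet and parameter names (`New-AzStorageSyncServerEndpoint`, `-CloudTiering`, `-VolumeFreeSpacePercent`) and the local path are assumptions to verify against your module version:

```powershell
# Assumes Az.StorageSync is installed and the server is already registered
$server = Get-AzStorageSyncServer -ResourceGroupName "Storage-RG" -StorageSyncServiceName "MySyncService"

New-AzStorageSyncServerEndpoint `
  -ResourceGroupName "Storage-RG" `
  -StorageSyncServiceName "MySyncService" `
  -SyncGroupName "MySyncGroup" `
  -Name "ServerEndpoint01" `
  -ServerResourceId $server.ResourceId `
  -ServerLocalPath "D:\Shares\Data" `
  -CloudTiering `
  -VolumeFreeSpacePercent 20   # keep at least 20% of the volume free locally
```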
Use Cases:
- Branch office file servers
- Disaster recovery
- Lift and shift file servers
- Multi-site file sharing
Section 4: Storage Security
Q4.1: What are Shared Access Signatures (SAS)?
Answer: Shared Access Signatures provide secure delegated access to storage resources without sharing account keys.
SAS Types:
1. User Delegation SAS:
- Secured with Entra ID credentials
- Most secure option
- Blob storage only
- Requires RBAC assignment
2. Service SAS:
- Secured with storage account key
- Access to single service (blob, file, queue, table)
- Can use stored access policy
3. Account SAS:
- Secured with storage account key
- Access to multiple services
- More permissions than service SAS
SAS Components:
- Signed resource (sr): blob, container, file, share
- Signed permissions (sp): read, write, delete, list, etc.
- Signed start/expiry (st/se): validity period
- Signed IP (sip): allowed IP addresses
- Signed protocol (spr): HTTPS only or HTTP/HTTPS
- Signature (sig): cryptographic signature
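These components appear as query parameters on the resource URL. An illustrative (non-functional) blob SAS URL, broken across lines for readability; a real token is a single line and the parameter set varies by SAS type:

```text
https://mystorageaccount.blob.core.windows.net/documents/report.pdf
  ?sv=2022-11-02                      # signed service version
  &sr=b                               # signed resource: blob
  &sp=r                               # signed permissions: read
  &st=2025-06-01T00:00:00Z            # signed start time
  &se=2025-06-30T00:00:00Z            # signed expiry time
  &spr=https                          # signed protocol: HTTPS only
  &sip=203.0.113.0-203.0.113.255      # signed IP range
  &sig=<signature>                    # HMAC signature
```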
Generating SAS Tokens:
```bash
# Account SAS
az storage account generate-sas \
  --account-name "mystorageaccount" \
  --account-key "<key>" \
  --services b \
  --resource-types sco \
  --permissions rwdlacup \
  --expiry "2025-12-31T00:00:00Z" \
  --https-only

# Container SAS
az storage container generate-sas \
  --account-name "mystorageaccount" \
  --name "documents" \
  --permissions rl \
  --expiry "2025-12-31T00:00:00Z" \
  --https-only

# Blob SAS
az storage blob generate-sas \
  --account-name "mystorageaccount" \
  --container-name "documents" \
  --name "report.pdf" \
  --permissions r \
  --expiry "2025-06-30T00:00:00Z" \
  --https-only
```
User Delegation SAS:
```bash
# Generate a user delegation SAS, signed with Entra credentials
# (requires an RBAC data role such as Storage Blob Delegator or Data Reader)
az storage blob generate-sas \
  --account-name "mystorageaccount" \
  --container-name "documents" \
  --name "report.pdf" \
  --permissions r \
  --expiry "2025-06-30T00:00:00Z" \
  --auth-mode login \
  --as-user
```
Best Practices:
- Use user delegation SAS when possible
- Set shortest practical expiry time
- Use HTTPS only
- Restrict IP addresses when known
- Use stored access policies for revocation
Q4.2: What are stored access policies?
Answer: Stored access policies provide additional control over service-level SAS tokens.
Benefits:
- Modify SAS parameters after issuance
- Revoke SAS tokens by deleting policy
- Group SAS tokens under single policy
- Extend or shorten validity period
Limitations:
- Only for service SAS (not account SAS)
- Maximum 5 policies per container/share/queue/table
- Cannot specify signed IP or protocol in policy
Creating Stored Access Policy:
```bash
# Create policy on container
az storage container policy create \
  --account-name "mystorageaccount" \
  --container-name "documents" \
  --name "ReadPolicy" \
  --permissions rl \
  --expiry "2025-12-31T00:00:00Z"

# Generate SAS using policy
az storage container generate-sas \
  --account-name "mystorageaccount" \
  --name "documents" \
  --policy-name "ReadPolicy"
```
Managing Policies:
```bash
# List policies
az storage container policy list \
  --account-name "mystorageaccount" \
  --container-name "documents"

# Update policy
az storage container policy update \
  --account-name "mystorageaccount" \
  --container-name "documents" \
  --name "ReadPolicy" \
  --expiry "2026-06-30T00:00:00Z"

# Delete policy (revokes all SAS using it)
az storage container policy delete \
  --account-name "mystorageaccount" \
  --container-name "documents" \
  --name "ReadPolicy"
```
Q4.3: How do you configure Microsoft Entra authentication for storage?
Answer: Microsoft Entra authentication provides identity-based access to Azure Storage without using keys.
Supported Services:
- Blob storage
- Queue storage
- Table storage (preview)
- Azure Files (AD DS or Entra Domain Services)
RBAC Roles for Data Access:
| Role | Permissions |
|---|---|
| Storage Blob Data Owner | Full access to blob data |
| Storage Blob Data Contributor | Read/write/delete blobs |
| Storage Blob Data Reader | Read blobs |
| Storage Blob Delegator | Get user delegation key |
| Storage Queue Data Contributor | Read/write/delete queues |
| Storage Queue Data Reader | Read queues |
| Storage Queue Data Message Processor | Peek, receive, delete messages |
| Storage Queue Data Message Sender | Add messages |
Assigning Data Roles:
```bash
# Assign blob data contributor
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

# Assign at container level
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>"
```
Using Entra Auth with CLI:
```bash
# List blobs using Entra auth
az storage blob list \
  --account-name "mystorageaccount" \
  --container-name "documents" \
  --auth-mode login

# Upload using Entra auth
az storage blob upload \
  --account-name "mystorageaccount" \
  --container-name "documents" \
  --name "file.txt" \
  --file "./file.txt" \
  --auth-mode login
```
Azure Files with AD Authentication:
- Requires AD DS or Entra Domain Services
- Storage account joined to domain
- NTFS permissions on files/folders
- Kerberos authentication
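For Microsoft Entra Domain Services specifically, authentication can be enabled directly on the storage account. A sketch assuming the `--enable-files-aadds` flag of `az storage account update` (on-premises AD DS instead uses a domain-join process, typically via the AzFilesHybrid PowerShell module):

```bash
# Enable Entra Domain Services authentication for Azure Files
az storage account update \
  --name "mystorageaccount" \
  --resource-group "Storage-RG" \
  --enable-files-aadds true
```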
Section 5: Data Management
Q5.1: What is object replication?
Answer: Object replication asynchronously copies block blobs between storage accounts.
Use Cases:
- Minimize latency (replicate closer to users)
- Increase efficiency (process in different regions)
- Data distribution
- Disaster recovery
Requirements:
- Blob versioning enabled on both accounts
- Change feed enabled on source
- Block blobs only (not append or page)
- Same or different regions
- Same or different subscriptions/tenants
Configuring Object Replication:
```bash
# Enable versioning and change feed on source
az storage account blob-service-properties update \
  --account-name "sourceaccount" \
  --enable-versioning true \
  --enable-change-feed true

# Enable versioning on destination
az storage account blob-service-properties update \
  --account-name "destaccount" \
  --enable-versioning true

# Create replication policy (via portal or ARM template)
```
Replication Policy Components:
- Source account and container
- Destination account and container
- Filter rules (prefix, min creation time)
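Recent CLI versions can also create the policy directly with `az storage account or-policy`; the parameter names below are assumptions to verify against your CLI version:

```bash
# Create an object replication policy on the destination account
az storage account or-policy create \
  --account-name "destaccount" \
  --resource-group "Storage-RG" \
  --source-account "sourceaccount" \
  --source-container "srccontainer" \
  --destination-container "destcontainer"
```

The policy is defined on the destination account first; the returned policy ID is then applied to the source account so replication can begin.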
Q5.2: What is immutable storage?
Answer: Immutable storage prevents blob modification or deletion for a specified period (WORM - Write Once, Read Many).
Immutability Policies:
1. Time-based Retention:
- Blobs cannot be modified or deleted during retention
- Retention period: 1 day to 146,000 years
- Can extend but not reduce (when locked)
- States: Unlocked → Locked
2. Legal Hold:
- Indefinite retention until removed
- No time limit
- Multiple holds can be applied
- Named tags for tracking
Configuring Immutability:
```bash
# Create time-based retention policy
az storage container immutability-policy create \
  --account-name "mystorageaccount" \
  --container-name "compliance" \
  --period 365

# Lock policy (irreversible!)
az storage container immutability-policy lock \
  --account-name "mystorageaccount" \
  --container-name "compliance" \
  --if-match "<etag>"

# Add legal hold
az storage container legal-hold set \
  --account-name "mystorageaccount" \
  --container-name "legal" \
  --tags "Case123" "Investigation456"

# Remove legal hold
az storage container legal-hold clear \
  --account-name "mystorageaccount" \
  --container-name "legal" \
  --tags "Case123"
```
Version-Level Immutability:
- Apply policies to individual blob versions
- More granular than container-level
- Requires versioning enabled
Important Notes:
- Locked policies cannot be deleted
- Container cannot be deleted with active policy
- Storage account cannot be deleted with locked policies
- Meets SEC 17a-4, CFTC, FINRA requirements
Practice Questions
Question 1
You need to store data that will be accessed once a month and must be retained for 2 years. Which access tier is most cost-effective?
A. Hot
B. Cool
C. Cold
D. Archive
Answer: C
Cold tier is designed for data accessed infrequently (90+ days) but needs to be available without rehydration delay. Archive would require rehydration time. Cool has higher storage costs than Cold for long retention.
Question 2
A storage account uses GRS replication. During a regional outage, you need read access to data. What should you do?
A. Initiate failover
B. Change to RA-GRS
C. Create a new storage account
D. Wait for region recovery
Answer: A or B
If you need immediate read access, initiate failover (makes secondary primary). For future scenarios, upgrade to RA-GRS which provides read access to secondary without failover. During an outage, changing to RA-GRS may not be possible.
Question 3
You need to give a contractor access to a specific blob for 7 days. They should only be able to read the blob. What is the most secure approach?
A. Share the storage account key
B. Create a user delegation SAS
C. Assign Storage Blob Data Reader role
D. Create a service SAS with stored access policy
Answer: B
User delegation SAS is the most secure option as it's backed by Entra ID credentials, not storage keys. It provides time-limited, scoped access. RBAC would give ongoing access beyond 7 days.
Question 4
You have a lifecycle management policy that moves blobs to Cool tier after 30 days. A blob was uploaded 25 days ago and moved to Cool tier manually. What happens when the policy runs?
A. The blob is moved to Cool tier again
B. Nothing, the blob is already in Cool tier
C. The blob is moved back to Hot tier
D. An error occurs
Answer: B
Lifecycle management policies check the current state. Since the blob is already in Cool tier, no action is taken. The policy won't move it back to Hot or cause an error.
Question 5
You need to ensure files in an Azure file share can be accessed from on-premises servers with low latency while keeping data in Azure. What should you implement?
A. Azure Backup
B. Azure File Sync with cloud tiering
C. Premium file shares
D. Object replication
Answer: B
Azure File Sync with cloud tiering caches frequently accessed files locally while keeping the full dataset in Azure. This provides low latency for hot data while maintaining cloud storage benefits.
Summary
Key topics for the Storage domain:
- Storage Accounts: Types, performance tiers, creation
- Redundancy: LRS, ZRS, GRS, GZRS, RA-GRS, RA-GZRS
- Blob Storage: Block, append, page blobs
- Access Tiers: Hot, Cool, Cold, Archive
- Lifecycle Management: Automated tier transitions
- Data Protection: Versioning, soft delete, point-in-time restore
- Azure Files: SMB, NFS, tiers, mounting
- Azure File Sync: Cloud tiering, sync groups
- Security: SAS tokens, stored access policies, Entra auth
- Network Security: Firewalls, service endpoints, private endpoints
- Immutable Storage: WORM, retention policies, legal holds