
azure blob storage

3 posts with the tag “azure blob storage”

How to Change Azure Blob Storage Tiers Without Re-Uploading

Change Azure Blob Storage tiers without code using Blober mutations

Azure Storage Tiers and the Problem with Managing Them


Azure Blob Storage offers four access tiers: Hot, Cool, Cold, and Archive. Each tier has different storage and retrieval costs. The idea is straightforward: keep frequently accessed data on Hot, move older data to Cool or Cold, and archive rarely needed files to Archive for the lowest per-GB rate.

In practice, managing tiers is not that simple. Azure Portal lets you change tiers one blob at a time. For bulk changes, Microsoft points you to PowerShell scripts, Azure CLI, or lifecycle management policies. If you want to move 500 blobs from Hot to Archive, you are either clicking through the portal for an hour or writing and testing a script.

Lifecycle policies help with automated transitions, but they operate on rules and schedules. They are not designed for the case where you look at a set of files and decide, right now, that these specific blobs need to be on a different tier.
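For comparison, this is roughly what a lifecycle management rule looks like. It is a minimal sketch: the rule name, prefix, and 90-day threshold are placeholders, not recommendations.

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-old-footage",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "project-alpine-2025/" ]
        }
      }
    }
  ]
}
```

A rule like this runs on Azure's schedule, typically within 24 hours of the condition being met. It cannot express "archive these specific blobs right now."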

Blober is a desktop app that connects to Azure Blob Storage as one of its supported providers. Beyond the usual read, write, list, and delete operations, Blober supports something called mutations for Azure Blob. Mutations let you change properties of existing blobs without transferring any data.

Today, Blober supports two types of Azure mutations: blob tier changes and container access-level changes.

Select any number of blobs in the Blober file browser, choose a target tier (Hot, Cool, Cold, or Archive), and run the mutation. Every selected blob gets moved to the new tier. No re-upload, no script, no waiting for a lifecycle policy to kick in.

This is useful when you realize a project is finished and its assets should move to Archive, or when you need to bring archived files back to Cool for a review cycle.

Azure containers can be set to Private, Blob-level public access, or Container-level public access. Changing access levels usually means navigating to each container in the portal and updating the setting. With Blober, you select the containers you want to modify, pick the access level, and apply.

Say you run a media production company. You have a container called project-alpine-2025 with 800 GB of raw footage sitting on Hot storage. The project wrapped three months ago and no one is accessing those files. You are paying Hot rates for storage that should be on Archive.

With Azure CLI, you would write something like:

```sh
# Assumes AZURE_STORAGE_CONNECTION_STRING is set; --query pulls just the blob names,
# since the default TSV output includes every column, not only the name.
az storage blob list --container-name project-alpine-2025 \
  --query "[].name" --output tsv |
while read -r name; do
  az storage blob set-tier --container-name project-alpine-2025 \
    --name "$name" --tier Archive
done
```

This works, but you need to set up authentication, handle pagination for large containers, deal with blobs that are already archived, and test the script before running it on production data.

With Blober, you open your Azure Blob connection, navigate to the container, select all files, choose “Archive” as the target tier, and click run. Done.

Tier changes and access levels are the first mutations Blober supports for Azure. The architecture is designed to extend this to other providers and other types of modifications. Future mutations could include things like metadata updates, blob tagging, or replication settings. The goal is to give you the same visual, point-and-click control over blob properties that you already have for transfers.

Connecting Azure to Blober takes about a minute:

  1. Open Blober and add a new provider
  2. Select Azure Blob Storage
  3. Paste your connection string (the same one you would use with Azure Storage Explorer or the SDK)
  4. Blober verifies the connection and lists your containers

From there, you can browse blobs, transfer files to or from Azure, and run mutations on existing blobs.

When using Azure as a destination, Blober lets you configure:

  • Storage Tier: Choose which tier new uploads land on (Hot, Cool, Cold, or Archive)
  • Write Behavior: Overwrite existing blobs, skip if a blob already exists, or skip only if the blob is archived

These options are set per-workflow, so you can have one workflow that uploads to Hot and another that uploads directly to Archive.
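The three write behaviors boil down to a small decision per blob. Here is a sketch of that logic with hypothetical names; these are illustrative, not Blober's actual option identifiers or implementation:

```python
from typing import Optional

def should_write(behavior: str, exists: bool, tier: Optional[str]) -> bool:
    """Decide whether to upload a blob under a given write behavior.

    `behavior` is one of "overwrite", "skip-existing", "skip-archived"
    (illustrative names). `tier` is the destination blob's current tier,
    or None if the blob does not exist yet.
    """
    if not exists:
        return True                  # nothing at the destination: always write
    if behavior == "overwrite":
        return True                  # replace whatever is there
    if behavior == "skip-existing":
        return False                 # any existing blob blocks the write
    if behavior == "skip-archived":
        return tier != "Archive"     # leave only archived blobs untouched
    raise ValueError(f"unknown behavior: {behavior!r}")
```

The "skip only if archived" case is the interesting one for incremental jobs: rehydrating an archived blob just to overwrite it would be slow and costly, so archived blobs are left alone while everything else is refreshed.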

Who this helps:

  • DevOps teams managing storage costs across multiple containers and projects
  • Media companies archiving completed project assets
  • Backup administrators moving cold data to cheaper tiers
  • Anyone who has outgrown Azure Portal’s one-blob-at-a-time tier management

Blober is a one-time purchase. No subscription, no per-GB fees, no account required.

Download Blober at blober.io

How to Move Data from Azure Blob Storage to Cloudflare R2

Move data from Azure Blob Storage to Cloudflare R2 with Blober

Azure Blob Storage charges $0.087 per GB for data leaving their network. If you serve 1 TB of files per month to users or external systems, that is $87/month in egress alone, on top of storage costs.

Cloudflare R2 charges $0 for egress. Zero. Nothing. You pay for storage ($0.015/GB/month) and operations, but downloading data from R2 is free. For applications that serve files to users, APIs, CDNs, or other services, switching to R2 can cut your cloud bill significantly.

The most common reason is cost. If your Azure Blob account is mostly used for serving static assets, media files, backups that get restored frequently, or API responses, the egress fees can dwarf your storage costs. R2 removes that variable entirely.

Another reason is simplicity. R2 is S3-compatible, meaning any tool or SDK that works with S3 works with R2. If your application already uses the S3 API (many do, even on Azure), the migration is mostly about moving data and updating the endpoint.

Blober supports both Azure Blob Storage and Cloudflare R2 as native providers. The transfer works like any other Blober workflow: connect both accounts, select files, run.

Add Azure Blob as a provider with your connection string. Blober lists your containers and their contents.

Add Cloudflare R2 as a provider. You will need your Account ID along with an S3-compatible Access Key ID and Secret Access Key from the Cloudflare dashboard. If you also provide a Cloudflare API token, Blober can list your buckets through Cloudflare’s native API with server-side pagination, which is more efficient for accounts with many buckets.

Set Azure Blob as the source and Cloudflare R2 as the destination. Browse your Azure containers, select the files or containers you want to migrate, and choose the destination bucket in R2.

Blober streams data from Azure through your machine to R2. It uses parallel uploads on both ends, so large files move efficiently. If the transfer is interrupted, Blober resumes from where it stopped.

What About Azure Egress Fees During Migration?


This is the unavoidable part. Moving data out of Azure means paying egress. For the initial migration, you will pay $0.087/GB to get your data from Azure to your machine (where Blober runs), and from there to R2.

For 1 TB, that is about $87 in egress fees. That is a one-time cost. After the migration, your ongoing egress from R2 is $0.

If you were paying $87/month in Azure egress, the migration pays for itself in the first month.

| Data Size | Azure Egress Cost (one-time) | Monthly Savings on R2 |
|---|---|---|
| 500 GB | ~$43 | Depends on egress pattern |
| 1 TB | ~$87 | Up to $87/month |
| 5 TB | ~$435 | Up to $435/month |
| 10 TB | ~$870 | Up to $870/month |
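The break-even arithmetic behind the table is simple enough to check. A sketch using the rates quoted above, counting 1 TB as 1,000 GB:

```python
AZURE_EGRESS_PER_GB = 0.087  # $/GB leaving Azure, as quoted above

def migration_cost(gb: float) -> float:
    """One-time egress fee to move `gb` gigabytes out of Azure."""
    return gb * AZURE_EGRESS_PER_GB

def payback_months(gb_stored: float, monthly_egress_gb: float) -> float:
    """Months until the one-time fee is recovered, given the egress volume
    you were previously paying Azure for each month. Ignores the (small)
    differences in storage rates and per-operation fees."""
    return migration_cost(gb_stored) / (monthly_egress_gb * AZURE_EGRESS_PER_GB)

print(round(migration_cost(1000), 2))  # one-time fee for 1 TB: 87.0
print(payback_months(1000, 1000))      # serving 1 TB/month: pays back in 1.0 month
```

If you serve less than you store, the payback period stretches accordingly: 1 TB stored but only 100 GB/month served pays back in ten months.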

R2's S3 compatibility matters because your application code likely uses the AWS SDK or an S3-compatible client. After migrating data to R2, updating your app is often as simple as changing the endpoint URL and credentials. No SDK changes, no API rewrites.

Blober connects to R2 using the same S3 protocol, so the transfer is seamless.

R2 is excellent for serving files and eliminating egress. But Azure has features that R2 does not:

  • Storage tiers (Hot, Cool, Cold, Archive) for lifecycle cost optimization
  • Geo-redundant replication built into the platform
  • Azure Functions and event triggers tied to blob operations
  • Enterprise compliance certifications that some industries require

If you need those features, Azure is worth the egress premium. Many teams keep some data on Azure (for processing and compliance) and move the served/public data to R2 (for cost savings).

One-time purchase. Transfer as much data as you need.

Download Blober at blober.io

How to Transfer Files from AWS S3 to Azure Blob Storage

Transfer files from AWS S3 to Azure Blob Storage with Blober

Moving Between the Two Largest Cloud Providers


AWS S3 and Azure Blob Storage are the two most popular object storage services in the world. Companies move data between them for all sorts of reasons: switching primary cloud vendors, setting up multi-cloud redundancy, following compliance requirements, or simply taking advantage of Azure’s pricing for certain workloads.

The transfer itself is the hard part. Both providers have their own tools (AWS DataSync, Azure Data Box, AzCopy), but those tools are designed for their own ecosystem. Cross-cloud transfers with native tools usually require intermediate steps, scripting, or third-party managed services that charge per-GB.

You can download from S3 using the AWS CLI and upload to Azure using AzCopy. This requires local disk space for the intermediate files, separate authentication for each tool, and scripting to coordinate the two.

Services like Flexify charge per-GiB transferred. For large migrations (10 TB+), the fees add up. Your data also routes through their infrastructure, which may not meet compliance requirements.

rclone supports both S3 and Azure Blob. It works, but you need to configure both remotes, handle multipart upload settings, and manage the transfer from the command line.

Blober connects to both AWS S3 and Azure Blob Storage natively. You configure both providers with their respective credentials, create a workflow, and run the transfer. Files stream from S3 through your machine to Azure without intermediate storage.

What Blober Does That Matters for This Transfer


Parallel uploads to Azure. Blober uses Azure’s uploadStream with configurable concurrency. Large files are streamed in parallel chunks, which makes a noticeable difference on fast connections.

S3 multipart downloads. On the source side, Blober downloads from S3 using the AWS SDK with multipart support. Large objects do not bottleneck the pipeline.

Azure tier selection. When setting up Azure as your destination, you choose which storage tier new blobs land on: Hot, Cool, Cold, or Archive. This means you can migrate directly to the tier that matches your access pattern without a second step to change tiers after upload.

Write behavior options. You can configure Blober to overwrite existing blobs, skip files that already exist at the destination, or skip only archived blobs. This is useful for incremental migrations where you want to resume without re-transferring what is already there.

  1. Connect AWS S3: Add S3 as a provider with your Access Key ID, Secret Access Key, and region. Blober lists your buckets.
  2. Connect Azure Blob: Add Azure Blob Storage with your connection string. Blober verifies and lists your containers.
  3. Create a workflow: Set S3 as source, Azure Blob as destination. Browse and select files or entire buckets.
  4. Choose Azure options: Pick the storage tier and write behavior.
  5. Run: Blober transfers with progress tracking and auto-resume.

If your S3 bucket is in us-east-1 and your Azure storage account is in westeurope, Blober handles the cross-region transfer. S3’s cross-region copy limitations (which affect native S3-to-S3 copies) do not apply here because the data flows through your machine.

The tradeoff is that transfer speed depends on your internet connection. For very large migrations (50 TB+), this is slower than a datacenter-to-datacenter transfer. But for migrations under 10 TB, running through Blober on a fast connection is often faster than coordinating managed services.

| | AWS S3 Standard | Azure Blob Hot | Azure Blob Cool |
|---|---|---|---|
| Storage (per TB/mo) | $23 | $18 | $10 |
| Egress (per GB) | $0.09 | $0.087 | $0.087 |
| PUT requests (per 10K) | $0.005 | $0.065 | $0.10 |

Azure is generally cheaper for storage. S3 is cheaper for write-heavy workloads. The right choice depends on your access patterns.
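Using the list prices in the table, a rough monthly comparison looks like this. It is a sketch only: it ignores egress, read operations, and minimum-retention charges, and the provider keys are illustrative:

```python
# $ per TB-month of storage and $ per 10,000 PUT requests, from the table above
PRICES = {
    "s3-standard": {"storage_tb": 23.0, "put_10k": 0.005},
    "azure-hot":   {"storage_tb": 18.0, "put_10k": 0.065},
    "azure-cool":  {"storage_tb": 10.0, "put_10k": 0.10},
}

def monthly_cost(provider: str, tb_stored: float, put_requests: int) -> float:
    """Storage plus write-request cost for one month, from list prices."""
    p = PRICES[provider]
    return tb_stored * p["storage_tb"] + (put_requests / 10_000) * p["put_10k"]

# 5 TB of mostly-static archive with few writes: Azure Cool wins on storage.
print(round(monthly_cost("s3-standard", 5, 10_000), 3))  # 115.005
print(round(monthly_cost("azure-cool", 5, 10_000), 3))   # 50.1
```

Run the same numbers with millions of PUT requests and the ordering flips, which is the write-heavy case where S3's cheaper request pricing dominates.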

One-time purchase. No per-GB fees, no subscription.

Download Blober at blober.io