Rabata.io: Where It Fits and Where It Doesn't

Benchmarks, pricing, and how it compares with AWS S3, Backblaze B2, Cloudflare R2, Wasabi, and iDrive e2

Rabata.io is an S3-compatible object storage provider from RCS Technologies (UK) with two products. Hot Storage is general-purpose object storage at $0.01/GB/month in us-east-1, designed for applications, media, and frequently accessed data. Backup is bulk archival storage at a flat $49 per 10 TB in eu-west-2, intended for backups, disaster recovery, and cold data. Both use standard AWS SigV4 authentication, work with any S3 SDK or CLI, and require no code changes to migrate from AWS S3: you swap the endpoint and credentials.

That is the entire product. No compute layer, no managed databases, no dashboard file browser: you cannot preview or view objects through Rabata’s web UI, so you need an S3 client or a tool like Blober to actually see what’s in your buckets. Just storage with an S3 API.

In September 2025, Rabata published benchmarks run with MinIO warp v1.0.7 (released January 2025, since superseded by v1.5.0) on a Debian 13 VM in us-east-1 with 8 concurrent threads. The methodology is public.

According to their numbers, Rabata wins upload speed by a small margin (1,462 MB/s vs AWS's 1,444 MB/s) and mixed operations by 2.3x over AWS. It loses on downloads to both Backblaze (2,075 MB/s) and AWS (1,816 MB/s), and loses small-object throughput to iDrive e2 (696 ops/s vs iDrive's 962).

The mixed operations number is the most relevant for production workloads. Real applications read, write, list, stat, and delete concurrently. Rabata scored 2.3x higher than AWS S3 in that test.

These are same-region tests (us-east-1 to us-east-1). Performance from other geographies is unknown, and Rabata only operates in two regions. The runs are 30 seconds to 10 minutes with 8 threads, so they measure burst, not sustained multi-TB daily throughput over months. The warp version used (v1.0.7, January 2025) was already 8 months old at the time of testing and is now over a year outdated, and newer versions may produce different results. AWS S3 publishes 99.999999999% durability. Rabata publishes no durability SLA, and their terms include a broad “as is” disclaimer with zero liability for data loss.

Rabata fits a specific profile:

Write-heavy S3 workloads that need to stay cheap. If you’re ingesting backup pipelines, media uploads, log aggregation, or AI training data, and your bottleneck is upload throughput plus cost, Rabata’s upload speed at $0.01/GB is competitive, roughly 57% less than AWS’s $0.023/GB first-tier pricing (AWS discounts at volume).

The Backup tier at $49/10TB ($0.0048/GB) is priced below Backblaze B2 ($6.95/TB, ~$0.007/GB) and Wasabi ($6.99/TB, ~$0.007/GB, increasing to $7.99/TB in July 2026). Wasabi enforces a 90-day minimum retention; Rabata's Backup tier has no documented minimum. Two caveats: egress is capped at 2x your stored amount, and billing is in 10 TB increments rounded up, so storing 1 TB costs the same as storing 10 TB.

GDPR-compliant EU storage. The eu-west-2 Backup tier gives you EU data residency, which Rabata calls out explicitly. Worth noting: Rabata’s parent company (RCS Technologies) operates under UK law, not EU law. Hetzner also offers EU-based S3-compatible storage with three EU regions (NBG1, FSN1, HEL1) versus Rabata’s single EU region. For European companies that need S3-compatible storage with data residency guarantees, both are worth evaluating.

No-friction evaluation. A 30-day trial with no credit card required, per Rabata's signup page.
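The Backup tier's rounding and egress-cap rules above can be sketched in a few lines; the figures are the article's, not an official calculator.

```python
import math

PRICE_PER_10TB = 49.00  # USD/month, Backup tier (article's figure)

def backup_monthly_cost(stored_tb: float) -> float:
    # Billing is in 10 TB increments, rounded up: storing 1 TB bills as 10 TB.
    increments = max(1, math.ceil(stored_tb / 10))
    return increments * PRICE_PER_10TB

def egress_cap_tb(stored_tb: float) -> float:
    # Egress is capped at 2x the stored amount.
    return 2 * stored_tb
```

Storing 1 TB or 10 TB both bill at $49/month; 12 TB rounds up to two increments at $98/month.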

Where it doesn't fit:

  • Download-heavy workloads. If you're serving content to users, Backblaze B2 (2,075 MB/s downloads, ~$0.007/GB) or Cloudflare R2 ($0.015/GB storage, zero egress, weak throughput but free delivery) are better choices, depending on whether you're optimizing for speed or cost.
  • Global distribution. Two regions. If you need worldwide low-latency access, this is not the product.
  • Enterprise compliance requirements. No published durability SLA, no SOC 2 mention, limited public track record, benchmarks not independently verified.
  • Ecosystem depth. No lifecycle policies, no event notifications, no cross-region replication, no versioning (or at least none documented), no dashboard file browser. AWS S3 has all of these. Rabata does not.

Based on Rabata's own benchmarks (not independently verified), Rabata offers three things at once that no other single provider does:

  1. Fastest mixed workload performance in their published benchmarks
  2. Simple pricing at $0.01/GB with $0.01/GB egress (Backup tier: egress capped at 2x storage)
  3. No-barrier trial with no credit card required

AWS is faster on downloads but 2-3x more expensive. Backblaze is comparable on storage (~$0.007/GB) but slower on uploads. Cloudflare R2 has zero egress but performs 3-8x worse. Wasabi has no egress fees but enforces 90-day minimums. iDrive wins on small objects but falls behind on mixed workloads.

If your workload is “ingest data via S3 API, store it cheaply, occasionally read it back,” Rabata is worth testing. If your workload needs more features, more regions, or a long track record, look elsewhere.

Blober supports Rabata.io as a native provider. Connect with your access key and secret key, and Blober detects your buckets across both regions (Hot Storage and Backup). You can use Rabata as a source or destination in any workflow: migrate to it from AWS S3, sync from Dropbox, back up from Google Drive, or download files from Rabata to your local machine. Since Rabata’s dashboard has no built-in file browser, Blober is one of the easiest ways to actually see and manage what’s in your buckets.

What Blober supports with Rabata:

  • Browse: list buckets and objects across both regions (something Rabata’s own dashboard doesn’t offer)
  • Upload: write files to Hot Storage or Backup buckets
  • Download: pull files from Rabata to local storage or stream to another provider
  • Copy/Move: transfer objects between buckets
  • Delete: remove objects

Blober handles the region routing automatically. If a bucket lives in eu-west-2, operations go through the eu-west-2 endpoint. No manual configuration needed.

For setup details, see the Rabata.io provider documentation.

How to Cancel GoPro Plus Without Losing Your Footage

Cancel GoPro Plus without losing your footage by downloading everything with Blober

GoPro Plus costs $49.99/year. It gives you unlimited cloud storage for your GoPro footage, camera replacement coverage, and discounts on accessories. For active GoPro users, that’s a reasonable deal.

The problem shows up when you want to leave.

GoPro Plus auto-uploads your footage to GoPro Cloud. Over time, you might have hundreds of gigabytes sitting there. When you cancel, you lose access to those files. GoPro provides no bulk export tool and no public API, and the web interface lets you download at most 25 files at a time in zip bundles.

If you have 500 videos from two years of travel, surfing, or family events, downloading them 25 at a time is not practical. And the zip downloads often fail on larger batches.

When your GoPro Plus subscription ends:

  • You can no longer view or access your cloud footage
  • Your files remain on GoPro’s servers for a limited time (the exact retention policy is not published)
  • No third-party tool has API access to help you
  • You lose camera replacement coverage and store discounts

The footage does not transfer anywhere. It sits in GoPro’s cloud until they delete it. If you did not download it before cancelling, it may be gone.

Blober is the only desktop app that connects to GoPro Cloud. It was built specifically because no other tool can access GoPro’s proprietary storage system.

Step 1: Download Blober and Connect GoPro Cloud

Install Blober on your Mac, Windows, or Linux computer. Add GoPro Cloud as a provider and sign in with your GoPro account. Blober captures your session and gives you a visual file browser showing your entire cloud library.

You have several options:

Local hard drive or SSD
The simplest option. Select all your GoPro Cloud files, pick a local folder as the destination, and transfer. Your footage downloads to your computer at full quality.

External drive or NAS
If your internal drive does not have enough space, point Blober to an external drive, SD card, or network-attached storage (Synology, QNAP, etc.).

Backblaze B2 (cheapest cloud option)
If you want your footage in the cloud but do not want to pay $49.99/year, Backblaze B2 stores data at $6.95/TB/month. For 1 TB of GoPro footage, that is about $83/year with no subscription lock-in, no download limits, and full API access.

Dropbox, Google Drive, or AWS S3
If you already use another cloud provider, Blober can transfer your GoPro footage directly there. No double-download needed.

Select your files (or select all), choose the destination, and click run. Blober transfers with parallel streams, auto-resume on failure, and progress tracking. For large libraries, you can leave it running overnight.

Once your footage is safely stored elsewhere, cancel your subscription through the GoPro app or website. Your files are yours, on storage you control.

Cost Comparison: GoPro Plus vs Alternatives

| Storage Option | Cost (1 TB/year) | Download Limits | API Access |
| --- | --- | --- | --- |
| GoPro Plus | $49.99/year | 25 files at a time | None |
| Backblaze B2 | ~$83/year | Unlimited | S3-compatible |
| Wasabi | ~$84/year | Unlimited | S3-compatible |
| Local hard drive | One-time ~$40 (4 TB HDD) | N/A | N/A |
| Google Drive (2 TB) | $100/year | Unlimited | Yes |

GoPro Plus is actually the cheapest cloud option per TB, but it comes with restrictions that the others do not have: no bulk downloads, no third-party tool access, and your footage is inaccessible the moment you cancel.

This is not a case of “just use rclone” or “try MultCloud.” GoPro Cloud is a proprietary system with no published API. No transfer tool, CLI, or cloud sync service has ever supported it.

  • rclone: No GoPro backend. Never had one.
  • MultCloud: Does not list GoPro Cloud as a provider.
  • Flexify: No GoPro support.
  • CloudHQ, Mover, Movebot: None support GoPro Cloud.

Blober connects to GoPro Cloud through the same authentication path as GoPro’s own web app. It is the only third-party tool that can read, download, and transfer your GoPro Cloud files.

Not everyone needs to cancel. If you shoot regularly and use GoPro’s highlight tools, Plus is a solid deal. But even if you keep your subscription, having a backup somewhere else is just good practice.

Use Blober to mirror your GoPro Cloud to a local drive or Backblaze B2 as a safety net. That way, if GoPro changes their terms, raises prices, or has a service issue, your footage is protected.

Blober is a one-time purchase with a lifetime license. No subscription, no per-GB fees.

Download Blober at blober.io

How to Change Azure Blob Storage Tiers Without Re-Uploading

Change Azure Blob Storage tiers without code using Blober mutations

Azure Storage Tiers and the Problem with Managing Them

Azure Blob Storage offers four access tiers: Hot, Cool, Cold, and Archive. Each tier has different storage and retrieval costs. The idea is straightforward: keep frequently accessed data on Hot, move older data to Cool or Cold, and archive rarely needed files to Archive for the lowest per-GB rate.

In practice, managing tiers is not that simple. Azure Portal lets you change tiers one blob at a time. For bulk changes, Microsoft points you to PowerShell scripts, Azure CLI, or lifecycle management policies. If you want to move 500 blobs from Hot to Archive, you are either clicking through the portal for an hour or writing and testing a script.

Lifecycle policies help with automated transitions, but they operate on rules and schedules. They are not designed for the case where you look at a set of files and decide, right now, that these specific blobs need to be on a different tier.

Blober is a desktop app that connects to Azure Blob Storage as one of its supported providers. Beyond the usual read, write, list, and delete operations, Blober supports something called mutations for Azure Blob. Mutations let you change properties of existing blobs without transferring any data.

Today, Blober supports two types of Azure mutations:

Tier changes. Select any number of blobs in the Blober file browser, choose a target tier (Hot, Cool, Cold, or Archive), and run the mutation. Every selected blob moves to the new tier. No re-upload, no script, no waiting for a lifecycle policy to kick in.

This is useful when you realize a project is finished and its assets should move to Archive, or when you need to bring archived files back to Cool for a review cycle.

Access level changes. Azure containers can be set to Private, Blob-level public access, or Container-level public access. Changing access levels usually means navigating to each container in the portal and updating the setting. With Blober, you select the containers you want to modify, pick the access level, and apply.

Say you run a media production company. You have a container called project-alpine-2025 with 800 GB of raw footage sitting on Hot storage. The project wrapped three months ago and no one is accessing those files. You are paying Hot rates for storage that should be on Archive.

With Azure CLI, you would write something like:

az storage blob list \
  --account-name <storage-account> \
  --container-name project-alpine-2025 \
  --num-results "*" \
  --query "[].name" --output tsv | \
while read -r name; do
  az storage blob set-tier \
    --account-name <storage-account> \
    --container-name project-alpine-2025 \
    --name "$name" --tier Archive
done

This works, but you need to set up authentication, handle pagination for large containers, deal with blobs that are already archived, and test the script before running it on production data.

With Blober, you open your Azure Blob connection, navigate to the container, select all files, choose “Archive” as the target tier, and click run. Done.

Tier changes and access levels are the first mutations Blober supports for Azure. The architecture is designed to extend this to other providers and other types of modifications. Future mutations could include things like metadata updates, blob tagging, or replication settings. The goal is to give you the same visual, point-and-click control over blob properties that you already have for transfers.

Connecting Azure to Blober takes about a minute:

  1. Open Blober and add a new provider
  2. Select Azure Blob Storage
  3. Paste your connection string (the same one you would use with Azure Storage Explorer or the SDK)
  4. Blober verifies the connection and lists your containers

From there, you can browse blobs, transfer files to or from Azure, and run mutations on existing blobs.

When using Azure as a destination, Blober lets you configure:

  • Storage Tier: Choose which tier new uploads land on (Hot, Cool, Cold, or Archive)
  • Write Behavior: Overwrite existing blobs, skip if a blob already exists, or skip only if the blob is archived

These options are set per-workflow, so you can have one workflow that uploads to Hot and another that uploads directly to Archive.

Who this fits:

  • DevOps teams managing storage costs across multiple containers and projects
  • Media companies archiving completed project assets
  • Backup administrators moving cold data to cheaper tiers
  • Anyone who has outgrown Azure Portal’s one-blob-at-a-time tier management

Blober is a one-time purchase. No subscription, no per-GB fees, no account required.

Download Blober at blober.io

How to Migrate from DigitalOcean Spaces to AWS S3

Migrate from DigitalOcean Spaces to AWS S3 with Blober

DigitalOcean Spaces is a good starting point for object storage. It is simple, affordable ($5/month for 250 GB + 1 TB transfer), and S3-compatible. For small to mid-size projects, it does the job.

But as your storage needs grow, you run into limitations:

  • Region constraints. Spaces are region-scoped. Each region only sees its own Spaces. Cross-region replication is not available.
  • No storage tiers. Everything is stored at the same tier. There is no equivalent to S3’s Glacier or Intelligent-Tiering for cost optimization.
  • Limited ecosystem. AWS S3 integrates with hundreds of services: Lambda, CloudFront, Athena, Step Functions, SageMaker. DigitalOcean’s ecosystem is smaller.
  • Bandwidth limits. The included 1 TB transfer can be burned through quickly on busy applications.

When a project outgrows Spaces, AWS S3 is the most common destination.

DigitalOcean runs Spaces across 7 regions: NYC3, SFO3, AMS3, SGP1, FRA1, SYD1, and BLR1. If you have Spaces in multiple regions, you need to handle each region separately.
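Per-region discovery is mechanical because Spaces endpoints follow a fixed pattern ({region}.digitaloceanspaces.com). A sketch of the loop a tool would run, assuming those seven region slugs:

```python
# Sketch: DigitalOcean Spaces endpoints follow {region}.digitaloceanspaces.com,
# so discovering Spaces across regions is a loop over the known region slugs.
SPACES_REGIONS = ("nyc3", "sfo3", "ams3", "sgp1", "fra1", "syd1", "blr1")

def spaces_endpoint(region: str) -> str:
    return f"https://{region}.digitaloceanspaces.com"

# With any S3 client you would create one connection per endpoint and
# merge the bucket listings into a single view.
endpoints = [spaces_endpoint(r) for r in SPACES_REGIONS]
```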

Blober detects all your Spaces across all DigitalOcean regions automatically. When you connect your DigitalOcean account, Blober probes all 7 regions in parallel and presents a unified view of all your Spaces. You do not need to configure each region separately.

DigitalOcean recently introduced cold storage tiers for Spaces. Blober detects whether a Space is using Standard or Cold storage and flags it accordingly. This helps you make informed decisions about which S3 storage class to target.

Add DigitalOcean Spaces as a provider in Blober. You can use either:

  • S3-compatible credentials (Access Key + Secret Key) for basic access
  • Personal Access Token for richer bucket listing with project metadata

Blober discovers all your Spaces across all regions.

Add AWS S3 with your Access Key ID, Secret Access Key, and preferred region. Blober lists your S3 buckets.

Create a workflow with DigitalOcean as the source and S3 as the destination. Browse your Spaces, select files or entire Spaces, and choose the target S3 bucket and storage class.

Options for the destination:

  • Storage class: Standard, Intelligent-Tiering, Standard-IA, Glacier Instant Retrieval, Glacier Flexible Retrieval, or Glacier Deep Archive
  • Target bucket: Any existing S3 bucket (or create one in the AWS console first)

Blober handles the transfer with parallel multipart uploads on both sides. S3-to-S3-compatible transfers are efficient because both services speak the same protocol.

| | DigitalOcean Spaces | AWS S3 Standard | AWS S3 Standard-IA |
| --- | --- | --- | --- |
| Storage (1 TB) | $5/mo (250 GB included) + $20/mo extra | $23/mo | $12.50/mo |
| Bandwidth (1 TB) | Included | $90/mo | $90/mo |
| PUT requests (100K) | $0.50 | $0.50 | $1.00 |

DigitalOcean is cheaper for simple, low-traffic use cases. S3 is more cost-effective at scale with its tiering options, especially if you use Intelligent-Tiering or Glacier for archival data.

One-time purchase. Transfer as much as you need.

Download Blober at blober.io

How to Migrate from Google Drive to Backblaze B2

Migrate Google Drive files to Backblaze B2 with Blober

Why Move from Google Drive to Backblaze B2?

Google Drive is a collaboration tool with storage built in. Backblaze B2 is pure storage built for scale. The reasons people move between them usually come down to one or more of these:

  • Cost. Google One charges $100/year for 2 TB; Backblaze B2 charges $6.95/TB/month. At small volumes Google is cheaper per TB, but consumer Drive plans cap out while B2 scales linearly, so for 10+ TB of media, raw footage, or project archives, B2 is often the significantly cheaper option.
  • Control. B2 gives you S3-compatible API access, which means you can integrate it with backup tools, CDNs, media workflows, and custom applications. Google Drive’s API is more limited for bulk operations.
  • Redundancy. Keeping a copy of your Google Drive data in B2 means you are not dependent on a single provider. If Google changes pricing, restricts your account, or has an outage, your files are safe elsewhere.

Google Drive stores native files (Docs, Sheets, Slides) as cloud-only application states, not as downloadable files. When you need them outside of Google, they must be converted to Office formats first.

Google Takeout can export your Drive, but it takes hours, produces fragmented zip archives, and flattens your folder structure. For a migration to B2 specifically, Takeout is especially awkward because you would need to download everything locally, extract it, then upload it to B2 using a separate tool.

Blober connects to both Google Drive and Backblaze B2. It handles the tricky parts automatically:

  • Google Docs become .docx files during transfer
  • Google Sheets become .xlsx files during transfer
  • Google Slides become .pptx files during transfer
  • Regular files (photos, videos, PDFs) transfer as-is
  • Folder structure is preserved in your B2 bucket
  • Shared files are accessible through a “Shared with me” virtual folder
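The Docs/Sheets/Slides conversions listed above correspond to Google's documented Drive MIME types and the standard Office export formats. A minimal mapping (the helper name is ours, not Blober's):

```python
# Google-native Drive MIME types mapped to their Office export formats.
EXPORT_MAP = {
    "application/vnd.google-apps.document":
        ("application/vnd.openxmlformats-officedocument.wordprocessingml.document", ".docx"),
    "application/vnd.google-apps.spreadsheet":
        ("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", ".xlsx"),
    "application/vnd.google-apps.presentation":
        ("application/vnd.openxmlformats-officedocument.presentationml.presentation", ".pptx"),
}

def export_target(mime):
    """Return (export MIME, extension) for a Google-native file, else None."""
    return EXPORT_MAP.get(mime)
```

Regular files (photos, videos, PDFs) have ordinary MIME types, fall through to None, and transfer as-is.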
The migration takes four steps:

  1. Connect Google Drive: Add Google Drive as a provider in Blober. OAuth login through your browser.
  2. Connect Backblaze B2: Add B2 with your Application Key ID and Application Key. Blober auto-detects your bucket regions.
  3. Create a workflow: Set Google Drive as source, B2 as destination. Browse and select files or folders.
  4. Run: Blober streams files from Google Drive to B2 through your machine. No local storage needed for intermediate files.
| | Google One (2 TB) | Backblaze B2 (2 TB) |
| --- | --- | --- |
| Monthly | $8.33 | ~$14 |
| Annual | $100 | ~$167 |
| 5 TB | $25/month (Google One Premium) | ~$35/month |
| 10 TB+ | Not available on consumer plans | ~$70/month |
| Egress | Free (via Drive sync/download) | Free up to 3x stored |
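The B2 column follows directly from the flat $6.95/TB/month rate quoted earlier:

```python
# Check of the B2 figures at the $6.95/TB/month rate.
B2_PER_TB_MONTH = 6.95

def b2_monthly(tb):
    return round(tb * B2_PER_TB_MONTH, 2)
```

2 TB works out to $13.90/month (about $167/year), 5 TB to $34.75/month, and 10 TB to $69.50/month, matching the rounded table values.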

For small amounts of active data, Google Drive is the better deal. For large archives, backups, and media libraries that you rarely access, B2’s pay-for-what-you-use model wins.

Many people do not fully leave Google Drive. Instead, they keep it for active collaboration (shared documents, team folders) and move everything else to B2:

  • Current projects stay in Google Drive for real-time editing
  • Completed projects, old photos, and archives go to Backblaze B2
  • Blober handles the transfer once, then you adjust your Google storage plan

This hybrid approach gives you the best of both: Google’s collaboration features for active work and B2’s affordable storage for everything else.

One-time purchase. No subscription, no per-GB fees.

Download Blober at blober.io