Where CivNode's pictures live, and what the storage bill actually looks like

by quark
Earlier today I moved every file CivNode owns, from book covers to AI portraits to the nightly database dumps, out of Cloudflare R2 in the United States and into OVHcloud Object Storage in Frankfurt. Two thousand two hundred objects. Just over a gigabyte of everything we have so far. It took about four minutes and nobody noticed. I wanted to write down why I bothered, and what the alternatives looked like on paper. If you run a small creative platform and you are wondering where to keep your images, this is the honest shortlist.

What I was actually comparing

At ten terabytes, which is the number I use for 'somewhere between today and the day this starts to matter', the price gap between providers is real but smaller than you would expect. What matters more is egress. Egress is the fee a provider charges every time a browser asks for a file. On AWS S3 a single popular image can cost more in bandwidth over a year than it cost to store. On Cloudflare R2 egress is free, which is why we were there in the first place. On OVH and Hetzner it is effectively free at our volumes, because the public cloud plan already includes enough bandwidth to swamp what we send. On Scaleway there is a generous free tier and then a penny per gigabyte. Most people forget to count egress until they open the bill.

The numbers I actually collected

Cloudflare R2. A cent and a half per gigabyte per month for storage, zero egress, and two separate per-million-request fees that almost never bite at our scale. Ten terabytes lands at a hundred and fifty dollars a month of storage plus a few dollars of operation fees. Flat, predictable, and the free egress promise is real. There is a genuine free tier of ten gigabytes, and CivNode sat inside it comfortably for its whole first month of traffic.

OVHcloud Standard Object Storage, Frankfurt. Around seven tenths of a cent per gigabyte per month. Ten terabytes costs roughly seventy euros. Traffic inside the region is uncharged at our volumes. You pay for actual bytes on disk and almost nothing else. The S3 API compatibility is complete enough that the switch from R2 to OVH was a single environment variable.

Scaleway, Paris. Just over a cent per gigabyte per month for storage, seventy five gigabytes of free monthly egress, a penny per gigabyte after that. Ten terabytes lands around a hundred and twenty euros plus egress. A good middle ground if you want a French provider with a real free tier for small projects.

Bunny Storage, single region. A cent per gigabyte per month for a plain single-region bucket, two cents if you want replication across continents. Their pull zones make the bandwidth itself cheap. Ten terabytes costs a flat hundred dollars, and for a site that serves mostly static assets through their CDN the total bill can be the lowest of anyone on this list.

Hetzner Object Storage. About six tenths of a cent per gigabyte per month, with egress metered at around a cent per gigabyte. They are newer at this than the others, the S3 API implementation is complete, and their Nuremberg buckets sit in the same data centre as the application server that runs CivNode. Roughly seventy euros a month for ten terabytes.

AWS S3. Two and a third cents per gigabyte per month for standard storage, nine cents per gigabyte for egress. Ten terabytes of storage is two hundred and thirty dollars. The moment users start actually loading images, the egress bill shows you why nobody building a public site stays on raw S3 in 2026.

Infomaniak Public Cloud, Geneva. The Swiss option. Around thirteen euros per terabyte per month for object storage, with your data held under Swiss privacy law rather than EU or US law. Ten terabytes is a hundred and thirty euros. A little more expensive than the French or German providers, a lot cheaper than AWS, and useful if the people uploading care about the difference between the EU and Switzerland.
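The comparison above boils down to one line of arithmetic: monthly cost is storage rate times bytes kept, plus egress rate times bytes served. Here is a small sketch using the per-gigabyte figures quoted above, for the providers whose storage and egress rates the list gives in full. The rates are the article's round numbers, currencies are mixed (dollars and euros) exactly as in the text, and free tiers are ignored, so treat it strictly as back-of-envelope ordering, not a billing calculator.

```python
# Back-of-envelope blended monthly cost: storage + egress.
# Rates are the per-GB figures quoted in the list above; currencies
# are mixed as in the text, so only the rough ordering is meaningful.
PROVIDERS = {
    # name: (storage per GB per month, egress per GB)
    "Cloudflare R2": (0.015, 0.0),
    "OVH Standard":  (0.007, 0.0),   # uncharged at small volumes
    "Scaleway":      (0.012, 0.01),  # after the 75 GB free tier
    "Bunny single":  (0.010, 0.0),   # bandwidth priced separately via pull zones
    "Hetzner":       (0.006, 0.01),
    "AWS S3":        (0.023, 0.09),
}

def monthly_cost(name, stored_gb, egress_gb=0):
    """Blended monthly bill for one provider, ignoring free tiers
    and request fees."""
    storage_rate, egress_rate = PROVIDERS[name]
    return stored_gb * storage_rate + egress_gb * egress_rate

# 10 TB stored, 1 TB of monthly egress:
for name in PROVIDERS:
    print(f"{name:14} {monthly_cost(name, 10_000, 1_000):8.0f}")
```

Run it with the egress column at zero and the providers nearly tie; add a terabyte of monthly traffic and S3 pulls away from the pack, which is the whole point of the section.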
Why OVH, specifically

For a platform like CivNode the real question is not 'which is cheapest' but 'which gives me the lowest blended cost at the scale I expect to hit, with the fewest surprises'. R2 was the right answer when we launched, because the free egress tier meant we did not have to think about bandwidth at all. That logic still holds up. It is an excellent product. What changed for me is that CivNode is European, the servers are in Nuremberg, and the users so far have been European too. Sending every image request across the Atlantic to ask Cloudflare for it, then sending the bytes back, felt wrong in a way I could not put in a spreadsheet. OVH Frankfurt sits a few dozen kilometres from our application server. The round trip is short enough that the browser cache and the nginx proxy cache both land inside the same cup of coffee.

There is a second reason that is harder to quantify. OVH prices object storage per byte and does not charge separately for reads, writes, or listing calls. The bill is a function of how much you store. I like bills that are legible. I like knowing that a burst of popularity does not create a surprise the following month.

And there is a third, cruder reason, which is that I was already paying for an OVH public cloud project that existed only to hold CivNode files, and Cloudflare was charging me the same on top of that. The simplest thing you can do with your infrastructure bill is notice when you are paying twice.

What R2 was good at

I do not want to write R2 off. It is the best egress-free object store in the world, the developer experience is excellent, and the S3 API compatibility is the cleanest of anyone on this list. If CivNode had launched globally from day one, or if we were serving a huge amount of video, I would have kept it without hesitation. Cloudflare also gave us free credits early on, they answered support questions quickly, and the dashboard never lost a bucket. That counts for something. The story here is not 'R2 is bad'. The story is 'our shape changed, and the new shape fits OVH better'.

What the move actually looked like

Two thousand two hundred and nineteen objects across the two R2 buckets. The sync tool I wrote read the list of keys from both, uploaded each one to the new OVH bucket in parallel with eight workers, skipped anything that was already there, and logged progress every twenty five copied files. Three minutes forty one seconds. Zero failures.

The runtime cutover was a single environment variable. When CIVNODE_S3_ENDPOINT is set, the storage layer talks to that endpoint instead of building the Cloudflare URL from our R2 account id. When the variable is empty, nothing changes. I committed the code, pushed to main, waited for CI to deploy the binary, edited the env file on the server, restarted the app and the backup sidecar. The paged reader and the background images came back from their new home in Frankfurt with no user-visible gap.

The R2 buckets are still sitting there, still populated, still reachable if anything on OVH goes sideways in the next week. After the watch period I will empty them and turn the account off.

What the bill looks like now

A little over five euros a month for what we have today. Trending toward something close to seventy a month when we fill out to ten terabytes of user work, chapter images, book covers, and the nightly database dumps. It is not a dramatic saving at our current size. It will matter at the next zero.
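For the curious: the sync tool itself is not shown above, but the shape it describes, list the keys, skip what already exists, copy with eight workers, log every twenty five files, is a small worker pool. This is a sketch of that shape, not CivNode's actual code; the name copy_missing and the injected copy_one callable are mine, standing in for the real download-from-R2 / upload-to-OVH round trip.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def copy_missing(source_keys, dest_keys, copy_one,
                 workers=8, log_every=25):
    """Copy every key present in the source but missing from the
    destination, using a small worker pool. copy_one(key) performs
    the actual per-object transfer."""
    todo = [k for k in source_keys if k not in dest_keys]
    done = 0
    lock = threading.Lock()

    def worker(key):
        nonlocal done
        copy_one(key)
        with lock:              # progress counter shared across workers
            done += 1
            if done % log_every == 0:
                print(f"copied {done}/{len(todo)}")

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # list() forces iteration so any worker exception surfaces here
        list(pool.map(worker, todo))
    return len(todo)
```

In a real migration copy_one would wrap two S3 clients pointed at different endpoints; because the function only retries keys that are still missing, rerunning the whole thing after a failure is cheap, which is what makes "skipped anything that was already there" the load-bearing feature.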
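The cutover logic reads as two sentences and is worth seeing as two branches. A minimal sketch of the behaviour described, override endpoint when CIVNODE_S3_ENDPOINT is set, otherwise build the R2 URL from the account id, assuming Python and a placeholder account id (CivNode's real storage layer and its identifiers are not shown in the post):

```python
import os

R2_ACCOUNT_ID = "example-account-id"  # placeholder, not the real id

def s3_endpoint():
    """Pick the object-storage endpoint. If CIVNODE_S3_ENDPOINT is
    set, talk to it directly (OVH after the move); otherwise fall back
    to the Cloudflare R2 URL built from the account id, so leaving the
    variable empty changes nothing."""
    override = os.environ.get("CIVNODE_S3_ENDPOINT", "").strip()
    if override:
        return override
    return f"https://{R2_ACCOUNT_ID}.r2.cloudflarestorage.com"
```

The appeal of this pattern is that the rollback path is the same as the deploy path: blank the variable, restart, and you are back on R2, which is exactly why the old buckets can sit untouched through the watch period.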