How to Set Up Cloudflare R2 in 2026 — Complete Step-by-Step Guide
Cloudflare R2 is S3-compatible object storage with one critical difference: zero egress fees. AWS S3 charges $0.09 per GB of data transferred out — R2 charges nothing. For storage-heavy applications that serve a lot of data to users, that difference is enormous. This guide walks you through setting up R2 from zero to production.
What Is Cloudflare R2?
R2 is Cloudflare’s object storage product — essentially AWS S3 but without the bandwidth costs. It stores files (images, videos, backups, static assets, documents) and serves them through Cloudflare’s global network. The API is S3-compatible, so any existing tooling that works with S3 works with R2 with minimal configuration changes.
R2 free tier (per month):
- 10GB storage
- 1,000,000 Class A operations (writes, lists)
- 10,000,000 Class B operations (reads)
- Egress: always free, no limits
R2 paid pricing:
- $0.015 per GB-month of storage (after 10GB)
- $4.50 per million Class A operations (after free tier)
- $0.36 per million Class B operations (after free tier)
- Egress: $0 forever
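As a rough sketch, the free tier and overage rates above can be folded into a quick monthly cost estimate. The function name and structure below are illustrative; the thresholds and rates simply encode the published numbers.

```javascript
// Rough monthly R2 cost estimate from the published rates above.
// Inputs are monthly totals; the free tier is subtracted first.
function estimateR2MonthlyCost({ storageGB, classAMillions, classBMillions }) {
  const storage = Math.max(0, storageGB - 10) * 0.015;    // $0.015/GB after 10GB
  const classA = Math.max(0, classAMillions - 1) * 4.5;   // $4.50/M after 1M free
  const classB = Math.max(0, classBMillions - 10) * 0.36; // $0.36/M after 10M free
  return storage + classA + classB;
}

// e.g. 100GB stored, 2M writes, 20M reads in a month:
console.log(estimateR2MonthlyCost({ storageGB: 100, classAMillions: 2, classBMillions: 20 }));
```

Note that egress never appears in the formula, which is the whole point of R2.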
Prerequisites
Before starting, you need:
- A Cloudflare account (free at cloudflare.com)
- Your application’s code — this guide shows Node.js/JavaScript examples, but the concepts apply to any language
- A custom domain on Cloudflare (optional, but recommended for production)
Step 1 — Enable R2 in Your Cloudflare Account
R2 requires enabling before first use.
- Log in to the Cloudflare Dashboard
- In the left sidebar, click R2 Object Storage
- Click Enable R2
- Add a payment method if prompted — R2 won’t charge you until you exceed the free tier, but a card is required to activate
Once enabled, you’ll land on the R2 overview page showing your buckets (empty to start).
Step 2 — Create a Bucket
A bucket is the top-level container for your files, similar to a folder at the root level.
- Click Create bucket
- Enter a bucket name — use lowercase letters, numbers, and hyphens only (e.g. my-app-uploads)
- Choose a location:
- Automatic — Cloudflare picks the region closest to where data is written (recommended for most apps)
- Location hint — Suggest a region (ENAM, WNAM, WEUR, EEUR, APAC) without strict enforcement
- Jurisdiction — Enforce EU or FedRAMP data residency (paid feature)
- Click Create bucket
Naming advice: bucket names cannot be changed after creation. Use something descriptive and environment-specific — e.g. myapp-prod-uploads, myapp-staging-uploads. Avoid generic names like uploads or files.
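Those naming rules can be checked in code before you ever hit the dashboard. The validator below encodes the S3-style constraints (lowercase letters, digits, hyphens, 3 to 63 characters, starting and ending with a letter or digit); treat the exact length bounds as an assumption to verify against Cloudflare's current docs.

```javascript
// Checks a candidate bucket name: 3-63 chars, lowercase letters,
// digits, and hyphens only, starting and ending alphanumeric.
function isValidBucketName(name) {
  return /^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$/.test(name);
}
```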
Step 3 — Create an API Token
Your application accesses R2 through API tokens, not your main Cloudflare login credentials.
- On the R2 overview page, click Manage R2 API Tokens in the top-right
- Click Create API Token
- Configure the token:
- Token name — something descriptive like myapp-production-r2
- Permissions — choose based on what your app needs:
  - Admin Read & Write — full access (use for admin tools only)
  - Object Read & Write — read and write objects (most apps)
  - Object Read only — read-only access (public asset serving)
- Specify bucket(s) — select your specific bucket rather than “All buckets” for better security
- TTL — set an expiry if you want tokens to auto-rotate
- Click Create API Token
- Copy all three values immediately — you won’t see the Secret Access Key again:
  - Access Key ID
  - Secret Access Key
  - Endpoint URL (format: https://<account-id>.r2.cloudflarestorage.com)
Store these in your environment variables — never hardcode them in source code.
```
# .env
R2_ACCOUNT_ID=your_account_id
R2_ACCESS_KEY_ID=your_access_key_id
R2_SECRET_ACCESS_KEY=your_secret_access_key
R2_BUCKET_NAME=my-app-uploads
R2_ENDPOINT=https://your_account_id.r2.cloudflarestorage.com
```
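To fail fast when one of these variables is missing, a small startup guard helps; the requireEnv helper below is hypothetical app-level code, not part of any SDK, and a missing credential surfaces as a clear error at boot instead of a confusing auth failure on the first request.

```javascript
// Returns the requested variables, throwing early if any is missing.
// `source` is injectable so the check is easy to test.
function requireEnv(names, source = process.env) {
  const missing = names.filter((n) => !source[n]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((n) => [n, source[n]]));
}
```

Call it once at startup, e.g. requireEnv(['R2_ACCESS_KEY_ID', 'R2_SECRET_ACCESS_KEY', 'R2_BUCKET_NAME', 'R2_ENDPOINT']).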
Step 4 — Connect R2 to Your Application
R2 is S3-compatible, so you use the AWS SDK with a custom endpoint — no Cloudflare-specific SDK required.
Node.js / JavaScript (AWS SDK v3)
Install the S3 client:
```bash
npm install @aws-sdk/client-s3
```
Create an R2 client:
```javascript
import { S3Client } from '@aws-sdk/client-s3';

export const r2 = new S3Client({
  region: 'auto',
  endpoint: process.env.R2_ENDPOINT,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});
```
Upload a File
```javascript
import { PutObjectCommand } from '@aws-sdk/client-s3';

async function uploadFile(key, fileBuffer, contentType) {
  await r2.send(new PutObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME,
    Key: key,               // e.g. 'avatars/user-123.png'
    Body: fileBuffer,
    ContentType: contentType, // e.g. 'image/png'
  }));
  return `https://your-custom-domain.com/${key}`;
}
```
Read / Download a File
```javascript
import { GetObjectCommand } from '@aws-sdk/client-s3';

async function getFile(key) {
  const response = await r2.send(new GetObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME,
    Key: key,
  }));
  return response.Body; // ReadableStream
}
```
Delete a File
```javascript
import { DeleteObjectCommand } from '@aws-sdk/client-s3';

async function deleteFile(key) {
  await r2.send(new DeleteObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME,
    Key: key,
  }));
}
```
List Files in a Bucket
```javascript
import { ListObjectsV2Command } from '@aws-sdk/client-s3';

async function listFiles(prefix = '') {
  const response = await r2.send(new ListObjectsV2Command({
    Bucket: process.env.R2_BUCKET_NAME,
    Prefix: prefix, // e.g. 'avatars/' to list only avatars
  }));
  return response.Contents || [];
}
```
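One caveat worth knowing: ListObjectsV2 returns at most 1,000 keys per call, so larger buckets require following the continuation token. The sketch below takes an injected send function so the loop is testable without a live bucket; in real use that function would be (params) => r2.send(new ListObjectsV2Command(params)).

```javascript
// Collects every key under a prefix, following S3-style pagination.
// `send` accepts ListObjectsV2-shaped params and resolves to a page:
// { Contents, IsTruncated, NextContinuationToken }.
async function listAllKeys(send, bucket, prefix = '') {
  const keys = [];
  let token;
  do {
    const page = await send({ Bucket: bucket, Prefix: prefix, ContinuationToken: token });
    for (const obj of page.Contents ?? []) keys.push(obj.Key);
    token = page.IsTruncated ? page.NextContinuationToken : undefined;
  } while (token);
  return keys;
}
```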
Step 5 — Generate Presigned URLs (For Direct Browser Uploads)
Avoid routing uploads through your server where you can: proxying file bodies wastes bandwidth and server resources. Instead, generate a presigned URL and let the browser upload directly to R2.
The presigner ships in a separate package, @aws-sdk/s3-request-presigner, installed the same way as the S3 client:

```javascript
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

async function getUploadUrl(key, contentType, expiresInSeconds = 300) {
  const command = new PutObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME,
    Key: key,
    ContentType: contentType,
  });
  const url = await getSignedUrl(r2, command, { expiresIn: expiresInSeconds });
  return url; // Send this to the frontend
}
```
Frontend usage:
```javascript
// 1. Get presigned URL from your API
const { url } = await fetch('/api/upload-url?filename=photo.jpg').then(r => r.json());

// 2. Upload directly to R2 — bypasses your server entirely
await fetch(url, {
  method: 'PUT',
  body: file,
  headers: { 'Content-Type': file.type },
});
```
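Before requesting a presigned URL, it is worth rejecting obviously bad files on the client. The limits below (10MB, a few image types) are illustrative application policy, not R2 requirements:

```javascript
// Client-side sanity check before asking the server for an upload URL.
// Size and type limits are application policy, not R2 limits.
const MAX_UPLOAD_BYTES = 10 * 1024 * 1024; // 10MB
const ALLOWED_TYPES = ['image/png', 'image/jpeg', 'image/webp'];

function isAllowedUpload({ size, type }) {
  return size > 0 && size <= MAX_UPLOAD_BYTES && ALLOWED_TYPES.includes(type);
}
```

Remember the server must enforce the same rules when it signs the URL: anything the browser checks can be bypassed.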
Step 6 — Set Up a Custom Domain (Recommended)
By default, R2 buckets are private: the https://<account-id>.r2.cloudflarestorage.com endpoint only accepts authenticated S3 API requests, and the optional r2.dev public development URL is rate-limited and not meant for production. Set up a custom domain to serve files from something like files.yourdomain.com, with Cloudflare's CDN and caching in front.
Requirements: The domain must be on Cloudflare (proxied through Cloudflare DNS).
- Open your R2 bucket in the Cloudflare Dashboard
- Click the Settings tab
- Under Public Access, click Connect Domain
- Enter your subdomain (e.g. files.yourdomain.com)
- Cloudflare automatically creates the DNS record and SSL certificate
- Click Connect
Your files are now accessible at https://files.yourdomain.com/<key>.
Update your upload function to return the custom domain URL:
```javascript
import { PutObjectCommand } from '@aws-sdk/client-s3';

const PUBLIC_URL = 'https://files.yourdomain.com';

async function uploadFile(key, fileBuffer, contentType) {
  await r2.send(new PutObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME,
    Key: key,
    Body: fileBuffer,
    ContentType: contentType,
  }));
  return `${PUBLIC_URL}/${key}`;
}
```
Step 7 — Configure CORS (For Browser Uploads)
If your frontend uploads directly to R2, you need CORS configured on the bucket. Without it, browsers will block the upload request.
- In the Cloudflare Dashboard, open your bucket
- Click the Settings tab
- Under CORS Policy, add a configuration:
```json
[
  {
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3600
  }
]
```
For local development, add http://localhost:3000 (or your dev port) to AllowedOrigins. Never use "*" as AllowedOrigins in production — it allows any website to upload to your bucket.
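For instance, a single policy covering both production and local development might look like the following. The localhost port is an assumption (use whatever your dev server runs on), and the methods and headers are narrowed to what presigned-URL uploads actually need:

```json
[
  {
    "AllowedOrigins": ["https://yourdomain.com", "http://localhost:3000"],
    "AllowedMethods": ["GET", "PUT"],
    "AllowedHeaders": ["Content-Type"],
    "MaxAgeSeconds": 3600
  }
]
```

Narrower methods and headers than the earlier example are a reasonable default, since direct browser uploads only PUT with a Content-Type header.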
Step 8 — Using R2 with Cloudflare Workers
If you’re deploying on Cloudflare Workers, R2 has a native binding that’s faster and cheaper than using the S3 API (no external HTTP request, no API credentials needed).
In your wrangler.toml:
```toml
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "my-app-uploads"
```
In your Worker:
```javascript
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // Remove leading '/'

    if (request.method === 'GET') {
      const object = await env.MY_BUCKET.get(key);
      if (!object) return new Response('Not Found', { status: 404 });
      return new Response(object.body, {
        headers: {
          // httpMetadata may be empty if the object was stored without one
          'Content-Type': object.httpMetadata?.contentType ?? 'application/octet-stream',
        },
      });
    }

    if (request.method === 'PUT') {
      await env.MY_BUCKET.put(key, request.body, {
        httpMetadata: { contentType: request.headers.get('Content-Type') },
      });
      return new Response('Uploaded', { status: 200 });
    }

    return new Response('Method Not Allowed', { status: 405 });
  },
};
```
The Workers binding avoids all egress costs and API overhead — use it wherever possible.
R2 vs AWS S3 — When to Choose Which
| Factor | Cloudflare R2 | AWS S3 |
|---|---|---|
| Egress cost | $0 always | $0.09/GB |
| Storage cost | $0.015/GB | $0.023/GB |
| Free tier | 10GB + ops | 5GB (12 months only) |
| Global CDN | Included via Cloudflare | Needs CloudFront ($) |
| Ecosystem | Growing | Massive |
| Best for | High-egress workloads, Cloudflare stack | AWS-native apps, max ecosystem |
R2 wins on cost for any workload that serves files to users. S3 wins when you’re already deep in the AWS ecosystem (Lambda, ECS, RDS) and want native IAM integration.
Common Mistakes to Avoid
Using your main Cloudflare API key — always create a scoped R2 API token with the minimum permissions your app needs. Your main key has full account access.
Public bucket without a custom domain — exposing your account ID in URLs is avoidable and looks unprofessional. Set up a custom domain (Step 6).
Missing CORS configuration — browser uploads will silently fail without it. Test with the browser’s network tab open.
Uploading through your server — for user-generated files, use presigned URLs (Step 5). It’s faster, cheaper, and puts less load on your server.
No key organization — flat key structures become unmanageable. Use prefixes like users/{userId}/avatars/, posts/{postId}/images/, backups/2026-03/ from the start.
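That prefix discipline is easier to keep if key construction lives in one place instead of string concatenation scattered around the codebase. A tiny helper along these lines (hypothetical app-level code) does the job:

```javascript
// Builds a namespaced object key like 'users/42/avatars/photo.png',
// normalizing each segment so keys stay predictable and listable by prefix.
function objectKey(...segments) {
  return segments
    .map((s) => String(s).trim().replace(/^\/+|\/+$/g, ''))
    .filter(Boolean)
    .join('/');
}
```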
The Bottom Line
Cloudflare R2 is the best object storage choice for teams on the Cloudflare stack and anyone who serves significant amounts of data. The zero egress fee is a genuine competitive advantage — at 10TB of monthly egress, you’d pay ~$900/month on S3 vs $0 on R2. Setup takes under 30 minutes, the S3-compatible API means existing tools work out of the box, and the free tier is generous enough to run most side projects for free indefinitely.