Vercel File Storage Guide¶
Portfolio playbook for Morphism and Alawein projects deployed on Vercel.
Executive Summary¶
Vercel is excellent for deployment, request handling, preview environments, and globally distributed web delivery. It is not a durable filesystem for runtime uploads. A file written during one request may exist briefly in the current execution environment and then disappear on the next cold start, deployment, or scale-out event.
Use this rule everywhere in the portfolio:
- Static assets checked into the repo belong in the repo.
- Build output belongs to the build pipeline.
- User uploads, generated reports, AI ingestion files, and private documents belong in external object storage.
- Ownership, visibility, retention, and access policy belong in a database.
Default portfolio recommendation:
- Morphism: Supabase Storage + Postgres metadata, with AWS S3 reserved for larger private artifact workloads later.
- SaaS or dashboard apps already using Supabase: Supabase Storage.
- Simple public-facing uploads: Vercel Blob.
- Media-heavy properties: Cloudinary.
- Research, datasets, batch exports, and cross-service artifacts: AWS S3.
Quick Recommendation Matrix¶
| Use case | Default choice | Why |
|---|---|---|
| Next.js or React app with auth and existing Supabase | Supabase Storage | auth-aware buckets, signed URLs, metadata in Postgres |
| Small Vercel-hosted app with simple public uploads | Vercel Blob | fastest setup, Vercel-native DX |
| Large exports, datasets, checkpoints, research artifacts | AWS S3 | durable, scalable, presigned upload/download flows |
| Image- or video-heavy experience | Cloudinary | transformations, optimized delivery, media workflows |
| Multi-tenant SaaS with private documents and audit trails | Supabase Storage or S3 + DB metadata | strong access control and lifecycle management |
Vercel Storage Reality¶
What Vercel Does Not Provide¶
Vercel does not provide a persistent writable local upload directory for production runtime data.
Do not rely on:
- `./uploads` or `public/uploads` written at runtime
- temporary local writes surviving between requests
- a single instance keeping files around forever
What Actually Happens¶
When a request hits a Vercel-hosted app:
- the request runs in a serverless or managed compute environment
- local writes are ephemeral
- a later request may hit a different instance
- a new deploy replaces the old runtime environment
- horizontal scale means many instances may exist at once
This breaks local-file assumptions such as:
- upload once, read later from disk
- save generated PDFs locally for users to fetch later
- cache user assets to the app filesystem
- write reports to `public/` and expect them to persist after the next deploy
Safe Runtime Uses of Local Disk¶
Temporary local storage is only acceptable for short-lived processing such as:
- image transformation before upload
- temporary zip assembly
- transient parsing of uploaded documents
- converting a generated export into a final uploadable payload
If you write locally during processing, upload the result to durable storage before the request completes.
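This process-then-upload rule can be sketched as a small helper. This is an illustrative sketch, not a portfolio API: `processThenUpload` and `uploadFn` are hypothetical names, and `uploadFn` stands in for any real provider client (Blob, Supabase, S3).

```ts
import { mkdtemp, rm, writeFile, readFile } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Do scratch work in a per-request temp dir, then hand the final bytes to a
// durable uploader before the request ends. `uploadFn` is a hypothetical
// stand-in for a real provider upload call.
export async function processThenUpload(
  input: Buffer,
  uploadFn: (bytes: Buffer) => Promise<string>
): Promise<string> {
  const dir = await mkdtemp(join(tmpdir(), 'upload-'));
  try {
    const scratch = join(dir, 'working-copy');
    await writeFile(scratch, input);         // ephemeral local write
    const result = await readFile(scratch);  // stand-in for the real transformation step
    return await uploadFn(result);           // durable storage before the request completes
  } finally {
    await rm(dir, { recursive: true, force: true }); // never rely on platform cleanup
  }
}
```

The key property is that the temp directory never outlives the request: the only durable artifact is whatever `uploadFn` stored.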
Storage Decision Matrix¶
| Solution | Best for | Private files | Signed access | Media transforms | Large files | Vercel DX | Recommended Alawein fit |
|---|---|---|---|---|---|---|---|
| Vercel Blob | Simple app uploads, quick prototypes, public or semi-private files | Limited policy depth compared to full backend | Yes | No native media pipeline | Moderate | Excellent | small apps, early-stage dashboards |
| Supabase Storage | Auth-aware SaaS, per-user files, row-level security, metadata in Postgres | Strong | Yes | Basic image serving, not Cloudinary-level transforms | Good | Excellent if already on Supabase | Morphism, SaaS, data apps |
| AWS S3 | Large assets, exports, datasets, multi-service systems | Strong | Yes | No built-in media layer | Excellent | Good, more setup | research, enterprise, data-heavy apps |
| Cloudinary | Image/video-heavy projects, galleries, social media workflows | Strong with signed delivery patterns | Yes | Excellent | Good | Good | media-heavy or design-heavy apps |
Verified Portfolio Signals¶
The table below is based on observed local signals from the Alawein portfolio as of 2026-03-06 rather than repo-name-only guesses.
| Repo | Observed signals | Inferred app type | Primary storage | Fallback | Confidence |
|---|---|---|---|---|---|
| `morphism` | Next.js app in monorepo, Supabase already in use, governance exports and document workflows | governance SaaS / operational platform | Supabase Storage | S3 | High |
| `alawein` | notebooks and docs, no package metadata | notebook and docs workspace | GitHub/Jupyter-hosted docs assets | Blob for public files | Medium |
| `attributa` | Vite, TypeScript, React, Supabase, `.vercel/` | SPA with Supabase-backed data | Supabase Storage | Vercel Blob | High |
| `bolts` | Next.js, Supabase, Stripe, backend folder | full-stack app with payments | Supabase Storage | S3 | High |
| `devkit` | Turbo monorepo, package-focused tooling | developer toolkit | package registry artifacts only | Blob for docs/demo assets | High |
| `event-discovery-framework` | Python repo, backend folder, notebooks, paper, Railway/Vercel config | research plus backend service | S3 | Supabase Storage | High |
| `gainboy` | Vite, TypeScript, React, `.vercel/` | SPA | Vercel Blob | Supabase Storage | High |
| `helios` | governance/archive style layout, no obvious runtime app | docs/archive repo | static-hosted docs only | Blob if public downloads appear | Medium |
| `llmworks` | TypeScript app, React, Supabase, Docker, Prometheus | app plus LLM backend | S3 | Supabase Storage | High |
| `maglogic` | Python, scientific simulation directories | scientific framework | S3 | PyPI-only if code-only release | High |
| `MeatheadPhysicist` | Python, data, notebooks, papers, `.vercel/` | research plus public-facing content | Cloudinary for public media + S3 for datasets | S3 only | High |
| `meshal-web` | Vite, React, `.vercel/` | public web presence | Vercel Blob | Cloudinary | High |
| `QAPlibria` | Python, data, notebooks, papers | research/data product | S3 | Supabase Storage | High |
| `qmatsim` | Python, simulation package layout | scientific simulation library | S3 | PyPI-only if no runtime uploads | High |
| `qmlab` | React app, accessibility-heavy frontend | frontend web app | Vercel Blob | Supabase Storage | High |
| `qubeml` | Python, data, quantum and materials subdomains | ML/research workload | S3 | Supabase Storage | High |
| `repz` | Turbo monorepo, React, Supabase, backend folder | full-stack platform | Supabase Storage | Cloudinary or S3 depending on asset mix | High |
| `rounaq-atelier` | React app, Supabase, `.vercel/` | design/shopfront style app | Cloudinary | Supabase Storage | High |
| `scicomp` | Python, notebooks, Mathematica/MATLAB adjacency | scientific computing framework | S3 | PyPI-only for code artifacts | High |
| `scribd` | Next.js app router, Supabase, Stripe, Playwright | document-centric full-stack app | S3 | Supabase Storage | High |
| `simcore` | React app, Supabase, `.vercel/` | SPA with managed backend | Supabase Storage | Vercel Blob | High |
| `spincirc` | Python, Docker, simulation tooling | scientific/simulation framework | S3 | Docker artifact registry plus S3 | High |
Project-Specific Recommendations¶
| Project | Likely type | Recommended storage | Fallback | Notes |
|---|---|---|---|---|
| `morphism` | governance SaaS / app platform | Supabase Storage | S3 | private docs, exports, agent uploads, auditability |
| `meshal-web` | public web presence | Vercel Blob | Cloudinary | public assets, resumes, downloadable media |
| `gainboy` | React SPA | Vercel Blob | Supabase Storage | use Blob until authenticated uploads appear |
| `repz` | full-stack platform | Supabase Storage | S3 or Cloudinary | use Supabase for private docs, Cloudinary for commerce media |
| `attributa` | Supabase-backed SPA | Supabase Storage | Vercel Blob | keep ownership and metadata in Supabase |
| `llmworks` | AI workflow app | S3 | Supabase Storage | prompts, artifacts, generated outputs, ingestion docs |
| `qmlab` | frontend app | Vercel Blob | Supabase Storage | mostly public UI assets unless auth arrives |
| `qmatsim` | simulation / research | S3 | PyPI package artifacts only | large generated result bundles |
| `qubeml` | ML / research | S3 | Supabase Storage | models, batches, reports |
| `simcore` | Supabase-backed SPA | Supabase Storage | Vercel Blob | metadata and auth already align with Supabase |
| `scicomp` | scientific computing | S3 | PyPI package artifacts only | datasets and generated reports |
| `spincirc` | research or modeling | S3 | Docker registry plus S3 | analysis outputs and packaged results |
| `maglogic` | scientific product/framework | S3 | PyPI package artifacts only | mixed docs and simulation artifacts |
| `rounaq-atelier` | design or portfolio | Cloudinary | Supabase Storage | public image-first workflows |
| `MeatheadPhysicist` | research plus brand/media | Cloudinary + S3 | S3 only | public media separated from datasets |
| `QAPlibria` | research/data product | S3 | Supabase Storage | datasets and generated papers |
| `scribd` | document-heavy Next.js app | S3 | Supabase Storage | private docs and signed download links |
| `helios` | docs/archive repo | static-hosted docs | Vercel Blob | only if downloadable assets become a feature |
| `devkit` | developer tooling | package registry artifacts | Vercel Blob | docs assets, generated examples |
| `bolts` | Next.js + Supabase + Stripe app | Supabase Storage | Cloudinary | private customer files plus product assets |
| `event-discovery-framework` | analytics/research backend | S3 | Supabase Storage | report exports and batch artifacts |
| `alawein` | notebook/docs umbrella | public docs storage only | Vercel Blob | no durable app uploads indicated |
Provider Recommendations By App Pattern¶
Vite or React SPA With No Verified Backend¶
Examples: `gainboy`, `meshal-web`, `qmlab`
Recommended path:
- start with Vercel Blob for simple public uploads or downloadable assets
- move to Supabase Storage only when user auth, ownership, or private files appear
Supabase-Backed Web Apps¶
Examples: `morphism`, `attributa`, `bolts`, `repz`, `simcore`
Recommended path:
- use Supabase Storage for application files
- keep metadata in Postgres
- use signed URLs for private access
- reserve Cloudinary for image-heavy public delivery only
Python Research and Simulation Repos¶
Examples: `event-discovery-framework`, `maglogic`, `QAPlibria`, `qmatsim`, `qubeml`, `scicomp`, `spincirc`
Recommended path:
- use AWS S3 for durable datasets, exports, checkpoints, and generated artifacts
- avoid storing large research outputs in repo history
- publish libraries to PyPI separately from data artifacts
Media-Forward Properties¶
Examples: `rounaq-atelier`, `MeatheadPhysicist`
Recommended path:
- use Cloudinary for public image/video delivery and transforms
- use S3 when raw originals or research attachments must be retained separately
Environment Variable Conventions¶
Use consistent names across the portfolio.
Vercel Blob¶
- `BLOB_READ_WRITE_TOKEN`
Supabase Storage¶
- `NEXT_PUBLIC_SUPABASE_URL`
- `NEXT_PUBLIC_SUPABASE_ANON_KEY`
- `SUPABASE_SERVICE_ROLE_KEY`
- `SUPABASE_STORAGE_BUCKET_PUBLIC`
- `SUPABASE_STORAGE_BUCKET_PRIVATE`
AWS S3¶
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_REGION`
- `S3_BUCKET_PUBLIC`
- `S3_BUCKET_PRIVATE`
- `S3_ENDPOINT` if S3-compatible
Cloudinary¶
- `NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME`
- `CLOUDINARY_API_KEY`
- `CLOUDINARY_API_SECRET`
- `CLOUDINARY_UPLOAD_PRESET` only for approved unsigned flows
Shared Portfolio Metadata¶
- `STORAGE_PROVIDER`
- `STORAGE_PUBLIC_BASE_URL`
- `UPLOAD_MAX_BYTES`
- `UPLOAD_ALLOWED_MIME_TYPES`
- `UPLOAD_SIGNED_URL_TTL_SECONDS`
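A small startup check can turn these conventions into a fail-fast guard. This is an illustrative sketch using the variable names defined in this guide, not a published helper; `missingEnvVars` is a hypothetical name.

```ts
// Map a provider id to the env vars this guide expects it to have, so a deploy
// or boot check can fail fast instead of failing on the first upload.
type Provider = 'vercel-blob' | 'supabase' | 's3' | 'cloudinary';

const REQUIRED_ENV: Record<Provider, string[]> = {
  'vercel-blob': ['BLOB_READ_WRITE_TOKEN'],
  supabase: ['NEXT_PUBLIC_SUPABASE_URL', 'SUPABASE_SERVICE_ROLE_KEY', 'SUPABASE_STORAGE_BUCKET_PRIVATE'],
  s3: ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_REGION', 'S3_BUCKET_PRIVATE'],
  cloudinary: ['NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME', 'CLOUDINARY_API_KEY', 'CLOUDINARY_API_SECRET'],
};

// Returns the names that are missing; an empty array means the provider is configured.
export function missingEnvVars(
  provider: Provider,
  env: Record<string, string | undefined> = process.env
): string[] {
  return REQUIRED_ENV[provider].filter((name) => !env[name]);
}
```

Calling this once at boot (and in CI against the Vercel project settings) keeps misconfigured previews from shipping broken upload paths.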
Integration Patterns¶
Pattern 1: Authenticated Upload Through Server¶
Use when:
- files must be validated server-side
- storage credentials must stay server-only
- you need audit logs or DB writes in the same transaction flow
Flow:
- user authenticates
- client submits file to route handler
- server validates auth, size, MIME, ownership
- server uploads to storage provider
- server writes metadata row
- server returns file id and signed or public URL
Pattern 2: Direct Browser Upload With Signed URL¶
Use when:
- files are large
- you want to avoid proxying file bytes through the app server
- users upload media directly to object storage
Flow:
- client requests signed upload URL
- server authenticates and returns signed upload config
- browser uploads directly to storage
- client notifies app of success
- app stores metadata in DB
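The five steps above can be sketched with the provider calls injected, which makes the ordering (sign, then direct upload, then metadata) explicit and testable. All three dependency functions are hypothetical stand-ins for real provider and database clients, not a specific SDK's API.

```ts
// Hedged sketch of Pattern 2. The `deps` parameter injects stand-ins for the
// real signer, storage endpoint, and metadata writer.
export async function directUpload(
  fileName: string,
  bytes: Uint8Array,
  deps: {
    getSignedUploadUrl: (name: string) => Promise<{ url: string; key: string }>;
    putToStorage: (url: string, bytes: Uint8Array) => Promise<void>;
    saveMetadata: (key: string, sizeBytes: number) => Promise<void>;
  }
): Promise<string> {
  const { url, key } = await deps.getSignedUploadUrl(fileName); // server authenticates and signs
  await deps.putToStorage(url, bytes);                          // browser uploads directly to storage
  await deps.saveMetadata(key, bytes.byteLength);               // app records ownership last
  return key;
}
```

Metadata is written last on purpose: an orphaned object in storage is cheap to garbage-collect, while a metadata row pointing at bytes that never arrived is a user-visible bug.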
Pattern 3: Generated Report Storage¶
Use when:
- app creates PDFs, CSV exports, zipped artifacts, or model outputs
Flow:
- job generates file in memory or temp dir
- upload immediately to storage
- create metadata row
- give user signed download URL
- apply retention policy if temporary
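The retention step above needs only a tiny amount of logic. A minimal, illustrative helper (the function names are this guide's, not a library's) that a cleanup job can run against each metadata row:

```ts
// Compute when a generated export should be purged, given its creation time and
// a TTL such as UPLOAD_SIGNED_URL_TTL_SECONDS or a per-report retention window.
export function expiresAt(createdAt: Date, ttlSeconds: number): Date {
  return new Date(createdAt.getTime() + ttlSeconds * 1000);
}

// `now` is injectable so retention logic stays deterministic in tests.
export function isExpired(createdAt: Date, ttlSeconds: number, now: Date = new Date()): boolean {
  return now.getTime() >= expiresAt(createdAt, ttlSeconds).getTime();
}
```

A scheduled job can then select metadata rows where `isExpired` holds, soft-delete them, and remove the underlying objects.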
Pattern 4: Multi-Tenant Private Files¶
Use when:
- multiple organizations or workspaces share the same application
- files must be isolated by tenant
Flow:
- derive tenant id from the authenticated session
- build an object key such as `org/{orgId}/user/{userId}/...`
- upload to a private bucket
- persist metadata with owner and tenant identifiers
- generate short-lived signed read URLs only after authorization checks
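The key-derivation step can be sketched as a pure function that follows the `org/{orgId}/user/{userId}/...` shape above. This is illustrative; `tenantObjectKey` is a hypothetical name, and the important property is that it refuses to build a key when tenant context is missing.

```ts
// Derive a tenant-scoped object key. Throwing on missing tenant context means a
// bug cannot silently write into another tenant's prefix.
export function tenantObjectKey(orgId: string, userId: string, fileName: string): string {
  if (!orgId || !userId) throw new Error('tenant context required');
  const safe = fileName.toLowerCase().replace(/[^a-z0-9.-]+/g, '-'); // same sanitization as buildObjectKey below
  return `org/${orgId}/user/${userId}/${safe}`;
}
```

Because the prefix always encodes the tenant, bucket policies and audit queries can reason about isolation from the key alone.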
Pattern 5: Public Media With Private Originals¶
Use when:
- public, transformed assets should be fast to serve
- original assets must remain restricted
Flow:
- upload original to private storage
- generate or push optimized derivatives to Cloudinary or a public bucket
- store both original and derivative references in metadata
- treat derivative regeneration as asynchronous work
Setup Guides¶
Option A: Vercel Blob¶
Best for:
- fast path on Vercel
- simple public or semi-private file storage
Setup:
- Create a Blob store in Vercel.
- Add `BLOB_READ_WRITE_TOKEN` to the Vercel project and local `.env.local`.
- Use server-side upload APIs in route handlers or server actions.
- Store object keys and owners in the database if files matter beyond simple public assets.
- Use server-side token access only; do not expose write tokens to arbitrary browser code.
Example upload route:
```ts
import { put } from '@vercel/blob';

export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get('file');
  if (!(file instanceof File)) {
    return Response.json({ error: 'file is required' }, { status: 400 });
  }
  // Vercel Blob uploads are public-access; URLs are unguessable but not auth-gated,
  // which is why Blob sits in the "public or semi-private" tier of this guide.
  const blob = await put(`uploads/${crypto.randomUUID()}-${file.name}`, file, {
    access: 'public',
    token: process.env.BLOB_READ_WRITE_TOKEN,
  });
  return Response.json({ url: blob.url, pathname: blob.pathname });
}
```
Example delete:
```ts
import { del } from '@vercel/blob';

export async function DELETE(req: Request) {
  const { pathname } = await req.json();
  await del(pathname, {
    token: process.env.BLOB_READ_WRITE_TOKEN,
  });
  return Response.json({ deleted: true });
}
```
Option B: Supabase Storage¶
Best for:
- authenticated apps
- per-user or per-org files
- Morphism-style governance metadata
Setup:
- Create public and private buckets.
- Add storage policies tied to authenticated users or service-role server flows.
- Add env vars to Vercel and local development.
- Use anon key in browser only for approved client flows.
- Use service role only on the server.
- Add bucket policies before shipping upload endpoints.
Example server upload:
```ts
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get('file');
  if (!(file instanceof File)) {
    return Response.json({ error: 'file is required' }, { status: 400 });
  }
  const path = `user-uploads/${crypto.randomUUID()}-${file.name}`;
  const arrayBuffer = await file.arrayBuffer();
  const { error } = await supabase.storage
    .from(process.env.SUPABASE_STORAGE_BUCKET_PRIVATE!)
    .upload(path, Buffer.from(arrayBuffer), {
      contentType: file.type,
      upsert: false,
    });
  if (error) {
    return Response.json({ error: error.message }, { status: 500 });
  }
  return Response.json({ path });
}
```
Signed retrieval:
```ts
const { data, error } = await supabase.storage
  .from(process.env.SUPABASE_STORAGE_BUCKET_PRIVATE!)
  .createSignedUrl(path, 60 * 10);
```
Delete:
```ts
await supabase.storage
  .from(process.env.SUPABASE_STORAGE_BUCKET_PRIVATE!)
  .remove([path]);
```
Recommended Morphism table design:
```sql
create table file_objects (
  id uuid primary key,
  provider text not null,
  bucket text not null,
  object_key text not null,
  owner_type text not null,
  owner_id uuid,
  organization_id uuid,
  visibility text not null,
  mime_type text not null,
  size_bytes bigint not null,
  checksum text,
  created_at timestamptz not null default now(),
  deleted_at timestamptz
);
```
Option C: AWS S3¶
Best for:
- large files
- dataset storage
- durable export pipelines
- cross-service interoperability
Setup:
- Create buckets for public and private content.
- Create IAM user or role with least-privilege access.
- Add credentials to Vercel env vars.
- Prefer presigned uploads for large client-side transfers.
- Store object keys in the DB rather than full URLs, so buckets, regions, and endpoints can change without rewriting rows.
- Prefer separate buckets or prefixes for private and public objects.
Upload example:
```ts
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

export async function uploadToS3(file: File, key: string) {
  const body = Buffer.from(await file.arrayBuffer());
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_PRIVATE!,
      Key: key,
      Body: body,
      ContentType: file.type,
    })
  );
  return { key };
}
```
Signed download example:
```ts
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Reuses the `s3` client created in the upload example above.
export async function getDownloadUrl(key: string) {
  return getSignedUrl(
    s3,
    new GetObjectCommand({
      Bucket: process.env.S3_BUCKET_PRIVATE!,
      Key: key,
    }),
    { expiresIn: 600 }
  );
}
```
Delete example:
```ts
import { DeleteObjectCommand } from '@aws-sdk/client-s3';

await s3.send(
  new DeleteObjectCommand({
    Bucket: process.env.S3_BUCKET_PRIVATE!,
    Key: key,
  })
);
```
Presigned upload pattern:
```ts
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

export async function createUploadUrl(key: string, contentType: string) {
  return getSignedUrl(
    s3,
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_PRIVATE!,
      Key: key,
      ContentType: contentType,
    }),
    { expiresIn: 300 }
  );
}
```
Option D: Cloudinary¶
Best for:
- image/video-heavy projects
- transformations, resizing, social-ready delivery
Setup:
- Create Cloudinary product environment.
- Add cloud name, API key, and API secret to Vercel.
- Use signed uploads for authenticated apps.
- Keep original upload metadata in DB if ownership matters.
- Use signed uploads for user content rather than globally unsigned presets.
Server upload example:
```ts
import { v2 as cloudinary } from 'cloudinary';

cloudinary.config({
  cloud_name: process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
});

export async function uploadImage(dataUri: string, publicId: string) {
  const result = await cloudinary.uploader.upload(dataUri, {
    public_id: publicId,
    resource_type: 'image',
    folder: 'uploads',
  });
  return result.secure_url;
}
```
Delete example:
```ts
await cloudinary.uploader.destroy(publicId, { resource_type: 'image' });
```
Signed upload parameter example:
```ts
import { v2 as cloudinary } from 'cloudinary';

export function createCloudinarySignature(paramsToSign: Record<string, string>) {
  const timestamp = String(Math.floor(Date.now() / 1000));
  const signature = cloudinary.utils.api_sign_request(
    { ...paramsToSign, timestamp },
    process.env.CLOUDINARY_API_SECRET!
  );
  return {
    timestamp,
    signature,
    apiKey: process.env.CLOUDINARY_API_KEY,
    cloudName: process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME,
  };
}
```
Common File Operations¶
Save Metadata in Database¶
Use a metadata table for every non-trivial project.
Suggested columns:
- `id`
- `provider`
- `bucket`
- `object_key`
- `owner_id`
- `owner_type`
- `visibility`
- `mime_type`
- `size_bytes`
- `checksum`
- `created_at`
- `deleted_at`
Example insert shape:
```ts
await db.insert(fileObjects).values({
  id: crypto.randomUUID(),
  provider: 'supabase',
  bucket: 'morphism-private',
  objectKey: path,
  ownerId: user.id,
  ownerType: 'user',
  visibility: 'private',
  mimeType: file.type,
  sizeBytes: file.size,
});
```
Build Stable Object Keys¶
Stable key generation matters for cleanup, deduplication, and tenant isolation.
Recommended shape:
`{scope}/{owner}/{kind}/{yyyy}/{mm}/{uuid}-{safe-name}`
Examples:
- `org/6f2.../reports/2026/03/uuid-governance-export.pdf`
- `user/19a.../avatars/2026/03/uuid-profile.png`
- `public/marketing/images/2026/03/uuid-hero.jpg`
Example helper:
```ts
export function buildObjectKey(parts: {
  scope: 'public' | 'user' | 'org';
  ownerId?: string;
  kind: string;
  fileName: string;
  now?: Date;
}) {
  const now = parts.now ?? new Date();
  const yyyy = now.getUTCFullYear();
  const mm = String(now.getUTCMonth() + 1).padStart(2, '0');
  const safeName = parts.fileName.toLowerCase().replace(/[^a-z0-9.-]+/g, '-');
  const owner = parts.ownerId ?? 'shared';
  return `${parts.scope}/${owner}/${parts.kind}/${yyyy}/${mm}/${crypto.randomUUID()}-${safeName}`;
}
```
Validate Type and Size¶
```ts
const allowed = new Set(['application/pdf', 'image/png', 'image/jpeg']);
const maxBytes = 10 * 1024 * 1024;

if (!allowed.has(file.type)) {
  throw new Error('Unsupported file type');
}
if (file.size > maxBytes) {
  throw new Error('File too large');
}
```
Security Requirements¶
Always implement:
- authentication before upload or delete
- authorization checks on every read and write
- MIME and extension validation
- server-side size limits
- signed URLs for private file delivery
- least-privilege credentials
- separate public and private storage containers
- audit logging for deletes and privileged access
Recommended controls:
- private-by-default storage
- path naming by tenant or owner, such as `org/{orgId}/...`
- CORS restricted to known origins
- rate limiting on upload endpoints
- malware scanning if accepting user documents from untrusted parties
- retention rules for generated exports and temporary artifacts
- short signed URL TTLs for private downloads
- soft delete in metadata before hard delete in object storage
- explicit separation between client-safe env vars and server-only secrets
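The soft-delete-before-hard-delete control above is mostly about ordering, which a small sketch makes concrete. `markDeleted` and `removeObject` are hypothetical stand-ins for a DB update and a provider delete call, not a specific API.

```ts
// Two-phase delete: tombstone the metadata first so the action is audit-visible
// even if the storage delete fails and must be retried.
export async function deleteFile(
  id: string,
  deps: {
    markDeleted: (id: string) => Promise<void>;   // sets deleted_at, keeps the row
    removeObject: (id: string) => Promise<void>;  // removes the bytes from object storage
  }
): Promise<void> {
  await deps.markDeleted(id);   // phase 1: soft delete in metadata
  await deps.removeObject(id);  // phase 2: hard delete in storage
}
```

If phase 2 fails, a sweeper can retry any row with `deleted_at` set but an object still present; the reverse ordering would leave untracked orphan bytes with no audit trail.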
CORS Guidance¶
For browser-direct upload flows:
- allow only known production and preview origins
- allow only required methods such as `PUT`, `POST`, `GET`, `HEAD`
- avoid wildcard origins on private buckets
- keep credentialed browser flows tied to signed URLs rather than long-lived service keys
Access-Control Guidance By Provider¶
- Supabase Storage: use bucket policies and signed URLs; keep service role server-only
- S3: lock bucket policy down and use presigned URLs instead of public writes
- Cloudinary: prefer signed uploads for user content; use unsigned presets only for narrowly constrained public workflows
- Vercel Blob: avoid distributing write tokens beyond trusted server code
Performance and Scalability Best Practices¶
Use these defaults:
- prefer direct-to-storage uploads for large files
- stream where possible instead of buffering entire files
- use presigned URLs for S3-class storage
- avoid transforming large media synchronously in request handlers
- store metadata separately so list views do not require storage scans
- use CDN-backed delivery for public media
- apply lifecycle cleanup for temporary exports
- use deterministic object naming where idempotency matters
Large File Guidance¶
If files exceed single-request comfort limits:
- prefer direct-to-storage uploads
- avoid reading whole files into memory in serverless handlers
- use multipart or chunked upload flows where supported
- trigger background processing after durable upload, not before
- store progress metadata separately from the raw object
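The chunked-upload point above comes down to planning part sizes before any bytes move. A rough, illustrative planner, assuming an S3-style 5 MiB minimum part size (the constant and function names are hypothetical):

```ts
// S3-style multipart uploads require every part except the last to be at least
// 5 MiB; this is an assumption about the target provider, not a universal rule.
const MIN_PART_BYTES = 5 * 1024 * 1024;

// Split a total byte length into part sizes so a chunked upload never needs to
// buffer the whole file in a serverless handler.
export function planParts(totalBytes: number, partBytes: number = 8 * 1024 * 1024): number[] {
  if (partBytes < MIN_PART_BYTES) throw new Error('part size below provider minimum');
  const parts: number[] = [];
  let remaining = totalBytes;
  while (remaining > 0) {
    const size = Math.min(partBytes, remaining);
    parts.push(size);
    remaining -= size;
  }
  return parts;
}
```

Each planned part can then be uploaded via its own presigned URL, with progress recorded per part in metadata rather than on the object itself.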
Cache Guidance¶
- public media: long-lived cache headers with immutable URLs
- private signed URLs: short TTL, no shared cache assumptions
- generated derivatives: version keys or content hashes to avoid stale edge content
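The "content hashes to avoid stale edge content" point can be sketched in a few lines. This is an illustrative naming helper, not a portfolio API; `hashedKey` is a hypothetical name.

```ts
import { createHash } from 'node:crypto';

// Embed a content hash in the derivative's key: the URL becomes immutable, so
// long-lived cache headers are safe, and regenerating a changed derivative
// automatically yields a fresh URL instead of a stale edge hit.
export function hashedKey(prefix: string, fileName: string, bytes: Uint8Array): string {
  const digest = createHash('sha256').update(bytes).digest('hex').slice(0, 16);
  return `${prefix}/${digest}-${fileName}`;
}
```

Public media named this way can be served with `Cache-Control: public, max-age=31536000, immutable`, since a changed file always gets a new key.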
Common Pitfalls¶
Avoid:
- writing runtime uploads into `public/`
- expecting local files to persist after deploys
- exposing service-role or secret keys to the client
- using public buckets for private documents
- storing only URLs without ownership metadata
- deleting DB records without deleting the object
- deleting objects without soft-deleting metadata
Shared Utility Module¶
This repo now provides a reusable helper module at `@morphism-systems/shared/storage` for cross-project storage conventions.
Use it for:
- upload validation rules
- stable object-key construction
- environment-variable lookup per provider
- portfolio-level default provider recommendations
Example:
```ts
import {
  buildStorageObjectKey,
  getProviderEnvVarNames,
  recommendStorageProvider,
  validateUploadConstraints,
} from '@morphism-systems/shared/storage';

const recommendation = recommendStorageProvider({
  repo: 'morphism',
  hasSupabase: true,
  isResearchRepo: false,
  isMediaHeavy: false,
});

const validation = validateUploadConstraints(
  { name: 'report.pdf', size: 1024, type: 'application/pdf' },
  { maxBytes: 10 * 1024 * 1024, allowedMimeTypes: ['application/pdf'] }
);

const objectKey = buildStorageObjectKey({
  scope: 'org',
  ownerId: 'org_123',
  category: 'reports',
  fileName: 'Governance Export.pdf',
});

const envNames = getProviderEnvVarNames(recommendation.provider);
```
Migration Guide¶
If a project currently writes files locally:
- identify every runtime write path
- classify each file as public, private, temporary, or generated
- pick the target storage provider
- create metadata tables first
- update write paths to upload externally
- update read paths to use public or signed URLs
- backfill existing files if needed
- remove local-disk assumptions from code and tests
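The classification step above is the part worth automating first during a migration. A hypothetical first-pass classifier (the path patterns are illustrative guesses to be tuned per repo, not rules from this guide):

```ts
// Map a runtime write path to the public/private/temporary/generated classes used
// in the migration checklist. Patterns here are illustrative defaults only.
type FileClass = 'public' | 'private' | 'temporary' | 'generated';

export function classifyWritePath(p: string): FileClass {
  if (p.startsWith('/tmp/') || p.includes('/.cache/')) return 'temporary';
  if (p.includes('/exports/') || p.includes('/reports/')) return 'generated';
  if (p.startsWith('public/')) return 'public';
  return 'private'; // default to the most restrictive class
}
```

Running this over the inventory of write paths gives a starting map from each path to its target provider; ambiguous results still need a human decision.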
Portfolio Standard¶
Use this unless there is a strong reason not to:
- Morphism: Supabase Storage + Postgres metadata
- Most Alawein apps: Supabase Storage if auth/data already exist, otherwise Vercel Blob
- Research-heavy repos: AWS S3
- Media-heavy or portfolio sites: Cloudinary
If you only do one thing, do this:
- never save user or generated files to local disk on Vercel
- store the bytes in external object storage
- store the rules in your database