March 2025 · 7 min read
capacitor · ota · hono · cloudflare workers · d1 · kv

Rolling my own OTA update server for Capacitor apps

No third-party update service. A Hono API on Cloudflare Workers backed by D1 and KV, two shell scripts, and a /check endpoint the app polls on launch. Everything runs from my self-hosted code server: no CI, no cloud build agent.

bash: push an OTA update from the code server
# build web bundle, zip it, upload to my worker
$ ./scripts/deploy-ota.sh admin mypassword 1.2.0

🚀 Starting Nuxt build (npm run generate)...
✅ Nuxt build complete.
📦 Generating bundle for version 1.2.0...
🔑 Logging in to https://worker.example.com/api/auth/login...
✅ Authentication successful. Token obtained.
⬆️  Uploading stable-1.2.0.zip...
✅ Done!
OVERVIEW

Full control over delivery.

I wanted channels, versioning, upload history, and rollback, without being locked into a third-party service's pricing or infra. I was already running a Cloudflare Worker for other things, so building on top of it was the obvious move.

D1 records every bundle ever pushed: version, channel, filename, checksum, timestamp. KV holds the raw bundle bytes under BUNDLES and the active manifest per channel under OTA_MANIFEST. The app polls /check on launch; if the manifest version differs from the installed build, it downloads and applies the update.
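The launch-time decision described above can be sketched as a pure function. The type and function names here are illustrative, not the app's actual code; the manifest shape follows what the worker stores per channel.

```typescript
// Shape of the per-channel manifest the worker stores (illustrative names).
interface UpdateManifest {
  version: string;
  url: string;
  checksum: string;
}

// Returns the manifest when the installed build is stale, null otherwise;
// mirrors the /check contract, where an empty response means up to date.
function checkForUpdate(
  installedVersion: string,
  manifest: UpdateManifest | null,
): UpdateManifest | null {
  if (manifest === null) return null;              // channel has no active bundle
  if (manifest.version === installedVersion) return null;
  return manifest;  // caller downloads manifest.url, verifies checksum, applies
}
```

The same comparison runs server-side in the worker; keeping a copy of the logic as a pure function makes it easy to unit-test without a device.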

Stack
Hono · Cloudflare Workers · D1 (SQLite) · KV · JWT auth · @capgo/cli (zip only) · shell scripts · docker-android-build-box
PIPELINE

How the OTA pipeline flows end to end.

One script, one upload, one manifest.

Everything starts in the terminal of my self-hosted code server: a VS Code instance running on my ThinkPad E560, reachable from any browser over Tailscale. The deploy script builds the web bundle, zips it, authenticates with the worker, and uploads. The worker writes to KV and D1, then the app polls /check on next launch.

build script    deploy-ota.sh    -> npm generate
capgo cli       bundle zip       -> POST /upload
hono worker     CF Worker        -> KV + D1
capacitor app   /check poll      -> download
device          live update

APK builds use the same flow.

For native builds, build-apk.sh spins up mingc/android-build-box inside Docker on the code server, so there is no local Android SDK anywhere. Same JWT auth, same upload endpoint, different channel.

build script        build-apk.sh        -> docker run
android-build-box   gradlew assemble    -> POST /apk/upload
hono worker         CF Worker           -> KV + D1
channel: apk        manifest
BACKEND

Hono worker with KV and D1.

Uploads update the manifest, history stays in D1.

Admin routes are protected by JWT middleware. BUNDLES stores raw files, while OTA_MANIFEST stores the active manifest per channel. Three code paths cover the whole lifecycle: upload, check, and rollback-on-delete.
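The bindings this implies (the two KV namespaces plus the D1 database the snippets call RouteDB) would be declared in the worker's wrangler config. A sketch; all ids and names below are placeholders, not from the real project:

```toml
# Placeholder wrangler.toml sketch; binding names match the c.env.* usage
# in the snippets below, everything else is illustrative.
name = "ota-worker"

[[kv_namespaces]]
binding = "BUNDLES"        # raw bundle bytes
id = "<kv-namespace-id>"

[[kv_namespaces]]
binding = "OTA_MANIFEST"   # active manifest per channel
id = "<kv-namespace-id>"

[[d1_databases]]
binding = "RouteDB"        # upload history for rollback
database_name = "<db-name>"
database_id = "<db-id>"
```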

worker: POST /admin/ota/upload (simplified)
const key = `${channel}-${version}.zip`;
await c.env.BUNDLES.put(key, await file.arrayBuffer());

const manifest = {
  version, key, checksum,
  url: `${origin}/api/ota/bundle/${key}`,
  updated: new Date().toISOString(),
};
await c.env.OTA_MANIFEST.put(`manifest:${channel}`, JSON.stringify(manifest));

// record in D1 for history / rollback
await c.env.RouteDB.prepare(
  "INSERT INTO history (channel, version, filename, uploaded_at, checksum) VALUES (?, ?, ?, datetime('now'), ?)"
).bind(channel, version, key, checksum).run();
app polls on launch
worker: POST /check
const { version_build, channel = 'stable' } = await c.req.json();

const raw = await c.env.OTA_MANIFEST.get(`manifest:${channel}`);
if (raw === null) return c.json({});  // no active bundle for this channel

const manifest = JSON.parse(raw);

if (manifest.version !== version_build) {
  return c.json({
    version: manifest.version,
    url: manifest.url,
    checksum: manifest.checksum,
  });
}

return c.json({});  // already up to date
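The /check response carries the checksum the upload recorded, so the app can verify a downloaded bundle before applying it. Whether the real app performs this check is my assumption; the function name is illustrative. A sketch using Node's crypto:

```typescript
import { createHash } from "node:crypto";

// Verify downloaded bundle bytes against the SHA256 checksum from the
// manifest; cheap insurance against a corrupted or truncated download.
function verifyBundle(bytes: Uint8Array, expectedSha256: string): boolean {
  const actual = createHash("sha256").update(bytes).digest("hex");
  return actual === expectedSha256.toLowerCase();
}
```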
delete triggers rollback
worker: auto-rollback on delete
// after deleting from BUNDLES + D1...
const { results } = await c.env.RouteDB.prepare(
  'SELECT version, filename, checksum FROM history WHERE channel = ? ORDER BY uploaded_at DESC LIMIT 1'
).bind(channel).all();

if (results.length > 0) {
  // promote the previous bundle to active
  const prev = results[0];
  const newManifest = {
    version: prev.version,
    key: prev.filename,
    checksum: prev.checksum,
    url: `${origin}/api/ota/bundle/${prev.filename}`,
    updated: new Date().toISOString(),
  };
  await c.env.OTA_MANIFEST.put(`manifest:${channel}`, JSON.stringify(newManifest));
} else {
  // no more bundles left, so clear the manifest
  await c.env.OTA_MANIFEST.delete(`manifest:${channel}`);
}
SCRIPTS

Two scripts, no CI needed.

01
deploy-ota.sh
Run from the code server terminal. Builds, zips, extracts the SHA256 checksum from @capgo/cli output, gets a JWT, then uploads to the worker.
scripts/deploy-ota.sh: key steps
# 1. build
npm run generate
npx cap sync

# 2. zip + extract checksum from capgo cli output
CAPGO_OUTPUT=$(npx @capgo/cli@latest bundle zip)
ZIP_NAME=$(echo "$CAPGO_OUTPUT" | grep "Saved to" | awk '{print $NF}')
CHECKSUM=$(echo "$CAPGO_OUTPUT" | grep "Checksum SHA256" | awk '{print $NF}')

# 3. login โ€” get JWT
AUTH_TOKEN=$(curl -sS -X POST "$BASE_URL/api/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"username":"...","password":"..."}' | jq -r '.token')

# 4. upload
curl -X POST "$BASE_URL/api/ota/admin/ota/upload" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -F "file=@$ZIP_PATH" \
  -F "version=$VERSION" \
  -F "channel=stable" \
  -F "checksum=$CHECKSUM"
02
build-apk.sh
Also runs from the code server. Spins up mingc/android-build-box in Docker, so no local Android SDK is needed. The Docker socket is mounted from the host.
scripts/build-apk.sh: key steps
# 1. build web layer
npm run generate
npx cap sync

# 2. build APK inside docker (Docker socket mounted from the code server host)
docker run --rm \
  -v "$(pwd):/project" \
  mingc/android-build-box \
  bash -c 'cd /project/android; ./gradlew :app:assembleDebug'

# 3. same JWT auth flow as deploy-ota.sh
AUTH_TOKEN=$(curl ... | jq -r '.token')

# 4. upload APK to its own channel
curl -X POST "$BASE_URL/api/ota/admin/apk/upload" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -F "file=@android/app/build/outputs/apk/debug/app-debug.apk" \
  -F "version=$VERSION"
tip: APKs and OTA bundles share the same history table in D1; they're just different channels (apk vs stable). The admin endpoints filter by channel so they stay separate.
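The channel separation the tip describes boils down to one filtered, ordered query over the shared table. A sketch of the equivalent logic in plain TypeScript; the row shape and function name are illustrative:

```typescript
// One shared history table; each admin listing sees only its own channel.
interface HistoryRow {
  channel: string;     // 'stable' for OTA bundles, 'apk' for native builds
  version: string;
  filename: string;
  uploaded_at: string; // ISO timestamp
}

// In-memory equivalent of
// `SELECT ... WHERE channel = ? ORDER BY uploaded_at DESC`.
function historyForChannel(rows: HistoryRow[], channel: string): HistoryRow[] {
  return rows
    .filter((r) => r.channel === channel)
    .sort((a, b) => b.uploaded_at.localeCompare(a.uploaded_at));
}
```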
LEARNINGS

What I learned along the way.

01

KV has a 25MB limit per value

Both zip bundles and APKs are stored in KV, and 25 MB is plenty for web bundles but tight for APKs. Stripping debug symbols and building assembleRelease instead of assembleDebug cuts the size significantly.

02

D1 makes rollback trivial

Having the full upload history in a queryable database means rollback is just a delete: the worker automatically walks back to the previous entry. No manual manifest editing needed.

03

JWT auth in shell scripts needs care

The scripts build the JSON payload with printf instead of string interpolation to avoid injection issues with special characters in passwords. Small thing but it matters when you're curling with credentials.

heads up: VITE_CF_API_URL is loaded from .env, never hardcoded. Credentials are passed as script arguments, not env vars, so they don't linger in shell history.

The whole thing took a weekend to wire up. Running it all from my self-hosted code server means I never need to install anything locally or spin up a CI job. Open the browser, run the script, done.

related project
Self-hosted code server & Docker build pipeline
How the build environment behind these scripts actually works: code-server on Debian 12, Nginx, Tailscale subnet router, Docker socket mounted to the host.
View project →