An offline-first self-hosted location tracker that keeps your data with you
Location data is personal. I wanted a tracker I owned end to end — raw GPS, activity labels, routes, and the day view. So I built a Strava-meets-Life360 Android app with passive background tracking, activity recognition, stay-point detection, and a Google-Timeline-style day view, all synced through a Cloudflare Worker I wrote myself. No third-party tracking SDK. No Mapbox account. Just GPS, a SQLite queue, and a /sync endpoint. The OTA pipeline that ships the app is its own compact system — Worker API, D1 history, KV bundles, and a /check endpoint. Read the OTA write-up →
```
# device goes STILL → activity recognition fires → location queued
I QipzActivity event type=STILL confidence=95 debug=still=95,drive=0,walk=0
I QipzActivitySync queued type=STILL confidence=95 ts=1749123456789
I QipzUpload batch_uploaded count=3 http=200 inserted=3 deduped=0

# 8 minutes later, watchdog heartbeat sync
I watchdog.still heartbeat_sync ts=1749124000000 lastStillSyncAt=1749123456789
I QipzUpload batch_uploaded count=1 http=200 inserted=1 deduped=0
```
Ownership is the feature.
I wanted every byte — raw GPS fixes, activity labels, route history. Most SDKs hide this behind a dashboard you don't control and a pricing plan that can change under you.
Iku runs a native Android plugin (qipz-activity) that hooks directly into Google's Activity Recognition API, batches points into a local SQLite queue, and retries uploads with exponential backoff. The Nuxt 3 app talks to the same backend through Dexie.js hooks. One POST /sync, two sources, zero data loss.
One map, every layer.
[Architecture: Nuxt 3 + Capacitor app → qipz-activity plugin → Hono backend → storage → dashboard]
Two paths, one ingest.
Native path — always on.
The Java plugin runs independently of the web layer. Activity events wake ActivityLocationSyncService, grab a GPS fix, enqueue in SQLite, then drain immediately. A circuit breaker pauses uploads for 30 minutes after 5 consecutive failures.
JS path — offline by default.
The Nuxt app writes routes and points to local Dexie.js (IndexedDB) first. Dexie hooks fire on every write and push the same payload to POST /api/location/sync. If the network is down, the write lands locally and retries when connectivity resumes.
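The offline-first retry behavior can be sketched as a small queue drain. This is a hedged sketch, not the app's actual code: `drainQueue`, `Sample`, and the batch size are illustrative names, and the real implementation sits behind Dexie hooks rather than a plain array.

```typescript
type Sample = { lat: number; lng: number; timestamp: number };
type SendFn = (batch: Sample[]) => Promise<boolean>;

// Drain the local queue in order, stopping at the first failed batch so
// nothing is dropped while offline. Returns how many samples were flushed;
// whatever remains in `queue` is retried when connectivity resumes.
async function drainQueue(
  queue: Sample[],
  send: SendFn,
  batchSize = 50
): Promise<number> {
  let flushed = 0;
  while (queue.length > 0) {
    const batch = queue.slice(0, batchSize);
    const ok = await send(batch);
    if (!ok) break; // network down: leave the rest queued
    queue.splice(0, batch.length);
    flushed += batch.length;
  }
  return flushed;
}
```

Stopping at the first failure (instead of skipping ahead) keeps samples arriving at the server in timestamp order, which makes the 30-minute gap rule behave predictably.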
One worker, one ingest rule.
One endpoint handles both sources.
POST /api/location/sync validates every sample — coordinate bounds, timestamp sanity, SHA-256 sampleHash — then deduplicates by hash and groups passive points into routes using a 30-minute gap rule per device_id. Accepted passive samples are mirrored into the points table so the admin map renders them the same way as active route points.
```js
// validate sampleHash = SHA-256(deviceId|timestamp|lat6|lng6)
const expected = await sha256(`${deviceId}|${ts}|${lat6}|${lng6}`);
if (expected !== sample.sampleHash) return reject(sample, 'bad_hash');

// dedup by hash
const exists = await db.prepare(
  'SELECT id FROM passive_locations WHERE sample_hash = ?'
).bind(sample.sampleHash).first();
if (exists) return deduped++;

// 30-min gap rule — find or create a route for this device
const lastRoute = await db.prepare(
  'SELECT id, last_point_at FROM routes WHERE device_id = ? AND (? - last_point_at) < 1800000 ORDER BY last_point_at DESC LIMIT 1'
).bind(deviceId, ts).first();
const routeId = lastRoute ? lastRoute.id : await createPassiveRoute(db, deviceId, ts);

await db.prepare(
  'INSERT INTO passive_locations (route_id, device_id, sample_hash, lat, lng, timestamp, ...) VALUES (?, ?, ?, ?, ?, ?, ...)'
).bind(routeId, deviceId, sample.sampleHash, lat, lng, ts).run();
```
```js
// paginate passive_locations with a (timestamp, id) cursor
const { cursorTs, cursorId, since, limit, deviceId } = params;
const rows = await db.prepare(`
  SELECT * FROM passive_locations
  WHERE timestamp >= ?
    AND (? = 0 OR (timestamp, id) < (?, ?))
  ORDER BY timestamp DESC, id DESC
  LIMIT ?
`).bind(since, cursorTs, cursorTs, cursorId, limit).all();

const lastRow = rows.results.at(-1);
return c.json({
  passive_locations: rows.results,
  passiveCursor: lastRow ? { ts: lastRow.timestamp, id: lastRow.id } : null,
});
```
Native tracking engine
Activity recognition, smoothed for reality.
Google's raw ActivityRecognitionResult is noisy. The receiver builds a score histogram, then applies three layers of smoothing: a low-confidence hold, a short-hold to prevent thrashing within 12 seconds, and a driving confirmation that requires two consecutive high-confidence reads. IN_VEHICLE now correctly maps to DRIVING.
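The three smoothing layers can be illustrated as a tiny state machine. This is a hedged sketch in TypeScript (the real receiver is Java); `ActivitySmoother` and the 60-point confidence threshold are assumptions, not the plugin's actual values.

```typescript
type Activity = 'STILL' | 'WALKING' | 'DRIVING';

const MIN_CONFIDENCE = 60;    // assumed threshold for the low-confidence hold
const SHORT_HOLD_MS = 12_000; // no state change within 12 s of the last one

class ActivitySmoother {
  private current: Activity = 'STILL';
  private lastChangeAt = 0;
  private drivingStreak = 0;

  update(candidate: Activity, confidence: number, now: number): Activity {
    // Layer 1: low-confidence hold — noisy reads never change state.
    if (confidence < MIN_CONFIDENCE) {
      this.drivingStreak = 0;
      return this.current;
    }
    // Layer 3: driving needs two consecutive high-confidence reads.
    if (candidate === 'DRIVING') {
      this.drivingStreak++;
      if (this.drivingStreak < 2) return this.current;
    } else {
      this.drivingStreak = 0;
    }
    // Layer 2: short-hold — ignore flips within 12 s of the last change.
    if (candidate !== this.current) {
      if (now - this.lastChangeAt < SHORT_HOLD_MS) return this.current;
      this.current = candidate;
      this.lastChangeAt = now;
    }
    return this.current;
  }
}
```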
PassiveLocationDriftGuard. GPS fixes while STILL are notoriously noisy. It uses a two-confirmation spike-detection scheme — a point must be confirmed by a second nearby reading before it's accepted. A point implying a speed over 35 m/s is rejected outright, and stale baselines older than 5 minutes are cleared so drift can't compound over time.
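A minimal sketch of the two-confirmation idea, assuming the stated thresholds. This is illustrative TypeScript, not the Java guard; `DriftGuard`, the 25 m confirmation radius, and the haversine helper are my assumptions.

```typescript
type Fix = { lat: number; lng: number; timestamp: number };

const MAX_SPEED_MPS = 35;          // implied speed above this is rejected outright
const CONFIRM_RADIUS_M = 25;       // assumed: how close the confirming read must be
const BASELINE_TTL_MS = 5 * 60_000; // stale baselines are cleared after 5 min

function haversineMeters(a: Fix, b: Fix): number {
  const rad = Math.PI / 180, R = 6_371_000;
  const dLat = (b.lat - a.lat) * rad, dLng = (b.lng - a.lng) * rad;
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(a.lat * rad) * Math.cos(b.lat * rad) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

class DriftGuard {
  private pending: Fix | null = null;

  /** Returns a fix only once a second nearby reading confirms it, else null. */
  accept(fix: Fix, last: Fix | null): Fix | null {
    // Reject physically implausible jumps outright.
    if (last) {
      const dt = (fix.timestamp - last.timestamp) / 1000;
      if (dt > 0 && haversineMeters(last, fix) / dt > MAX_SPEED_MPS) return null;
    }
    // Clear stale baselines so drift can't compound.
    if (this.pending && fix.timestamp - this.pending.timestamp > BASELINE_TTL_MS) {
      this.pending = null;
    }
    // Two-confirmation: hold the first read, accept it on a nearby second.
    if (this.pending && haversineMeters(this.pending, fix) <= CONFIRM_RADIUS_M) {
      const confirmed = this.pending;
      this.pending = null;
      return confirmed;
    }
    this.pending = fix;
    return null;
  }
}
```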
StayPointDetector. While the device is STILL, incoming GPS fixes accumulate into a running centroid. If a new fix drifts beyond 80 m of the centroid, the current stay is flushed as a StayVisit. Stays shorter than 3 minutes are discarded. Gaps longer than 10 minutes split the stay into two separate visits.
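The stay-point logic above reduces to a fold over time-ordered fixes. A hedged sketch under the stated thresholds — the real detector is Java and incremental; `detectStays`, `Visit`, and the batch-style API are illustrative.

```typescript
type Fix = { lat: number; lng: number; timestamp: number };
type Visit = { lat: number; lng: number; startTs: number; endTs: number };

const STAY_RADIUS_M = 80;        // drift beyond 80 m of the centroid flushes the stay
const MIN_STAY_MS = 3 * 60_000;  // stays shorter than 3 minutes are discarded
const GAP_SPLIT_MS = 10 * 60_000; // gaps longer than 10 minutes split the stay

function haversineMeters(a: { lat: number; lng: number }, b: { lat: number; lng: number }): number {
  const rad = Math.PI / 180, R = 6_371_000;
  const dLat = (b.lat - a.lat) * rad, dLng = (b.lng - a.lng) * rad;
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(a.lat * rad) * Math.cos(b.lat * rad) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function centroidOf(cluster: Fix[]): { lat: number; lng: number } {
  return {
    lat: cluster.reduce((s, f) => s + f.lat, 0) / cluster.length,
    lng: cluster.reduce((s, f) => s + f.lng, 0) / cluster.length,
  };
}

function detectStays(fixes: Fix[]): Visit[] {
  const visits: Visit[] = [];
  let cluster: Fix[] = [];

  const flush = () => {
    if (cluster.length === 0) return;
    const dur = cluster[cluster.length - 1].timestamp - cluster[0].timestamp;
    if (dur >= MIN_STAY_MS) {
      const c = centroidOf(cluster);
      visits.push({ ...c, startTs: cluster[0].timestamp, endTs: cluster[cluster.length - 1].timestamp });
    }
    cluster = [];
  };

  for (const f of fixes) {
    if (cluster.length > 0) {
      const last = cluster[cluster.length - 1];
      const gap = f.timestamp - last.timestamp > GAP_SPLIT_MS;
      const far = haversineMeters(centroidOf(cluster), f) > STAY_RADIUS_M;
      if (gap || far) flush();
    }
    cluster.push(f);
  }
  flush();
  return visits;
}
```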
TripStatisticsStore.TripBuilder. On transition from STILL to any moving activity, a TripBuilder starts accumulating GPS fixes — distance, mode-time histogram, and reservoir-sampled waypoints (max 20). On return to STILL, it flushes a TripRecord with the dominant mode, per-mode percentages, distance, and the waypoint polyline.
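Reservoir sampling is the standard way to keep a bounded, uniformly representative subset of an unbounded stream — here, capping waypoints at 20 without biasing toward the start of the trip. A minimal sketch (`WaypointReservoir` is an illustrative name, not the plugin's class):

```typescript
const MAX_WAYPOINTS = 20;

// Classic reservoir sampling: after n adds, every item seen so far has an
// equal MAX_WAYPOINTS/n chance of being in the kept set, so a long drive
// still yields a representative polyline.
class WaypointReservoir<T> {
  private kept: T[] = [];
  private seen = 0;

  add(item: T): void {
    this.seen++;
    if (this.kept.length < MAX_WAYPOINTS) {
      this.kept.push(item);
    } else {
      const j = Math.floor(Math.random() * this.seen);
      if (j < MAX_WAYPOINTS) this.kept[j] = item;
    }
  }

  snapshot(): T[] {
    return [...this.kept];
  }
}
```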
ActivityRecognitionWatchdog. A 5-minute AlarmManager alarm that checks whether the last activity event is older than 12 minutes. If so, triggerRecover re-registers with Google Play Services — handling the case where the OS killed the registration silently. On STILL, also fires a heartbeat sync if the last still-sync was more than 8 minutes ago.
```java
// base 30s, doubles per attempt, caps at 6h
private long computeBackoffMillis(int attempts) {
    long base = 30_000L;
    long cap = 6L * 60L * 60L * 1000L;
    long delay = base * (1L << Math.min(attempts + 1, 8));
    return System.currentTimeMillis() + Math.min(delay, cap);
}

// circuit breaker — 5 consecutive failures → pause 30 min
private void tripCircuitIfNeeded() {
    if (++consecutiveFailures >= QipzConfig.MAX_CONSECUTIVE_FAILURES) {
        circuitOpenUntil = System.currentTimeMillis()
                + QipzConfig.CIRCUIT_BREAKER_DURATION_MS;
        consecutiveFailures = 0;
    }
}
```
A Google Timeline clone, built on Vue.
The timeline engine is client-side.
The admin map page lazy-loads Leaflet, then polls /api/location/fetchAll with a cursor-based paginator — up to 20 pages per refresh, each page 500 passive points. Once local, a chain of Vue computed properties turns raw GPS fixes into a day-by-day timeline with start/end place labels, mini Leaflet maps per segment, and stay duration stats. No server does any of this computation.
The first long stay becomes the home candidate. The second-longest stay more than 220 m away becomes the office candidate. Segments between stays become trips: "At Home for 7h 32m → Went to Office in 22m".
passiveDayTimeline → buildDayTripSegments → stay detection at 130 m / 20 min → home/office labeling → mini Leaflet maps per segment.
If stays can't resolve the pattern, a fallback time-distance chunker splits by 15-minute gaps or 800 m jumps.
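The fallback chunker reduces to a one-pass split. A sketch under the stated thresholds — function names are illustrative, and the distance helper uses an equirectangular approximation, which is plenty for an 800 m cutoff.

```typescript
type Fix = { lat: number; lng: number; timestamp: number };

const GAP_MS = 15 * 60_000; // split on 15-minute gaps
const JUMP_M = 800;         // or on 800 m jumps

function metersBetween(a: Fix, b: Fix): number {
  // Equirectangular approximation: accurate enough at these distances.
  const rad = Math.PI / 180;
  const x = (b.lng - a.lng) * rad * Math.cos(((a.lat + b.lat) / 2) * rad);
  const y = (b.lat - a.lat) * rad;
  return Math.sqrt(x * x + y * y) * 6_371_000;
}

// Split a time-ordered fix list wherever the gap or jump is too large.
function chunkByTimeDistance(fixes: Fix[]): Fix[][] {
  const chunks: Fix[][] = [];
  let cur: Fix[] = [];
  for (const f of fixes) {
    const prev = cur[cur.length - 1];
    if (prev && (f.timestamp - prev.timestamp > GAP_MS || metersBetween(prev, f) > JUMP_M)) {
      chunks.push(cur);
      cur = [];
    }
    cur.push(f);
  }
  if (cur.length > 0) chunks.push(cur);
  return chunks;
}
```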
Leaflet mini-maps use Map<id, element> refs — not v-for index refs — so reorders don't hand Leaflet the wrong container.
```js
const STOP_RADIUS_M = 130;
const STOP_MIN_DURATION_MS = 20 * 60 * 1000;

for (let i = 1; i < sorted.length; i++) {
  const far = geoDistanceMeters(center, sorted[i]) > STOP_RADIUS_M;
  if (!far) { /* extend cluster */ continue; }
  if (durationMs >= STOP_MIN_DURATION_MS) stays.push({ center, durationMs });
}

const homeCenter = stays[0]?.center;
const officeCenter = stays
  .filter(s => geoDistanceMeters(s.center, homeCenter) > 220)
  .sort((a, b) => b.durationMs - a.durationMs)[0]?.center;

for (let i = 0; i < stays.length - 1; i++) {
  segments.push({
    startStory: `At ${labelFrom(stays[i].center)} for ${fmt(stays[i].durationMs)}`,
    endStory: `Went to ${labelFrom(stays[i+1].center)} in ${fmt(travelMs)}`,
    points: sorted.slice(stays[i].endIdx, stays[i+1].startIdx + 1),
  });
}
```
Continuity is the work you never see.
Where the system drifted.
After 2026-03-10, activity recognition went quiet — the watchdog kept firing, but only as a heartbeat. Real movement stopped reaching the backend. Out-of-order samples bent route coherence. The admin map was doing full refreshes on every poll, pulling routes, geofences, and points even when the UI only needed headers.
Passive route lookups used accountKey OR deviceId filters — forced scans, inflated read volume. One fetchAll poll fanned out into ~11M rows.
Auth failures surfaced as successful 2xx uploads with authRejected=1. Offline history had to stay trustworthy even when the server had no matching route.
What the logs made impossible to ignore.
The last confirmed activity.receiver event was 2026-03-12T19:57:18Z. Everything after was STILL from foreground cache and watchdog heartbeats. Making fetchAll incremental and splitting OR queries by code branch removed the full-table scans.
Splitting accountKey and deviceId reads turned the ~11M-row scan into a bounded, predictable query and removed map reload jank.
Idempotent registration beat unregister-first flows. Every extra unregister was a chance to miss events during a recover cycle.
What it taught me.
GPS is lying to you when you're still. A phone sitting on a desk reports 10–30 m of drift every few minutes. Without PassiveLocationDriftGuard's two-confirmation spike detection, the stay-point detector would have constantly thought the device was moving. The fix: don't accept a point until a second reading confirms it's real.
Android kills background registrations silently. On some devices, the OS cancels your ActivityRecognition subscription without firing any callback. The only detection: a watchdog that checks how long since the last event. More than 12 minutes — re-register. This was responsible for several hours of missing tracking data.
Two upload paths means double deduplication. The native plugin and Dexie JS path can both submit the same physical location. The sampleHash (SHA-256(deviceId|timestamp|lat6|lng6)) approach lets the backend deduplicate silently — same hash is counted as dedupedCount and never written twice.
Cursor pagination for time-series needs both dimensions. Paginating by timestamp alone breaks when multiple rows share the same millisecond. The (timestamp, id) compound cursor — sent back from the server and checked in the next query — gives stable, gap-free pagination even when points arrive in bursts.
Leaflet mini-maps need a ref Map, not a ref array. Vue's v-for template refs produce an array keyed by index. If items reorder or filter, the index shifts and Leaflet gets the wrong container. Switching to a Map<id, element> with a manual setRef function fixed the "wrong map in wrong card" bug entirely.
An OR condition in SQL can silently scan millions of rows. (account_key = ? OR device_id = ?) looked harmless until EXPLAIN QUERY PLAN revealed a USE TEMP B-TREE FOR ORDER BY and 11M rows read in 24 hours. The fix: branch in code — each path hits exactly one index, the temp B-tree disappears, and reads dropped by over 90%.
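A minimal sketch of the branch-in-code pattern — the query builder below is illustrative, not the Worker's actual code, and the table layout is assumed from the surrounding text:

```typescript
type RouteQuery = { sql: string; bind: string[] };

// Instead of `WHERE account_key = ? OR device_id = ?` (which defeats both
// single-column indexes and forces a temp B-tree for the ORDER BY), pick
// exactly one predicate per call site so each query hits one index.
function passiveRouteQuery(opts: { accountKey?: string; deviceId?: string }): RouteQuery {
  if (opts.accountKey) {
    return {
      sql: 'SELECT * FROM routes WHERE account_key = ? ORDER BY last_point_at DESC',
      bind: [opts.accountKey],
    };
  }
  if (opts.deviceId) {
    return {
      sql: 'SELECT * FROM routes WHERE device_id = ? ORDER BY last_point_at DESC',
      bind: [opts.deviceId],
    };
  }
  throw new Error('need accountKey or deviceId');
}
```

Each branch assumes a matching composite index such as `(device_id, last_point_at)`, so the ORDER BY is satisfied by the index itself.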
The qipz-activity plugin is a local npm dependency. Forgetting to run npx cap sync after changing it is the fastest way to spend 20 minutes debugging phantom behavior.

The whole tracking stack — native plugin, backend ingest, admin dashboard, OTA pipeline — shipped from a single browser tab connected to my self-hosted code server over Tailscale. No local Android SDK. No cloud build agent. Just a ThinkPad running Debian 12 and a lot of adb logcat.