# cloudflare worker

## what is this?
a cloudflare worker that tracks page views and likes for the site. uses d1 (cloudflare’s sqlite) for storage. no external analytics, no cookies, privacy-friendly.
the worker lives in the `worker/` directory of the sooriya repo.
## api endpoints
```
GET  /api/analytics/view              # all views
GET  /api/analytics/view/:type/:slug  # single view count
POST /api/analytics/view/:type/:slug  # increment view

GET  /api/analytics/like              # all likes
GET  /api/analytics/like/:type/:slug  # like count for content
POST /api/analytics/like/:type/:slug  # toggle like (body: { action: "like" | "unlike" })
```

`:type` is one of: `post`, `tweet`, `project`.
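since `:type` is constrained to three values, the worker has to validate it somewhere (the repo keeps input validation in `utils/validation.ts`). a minimal sketch of what that route parsing could look like — the names here are illustrative, not the repo's actual code:

```typescript
// illustrative sketch, not the repo's actual validation code
const CONTENT_TYPES = ["post", "tweet", "project"] as const;
type ContentType = (typeof CONTENT_TYPES)[number];

interface AnalyticsRoute {
  kind: "view" | "like";
  type: ContentType;
  slug: string;
}

// parse "/api/analytics/view/post/my-slug" into parts, or null if invalid
function parseAnalyticsPath(pathname: string): AnalyticsRoute | null {
  const m = pathname.match(/^\/api\/analytics\/(view|like)\/([^/]+)\/([^/]+)$/);
  if (!m) return null;
  const [, kind, type, slug] = m;
  if (!(CONTENT_TYPES as readonly string[]).includes(type)) return null;
  return { kind: kind as "view" | "like", type: type as ContentType, slug };
}
```

returning `null` for anything unexpected keeps the handlers simple: they can 404 early instead of sprinkling checks everywhere.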
## database schema
three tables handle everything:
```sql
-- page views (one row per content piece)
CREATE TABLE page_views (
  slug TEXT NOT NULL,
  content_type TEXT NOT NULL,
  views INTEGER DEFAULT 0,
  updated_at INTEGER NOT NULL,
  PRIMARY KEY (slug, content_type)
);
```
```sql
-- individual user likes (up to 13 per user per content)
CREATE TABLE likes (
  slug TEXT NOT NULL,
  content_type TEXT NOT NULL,
  user_id TEXT NOT NULL,
  count INTEGER DEFAULT 1 CHECK(count >= 0 AND count <= 13),
  created_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL,
  PRIMARY KEY (slug, content_type, user_id)
);
```
```sql
-- denormalized totals for fast reads
CREATE TABLE likes_count (
  slug TEXT NOT NULL,
  content_type TEXT NOT NULL,
  total_count INTEGER DEFAULT 0,
  unique_users INTEGER DEFAULT 0,
  updated_at INTEGER NOT NULL,
  PRIMARY KEY (slug, content_type)
);
```

the `likes_count` table is denormalized because aggregating across `likes` on every read would be slow. it gets updated via triggers or in the like handler.
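to keep `likes_count` in sync without triggers, the like handler can compute the per-user change first and then apply the same delta to the totals row. a rough sketch of that bookkeeping — the helper name is hypothetical, and the 13-like cap mirrors the `CHECK` constraint above:

```typescript
// hypothetical helper: mirrors CHECK(count >= 0 AND count <= 13) from the schema
const MAX_LIKES_PER_USER = 13;

// given a user's current like count and the requested action, return the new
// per-user count and the delta to apply to likes_count.total_count
function applyLikeAction(
  current: number,
  action: "like" | "unlike",
): { next: number; delta: number } {
  const next =
    action === "like"
      ? Math.min(current + 1, MAX_LIKES_PER_USER)
      : Math.max(current - 1, 0);
  return { next, delta: next - current };
}
```

clamping in the handler (instead of relying only on the `CHECK`) means a like past the cap is a no-op rather than a thrown SQL error.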
## project structure
```
worker/
├── src/
│   ├── index.ts          # main entry, routing
│   ├── types.ts          # typescript types
│   ├── db/
│   │   └── schema.sql    # table definitions
│   ├── routes/
│   │   ├── views.ts      # view handlers
│   │   └── likes.ts      # like handlers
│   └── utils/
│       ├── cors.ts       # cors helpers
│       └── validation.ts # input validation
└── wrangler.toml         # cloudflare config
```

## wrangler config
```toml
name = "sooriya-analytics"
main = "src/index.ts"
compatibility_date = "2024-01-01"

[[d1_databases]]
binding = "DB"
database_name = "sooriya-analytics"
database_id = "your-database-id"

[dev]
port = 8787
local_protocol = "http"
```

the `binding = "DB"` makes the database available as `env.DB` in your worker.
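with the binding in place, handlers query d1 through `env.DB`. a sketch of a read, using minimal stand-in interfaces so the snippet is self-contained (in the worker these types come from `@cloudflare/workers-types`, and the function name is illustrative):

```typescript
// minimal stand-ins for the d1 types shipped by @cloudflare/workers-types
interface D1Result<T> {
  results: T[];
}
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  all<T>(): Promise<D1Result<T>>;
}
interface D1Database {
  prepare(sql: string): D1PreparedStatement;
}

// binding = "DB" in wrangler.toml surfaces here as env.DB
interface Env {
  DB: D1Database;
}

async function getViews(env: Env, type: string, slug: string): Promise<number> {
  const { results } = await env.DB
    .prepare("SELECT views FROM page_views WHERE content_type = ? AND slug = ?")
    .bind(type, slug)
    .all<{ views: number }>();
  return results[0]?.views ?? 0; // 0 for content that has never been viewed
}
```

bound parameters (`?`) are the important part: slugs come straight from the url, so never interpolate them into the SQL string.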
## local development
```sh
cd worker

# init local db with schema
bunx wrangler d1 execute DB --local --file=src/db/schema.sql

# run dev server
bunx wrangler dev
```

local dev uses the `.wrangler/` directory for the sqlite file. it's gitignored.
to reset the local db:

```sh
rm -rf .wrangler && bunx wrangler d1 execute DB --local --file=src/db/schema.sql
```

## deployment
```sh
# create d1 database (first time only)
bunx wrangler d1 create sooriya-analytics

# run migrations on production
bunx wrangler d1 execute sooriya-analytics --file=src/db/schema.sql

# deploy worker
bunx wrangler deploy
```

after creating the database, grab the `database_id` from the output and add it to `wrangler.toml`.
## cors handling
the worker needs to handle cors since it's called from the browser. the helpers live in `utils/cors.ts`:
```typescript
export function cors_options(): Response {
  return new Response(null, {
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type",
    },
  });
}

export function cors_response(
  data: unknown,
  status: number,
  extra?: Record<string, string>,
): Response {
  return new Response(JSON.stringify(data), {
    status,
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*",
      ...extra,
    },
  });
}
```

## calling from the site
on the astro side, a simple fetch on page load:
```typescript
// increment view
await fetch(`${ANALYTICS_URL}/api/analytics/view/post/${slug}`, {
  method: "POST",
});

// get view count
const res = await fetch(`${ANALYTICS_URL}/api/analytics/view/post/${slug}`);
const { views } = await res.json();
```

the analytics url comes from the env var `PUBLIC_ANALYTICS_URL`.
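liking works the same way, just with a json body. a sketch, assuming the endpoint responds with the new total — the `{ total_count }` response shape is an assumption based on the `likes_count` schema, not a documented contract:

```typescript
// sketch: toggle a like from the site; { total_count } response shape is assumed
async function toggleLike(
  baseUrl: string,
  type: "post" | "tweet" | "project",
  slug: string,
  action: "like" | "unlike",
): Promise<number> {
  const res = await fetch(`${baseUrl}/api/analytics/like/${type}/${slug}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ action }),
  });
  if (!res.ok) throw new Error(`like failed: ${res.status}`);
  const { total_count } = (await res.json()) as { total_count: number };
  return total_count;
}
```

returning the new total lets the ui update the counter from the response instead of issuing a second GET.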
## gotchas
- d1 is eventually consistent: reads might be slightly stale. fine for analytics, not for banking.
- cold starts: the first request after inactivity might be slow (~50-100ms). subsequent requests are fast.
- rate limits: d1 has read/write limits on the free tier. monitor them in the cloudflare dashboard.
- user identification: for likes, we use a fingerprint or generate a random id stored in localStorage. no cookies needed.