
cloudflare worker

lightweight analytics api with d1 database

what is this?

a cloudflare worker that tracks page views and likes for the site. uses d1 (cloudflare’s sqlite) for storage. no external analytics, no cookies, privacy-friendly.

the worker lives in the worker/ directory of the sooriya repo.

api endpoints

GET  /api/analytics/view              # all views
GET  /api/analytics/view/:type/:slug  # single view count
POST /api/analytics/view/:type/:slug  # increment view
GET  /api/analytics/like              # all likes
GET  /api/analytics/like/:type/:slug  # like count for content
POST /api/analytics/like/:type/:slug  # toggle like (body: { action: "like" | "unlike" })

:type is one of: post, tweet, project
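the validation is tiny. a sketch of what utils/validation.ts might look like (helper names and the slug convention are illustrative, not pulled from the repo):

// utils/validation.ts (sketch — names are illustrative)
const CONTENT_TYPES = ["post", "tweet", "project"] as const;
export type ContentType = (typeof CONTENT_TYPES)[number];

export function is_content_type(value: string): value is ContentType {
  return (CONTENT_TYPES as readonly string[]).includes(value);
}

// slugs: lowercase alphanumerics and hyphens, bounded length (assumed convention)
export function is_valid_slug(value: string): boolean {
  return /^[a-z0-9-]{1,128}$/.test(value);
}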

database schema

three tables handle everything:

-- page views (one row per content piece)
CREATE TABLE page_views (
  slug TEXT NOT NULL,
  content_type TEXT NOT NULL,
  views INTEGER DEFAULT 0,
  updated_at INTEGER NOT NULL,
  PRIMARY KEY (slug, content_type)
);

-- individual user likes (up to 13 per user per content)
CREATE TABLE likes (
  slug TEXT NOT NULL,
  content_type TEXT NOT NULL,
  user_id TEXT NOT NULL,
  count INTEGER DEFAULT 1 CHECK(count >= 0 AND count <= 13),
  created_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL,
  PRIMARY KEY (slug, content_type, user_id)
);

-- denormalized totals for fast reads
CREATE TABLE likes_count (
  slug TEXT NOT NULL,
  content_type TEXT NOT NULL,
  total_count INTEGER DEFAULT 0,
  unique_users INTEGER DEFAULT 0,
  updated_at INTEGER NOT NULL,
  PRIMARY KEY (slug, content_type)
);

the likes_count table is denormalized because counting across likes on every read would be slow. it gets updated via triggers or in the like handler.
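a hedged sketch of the handler-side variant (function name and shape are illustrative, not from the repo). d1's batch() wraps both statements in one transaction; the second recomputes totals from the likes table, so the per-user cap stays consistent:

// routes/likes.ts (sketch — apply_like is a made-up name)
export async function apply_like(
  db: D1Database,
  type: string,
  slug: string,
  user_id: string,
  action: "like" | "unlike",
): Promise<void> {
  const now = Date.now();
  const delta = action === "like" ? 1 : -1;
  await db.batch([
    // upsert the per-user row, clamping count to [0, 13]
    db.prepare(
      `INSERT INTO likes (slug, content_type, user_id, count, created_at, updated_at)
       VALUES (?1, ?2, ?3, MAX(0, ?5), ?4, ?4)
       ON CONFLICT (slug, content_type, user_id)
       DO UPDATE SET count = MAX(0, MIN(count + ?5, 13)), updated_at = ?4`,
    ).bind(slug, type, user_id, now, delta),
    // recompute totals from likes (unique_users counts anyone who ever interacted)
    db.prepare(
      `INSERT INTO likes_count (slug, content_type, total_count, unique_users, updated_at)
       SELECT slug, content_type, SUM(count), COUNT(*), ?3
       FROM likes WHERE slug = ?1 AND content_type = ?2
       GROUP BY slug, content_type
       ON CONFLICT (slug, content_type) DO UPDATE SET
         total_count = excluded.total_count,
         unique_users = excluded.unique_users,
         updated_at = excluded.updated_at`,
    ).bind(slug, type, now),
  ]);
}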

project structure

worker/
├── src/
│   ├── index.ts          # main entry, routing
│   ├── types.ts          # typescript types
│   ├── db/
│   │   └── schema.sql    # table definitions
│   ├── routes/
│   │   ├── views.ts      # view handlers
│   │   └── likes.ts      # like handlers
│   └── utils/
│       ├── cors.ts       # cors helpers
│       └── validation.ts # input validation
└── wrangler.toml         # cloudflare config

wrangler config

name = "sooriya-analytics"
main = "src/index.ts"
compatibility_date = "2024-01-01"

[[d1_databases]]
binding = "DB"
database_name = "sooriya-analytics"
database_id = "your-database-id"

[dev]
port = 8787
local_protocol = "http"

the binding = "DB" makes the database available as env.DB in your worker.
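with @cloudflare/workers-types installed, the binding shows up on the Env interface. a minimal sketch (the row shape is an assumption):

// types.ts (sketch)
export interface Env {
  DB: D1Database;
}

// e.g. inside a view handler
const row = await env.DB
  .prepare("SELECT views FROM page_views WHERE slug = ?1 AND content_type = ?2")
  .bind(slug, type)
  .first<{ views: number }>();
const views = row?.views ?? 0;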

local development

cd worker
# init local db with schema
bunx wrangler d1 execute DB --local --file=src/db/schema.sql
# run dev server
bunx wrangler dev

local dev uses the .wrangler/ directory for the sqlite file. it’s gitignored.

to reset local db:

rm -rf .wrangler && bunx wrangler d1 execute DB --local --file=src/db/schema.sql

deployment

# create d1 database (first time only)
bunx wrangler d1 create sooriya-analytics
# run migrations on production (--remote targets the deployed db, not the local copy)
bunx wrangler d1 execute sooriya-analytics --remote --file=src/db/schema.sql
# deploy worker
bunx wrangler deploy

after creating the database, grab the database_id from the output and add it to wrangler.toml.

cors handling

the worker needs to handle cors since it’s called from the browser. two helpers in utils/cors.ts:

export function cors_options(): Response {
  return new Response(null, {
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type",
    },
  });
}

export function cors_response(
  data: unknown,
  status: number,
  extra?: Record<string, string>,
): Response {
  return new Response(JSON.stringify(data), {
    status,
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*",
      ...extra,
    },
  });
}
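wiring them into index.ts is mostly an early return for preflights. a minimal sketch, with route dispatch elided:

// index.ts (sketch — Env as in types.ts)
import { cors_options, cors_response } from "./utils/cors";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // answer preflight before any routing
    if (request.method === "OPTIONS") return cors_options();

    const url = new URL(request.url);
    // ... match url.pathname against the /api/analytics routes ...

    return cors_response({ error: "not found" }, 404);
  },
} satisfies ExportedHandler<Env>;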

calling from the site

on the astro side, a simple fetch on page load:

// increment view
await fetch(`${ANALYTICS_URL}/api/analytics/view/post/${slug}`, {
  method: "POST",
});

// get view count
const res = await fetch(`${ANALYTICS_URL}/api/analytics/view/post/${slug}`);
const { views } = await res.json();

the analytics url comes from the PUBLIC_ANALYTICS_URL env var.
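in astro, PUBLIC_-prefixed vars are exposed to client-side code through import.meta.env:

const ANALYTICS_URL = import.meta.env.PUBLIC_ANALYTICS_URL;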

gotchas

  1. d1 is eventually consistent: reads might be slightly stale. fine for analytics, not for banking.

  2. cold starts: first request after inactivity might be slow (~50-100ms). subsequent requests are fast.

  3. rate limits: d1 has read/write limits on the free tier. monitor in the cloudflare dashboard.

  4. user identification: for likes, we use a fingerprint or generate a random id stored in localStorage. no cookies needed.
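the localStorage variant is only a few lines. a sketch (the storage key is arbitrary):

// stable per-browser id without cookies (sketch)
function get_user_id(): string {
  const key = "sooriya:user_id";
  let id = localStorage.getItem(key);
  if (!id) {
    id = crypto.randomUUID();
    localStorage.setItem(key, id);
  }
  return id;
}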