As is customary after launching a new developer blog, I decided one of the blog’s first posts should detail how the blog itself is built and hosted.
Tech stack
- Astro 5 — Static site generator. Zero client-side JavaScript by default, content collections with schema validation, Markdown-first.
- AWS S3 — Hosting. Static files synced to an S3 bucket. CloudFront CDN to be added soon.
- GitHub Actions — CI/CD. Three workflows for linting, deploying, and PR previews, plus a fourth for Bluesky auto-posting.
- Bluesky — Comments. No separate comment service — replies to a Bluesky post become the comment thread.
- Plausible — Privacy-respecting analytics. No cookies, and the analytics script is only loaded in production builds.
Content pipeline
Blog posts live in src/content/blog/ of a GitHub repo as Markdown files with a date-prefixed filename like 2026-03-10-blog-infrastructure.md. Astro’s content collections validate every post’s frontmatter against a Zod schema at build time:
```typescript
import { defineCollection, z } from 'astro:content';
import { glob } from 'astro/loaders';

const blog = defineCollection({
  loader: glob({ pattern: '**/*.md', base: './src/content/blog' }),
  schema: z.object({
    title: z.string(),
    pubDate: z.coerce.date(),
    description: z.string(),
    author: z.string(),
    tags: z.array(z.string()).optional(),
    image: z.string().optional(),
    imageAlt: z.string().optional(),
    updatedDate: z.coerce.date().optional(),
    blueskyPostUri: z.string().optional(),
  }),
});
```

If a post is missing a required field or has the wrong type, the build fails. No runtime surprises.
Mermaid diagrams
Diagrams are written as fenced code blocks with the mermaid language tag. A script in the blog layout detects them and lazy-loads Mermaid 11 from a CDN — only on pages that actually use diagrams. A custom rehype plugin wraps all other code blocks in collapsible <details> elements, but skips Mermaid blocks so diagrams render inline.
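A minimal sketch of that skip logic, written as a plain tree-walker over the HTML AST (the real plugin presumably uses rehype utilities; the node shapes and the "Code" summary label here are assumptions):

```javascript
// Sketch: wrap each <pre> in <details>, but leave blocks whose <code>
// carries the language-mermaid class untouched so diagrams render inline.
function wrapCodeBlocks(node) {
  if (!node.children) return node;
  node.children = node.children.map((child) => {
    if (child.type === 'element' && child.tagName === 'pre') {
      const code = (child.children || []).find((c) => c.tagName === 'code');
      const classes = (code && code.properties && code.properties.className) || [];
      if (classes.includes('language-mermaid')) {
        return child; // Mermaid block: leave inline for the diagram renderer
      }
      // Any other code block becomes <details><summary>Code</summary><pre>…</pre></details>
      return {
        type: 'element',
        tagName: 'details',
        properties: {},
        children: [
          {
            type: 'element',
            tagName: 'summary',
            properties: {},
            children: [{ type: 'text', value: 'Code' }],
          },
          child,
        ],
      };
    }
    return wrapCodeBlocks(child); // recurse into non-pre elements
  });
  return node;
}
```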
Structured data
Each post generates JSON-LD structured data (BlogPosting schema) for search engines, including the title, author, publish date, and description. The homepage generates a WebSite schema. Open Graph and Twitter Card meta tags are set automatically from frontmatter fields.
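A sketch of what that generation might look like, as a helper mapping frontmatter fields onto BlogPosting properties (the function name and exact field mapping are assumptions, not the blog's actual code):

```javascript
// Hypothetical sketch: build BlogPosting JSON-LD from a post's frontmatter.
function blogPostingJsonLd(frontmatter, canonicalUrl) {
  return {
    '@context': 'https://schema.org',
    '@type': 'BlogPosting',
    headline: frontmatter.title,
    description: frontmatter.description,
    author: { '@type': 'Person', name: frontmatter.author },
    datePublished: frontmatter.pubDate.toISOString(),
    // dateModified falls back to the publish date when the post is unedited
    dateModified: (frontmatter.updatedDate || frontmatter.pubDate).toISOString(),
    url: canonicalUrl,
  };
}
```

The resulting object would be serialized into a `<script type="application/ld+json">` tag in the page head.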
CI/CD with GitHub Actions
Four workflows handle the full lifecycle: CI (lint, type-check, build on every PR), deploy (build and sync to S3 on merge to main), PR previews, and Bluesky auto-posting.
Deploy
Pushes to main build the site and sync to S3. Authentication between GitHub and AWS uses OIDC — no long-lived AWS credentials stored in GitHub:
```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
    aws-region: us-east-1

- name: Deploy to S3
  run: >
    aws s3 sync dist/ s3://www.bstjohn.net/ --delete
    --exclude "preview/*"
```

The --delete flag removes old files, but --exclude "preview/*" protects active PR previews from being wiped.
PR preview deployments
Every pull request gets its own live preview URL.
Each preview is built with a unique base path using Astro’s ASTRO_BASE environment variable:
```yaml
- name: Build with preview base path
  run: npm run build
  env:
    ASTRO_BASE: /preview/PR-${{ github.event.pull_request.number }}-${sha}/
```

The built files are synced to that path in S3, and a comment is posted on the PR with the preview URL. When the PR is closed, a cleanup job deletes the preview files and updates the comment.
All links within the site need to be prefixed with the base URL so they resolve correctly under /preview/PR-N-sha/ instead of /. Astro’s import.meta.env.BASE_URL handles this in components, but Markdown content uses root-relative links like /blog/other-post/ — those need to work in both production (base = /) and previews (base = /preview/PR-N-sha/).
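One way to sketch that prefixing, assuming a hypothetical withBase helper that joins the deploy base with a root-relative path (in components the base would come from Astro's import.meta.env.BASE_URL; Markdown links could be rewritten the same way at build time):

```javascript
// Hypothetical helper: prefix a root-relative link with the deploy base.
// In production the base is '/'; in previews it is '/preview/PR-N-sha/'.
function withBase(base, path) {
  const trimmedBase = base.endsWith('/') ? base.slice(0, -1) : base;
  const trimmedPath = path.startsWith('/') ? path : '/' + path;
  return trimmedBase + trimmedPath;
}
```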
Bluesky comments
This is the feature I’m most pleased with. The idea isn’t mine — it was inspired by Emily Liu’s post on using Bluesky for blog comments. The concept is simple: every blog post has a corresponding Bluesky post, and replies to that post become the comment section.
No database. No authentication for readers. No moderation backend. Comments are just Bluesky replies.
Auto-posting to Bluesky
When a deploy succeeds, a workflow_run trigger fires the Bluesky post workflow. A Node script scans every blog post’s frontmatter for a blueskyPostUri field. Posts without one are new — the script creates a Bluesky post for each:
```mermaid
graph TD
    deploy["Deploy Workflow<br>completes"] -->|"workflow_run"| scan["Scan blog posts<br>for missing blueskyPostUri"]
    scan -->|"new posts found"| auth["Authenticate with Bluesky"]
    auth --> post["Create Bluesky post<br>with title, link, hashtags"]
    post --> write["Write AT URI back<br>to frontmatter"]
    write --> commit["Commit with [skip ci]<br>and push"]
```
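The scan step can be sketched as a pure function over already-parsed frontmatter (the data shape and function name are assumptions; the real script also has to read and parse the Markdown files):

```javascript
// Sketch: given parsed frontmatter keyed by filename, return the
// filenames of posts that have not yet been posted to Bluesky.
function findUnpostedEntries(posts) {
  return Object.entries(posts)
    .filter(([, fm]) => !fm.blueskyPostUri)
    .map(([file]) => file);
}
```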
The Bluesky post includes the title, a link to the post, and hashtag facets for each tag. It also includes an app.bsky.embed.external embed so Bluesky renders a link card with the title and description:
```javascript
const record = {
  $type: 'app.bsky.feed.post',
  text,
  createdAt: new Date().toISOString(),
  facets,
  embed: {
    $type: 'app.bsky.embed.external',
    external: { uri: url, title, description },
  },
};
```

After posting, the script writes the AT URI (e.g., at://did:plc:.../app.bsky.feed.post/...) back into the post’s frontmatter and commits with [skip ci] to avoid retriggering the deploy.
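The hashtag facets are byte-indexed, so offsets must be measured in UTF-8 bytes rather than characters. A hedged sketch, assuming each tag appears verbatim in the post text as #tag:

```javascript
// Sketch: build byte-indexed hashtag facets for the post text. Bluesky
// facets locate features by UTF-8 byte offsets, not character offsets,
// so offsets are measured with Buffer.byteLength.
function hashtagFacets(text, tags) {
  const facets = [];
  for (const tag of tags) {
    const needle = '#' + tag;
    const charIndex = text.indexOf(needle);
    if (charIndex === -1) continue; // assumed: tag appears verbatim in text
    const byteStart = Buffer.byteLength(text.slice(0, charIndex), 'utf8');
    const byteEnd = byteStart + Buffer.byteLength(needle, 'utf8');
    facets.push({
      index: { byteStart, byteEnd },
      features: [{ $type: 'app.bsky.richtext.facet#tag', tag }],
    });
  }
  return facets;
}
```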
Rendering comments
The BlueskyComments component fetches the reply thread client-side using Bluesky’s public API — no authentication required:
```plaintext
GET https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread?uri={postUri}&depth=6
```

Replies are rendered recursively up to 4 levels deep, sorted by like count. The component handles Bluesky’s rich text format — facets with byte-indexed positions for links, mentions, and hashtags are parsed and converted to HTML. Beyond depth 4, a “Continue on Bluesky” link takes readers to the full thread.
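The sort-and-truncate pass might look like this sketch (the thread shape follows getPostThread's output, but the function name and MAX_DEPTH constant are assumptions about the component's internals):

```javascript
// Sketch: sort each level's replies by like count and stop recursing
// past MAX_DEPTH, where the UI would show a "Continue on Bluesky" link.
const MAX_DEPTH = 4;

function collectReplies(thread, depth = 1) {
  const replies = (thread.replies || [])
    .slice()
    .sort((a, b) => (b.post.likeCount || 0) - (a.post.likeCount || 0));
  return replies.map((reply) => ({
    text: reply.post.record.text,
    likeCount: reply.post.likeCount || 0,
    children: depth < MAX_DEPTH ? collectReplies(reply, depth + 1) : [],
  }));
}
```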
Posts with moderation labels are filtered out. A <noscript> fallback provides a direct link to the Bluesky post for readers without JavaScript.
Why this works well
- No infrastructure to maintain — No database, no auth service, no comment moderation queue.
- Identity is handled — Commenters are Bluesky users with profiles and reputation. This naturally reduces spam.
- Fully public API — Reading comments doesn’t require any API keys or authentication.
- Comments live where the conversation is — Readers who find the post via Bluesky can reply without leaving the platform. Readers on the blog see the same replies.
The only downside is the dependency on Bluesky’s API availability, but since comments are a progressive enhancement (the blog works fine without them), this is an acceptable trade-off.
Wrapping up
The blog infrastructure is simple enough to maintain and complex enough to be interesting. The Bluesky comments integration has been the most satisfying part — it solves the blog comments problem without adding any backend complexity.
If you’re running a similar setup or thinking about Bluesky-powered comments, the full source is on GitHub.
For more on the home lab infrastructure that this blog runs alongside, see GitOps for Home Labs with Flux CD and k3s, and Automated Wildcard HTTPS Behind NAT with Let’s Encrypt.
Comments
Comments are powered by Bluesky. Reply to the linked post to join the conversation.