I run 25+ services in my home lab, all behind NAT with no port forwarding. Every one of them is accessible via https://<service>.lab.bstjohn.net with a trusted Let’s Encrypt certificate. No browser warnings, no manual renewal, no exceptions.
This post covers the HTTPS and certificate side of the setup. For the GitOps deployment approach, see GitOps for Home Labs with Flux CD and k3s.
The end result
Every *.lab.bstjohn.net service gets a trusted certificate automatically. A single wildcard cert covers all subdomains. cert-manager renews it before expiry. I haven’t thought about certificates since setting this up.
DNS-01 vs HTTP-01
Let’s Encrypt offers two ways to prove you own a domain:
- HTTP-01 — Let’s Encrypt makes an HTTP request to your server on port 80. You serve a specific token. Simple, but requires your server to be publicly accessible.
- DNS-01 — You create a TXT record in your domain’s DNS. Let’s Encrypt checks DNS to verify ownership. No inbound traffic required.
For a home lab behind NAT, DNS-01 is the only practical option. You can’t (or don’t want to) open port 80 to the internet. DNS-01 also enables wildcard certificates, which HTTP-01 cannot do.
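Conceptually, DNS-01 boils down to publishing a server-supplied token digest at a well-known name under your domain. The TXT record looks roughly like this (the value is illustrative; in practice it is a base64url-encoded digest of the challenge key authorization):

```
_acme-challenge.lab.bstjohn.net.  300  IN  TXT  "gfj9Xq...Rg85nM"
```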
Architecture
Here’s how the certificate flow works:
```mermaid
graph TD
    le["Let's Encrypt"]
    dns["GoDaddy DNS API"]

    subgraph cluster["k3s Cluster"]
        cm["cert-manager"]
        webhook["GoDaddy Webhook"]
        secret["K8s Secret<br>(lab-bstjohn-wildcard-tls)"]
        traefik["Traefik (Ingress)<br>terminates SSL"]
        pods["immich · plex · HA · ..."]
    end

    cm -->|"requests cert"| le
    cm -->|"delegates DNS proof"| webhook
    webhook -->|"creates TXT record"| dns
    le -.->|"validates DNS"| dns
    le -->|"issues cert"| cm
    cm -->|"stores cert"| secret
    secret --> traefik
    traefik --> pods
```
cert-manager requests a certificate from Let’s Encrypt, proves domain ownership by creating a DNS TXT record via the GoDaddy API, and stores the resulting certificate as a Kubernetes secret. Traefik uses that secret to terminate SSL for every service.
cert-manager setup
cert-manager is deployed via a Helm release managed by Flux:
infrastructure/cert-manager/release.yaml:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 30m
  chart:
    spec:
      chart: cert-manager
      version: '1.x'
      sourceRef:
        kind: HelmRepository
        name: jetstack
        namespace: cert-manager
  values:
    installCRDs: true
```

This installs cert-manager and its Custom Resource Definitions (Certificate, ClusterIssuer, etc.).
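The sourceRef above points at a HelmRepository object that must also exist in the cluster. A minimal sketch, assuming the standard Jetstack chart repository:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: jetstack
  namespace: cert-manager
spec:
  interval: 1h
  url: https://charts.jetstack.io
```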
GoDaddy webhook
cert-manager doesn’t natively support GoDaddy DNS. A webhook bridges the gap — it receives DNS-01 verification requests from cert-manager and proves domain ownership by creating the required TXT records in GoDaddy DNS.
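As with cert-manager, the chart needs a matching HelmRepository source before Flux can resolve it. A sketch, with the URL as an assumption (the chart is commonly published by the snowdrop project; check the webhook's README for the canonical repository):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: godaddy-webhook
  namespace: cert-manager
spec:
  interval: 1h
  # assumption: verify this URL against the chart's own docs
  url: https://snowdrop.github.io/godaddy-webhook
```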
infrastructure/godaddy-webhook/release.yaml:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: godaddy-webhook
  namespace: cert-manager
spec:
  interval: 30m
  chart:
    spec:
      chart: godaddy-webhook
      version: '1.x'
      sourceRef:
        kind: HelmRepository
        name: godaddy-webhook
        namespace: cert-manager
  values:
    groupName: acme.bstjohn.net
```

API credentials
Create a secret with your GoDaddy API credentials (this is one of the few things not stored in Git):
```bash
kubectl create secret generic godaddy-api-key \
  --from-literal=key='YOUR_API_KEY' \
  --from-literal=secret='YOUR_API_SECRET' \
  -n cert-manager
```

Important: The webhook expects separate key and secret fields, not a combined token.
RBAC
The webhook needs permission to read secrets in the cert-manager namespace. This is where a naming gotcha shows up:
infrastructure/godaddy-webhook/rbac.yaml:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: godaddy-webhook-secret-reader
  namespace: cert-manager
rules:
  - apiGroups: ['']
    resources: ['secrets']
    resourceNames: ['godaddy-api-key']
    verbs: ['get', 'watch']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: godaddy-webhook-secret-reader
  namespace: cert-manager
subjects:
  - kind: ServiceAccount
    name: cert-manager-cert-manager
    namespace: cert-manager
roleRef:
  kind: Role
  name: godaddy-webhook-secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Notice the ServiceAccount name is cert-manager-cert-manager, not just cert-manager. Helm prefixes the release name to the service account name. If this is wrong, the webhook silently fails to read the API credentials and the challenge never completes.
ClusterIssuers
A ClusterIssuer tells cert-manager how to request certificates from Let’s Encrypt. Always start with staging to avoid hitting rate limits while testing.
Staging (config/certificates/letsencrypt-staging.yaml):
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: you@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - dns01:
          webhook:
            groupName: acme.bstjohn.net
            solverName: godaddy
            config:
              apiKeySecretRef:
                name: godaddy-api-key
                key: key
                secret: secret
```

Production (config/certificates/letsencrypt-production.yaml):
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - dns01:
          webhook:
            groupName: acme.bstjohn.net
            solverName: godaddy
            config:
              apiKeySecretRef:
                name: godaddy-api-key
                key: key
                secret: secret
```

Let’s Encrypt production has rate limits: 50 certificates per registered domain per week, and 5 duplicate certificates per week. A wildcard cert counts as just one certificate toward this limit — another advantage of the wildcard approach over per-service certificates. Staging has much higher limits and is the right place to debug issues. Once staging works, switch to production.
Wildcard certificate
The Certificate resource requests the actual wildcard cert:
config/certificates/wildcard-cert.yaml:
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: lab-bstjohn-wildcard
  namespace: default
spec:
  secretName: lab-bstjohn-wildcard-tls
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  dnsNames:
    - '*.lab.bstjohn.net'
```

Note: *.lab.bstjohn.net matches service.lab.bstjohn.net but not lab.bstjohn.net itself. If you need the apex domain, add it as a separate entry in dnsNames.
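For example, a variant of the dnsNames fragment that also covers the apex would be:

```yaml
  dnsNames:
    - '*.lab.bstjohn.net'
    - 'lab.bstjohn.net'
```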
When this resource is created, cert-manager:
- Generates a private key and CSR
- Sends the CSR to Let’s Encrypt, which responds with a DNS-01 challenge
- Calls the GoDaddy webhook, which creates a TXT record at _acme-challenge.lab.bstjohn.net
- Tells Let’s Encrypt to check the DNS record
- Let’s Encrypt validates and issues the certificate
- cert-manager stores the certificate in the lab-bstjohn-wildcard-tls Kubernetes secret
- Automatically renews 30 days before expiry (certificates are valid for 90 days)
You can monitor the process with:
```bash
kubectl describe certificate lab-bstjohn-wildcard
kubectl describe certificaterequest
kubectl describe order
kubectl describe challenge
```

Traefik
Traefik acts as the ingress controller, terminating SSL and routing traffic to services.
infrastructure/traefik/release.yaml (relevant values):
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: traefik
spec:
  interval: 30m
  chart:
    spec:
      chart: traefik
      version: '26.x'
      sourceRef:
        kind: HelmRepository
        name: traefik
        namespace: traefik
  values:
    service:
      type: LoadBalancer
      spec:
        loadBalancerIP: 192.168.0.251
    ports:
      web:
        redirectTo:
          port: websecure
      websecure:
        tls:
          enabled: true
```

Key settings:
- loadBalancerIP: 192.168.0.251 — Fixed IP so DNS records don’t need to change.
- redirectTo: websecure — All HTTP traffic is automatically redirected to HTTPS.
- tls.enabled: true — HTTPS is on by default for all routes.
DNS configuration
In your DNS provider, create a wildcard A record:
- Name: *.lab
- Value: 192.168.0.251
Now any <service>.lab.bstjohn.net resolves to the cluster IP where Traefik is listening. Traefik matches the hostname from the request to the correct Ingress resource and routes to the right service.
How services use the certificate
Every service’s Ingress references the same wildcard certificate secret:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: immich
spec:
  ingressClassName: traefik-traefik
  tls:
    - hosts:
        - immich.lab.bstjohn.net
      secretName: lab-bstjohn-wildcard-tls
  rules:
    - host: immich.lab.bstjohn.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: immich
                port:
                  number: 2283
```

The secretName: lab-bstjohn-wildcard-tls is the same across all services. One certificate, unlimited subdomains.
Gotchas
A few things that tripped me up:
IngressClass naming — The IngressClass is traefik-traefik, not traefik. Helm prepends the release name to the IngressClass name. If you get 404s, check kubectl get ingressclass.
ServiceAccount naming — As mentioned in the RBAC section, the cert-manager ServiceAccount is cert-manager-cert-manager. Getting this wrong means the webhook can’t read the API secret and challenges silently fail.
DNS propagation delay — After the webhook creates the TXT record, it can take minutes for DNS to propagate. cert-manager will retry, but if you’re watching the challenge resource and it seems stuck, give it time. Check with dig TXT _acme-challenge.lab.bstjohn.net to see if the record exists.
Use staging first — Always test with the staging ClusterIssuer before switching to production. Staging certificates aren’t trusted by browsers, but the flow is identical. You’ll avoid burning through rate limits while debugging.
Don’t create per-service certificates — If you add a cert-manager.io/cluster-issuer annotation to individual Ingress resources, cert-manager will request separate certificates for each service. This creates _acme-challenge.<service>.lab.bstjohn.net TXT records, which make those subdomains “exist” as empty non-terminals in DNS. The wildcard record is then ignored for those specific subdomains, causing intermittent DNS failures. Always use the single shared wildcard secret.
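Concretely, the pattern to avoid is cert-manager's ingress-shim annotation on individual Ingress resources, for example:

```yaml
# Anti-pattern in this setup: asks cert-manager for a separate
# certificate just for this one service
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
```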
Secret namespace — The wildcard Certificate resource and the Ingress resources that reference the secret must be in the same namespace (both in default in my setup). If they’re in different namespaces, the Ingress won’t find the TLS secret.
For the GitOps deployment setup that ties this all together, see GitOps for Home Labs with Flux CD and k3s.