This blog is a static site built with Pelican. The articles (such as this one) are written in Markdown; then I leverage my ancient UNIX sysadmin roots: running make prod inside my codebase transforms the content into static files, syncs them to AWS, and creates a CloudFront invalidation.
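For the curious, that prod target is nothing exotic. A minimal sketch of what such a Makefile might look like; the bucket name and distribution ID here are hypothetical, not the ones this site actually uses:

```make
# Hypothetical sketch of a "make prod" publishing flow
S3_BUCKET  = s3://example-blog-bucket
CF_DIST_ID = EXAMPLEDISTID

publish:
	pelican content -s publishconf.py

prod: publish
	aws s3 sync output/ $(S3_BUCKET) --delete
	aws cloudfront create-invalidation --distribution-id $(CF_DIST_ID) --paths "/*"
```

Pelican renders the Markdown into output/, s3 sync pushes only what changed, and the invalidation tells CloudFront to stop serving stale copies.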

The trouble is, I don't want to have to find myself a shell environment with a bunch of tooling installed every time I want to share my "thought leadership" with the internet; I travel way too much for that. The answer I found lay in an Amazon service called CodeBuild (not to be confused with CodeDeploy, CodePipeline, or CodeCommit, all of which can be called "Code*", or orchestrated via the Amazon service "CodeStar"; if that official pun just struck you now, welcome to the club, we're all miserable here). Once I got the hang of it, it works really well.

I configured a CodeBuild project to use a Python 3 build image that AWS maintains for me, told it to produce no artifacts, and attached a restricted IAM role with just enough permissions to do what I needed. I set up a webhook trigger from GitHub, where a private repository hosts this site.
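A sketch of what such a restricted role's policy might look like, assuming the Makefile flow above: write access to one bucket, invalidation rights, and the CloudWatch Logs permissions CodeBuild needs to record build output. The bucket name is hypothetical.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-blog-bucket",
        "arn:aws:s3:::example-blog-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "*"
    }
  ]
}
```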

My initial buildspec worked after a bit of back and forth, and each run took just over two minutes. That wasn't unreasonable, but I don't have that kind of attention span.
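A hypothetical reconstruction of what that first buildspec probably looked like (the requirements file name is an assumption):

```yaml
version: 0.2

phases:
  install:
    commands:
      - pip install -r requirements.txt
  build:
    commands:
      - make prod
```

Every run paid the full cost of installing Python dependencies from scratch, which is where most of those two minutes went.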

After discussions with Clare Liguori and Samuel Karp, I learned that there were a few things that would reduce build time substantially.

First, I pared away superfluous requirements: I didn't need boto3, Pelican's development dependencies, or a comfortable zsh environment with my dotfiles inside the build container.

Second, I got rid of the "artifacts" portion of my buildspec file; the Makefile handles the publication step for me, so there wasn't much value in doing anything with artifacts as CodeBuild sees them.

Third, I took advantage of the (new-to-me) CodeBuild caching feature. By telling it what to cache, I no longer have to build pip dependencies on every run after the first; everything in /root/.cache/pip/**/* is stored in an S3 bucket and copied down when the build starts. I could theoretically use a custom build image, but at this point I'm under 30 seconds per build; I don't see the value in optimizing further, particularly at the cost of having to maintain that image myself in the future. This is a managed solution; I don't have to futz with it at all.
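In buildspec terms, the caching change is one stanza (the cache destination, an S3 bucket, is configured on the CodeBuild project itself, not here):

```yaml
cache:
  paths:
    - '/root/.cache/pip/**/*'
```

With pip's wheel cache warm, "install" becomes mostly a copy rather than a compile.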

Now I can write a blog post like this one, commit and push it to my private GitHub repo (Working Copy on the iPad is great for this when I'm on the road), and within thirty seconds the new article is up.

I'd be remiss if I didn't talk about what this costs me. CodeBuild has a free tier that never expires: 100 build minutes per month, with each build rounded up to the nearest minute. Until I'm running more than 100 builds a month, this service is completely free. Were I to ignore the free tier, each build would cost me half a penny.
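The arithmetic, for anyone who wants to plug in their own numbers. This assumes the build.general1.small Linux rate of $0.005 per build minute; check current pricing before trusting it.

```python
# Back-of-the-envelope CodeBuild cost. Assumes the build.general1.small
# Linux rate of $0.005/minute; each build rounds up to a whole minute.
RATE_PER_MINUTE = 0.005
FREE_TIER_MINUTES = 100

def monthly_cost(builds_per_month: int, minutes_per_build: int = 1) -> float:
    """Dollars per month after the free tier is applied."""
    billable = max(0, builds_per_month * minutes_per_build - FREE_TIER_MINUTES)
    return billable * RATE_PER_MINUTE

print(monthly_cost(80))   # well inside the free tier: 0.0
print(monthly_cost(150))  # 50 billable minutes: 0.25
```

At one minute per build, the free tier covers 100 posts a month, which is a publishing cadence I am in no danger of reaching.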

If this is at all helpful to anyone, please use it. Note that there are no tests; this is probably a terrible thing, but your "testing" is my "proofreading." Further note that the CloudFormation CodeBuild resource doesn't set up webhooks directly; I did that by hand.
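If you'd rather not click through the console for that last step, the AWS CLI can create the webhook against an existing project; the project name here is hypothetical:

```shell
# Wire up the GitHub webhook for an existing CodeBuild project
aws codebuild create-webhook --project-name blog-build
```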

The buildspec and CloudFormation for this live in this gist. Feedback is, as always, most welcome.