Hey Oliver! There's a lot here, and I don't have all the answers, but I'll do what I can.

Scaling out is a great strategic concern to have. Getting this working for one pipeline isn't really the point - the way Terraform (or any provisioning tool) works, we're going to end up with a lot of these pipelines.

I have the most experience doing this in Azure DevOps, which is very similar to GitHub Actions. In that world I have individual files for each project pipeline. They all live in the same repo. That isn't very DRY, but I'm a fan of simple declarative code.

I literally have project teams build their own pipelines (which is nearly a copy-and-paste operation, since they can reference my original YAML). If they need to customize, they can, because each pipeline is simple and declarative. If they don't, they just check in the new pipeline definition in a PR, and I import it and set code triggers (to run the pipeline for pull request updates on certain repo paths).
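A minimal sketch of what one of these per-project pipeline definitions looks like in Azure DevOps YAML (the paths and names here are made up for illustration, not my actual setup):

```yaml
# Hypothetical pipeline for one project in the mono-repo
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - infrastructure/project-a/*  # only this project's changes trigger a run

pool:
  vmImage: ubuntu-latest

steps:
  - script: terraform init
    displayName: Terraform init
    workingDirectory: infrastructure/project-a

  - script: terraform plan -out=tfplan
    displayName: Terraform plan
    workingDirectory: infrastructure/project-a
```

One wrinkle: for repos hosted in Azure Repos, pull-request validation is configured through branch policies rather than a `pr:` block in the YAML, which is why importing the pipeline and setting code triggers is a separate step.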

I am running around 150 pipelines in a single mono-repo using this model, and as far as I'm aware this will scale out to maybe 1k pipelines. At some point it'd make sense to build automation around importing the pipeline definition and setting code triggers (maybe using Ansible? I'm working on it, but low priority for me right now).
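To give a feel for what that automation could look like, here's a rough dry-run sketch using the Azure DevOps CLI instead of Ansible (all org/project/repo names are hypothetical, and it only prints the commands it would run - drop the leading `echo` to actually run them; requires the `azure-devops` CLI extension):

```shell
# Sketch: bulk-import per-project pipeline definitions with the Azure
# DevOps CLI. Everything here (org, project, repo, file list) is a
# hypothetical placeholder. Install the extension first with:
#   az extension add --name azure-devops

# Hypothetical per-project pipeline definitions checked into the mono-repo
for yml in pipelines/project-a.yml pipelines/project-b.yml; do
  name=$(basename "$yml" .yml)
  # Dry run: echo the command instead of executing it
  echo az pipelines create \
    --organization https://dev.azure.com/example-org \
    --project example-project \
    --repository example-mono-repo \
    --repository-type tfsgit \
    --branch main \
    --name "$name" \
    --yml-path "$yml"
done
```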

I did a presentation on scaling issues and solutions with this model:


This of course is Azure DevOps, not AWS CodeCommit, but they work in similar ways - the pipelines are defined in code, and you need to tell the platform when to trigger them (if you're in a mono-repo). If each project is in its own repo, this is easier, and you can copy the model I have here exactly and have folks deploy their own version.

You asked a good question though - if folks all have their own versions how do you monitor them at scale? To my knowledge you can't, but I'm far from an expert here. If there's some way to share a reference pipeline definition across many projects, I'm not aware of it.

You can definitely version Terraform modules. Git tags are easy and not proprietary, and HashiCorp's own solution, Terraform Enterprise, lets you host your own private module registry. The Terraform code and pipeline definitions are separate and, while one calls the other, they otherwise don't interact. For instance, you don't write the pipeline using Terraform, and the pipeline doesn't write Terraform (or care that it is Terraform); it just runs it.
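A sketch of the Git-tag approach (the repo URL, module path, tag, and inputs are all hypothetical):

```hcl
# Pin a module to a specific Git tag so consumers upgrade independently
module "network" {
  source = "git::https://github.com/example-org/terraform-modules.git//modules/network?ref=v1.2.0"

  # Module inputs as usual
  vpc_cidr = "10.0.0.0/16"
}
```

Each project bumps `ref` on its own schedule; a private registry (like Terraform Enterprise's) instead lets you use `version` constraints such as `version = "~> 1.2"`.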

I hope I hit all your questions, even if I don't know the answers to some of them. If you solve the problems maybe write them up and link me to them so I can learn how to solve these great Qs. Thanks Oliver!


Written by

NetOps/DevOps engineer, consultant, business owner, Pluralsight author. Fascinated with computer security and privacy policy. Teacher. He/Him.
