Compare commits

4 Commits

6 changed files with 164 additions and 3 deletions

View File

@@ -14,9 +14,9 @@ minifyOutput = true
 [params]
 env = "production"
 title = "Hackanooga"
-description = "ExampleSite description"
+description = "Hackanooga - Confessions of a homelab hacker"
 keywords = [ "Blog", "Portfolio", "PaperMod" ]
-author = "Me"
+author = "Mike Conrad"
 images = [ "<link or path of image for opengraph, twitter-cards>" ]
 DateFormat = "January 2, 2006"
 defaultTheme = "auto"

Binary file not shown.

View File

@@ -0,0 +1,43 @@
---
title: "IaC (Infrastructure as Complexity?)"
date: 2025-04-24T12:04:03-04:00
draft: false
---
## Why do IaC? (Or Infrastructure as Complexity)
This is a question I have thought a lot about lately, and one I have personally wrestled with as I grow in my career. While I certainly recognize the many benefits of IaC, I was having trouble justifying why a small team or individual should adopt the approach. After all, it does introduce overhead and complications: somewhat specialized workflows, processes and tooling, plus configs that tend to be pretty verbose.
## Tech Debt
Most of the above concerns fall under the category of tech debt. For a single individual project where you are maybe only managing a handful of cloud resources, IaC can feel like overkill. It is tempting to just use the CLI tool or even the web console for most tasks. Some platforms make this easier than others; Azure, for example, will create most of the dependent resources for you when you create things through the console. Take a `WebApp` for example. A basic WebApp in Azure requires all of the following:
- Resource Group
- App Service plan
- Web App
- Application Insights
- Log Analytics Workspace
Azure will happily create all of those resources for you when you walk through the process in the console. However, if you are working via the CLI or IaC then you are obviously on your own. This can be one of the biggest hurdles to getting started with this approach: knowing all of the required pieces and how to put them together. In my experience, Azure seems to do a better job of this than AWS, but it is still up to you as an engineer to learn and understand the fundamentals.
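For reference, here is roughly what provisioning those pieces by hand looks like with the Azure CLI. This is a sketch only: the resource names, location and SKU are made-up placeholders, the Application Insights command requires the `application-insights` CLI extension, and its `--workspace` argument may need the workspace's full resource ID rather than its name.

```shell
# All names below are hypothetical -- adjust to taste.
az group create --name demo-rg --location eastus

az appservice plan create --name demo-plan --resource-group demo-rg --sku B1

az monitor log-analytics workspace create --resource-group demo-rg \
    --workspace-name demo-logs

# Tie Application Insights to the Log Analytics workspace
az monitor app-insights component create --app demo-insights \
    --location eastus --resource-group demo-rg --workspace demo-logs

az webapp create --name demo-webapp --resource-group demo-rg --plan demo-plan
```

Five commands, in the right order and wired together correctly, for one "basic" web app. That is exactly the knowledge the console quietly handles for you.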
## Disaster Recovery
With all of this complexity, IaC can seem daunting, overwhelming and possibly overkill. My thoughts changed, however, after a conversation with my good friend John Goodwin. We were discussing this exact topic and he brought up a very good, indisputable point: one of the biggest benefits you get out of IaC is that you are essentially generating a blueprint of your infrastructure.
Think about it: this is a pretty common scenario. Your company hires a "DevOps" guy to help set up and maintain all of your infrastructure. You are a small remote team and he mostly works by himself in a closet (figuratively, but sometimes also literally). Let's say he is in this role for several years and your application grows considerably. You started out with a couple of EC2 instances and now you have load balancers, container registries, custom VPCs, subnets, Elastic IPs, databases, etc.
If he were to leave, you would most likely have zero clue what infrastructure you currently have, let alone how to maintain it. Worse, if there were a catastrophic failure or disaster (hackers, ransomware, data center meltdown, your cloud provider refusing you service, etc.) you would have zero chance of setting everything back up on your own, or even figuring out what you had set up in the first place. At this point, you would probably reach out to a specialist like myself to help you "recover" everything. I know this because I have seen it many times before.
The problem is that if you are reaching out to me about something like this, it is oftentimes too late, and the best we can do is make an educated guess at how things may have been set up previously. Unfortunately, with the number of dependencies in modern software and infrastructure, there will likely be a lot of pieces we will not be able to put back together properly, and I will either advise you to start from scratch (within reason) or do my best to hack and patch things together.
All of this could have been avoided if you had pushed for and implemented IaC early on. Instead of the mess and guesswork, you would have a (hopefully) up-to-date blueprint of your infrastructure and how to put it back together. Of course, in this scenario the insurance is only worth it if the "DevOps" guy is maintaining it in the first place. If you are a small startup founder, this may be hard to enforce, but for small teams and responsible engineers it should make perfect sense.
## Audit and Documentation trails
There are some other serious benefits to adopting this approach, even for small projects. As engineers, we pour lots of time and energy into solving problems, and we often write code that is complex or requires several layers of abstraction. We like to be able to reason about our code: if this variable is defined here, where is it being passed, manipulated, etc.? Infrastructure is no different.
Think about how many times you have written a line of code that solved an annoying problem. You spent days trying to solve this one thing and finally figured it out. Now, six months later, there is a bug and you have to go back and try to remember what you did and why. This is why good commit messages and pull requests are so essential to the developer workflow, not to mention comments and tests.
Let's take these same concepts and apply them to our infrastructure. It is pretty common to go into your cloud console and find resources hanging around with obscure names and no clear record of who created them or for what purpose. This is especially true of interdependent resources (VPCs, subnets, firewalls, etc.). If instead you are disciplined enough to create resources only through GitOps workflows and IaC tools, you can have much more confidence in what you have and why.
This provides you with an audit trail, and hopefully you are disciplined enough to comment your configs and provide detailed descriptions of changes in your commits and pull requests. Your pull requests should also be a place where you can discuss these changes. Sure, it may "slow things down" in the short term because it is always easier to just run a shell script, but this approach ultimately helps to solve some of the tribal knowledge issues.
Instead of wondering what scripts Jim is using to manage these pieces of infrastructure, you have a versioned, commented (and hopefully up-to-date) configuration and plan in your source control.
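To make that concrete, here is a sketch of what such a change might look like day to day. It assumes Terraform and a hypothetical branch, file and commit; the point is the reviewable trail, not the specific tool.

```shell
# Hypothetical example: adding a piece of infrastructure via a reviewed change
git switch -c add-staging-lb

# ...edit main.tf to describe the new load balancer...

# Preview exactly what will be created/changed/destroyed before touching anything
terraform init
terraform plan -out=tfplan

# The config (never the state file) goes through normal code review
git add main.tf
git commit -m "Add staging load balancer for TLS termination"
git push -u origin add-staging-lb

# After review and merge, CI (or you) applies the saved plan
terraform apply tfplan
```

Six months later, `git log` and the pull request thread tell you who added the load balancer, when, and why. The console alone tells you none of that.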

View File

@@ -0,0 +1,113 @@
---
title: "Docker Shell Tricks"
date: 2025-06-18T09:10:13-04:00
categories:
- Tips and Tricks
tags:
- Docker
- CI/CD
- Linux
- Containers
- DevOps
---
# Overview
One of my all time favorite tricks when working with Docker: getting information on, or performing an action on, multiple containers with similar naming schemes. In short:
- Passing the `--filter name=` flag allows for fuzzy matching.
- `docker container ps -q` returns only container ids, which is perfect for combining with commands like `docker container stop` or `docker container rm`.
- Of course, if you are using Docker Compose, simply running `docker compose down` will accomplish the same thing for compose-managed containers.

It is fairly common (for me at least) to want to perform some action on a number of containers at once. For example, sometimes I have a lot of random containers running and I want to shut them all down (and sometimes remove them) at once, and not all of them are managed by a compose file.
In the past I would write shell scripts to handle this. Something like:
```bash
$ for i in $(docker container ps --format '{{.ID}}'); do docker container rm --force $i; done
5d5b3004003f
038b809fb0af
de4abb80414c
530440f8848e
3aff02eddfe1
3e29e7db168c
46275b44f744
b3cd33cd7658
8e9f226f107e
29f67eea6ac8
597b72330d3d
```
This works ok, but if you only want to kill certain containers it gets a bit trickier. For instance, we use Docker to spin up preview environments where each environment may have up to 10 containers. They are all prefixed with a PR number, so I could do something like this:
```bash
$ for i in $(docker container ps --format '{{.ID}} {{.Names}}' | grep pr1 | awk '{print $1}'); do docker container rm --force $i; done
```
While both of those solutions work, they are a bit messy, and fortunately Docker has a much better solution built in. Passing the `-q` flag to certain commands will return just a list of container ids:
```bash
$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
31c3dbd33795 nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr9
899384beaf6f nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr8
e3aa660916d7 nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr7
5fd2998a800a nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr6
898450246c0c nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr5
9e22f39b9810 nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr4
3a0ee53664cf nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr3
7e3a512739a4 nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr2
af96f09686ef nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp pr1
$ docker container ps -q
31c3dbd33795
899384beaf6f
e3aa660916d7
5fd2998a800a
898450246c0c
9e22f39b9810
3a0ee53664cf
7e3a512739a4
af96f09686ef
```
One of the best parts about this trick though is that you can combine it with the `--filter` flag like this:
```bash
$ docker container ps --filter name=pr -q
31c3dbd33795
899384beaf6f
e3aa660916d7
5fd2998a800a
898450246c0c
9e22f39b9810
3a0ee53664cf
7e3a512739a4
af96f09686ef
```
Now stopping all my containers is as easy as:
```bash
$ docker container rm $(docker container ps -qa) --force
31c3dbd33795
899384beaf6f
e3aa660916d7
5fd2998a800a
898450246c0c
9e22f39b9810
3a0ee53664cf
7e3a512739a4
af96f09686ef
```
Or, if I just want to remove specific containers, say all the ones with pr11217 in the name:
```bash
$ docker container rm $(docker container ps --filter name=pr11217 -qa) --force
```
Pretty slick! No more messing with `awk`, `grep`, `head`, `tail`, etc.; instead it is just one simple command. I would love to hear your tips and tricks for working with Docker!
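The same pattern combines with other filters as well. A couple of extra, purely illustrative, examples:

```shell
# Remove every container that has already exited
docker container rm $(docker container ps --filter status=exited -q)

# Stop (rather than remove) all containers for a given preview environment
docker container stop $(docker container ps --filter name=pr -q)
```

One caveat: if the filter matches nothing, the command substitution is empty and `docker container rm` will complain about a missing argument; piping the ids through `xargs -r docker container rm` avoids that.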
#docker #cicd #containers #softwaredevelopment

View File

@@ -0,0 +1,5 @@
<script
src="https://logs.hackanooga.com/api/script.js"
data-site-id="2"
defer
></script>