hackanooga / Confessions of a homelab hacker

Standing up a Wireguard VPN (Wed, 25 Sep 2024)

VPNs have traditionally been slow, complex, and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons why you should use Wireguard in this article; instead I will focus on just how easy it is to set up and configure.

For this tutorial we will be using Terraform to stand up a DigitalOcean droplet and then install Wireguard on it. The droplet will act as our “server” in this example and our own computer will be the “client”. Of course, you don’t have to use Terraform; you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.

Create Droplet with Terraform

I have written some very basic Terraform to get us started. It just creates a droplet with a predefined SSH key and a setup script passed as user data. When the droplet gets created, the script will be copied to the instance and automatically executed, and after a few minutes everything should be ready to go. Feel free to clone the repo above, or if you would rather do everything by hand that’s great too; I will assume you are doing everything by hand, since my reasoning for doing it this way was to better understand the process. Deploying from the repo should be pretty self-explanatory.

First create our main.tf with the following contents:

# main.tf
# Attach an SSH key to our droplet
resource "digitalocean_ssh_key" "default" {
  name       = "Terraform Example"
  public_key = file("./tf-digitalocean.pub")
}

# Create a new Web Droplet in the nyc1 region
resource "digitalocean_droplet" "web" {
  image    = "ubuntu-22-04-x64"
  name     = "wireguard"
  region   = "nyc1"
  size     = "s-2vcpu-4gb"
  ssh_keys = [digitalocean_ssh_key.default.fingerprint]
  user_data = file("setup.sh")
}

output "droplet_output" {
  value = digitalocean_droplet.web.ipv4_address
}

Next create a terraform.tf file in the same directory with the following contents:

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.41.0"
    }
  }
}

provider "digitalocean" {
}

Now we will need to create the SSH key pair that we referenced in our Terraform code.

$ ssh-keygen -t rsa -C "WireguardVPN" -f ./tf-digitalocean -q -N ""

Next we need to set an environment variable for our DigitalOcean access token.

$ export DIGITALOCEAN_ACCESS_TOKEN=dop_v1_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Now we are ready to initialize our Terraform and apply it:

$ terraform init
$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "ubuntu-22-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "wireguard"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "nyc1"
      + resize_disk          = true
      + size                 = "s-2vcpu-4gb"
      + ssh_keys             = (known after apply)
      + status               = (known after apply)
      + urn                  = (known after apply)
      + user_data            = "69d130f386b262b136863be5fcffc32bff055ac0"
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_ssh_key.default will be created
  + resource "digitalocean_ssh_key" "default" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "Terraform Example"
      + public_key  = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
        EOT
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + droplet_output = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

digitalocean_ssh_key.default: Creating...
digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
digitalocean_droplet.web: Creating...
digitalocean_droplet.web: Still creating... [10s elapsed]
digitalocean_droplet.web: Still creating... [20s elapsed]
digitalocean_droplet.web: Still creating... [30s elapsed]
digitalocean_droplet.web: Creation complete after 31s [id=447469336]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

droplet_output = "159.223.113.207"

All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.

#!/usr/bin/env sh
set -e
set -u
# Set the listen port used by Wireguard. 51820 is the default; feel free to change it.
LISTENPORT=51820
CONFIG_DIR=/root/wireguard-conf
umask 077
mkdir -p $CONFIG_DIR/client

# Install wireguard
apt update && apt install -y wireguard

# Generate public/private key for the "server".
wg genkey > $CONFIG_DIR/privatekey
wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey

# Generate public/private key for the "client"
wg genkey > $CONFIG_DIR/client/privatekey
wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey


# Generate server config
echo "[Interface]
Address = 10.66.66.1/24,fd42:42:42::1/64
ListenPort = $LISTENPORT
PrivateKey = $(cat $CONFIG_DIR/privatekey)

### Client config
[Peer]
PublicKey = $(cat $CONFIG_DIR/client/publickey)
AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
" > /etc/wireguard/do.conf


# Generate client config.  This will need to be copied to your machine.
# Note: use the full paths here; the script's working directory is not
# guaranteed, so a bare "cat publickey" would fail.
echo "[Interface]
PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
Address = 10.66.66.2/32,fd42:42:42::2/128
DNS = 1.1.1.1,1.0.0.1

[Peer]
PublicKey = $(cat $CONFIG_DIR/publickey)
Endpoint = $(curl -s icanhazip.com):$LISTENPORT
AllowedIPs = 0.0.0.0/0,::/0
" > $CONFIG_DIR/client-config.conf

wg-quick up do

# Add iptables rules to forward internet traffic through this box
# We are assuming our Wireguard interface is called do and our
# primary public facing interface is called eth0.

iptables -I INPUT -p udp --dport 51820 -j ACCEPT
iptables -I FORWARD -i eth0 -o do -j ACCEPT
iptables -I FORWARD -i do -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
ip6tables -I FORWARD -i do -j ACCEPT
ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Enable routing on the server
echo "net.ipv4.ip_forward = 1
      net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
sysctl --system

As you can see, it is pretty straightforward. All you really need to do is:

On the “server” side:

  1. Generate a private key and derive a public key from it for both the “server” and the “client”.
  2. Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
  3. The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.

On the “client” side:

  1. Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
  2. The client needs to know which private key to use to secure the interface.
  3. It also needs to know the public key of the “server”, the public IP address/hostname of the “server” it is connecting to, and the port it is listening on.
  4. Finally it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic but you could restrict this as you see fit.
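
Putting those client-side pieces together, the generated client config ends up looking roughly like this (keys and endpoint are shown as placeholders, since yours will differ):

```
# /etc/wireguard/do.conf on the client (placeholder values)
[Interface]
PrivateKey = <client private key>
Address = 10.66.66.2/32,fd42:42:42::2/128
DNS = 1.1.1.1,1.0.0.1

[Peer]
PublicKey = <server public key>
Endpoint = <droplet public IP>:51820
AllowedIPs = 0.0.0.0/0,::/0
```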

Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:

## Make sure you have Wireguard installed on your local machine as well.
## https://wireguard.com/install

## Copy the client config to our local machine and move it to our wireguard directory.
$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf | sudo tee /etc/wireguard/do.conf

Before we try to connect, let’s log into the server and make sure everything is set up correctly:

$ ssh -i tf-digitalocean root@159.223.113.207
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
 * Management:     https://landscape.canonical.com/
 * Support:        https://ubuntu.com/pro

 System information as of Wed Sep 25 13:19:02 UTC 2024

  System load:  0.03              Processes:             113
  Usage of /:   2.1% of 77.35GB   Users logged in:       0
  Memory usage: 6%                IPv4 address for eth0: 157.230.221.196
  Swap usage:   0%                IPv4 address for eth0: 10.10.0.5

Expanded Security Maintenance for Applications is not enabled.

70 updates can be applied immediately.
40 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

New release '24.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.


Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
root@wireguard:~#

Awesome! We are connected. Now let’s check the Wireguard interface using the wg command. If our config is correct, we should see an interface line and one peer line like the output below. If the peer line is missing, something is wrong with the configuration, most likely a mismatch between the public/private keys:

root@wireguard:~# wg
interface: do
  public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
  private key: (hidden)
  listening port: 51820

peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
  allowed ips: 10.66.66.2/32, fd42:42:42::2/128
root@wireguard:~# 

So now we should be ready to go! On your local machine go ahead and try it out:

## Start the interface with wg-quick up [interface_name]
$ sudo wg-quick up do
[sudo] password for mikeconrad: 
[#] ip link add do type wireguard
[#] wg setconf do /dev/fd/63
[#] ip -4 address add 10.66.66.2/32 dev do
[#] ip -6 address add fd42:42:42::2/128 dev do
[#] ip link set mtu 1420 up dev do
[#] resolvconf -a do -m 0 -x
[#] wg set do fwmark 51820
[#] ip -6 route add ::/0 dev do table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] ip6tables-restore -n
[#] ip -4 route add 0.0.0.0/0 dev do table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n

## Check our config
$ sudo wg
interface: do
  public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
  private key: (hidden)
  listening port: 34596
  fwmark: 0xca6c

peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
  endpoint: 157.230.177.54:51820
  allowed ips: 0.0.0.0/0, ::/0
  latest handshake: 1 second ago
  transfer: 1.82 KiB received, 2.89 KiB sent

## Make sure we can ping the outside world
mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
^C
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms

## Verify our traffic is actually going over the tunnel.
$ curl icanhazip.com
157.230.177.54


We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:

$ ssh -i tf-digitalocean root@10.66.66.1
The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:130: [hashed name]
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
 * Management:     https://landscape.canonical.com/
 * Support:        https://ubuntu.com/pro

 System information as of Wed Sep 25 13:32:12 UTC 2024

  System load:  0.02              Processes:             109
  Usage of /:   2.1% of 77.35GB   Users logged in:       0
  Memory usage: 6%                IPv4 address for eth0: 157.230.177.54
  Swap usage:   0%                IPv4 address for eth0: 10.10.0.5

Expanded Security Maintenance for Applications is not enabled.

73 updates can be applied immediately.
40 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

New release '24.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.


root@wireguard:~# 

Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.

Hardening your web server by only allowing traffic from Cloudflare (Thu, 01 Aug 2024)

TL;DR:

If you just want the code you can find a convenient script on my Gitea server here. This version has been slightly modified so that it will work on more systems.

I have been using Cloudflare for several years for both personal and professional projects. The free plan has some very generous limits, and it’s a great way to clear out some low-hanging fruit and improve the security of your application. If you’re not familiar with how it works: Cloudflare has two modes for DNS records, DNS Only and Proxied. The only way to get the advantages of Cloudflare is to use Proxied mode. Cloudflare has great documentation on how all of their services work, but essentially you point your domain to Cloudflare and Cloudflare provisions their network of proxy servers to handle requests for your domain.

These proxy servers allow you to secure your domain by implementing things like a WAF and rate limiting. You can also enforce HTTPS-only mode and modify/add custom request/response headers. You will notice that once you turn this mode on, your web server will log requests as coming from Cloudflare IP addresses. They have great documentation on how to configure your web server to restore the original client IP addresses in your log files.

This is a very easy first step toward securing your origin server, but it still allows attackers to access your server directly if they know its IP address. We can take our security one step further by only allowing requests from IP addresses originating within Cloudflare, meaning we will only accept requests that come from a Cloudflare proxy server. The setup is fairly straightforward. In this example I will be using a Linux server.

We can achieve this pretty easily because Cloudflare provides an API of sorts where they regularly publish their network blocks. Here is the basic script we will use:

for ip in $(curl -s https://www.cloudflare.com/ips-v4/); do iptables -I INPUT -p tcp -m multiport --dports http,https -s "$ip" -j ACCEPT; done

for ip in $(curl -s https://www.cloudflare.com/ips-v6/); do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s "$ip" -j ACCEPT; done

iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP

This will pull down the latest network addresses from Cloudflare and create iptables rules for us. These IP addresses do change from time to time so you may want to put this in a script and run it via a cronjob to have it update on a regular basis.
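
For example, you could save the commands above as a script and refresh the rules nightly with a cron entry like this (the script path is a hypothetical example):

```shell
# /etc/cron.d/cloudflare-allowlist (hypothetical path)
# Rebuild the Cloudflare allowlist every night at 3:00 AM.
0 3 * * * root /usr/local/bin/update-cloudflare-ips.sh
```

Keep in mind that simply re-running the insert loop will stack duplicate rules, so a real refresh script should flush or delete the old ACCEPT rules first.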

Now with this in place, here are the results:

This should cut down on some of the noise from attackers and script kiddies trying to find holes in your security.

SFTP Server Setup for Daily Inventory File Transfers (Wed, 17 Jul 2024)

Job Description

We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:

– SFTP server setup and configuration
– Network security protocols
– Troubleshooting and problem-solving skills

If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.


My Role

I walked the client through the process of setting up a DigitalOcean account. I created an Ubuntu 22.04 VM and installed SFTPGo. I set the client up with an administrator user so that they could easily log in and manage users and shares. I also implemented some basic security practices and set the client up with a custom domain and a free TLS/SSL certificate from LetsEncrypt. With the documentation and screenshots I provided, the client was able to get everything up and running, add users, and connect other systems easily and securely.


Client Feedback

Rating is 5 out of 5.

Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.

Debugging running Nginx config (Wed, 17 Jul 2024)

I was recently working on a project where a client had cPanel/WHM with Nginx and Apache. They had a large number of sites managed by Nginx with a large number of includes. I created a custom config to override a location block and needed to be certain that my changes were actually being picked up. Any time I make changes to an Nginx config, I try to be vigilant about running:

nginx -t

to test my configuration and ensure I don’t have any syntax errors. I was looking for an easy way to view the actual compiled config and found the -T flag, which will test the configuration and dump it to standard out. This is pretty handy if you have a large number of includes in various locations. Here is an example from a fresh Nginx Docker container:

root@2771f302dc98:/# nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

# configuration file /etc/nginx/mime.types:

types {
    text/html                                        html htm shtml;
    text/css                                         css;
    text/xml                                         xml;
    image/gif                                        gif;
    image/jpeg                                       jpeg jpg;
    application/javascript                           js;
    application/atom+xml                             atom;
    application/rss+xml                              rss;

    text/mathml                                      mml;
    text/plain                                       txt;
    text/vnd.sun.j2me.app-descriptor                 jad;
    text/vnd.wap.wml                                 wml;
    text/x-component                                 htc;

    image/avif                                       avif;
    image/png                                        png;
    image/svg+xml                                    svg svgz;
    image/tiff                                       tif tiff;
    image/vnd.wap.wbmp                               wbmp;
    image/webp                                       webp;
    image/x-icon                                     ico;
    image/x-jng                                      jng;
    image/x-ms-bmp                                   bmp;

    font/woff                                        woff;
    font/woff2                                       woff2;

    application/java-archive                         jar war ear;
    application/json                                 json;
    application/mac-binhex40                         hqx;
    application/msword                               doc;
    application/pdf                                  pdf;
    application/postscript                           ps eps ai;
    application/rtf                                  rtf;
    application/vnd.apple.mpegurl                    m3u8;
    application/vnd.google-earth.kml+xml             kml;
    application/vnd.google-earth.kmz                 kmz;
    application/vnd.ms-excel                         xls;
    application/vnd.ms-fontobject                    eot;
    application/vnd.ms-powerpoint                    ppt;
    application/vnd.oasis.opendocument.graphics      odg;
    application/vnd.oasis.opendocument.presentation  odp;
    application/vnd.oasis.opendocument.spreadsheet   ods;
    application/vnd.oasis.opendocument.text          odt;
    application/vnd.openxmlformats-officedocument.presentationml.presentation
                                                     pptx;
    application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
                                                     xlsx;
    application/vnd.openxmlformats-officedocument.wordprocessingml.document
                                                     docx;
    application/vnd.wap.wmlc                         wmlc;
    application/wasm                                 wasm;
    application/x-7z-compressed                      7z;
    application/x-cocoa                              cco;
    application/x-java-archive-diff                  jardiff;
    application/x-java-jnlp-file                     jnlp;
    application/x-makeself                           run;
    application/x-perl                               pl pm;
    application/x-pilot                              prc pdb;
    application/x-rar-compressed                     rar;
    application/x-redhat-package-manager             rpm;
    application/x-sea                                sea;
    application/x-shockwave-flash                    swf;
    application/x-stuffit                            sit;
    application/x-tcl                                tcl tk;
    application/x-x509-ca-cert                       der pem crt;
    application/x-xpinstall                          xpi;
    application/xhtml+xml                            xhtml;
    application/xspf+xml                             xspf;
    application/zip                                  zip;

    application/octet-stream                         bin exe dll;
    application/octet-stream                         deb;
    application/octet-stream                         dmg;
    application/octet-stream                         iso img;
    application/octet-stream                         msi msp msm;

    audio/midi                                       mid midi kar;
    audio/mpeg                                       mp3;
    audio/ogg                                        ogg;
    audio/x-m4a                                      m4a;
    audio/x-realaudio                                ra;

    video/3gpp                                       3gpp 3gp;
    video/mp2t                                       ts;
    video/mp4                                        mp4;
    video/mpeg                                       mpeg mpg;
    video/quicktime                                  mov;
    video/webm                                       webm;
    video/x-flv                                      flv;
    video/x-m4v                                      m4v;
    video/x-mng                                      mng;
    video/x-ms-asf                                   asx asf;
    video/x-ms-wmv                                   wmv;
    video/x-msvideo                                  avi;
}

# configuration file /etc/nginx/conf.d/default.conf:
server {
    listen       80;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;/
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

As you can see from the output above, we get all of the various Nginx config files in use printed to the console, perfect for grepping or searching/filtering with other tools.
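
For example, since -T prints a "# configuration file ..." marker before each file it merges in, you can quickly find which include defines a directive (the patterns below are just illustrative):

```shell
# Number every line that sets a server_name anywhere in the merged config:
nginx -T 2>/dev/null | grep -n 'server_name'

# Show a directive with a little surrounding context:
nginx -T 2>/dev/null | grep -A 3 'location = /50x.html'
```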

Fun with bots – SSH tarpitting (Mon, 24 Jun 2024)

For those of you who aren’t familiar with the concept of a network tarpit, it is a fairly simple concept. Wikipedia defines it like this:

A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against a computer worm, and the idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a tar pit, in which animals can get bogged down and slowly sink under the surface, like in a swamp.

https://en.wikipedia.org/wiki/Tarpit_(networking)

If you run any sort of service on the internet, then you know that as soon as your server has a public IP address and open ports, there are scanners and bots constantly trying to get in. If you take decent steps towards security, it is little more than an annoyance, but annoying nonetheless. One day when I had some extra time on my hands, I started researching ways to mess with the bots trying to scan/attack my site.

It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting SSH connections is endlessh. The way it works is actually pretty simple. The SSH RFC states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the passage that allows this behavior:

   The server MAY send other lines of data before sending the version
   string.  Each line SHOULD be terminated by a Carriage Return and Line
   Feed.  Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
   in ISO-10646 UTF-8 [RFC3629] (language is not specified).  Clients
   MUST be able to process such lines.  Such lines MAY be silently
   ignored, or MAY be displayed to the client user.  If they are
   displayed, control character filtering, as discussed in [SSH-ARCH],
   SHOULD be used.  The primary use of this feature is to allow TCP-
   wrappers to display an error message before disconnecting.
SSH RFC

Essentially this means that there is no limit to the amount of data a server can send back to the client before the version string, and the client must be able to wait for and process all of this data. Now let’s see it in action.

git clone https://github.com/skeeto/endlessh.git
cd endlessh
make
./endlessh &
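
If you want real bot traffic to reach it, one common approach (not shown here, and assuming your real sshd has been moved to a different port first) is an iptables REDIRECT rule:

```shell
# Redirect inbound TCP port 22 to the tarpit on 2222.
# WARNING: move your real sshd off port 22 before applying this,
# or you will lock yourself out.
iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 2222
```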

By default this fake server listens on port 2222. I have a port forward set up that forwards all ssh traffic from port 22 to 2222. Now try to connect via ssh:

ssh -vvv localhost -p 2222

If you wait a few seconds you will see the server send back the version string and then start sending a random banner:

$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
2024-06-24T13:05:59.488Z Delay 10000
2024-06-24T13:05:59.488Z MaxLineLength 32
2024-06-24T13:05:59.488Z MaxClients 4096
2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
2024-06-24T13:05:59.488Z socket() = 3
2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
2024-06-24T13:05:59.488Z listen(3) = 0
2024-06-24T13:05:59.488Z poll(1, -1)
ssh -vvv localhost -p 2222
OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
debug1: Reading configuration data /home/mikeconrad/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
debug2: resolving "localhost" port 2222
debug3: resolve_host: lookup localhost:2222
debug3: ssh_connect_direct: entering
debug1: Connecting to localhost [::1] port 2222.
debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
debug1: Connection established.
2024-06-24T13:06:08.635Z = 1
2024-06-24T13:06:08.635Z accept() = 4
2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
2024-06-24T13:06:08.635Z poll(1, 10000)
debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
2024-06-24T13:06:18.684Z = 0
2024-06-24T13:06:18.684Z write(4) = 3
2024-06-24T13:06:18.684Z poll(1, 10000)
debug1: kex_exchange_identification: banner line 0: V
2024-06-24T13:06:28.734Z = 0
2024-06-24T13:06:28.734Z write(4) = 25
2024-06-24T13:06:28.734Z poll(1, 10000)
debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]

This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program, fakessh. While fakessh isn’t technically a tarpit (it’s more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. It allows any username/password combination to connect and gives the attacker a fake shell prompt. There is no actual access to any filesystem, and all of their commands basically return gibberish.

Here are some logs from an actual server of mine running fakessh:

2024/06/24 06:51:20 [conn] ip=183.81.169.238:40430
2024/06/24 06:51:22 [auth] ip=183.81.169.238:40430 version="SSH-2.0-Go" user="root" password="0"
2024/06/24 06:51:23 [conn] ip=183.81.169.238:40444
2024/06/24 06:51:25 [auth] ip=183.81.169.238:40444 version="SSH-2.0-Go" user="root" password="eve"
2024/06/24 06:51:26 [conn] ip=183.81.169.238:48408
2024/06/24 06:51:27 [auth] ip=183.81.169.238:48408 version="SSH-2.0-Go" user="root" password="root"
2024/06/24 06:51:28 [conn] ip=183.81.169.238:48434
2024/06/24 06:51:30 [auth] ip=183.81.169.238:48434 version="SSH-2.0-Go" user="root" password="1"
2024/06/24 06:51:30 [conn] ip=183.81.169.238:48448
2024/06/24 06:51:32 [auth] ip=183.81.169.238:48448 version="SSH-2.0-Go" user="root" password="123"
2024/06/24 06:51:32 [conn] ip=183.81.169.238:48476
2024/06/24 06:51:35 [auth] ip=183.81.169.238:48476 version="SSH-2.0-Go" user="root" password="admin"
2024/06/24 06:51:35 [conn] ip=183.81.169.238:39250
2024/06/24 06:51:37 [auth] ip=183.81.169.238:39250 version="SSH-2.0-Go" user="root" password="123456"
2024/06/24 06:51:38 [conn] ip=183.81.169.238:39276
2024/06/24 06:51:40 [auth] ip=183.81.169.238:39276 version="SSH-2.0-Go" user="root" password="123123"
2024/06/24 06:51:40 [conn] ip=183.81.169.238:39294
2024/06/24 06:51:42 [auth] ip=183.81.169.238:39294 version="SSH-2.0-Go" user="root" password="test"
2024/06/24 06:51:43 [conn] ip=183.81.169.238:39316
2024/06/24 06:51:45 [auth] ip=183.81.169.238:39316 version="SSH-2.0-Go" user="root" password="123456789"
2024/06/24 06:51:45 [conn] ip=183.81.169.238:35108
2024/06/24 06:51:47 [auth] ip=183.81.169.238:35108 version="SSH-2.0-Go" user="root" password="12345"
2024/06/24 06:51:48 [conn] ip=183.81.169.238:35114
2024/06/24 06:51:50 [auth] ip=183.81.169.238:35114 version="SSH-2.0-Go" user="root" password="password"
2024/06/24 06:51:50 [conn] ip=183.81.169.238:35130
2024/06/24 06:51:52 [auth] ip=183.81.169.238:35130 version="SSH-2.0-Go" user="root" password="12345678"
2024/06/24 06:51:52 [conn] ip=183.81.169.238:35146
2024/06/24 06:51:54 [auth] ip=183.81.169.238:35146 version="SSH-2.0-Go" user="root" password="111111"
2024/06/24 06:51:55 [conn] ip=183.81.169.238:58490
2024/06/24 06:51:57 [auth] ip=183.81.169.238:58490 version="SSH-2.0-Go" user="root" password="1234567890"
2024/06/24 06:51:57 [conn] ip=183.81.169.238:58528
2024/06/24 06:51:59 [auth] ip=183.81.169.238:58528 version="SSH-2.0-Go" user="root" password="1234"
2024/06/24 06:52:00 [conn] ip=183.81.169.238:58572
2024/06/24 06:52:02 [auth] ip=183.81.169.238:58572 version="SSH-2.0-Go" user="root" password="password123"
2024/06/24 06:52:02 [conn] ip=183.81.169.238:58588
2024/06/24 06:52:04 [auth] ip=183.81.169.238:58588 version="SSH-2.0-Go" user="root" password="ubuntu"
2024/06/24 06:52:05 [conn] ip=183.81.169.238:37198
2024/06/24 06:52:07 [auth] ip=183.81.169.238:37198 version="SSH-2.0-Go" user="Antminer" password="root"
2024/06/24 06:52:07 [conn] ip=183.81.169.238:37214
2024/06/24 06:52:09 [auth] ip=183.81.169.238:37214 version="SSH-2.0-Go" user="Antminer" password="admin"
2024/06/24 06:52:10 [conn] ip=183.81.169.238:37238
2024/06/24 06:52:11 [auth] ip=183.81.169.238:37238 version="SSH-2.0-Go" user="root" password="innot1t2"
2024/06/24 06:52:12 [conn] ip=183.81.169.238:37258
2024/06/24 06:52:14 [auth] ip=183.81.169.238:37258 version="SSH-2.0-Go" user="root" password="t1t2t3a5"
2024/06/24 06:52:14 [conn] ip=183.81.169.238:55658
2024/06/24 06:52:16 [auth] ip=183.81.169.238:55658 version="SSH-2.0-Go" user="root" password="blacksheepwall"
2024/06/24 06:52:17 [conn] ip=183.81.169.238:55670
2024/06/24 06:52:19 [auth] ip=183.81.169.238:55670 version="SSH-2.0-Go" user="root" password="envision"
2024/06/24 06:52:19 [conn] ip=183.81.169.238:55708
2024/06/24 06:52:21 [auth] ip=183.81.169.238:55708 version="SSH-2.0-Go" user="root" password="bwcon"
2024/06/24 06:52:22 [conn] ip=183.81.169.238:55776
2024/06/24 06:52:23 [auth] ip=183.81.169.238:55776 version="SSH-2.0-Go" user="admin" password="root"
2024/06/24 06:52:24 [conn] ip=183.81.169.238:46646
2024/06/24 06:52:26 [auth] ip=183.81.169.238:46646 version="SSH-2.0-Go" user="baikal" password="baikal"
2024/06/24 06:52:26 [conn] ip=180.101.88.197:44620
2024/06/24 06:52:27 [conn] ip=180.101.88.197:44620 err="ssh: disconnect, reason 11: "
2024/06/24 06:53:35 [conn] ip=218.92.0.76:50610
2024/06/24 06:53:36 [conn] ip=218.92.0.76:50610 err="ssh: disconnect, reason 11: "
2024/06/24 07:02:28 [conn] ip=218.92.0.27:64676
2024/06/24 07:02:30 [conn] ip=218.92.0.27:64676 err="ssh: disconnect, reason 11: "
2024/06/24 07:10:05 [conn] ip=218.92.0.76:57601
2024/06/24 07:10:07 [conn] ip=218.92.0.76:57601 err="ssh: disconnect, reason 11: "
2024/06/24 07:14:05 [conn] ip=193.201.9.156:63056
2024/06/24 07:14:05 [auth] ip=193.201.9.156:63056 version="SSH-2.0-Go" user="ubnt" password="ubnt"
2024/06/24 07:14:05 [conn] ip=193.201.9.156:63056 err="read tcp 10.10.10.107:2222->193.201.9.156:63056: read: connection reset by peer"
2024/06/24 07:24:53 [conn] ip=218.92.0.31:25485
2024/06/24 07:24:54 [conn] ip=218.92.0.31:25485 err="ssh: disconnect, reason 11: "
2024/06/24 07:24:54 [conn] ip=218.92.0.112:39270
2024/06/24 07:24:56 [conn] ip=218.92.0.112:39270 err="ssh: disconnect, reason 11: "
2024/06/24 07:26:42 [conn] ip=218.92.0.34:59993
2024/06/24 07:35:46 [conn] ip=218.92.0.34:59993 err="read tcp 10.10.10.107:2222->218.92.0.34:59993: read: connection reset by peer"
2024/06/24 07:41:28 [conn] ip=218.92.0.107:62285
2024/06/24 07:41:31 [conn] ip=218.92.0.107:62285 err="ssh: disconnect, reason 11: "
2024/06/24 07:43:27 [conn] ip=218.92.0.29:34556
2024/06/24 07:43:28 [conn] ip=218.92.0.29:34556 err="ssh: disconnect, reason 11: "
2024/06/24 07:44:15 [conn] ip=218.92.0.118:37047
2024/06/24 07:44:22 [conn] ip=218.92.0.118:37047 err="ssh: disconnect, reason 11: "
2024/06/24 07:56:10 [conn] ip=157.245.98.245:6116
2024/06/24 07:56:11 [conn] ip=157.245.98.245:6116 err="ssh: unexpected message type 20 (expected 21)"
2024/06/24 07:57:57 [conn] ip=218.92.0.112:28326
2024/06/24 07:57:58 [conn] ip=218.92.0.112:28326 err="ssh: disconnect, reason 11: "
2024/06/24 08:00:01 [conn] ip=218.92.0.24:24948
2024/06/24 08:00:02 [conn] ip=218.92.0.24:24948 err="ssh: disconnect, reason 11: "
2024/06/24 08:06:19 [conn] ip=193.201.9.156:46865
2024/06/24 08:06:20 [auth] ip=193.201.9.156:46865 version="SSH-2.0-Go" user="root" password="xc3511"
2024/06/24 08:06:20 [conn] ip=193.201.9.156:46865 err="read tcp 10.10.10.107:2222->193.201.9.156:46865: read: connection reset by peer"
2024/06/24 08:14:26 [conn] ip=180.101.88.197:48347
2024/06/24 08:14:28 [conn] ip=180.101.88.197:48347 err="ssh: disconnect, reason 11: "
2024/06/24 08:16:28 [conn] ip=218.92.0.56:18064
2024/06/24 08:16:32 [conn] ip=218.92.0.56:18064 err="ssh: disconnect, reason 11: "
2024/06/24 08:30:55 [conn] ip=180.101.88.196:40495
2024/06/24 08:30:57 [conn] ip=180.101.88.196:40495 err="ssh: disconnect, reason 11: "
2024/06/24 08:32:20 [conn] ip=85.209.11.227:15493
2024/06/24 08:32:21 [auth] ip=85.209.11.227:15493 version="SSH-2.0-Go" user="telecomadmin" password="admintelecom"
2024/06/24 08:32:21 [conn] ip=85.209.11.227:15493 err="read tcp 10.10.10.107:2222->85.209.11.227:15493: read: connection reset by peer"
2024/06/24 08:33:19 [conn] ip=218.92.0.34:59804
2024/06/24 08:33:21 [conn] ip=218.92.0.34:59804 err="ssh: disconnect, reason 11: "
2024/06/24 08:41:00 [conn] ip=218.92.0.27:45567
2024/06/24 08:41:02 [conn] ip=218.92.0.27:45567 err="ssh: disconnect, reason 11: "
2024/06/24 08:47:15 [conn] ip=180.101.88.196:17032
2024/06/24 08:47:16 [conn] ip=180.101.88.196:17032 err="ssh: disconnect, reason 11: "
2024/06/24 08:49:51 [conn] ip=218.92.0.29:26360
2024/06/24 08:49:57 [conn] ip=218.92.0.29:26360 err="ssh: disconnect, reason 11: "
2024/06/24 08:58:27 [conn] ip=193.201.9.156:49525
2024/06/24 08:58:28 [auth] ip=193.201.9.156:49525 version="SSH-2.0-Go" user="admin" password="1234"
2024/06/24 08:58:28 [conn] ip=193.201.9.156:49525 err="read tcp 10.10.10.107:2222->193.201.9.156:49525: read: connection reset by peer"
2024/06/24 08:58:44 [conn] ip=218.92.0.31:11835
2024/06/24 08:58:46 [conn] ip=218.92.0.31:11835 err="ssh: disconnect, reason 11: "
2024/06/24 09:03:38 [conn] ip=218.92.0.107:57758
2024/06/24 09:03:40 [conn] ip=218.92.0.107:57758 err="ssh: disconnect, reason 11: "
2024/06/24 09:07:36 [conn] ip=218.92.0.56:21354
2024/06/24 09:07:39 [conn] ip=218.92.0.56:21354 err="ssh: disconnect, reason 11: "

Those are mostly connections and disconnections; the bots probably connected, realized it was fake and moved on. There are a couple that tried to execute some commands though:

:~$ sudo grep head /var/log/fakessh/fakessh.log 
2024/06/23 15:48:02 [shell] ip=184.160.233.163:45735 duration=0s bytes=15 head="ls 2>/dev/null\n"
2024/06/24 03:55:11 [shell] ip=14.46.116.243:43656 duration=20s bytes=0 head=""

Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:

Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for Bandwidth Alliance partners, we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.

https://blog.cloudflare.com/cleaning-up-bad-bots
]]>
Traefik 3.0 service discovery in Docker Swarm mode /traefik-3-0-service-discovery-in-docker-swarm-mode/ Sat, 11 May 2024 13:44:01 +0000 /?p=564 I recently decided to set up a Docker Swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity, and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s, it can be a good choice, though it isn’t a very popular or widespread solution these days.

Anyway, I set up a VM scale set in Azure with 10 Ubuntu 22.04 VMs and wrote some Ansible scripts to automate installing Docker on each machine, as well as setting up 3 as swarm managers and the other 7 as worker nodes. I SSH’d into the primary manager node and created a Docker Compose file for launching an observability stack.

Here is what that docker-compose.yml looks like:

---
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.88.0
    volumes:
      - /home/user/repo/common/devops/observability/otel-config.yaml:/etc/otel/config.yaml
      - /home/user/repo/log:/log/otel
    command: --config /etc/otel/config.yaml
    environment:
      JAEGER_ENDPOINT: 'tempo:4317'
      LOKI_ENDPOINT: 'http://loki:3100/loki/api/v1/push'
    ports:
      - '8889:8889' # Prometheus metrics exporter (scrape endpoint)
      - '13133:13133' # health_check extension
      - '55679:55679' # ZPages extension
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
    networks:
      - traefik
  prometheus:
    container_name: prometheus
    image: prom/prometheus:v2.42.0
    volumes:
      - /home/user/repo/common/devops/observability/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - '9090:9090'
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
    networks:
      - traefik
  loki:
    container_name: loki
    image: grafana/loki:2.7.4
    ports:
      - '3100:3100'
    networks:
      - traefik
  grafana:
    container_name: grafana
    image: grafana/grafana:9.4.3
    volumes:
      - /home/user/repo/common/devops/observability/grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml
    environment:
      GF_AUTH_ANONYMOUS_ENABLED: 'false'
      GF_AUTH_ANONYMOUS_ORG_ROLE: 'Admin'
    expose:
      - '3000'
    labels:
      - traefik.constraint-label=traefik
      - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
      - traefik.http.routers.grafana-http.rule=Host(`swarm-grafana.mydomain.com`)
      - traefik.http.routers.grafana-http.entrypoints=http
      - traefik.http.routers.grafana-http.middlewares=https-redirect
      # traefik-https the actual router using HTTPS
      # Uses the environment variable DOMAIN
      - traefik.http.routers.grafana-https.rule=Host(`swarm-grafana.mydomain.com`)
      - traefik.http.routers.grafana-https.entrypoints=https
      - traefik.http.routers.grafana-https.tls=true
      # Use the special Traefik service api@internal with the web UI/Dashboard
      - traefik.http.routers.grafana-https.service=grafana
      # Use the "le" (Let's Encrypt) resolver created below
      - traefik.http.routers.grafana-https.tls.certresolver=le
      # Enable HTTP Basic auth, using the middleware created above
      - traefik.http.services.grafana.loadbalancer.server.port=3000
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
    networks:
      - traefik
  # Tempo runs as user 10001, and docker compose creates the volume as root.
  # As such, we need to chown the volume in order for Tempo to start correctly.
  init:
    image: &tempoImage grafana/tempo:latest
    user: root
    entrypoint:
      - 'chown'
      - '10001:10001'
      - '/var/tempo'
    volumes:
      - /home/user/repo/tempo-data:/var/tempo
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4

  tempo:
    image: *tempoImage
    container_name: tempo
    command: ['-config.file=/etc/tempo.yaml']
    volumes:
      - /home/user/repo/common/devops/observability/tempo.yaml:/etc/tempo.yaml
      - /home/user/repo/tempo-data:/var/tempo
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
    ports:
      - '14268' # jaeger ingest
      - '3200' # tempo
      - '4317' # otlp grpc
      - '4318' # otlp http
      - '9411' # zipkin
    depends_on:
      - init
    networks:
      - traefik
networks:
  traefik:
    external: true

Pretty straightforward, so I proceeded to deploy it into the swarm:

docker stack deploy -c docker-compose.yml observability

Everything deployed properly, but when I viewed the Traefik logs there was an issue with every service except for the grafana service. I got errors like this:

traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4    | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm

It drove me crazy for about half a day or so. I couldn’t find any reason why the grafana service worked as expected when none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track down and debug. Ultimately, after lots of searching and banging my head against a wall, I found the answer in the Traefik docs and thought I would share it here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.

https://doc.traefik.io/traefik/providers/swarm/#configuration-examples

Expand that first section and you will see the solution:

It turns out I just needed to update my docker-compose.yml to nest the labels under a deploy section; after redeploying, everything worked as expected.
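In Swarm mode, Traefik reads labels from the service definition rather than from the container, so they belong under deploy. Here is a trimmed sketch of the grafana service with the labels moved (only a few of the labels shown):

```yaml
  grafana:
    image: grafana/grafana:9.4.3
    networks:
      - traefik
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
      labels:
        # Same labels as before, just nested under `deploy`
        - traefik.http.routers.grafana-https.rule=Host(`swarm-grafana.mydomain.com`)
        - traefik.http.routers.grafana-https.entrypoints=https
        - traefik.http.services.grafana.loadbalancer.server.port=3000
```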

]]>
Stop all running containers with Docker /stop-all-running-containers-with-docker/ Wed, 03 Apr 2024 13:12:41 +0000 /?p=557 These are some handy snippets I use on a regular basis when managing containers. I have one server in particular that can sometimes end up with 50 to 100 orphaned containers for various reasons. The easiest/quickest way to stop all of them is to do something like this:

docker container stop $(docker container ps -q)

Let me break this down in case you are not familiar with the syntax. Basically we are passing the output of docker container ps -q into docker container stop. This works because the stop command can take a list of container IDs, which is exactly what we get by passing the -q (quiet) flag to docker container ps.
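To see the substitution mechanics without a running daemon, here is a docker-free illustration; fake_ps is a hypothetical stand-in for docker container ps -q:

```shell
# Stand-in that prints container IDs one per line, like `docker container ps -q`
fake_ps() { printf '%s\n' 3f4e8a1 9b2c7d0; }

# Unquoted $(...) is word-split by the shell, so each ID becomes a
# separate argument -- exactly what `docker container stop` expects.
echo docker container stop $(fake_ps)
# → docker container stop 3f4e8a1 9b2c7d0
```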

]]>
Automating CI/CD with TeamCity and Ansible /automating-ci-cd-with-teamcity-ansible/ Mon, 11 Mar 2024 13:37:47 +0000 https://wordpress.hackanooga.com/?p=393 In part one of this series we are going to explore a CI/CD option you may not be familiar with but should definitely be on your radar. I used Jetbrains TeamCity for several months at my last company and really enjoyed my time with it. A couple of the things I like most about it are:

  • Ability to declare global variables and have them be passed down to all projects
  • Ability to declare variables that are made up of other variables

I like to use private or self hosted Docker registries for a lot of my projects, and one of the pain points I have had with some other solutions (well, mostly Bitbucket) is that they don’t integrate well with these private registries; when I run into a situation where I am pushing an image to or pulling an image from a private registry, it gets a little messy. TeamCity is nice in that I can add a connection to my private registry in my root project and then simply add that as a build feature to any projects that may need it. Essentially, now I only have one place where I have to keep those credentials and manage that connection.

Another reason I love it is the fact that you can create really powerful build templates that you can reuse. This became very powerful when we were trying to standardize our build processes. For example, most of the apps we build are .NET backends and React frontends. We built Docker images for every project and pushed them to our private registry. TeamCity gave us the ability to standardize the naming conventions and really streamline the build process. Enough about that though; the rest of this series will assume that you are using TeamCity. This post will focus on getting up and running using Ansible.


Installation and Setup

For this I will assume that you already have Ansible on your machine and that you will be installing TeamCity locally. You can simply follow along with the installation guide here. We will be creating an Ansible playbook based on the following steps. If you just want the finished code, you can find it on my Gitea instance.

Step 1: Create project and initial playbook

To get started go ahead and create a new directory to hold our configuration:

mkdir ~/projects/teamcity-configuration-ansible 
touch install-teamcity-server.yml

Now open up install-teamcity-server.yml and add a task to install Java 17, as it is a prerequisite. You will need sudo for this task. Note: as of this writing TeamCity does not support Java 18 or 19; if you try to install one of these you will get an error when trying to start TeamCity.

---
- name: Install Teamcity
  hosts: localhost
  become: true
  become_method: sudo

  # Add some variables to make our lives easier
  vars:
    java_version: "17"
    teamcity:
      installation_path: /opt/TeamCity
      version: "2023.11.4"
  
  tasks:
  - name: Install Java
    ansible.builtin.apt:
      name: openjdk-{{ java_version }}-jre-headless
      update_cache: yes
      state: latest
      install_recommends: no

The next step is to create a dedicated user account. Add the following task to install-teamcity-server.yml

  - name: Add Teamcity User
    ansible.builtin.user:
      name: teamcity

Next we will need to download the latest version of TeamCity. 2023.11.4 is the latest as of this writing. Add the following task to your install-teamcity-server.yml

  - name: Download TeamCity Server
    ansible.builtin.get_url:
      url: https://download.jetbrains.com/teamcity/TeamCity-{{teamcity.version}}.tar.gz
      dest: /opt/TeamCity-{{teamcity.version}}.tar.gz
      mode: '0770'

Now to install TeamCity Server add the following:

  - name: Install TeamCity Server
    ansible.builtin.shell: |
      tar xfz /opt/TeamCity-{{teamcity.version}}.tar.gz
      rm -rf /opt/TeamCity-{{teamcity.version}}.tar.gz
    args:
      chdir: /opt

Now that we have everything set up and installed we want to make sure that our new teamcity user has access to everything they need to get up and running. We will add the following lines:

  - name: Update permissions
    ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity

This gives us a pretty nice setup. We have TeamCity server installed with a dedicated user account. The last thing we will do is create a systemd service so that we can easily start/stop the server. For this we will need to add a few things.

  1. A service file that tells our system how to manage TeamCity
  2. A Jinja2 template file (.j2) that is used to create the service file
  3. A handler that tells the system to run systemctl daemon-reload once the service has been installed.

Go ahead and create a new templates folder with the following teamcity.service.j2 file

[Unit]
Description=JetBrains TeamCity
Requires=network.target
After=syslog.target network.target
[Service]
Type=forking
ExecStart={{teamcity.installation_path}}/bin/runAll.sh start
ExecStop={{teamcity.installation_path}}/bin/runAll.sh stop
User=teamcity
PIDFile={{teamcity.installation_path}}/teamcity.pid
Environment="TEAMCITY_PID_FILE_PATH={{teamcity.installation_path}}/teamcity.pid"
[Install]
WantedBy=multi-user.target

Your project should now look like the following:

$: ~/projects/teamcity-configuration-ansible
 .
├── install-teamcity-server.yml
└── templates
    └── teamcity.service.j2

1 directory, 2 files

That’s it! Now you should have a fully automated install of TeamCity Server, ready to be deployed wherever you need it. Here is the final playbook file; you can also find the most up to date version in my repo:

---
- name: Install Teamcity
  hosts: localhost
  become: true
  become_method: sudo

  vars:
    java_version: "17"
    teamcity:
      installation_path: /opt/TeamCity
      version: "2023.11.4"

  tasks:
  - name: Install Java
    ansible.builtin.apt:
      name: openjdk-{{ java_version }}-jdk # This is important because TeamCity will fail to start if we try to use 18 or 19
      update_cache: yes
      state: latest
      install_recommends: no

  - name: Add TeamCity User
    ansible.builtin.user:
      name: teamcity

  - name: Download TeamCity Server
    ansible.builtin.get_url:
      url: https://download.jetbrains.com/teamcity/TeamCity-{{teamcity.version}}.tar.gz
      dest: /opt/TeamCity-{{teamcity.version}}.tar.gz
      mode: '0770'

  - name: Install TeamCity Server
    ansible.builtin.shell: |
      tar xfz /opt/TeamCity-{{teamcity.version}}.tar.gz
      rm -rf /opt/TeamCity-{{teamcity.version}}.tar.gz
    args:
      chdir: /opt

  - name: Update permissions
    ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity

  - name: TeamCity | Create environment file
    template: src=teamcity.service.j2 dest=/etc/systemd/system/teamcityserver.service
    notify:
      - reload systemctl
  - name: TeamCity | Start teamcity
    service: name=teamcityserver.service state=started enabled=yes

  # Trigger a reload of systemctl after the service file has been created.
  handlers:
    - name: reload systemctl
      command: systemctl daemon-reload
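As a side note, the reload handler could use the ansible.builtin.systemd module instead of shelling out to systemctl, which is the more idiomatic form:

```yaml
  handlers:
    - name: reload systemctl
      ansible.builtin.systemd:
        daemon_reload: true
```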
]]>
Self hosted package registries with Gitea /self-hosted-package-registries-with-gitea/ Thu, 07 Mar 2024 15:07:07 +0000 https://wordpress.hackanooga.com/?p=413 I am a big proponent of open source technologies. I have been using Gitea in my homelab for a couple of years now. A few years ago I moved most of my code off of GitHub and onto my self hosted instance. I recently came across a really handy feature that I didn’t know Gitea had and was pleasantly surprised by: a package registry. You are no doubt familiar with what a package registry is in the broad context. Here are some examples of package registries you probably use on a regular basis:

  • npm
  • cargo
  • docker
  • composer
  • nuget
  • helm

There are a number of reasons why you would want to self host a registry. For example, in my home lab I have some Docker images that are specific to my use cases, and I don’t necessarily want them on a public registry. I’m also not concerned about losing the artifacts, as I can easily recreate them from code. Gitea makes this really easy to set up; in fact, it comes baked into the installation. For the sake of this post I will just assume that you already have Gitea installed and set up.

Since the package registry is baked in and enabled by default, I will demonstrate how easy it is to push a Docker image. We will pull the official Alpine image, re-tag it and push it to our internal registry:

# Pull the official Alpine image
docker pull alpine:latest

# Re-tag the image with our local registry information
docker tag alpine:latest git.hackanooga.com/mikeconrad/alpine:latest

# Log in using your Gitea user account
docker login git.hackanooga.com

# Push the image to our registry
docker push git.hackanooga.com/mikeconrad/alpine:latest
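The same per-owner endpoint pattern covers the other registry types listed above. For npm, for instance, Gitea serves a registry at /api/packages/{owner}/npm/. Here is a hypothetical .npmrc pointing at it; the owner and token are placeholders:

```ini
; .npmrc
registry=https://git.hackanooga.com/api/packages/mikeconrad/npm/
//git.hackanooga.com/api/packages/mikeconrad/npm/:_authToken=<gitea-access-token>
```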

Now log into your Gitea instance, navigate to your user account and look for Packages. You should see the newly uploaded Alpine image.

You can see that the package type is container. Clicking on it will give you more information.

]]>
Traefik with Let’s Encrypt and Cloudflare (pt 2) /traefik-with-lets-encrypt-and-cloudflare-pt-2/ Thu, 15 Feb 2024 20:19:12 +0000 https://wordpress.hackanooga.com/?p=425 In this article we are going to set up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running, and the documentation is a little fuzzy. In my case I decided to go with the DNS challenge route. Really the only reason I chose this option is that I was having issues with the TLS and HTTP challenges. As it turns out, my issues didn’t have as much to do with my configuration as they did with my router.

Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and will use their own instead. That is, of course, unless you force DNS redirection, but that is another post for another day.

Let’s revisit our current configuration:

version: '3'

services:
  reverse-proxy:
    # The official v2 Traefik docker image
    image: traefik:v2.11
    # Enables the web UI and tells Traefik to listen to docker
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --providers.file.filename=/config.yml
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Set up LetsEncrypt
      - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
      - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
      - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --entryPoints.web.http.redirections.entryPoint.to=websecure
      - --entryPoints.web.http.redirections.entryPoint.scheme=https
      - --entryPoints.web.http.redirections.entrypoint.permanent=true
      - --log=true
      - --log.level=INFO
#      - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'

    environment:
      - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
    ports:
      # The HTTP port
      - "80:80"
      - "443:443"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
      - ./volumes/traefik/logs:/logs
      - ./traefik/config.yml:/config.yml:ro
    networks:
      - traefik
  ots:
    image: luzifer/ots
    container_name: ots
    restart: always
    environment:
      # Optional, see "Customization" in README
      #CUSTOMIZE: '/etc/ots/customize.yaml'
      # See README for details
      REDIS_URL: redis://redis:6379/0
      # 168h = 1w
      SECRET_EXPIRY: "604800"
      # "mem" or "redis" (See README)
      STORAGE_TYPE: redis
    depends_on:
      - redis
    labels:
      - traefik.enable=true
      - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
      - traefik.http.routers.ots.entrypoints=websecure
      - traefik.http.routers.ots.tls=true
      - traefik.http.routers.ots.tls.certresolver=letsencrypt
    networks:
      - traefik
  redis:
    image: redis:alpine
    restart: always
    volumes:
      - ./redis-data:/data
    networks:
      - traefik
networks:
  traefik:
    external: true

Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:

Step 1: Setup wildcard DNS entry

This is pretty straightforward. Follow the Cloudflare documentation if you aren’t familiar with setting this up.

Step 2: Create API Token

This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this documentation, which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens and create a new token. It should have the following permissions:

Zone.Zone.Read
Zone.DNS.Edit

Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go:

CF_DNS_API_TOKEN=[redacted] docker compose up -d
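Alternatively, the token can live in a .env file next to the compose file; docker compose loads it automatically, so nothing needs to be passed on the command line. The value shown is a placeholder:

```ini
# .env -- keep this file out of version control
CF_DNS_API_TOKEN=<your-cloudflare-api-token>
```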
]]>