Automating CI/CD with TeamCity and Ansible
Mon, 11 Mar 2024

In part one of this series we are going to explore a CI/CD option that you may not be familiar with but that should definitely be on your radar. I used JetBrains TeamCity for several months at my last company and really enjoyed my time with it. A couple of the things I like most about it are:
- Ability to declare global variables and have them be passed down to all projects
- Ability to declare variables that are made up of other variables
I like to use private or self hosted Docker registries for a lot of my projects, and one of the pain points I have had with some other solutions (well, mostly Bitbucket) is that they don’t integrate well with these private registries; whenever I am pushing an image to or pulling an image from a private registry it gets a little messy. TeamCity is nice in that I can add a connection to my private registry in my root project and then simply add that as a build feature to any projects that may need it. Essentially, now I only have one place where I have to keep those credentials and manage that connection.
Another reason I love it is that you can create really powerful build templates that you can reuse. This became especially valuable when we were trying to standardize our build processes. For example, most of the apps we build are .NET backends and React frontends. We built Docker images for every project and pushed them to our private registry. TeamCity gave us the ability to standardize the naming convention and really streamline the build process. Enough about that though; the rest of this series will assume that you are using TeamCity. This post will focus on getting up and running using Ansible.
Installation and Setup
For this I will assume that you already have Ansible on your machine and that you will be installing TeamCity locally. You can simply follow along with the installation guide here. We will be creating an Ansible playbook based on the following steps. If you just want the finished code, you can find it on my Gitea instance here:
Step 1: Create project and initial playbook
To get started go ahead and create a new directory to hold our configuration:
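The original commands aren’t preserved in this archive, so here is a minimal sketch; the directory name is my own choice, and the playbook filename matches the one used throughout this post:

```shell
# Create a project directory and an empty playbook file to work in.
mkdir -p teamcity-ansible
touch teamcity-ansible/install-teamcity-server.yml
```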
Now open up install-teamcity-server.yml and add a task to install Java 17, which is a prerequisite. You will need sudo for this task. Note: as of this writing TeamCity does not support Java 18 or 19; if you install one of those you will get an error when trying to start TeamCity.
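The task, as it appears in the finished playbook later in this post (there it uses a java_version variable; a literal 17 is shown here for clarity):

```yaml
- name: Install Java
  ansible.builtin.apt:
    name: openjdk-17-jdk # TeamCity will fail to start on Java 18 or 19
    update_cache: yes
    state: latest
    install_recommends: no
```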
The next step is to create a dedicated user account. Add the following task to install-teamcity-server.yml
- name: Add Teamcity User
  ansible.builtin.user:
    name: teamcity
Next we will need to download the latest version of TeamCity. 2023.11.4 is the latest as of this writing. Add the following task to your install-teamcity-server.yml
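Again taken from the finished playbook at the end of this post, the download task looks like this (teamcity.version is defined in the playbook’s vars):

```yaml
- name: Download TeamCity Server
  ansible.builtin.get_url:
    url: https://download.jetbrains.com/teamcity/TeamCity-{{ teamcity.version }}.tar.gz
    dest: /opt/TeamCity-{{ teamcity.version }}.tar.gz
    mode: '0770'
```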
Now that we have everything set up and installed we want to make sure that our new teamcity user has access to everything they need to get up and running. We will add the following lines:
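From the finished playbook, these are the tasks that unpack the archive and hand ownership to the teamcity user:

```yaml
- name: Install TeamCity Server
  ansible.builtin.shell: |
    tar xfz /opt/TeamCity-{{ teamcity.version }}.tar.gz
    rm -rf /opt/TeamCity-{{ teamcity.version }}.tar.gz
  args:
    chdir: /opt

- name: Update permissions
  ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
```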
This gives us a pretty nice setup. We have TeamCity server installed with a dedicated user account. The last thing we will do is create a systemd service so that we can easily start/stop the server. For this we will need to add a few things.
- A service file that tells our system how to manage TeamCity
- A .j2 template file that is used to create this service file
- A handler that tells the system to run systemctl daemon-reload once the service has been installed
Go ahead and create a new templates folder with the following teamcity.service.j2 file
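The original template isn’t preserved in this archive, so the following is a reasonable sketch of what teamcity.service.j2 could contain; it assumes the teamcity.installation_path variable from the playbook below and the teamcity-server.sh script that ships in the TeamCity tarball:

```ini
[Unit]
Description=JetBrains TeamCity Server
After=network.target

[Service]
Type=forking
User=teamcity
ExecStart={{ teamcity.installation_path }}/bin/teamcity-server.sh start
ExecStop={{ teamcity.installation_path }}/bin/teamcity-server.sh stop

[Install]
WantedBy=multi-user.target
```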
That’s it! Now you should have a fully automated install of TeamCity Server, ready to be deployed wherever you need it. Here is the final playbook file; you can also find the most up to date version in my repo:
---
- name: Install Teamcity
  hosts: localhost
  become: true
  become_method: sudo

  vars:
    java_version: "17"
    teamcity:
      installation_path: /opt/TeamCity
      version: "2023.11.4"

  tasks:
    - name: Install Java
      ansible.builtin.apt:
        name: openjdk-{{ java_version }}-jdk # This is important because TeamCity will fail to start if we try to use 18 or 19
        update_cache: yes
        state: latest
        install_recommends: no

    - name: Add TeamCity User
      ansible.builtin.user:
        name: teamcity

    - name: Download TeamCity Server
      ansible.builtin.get_url:
        url: https://download.jetbrains.com/teamcity/TeamCity-{{ teamcity.version }}.tar.gz
        dest: /opt/TeamCity-{{ teamcity.version }}.tar.gz
        mode: '0770'

    - name: Install TeamCity Server
      ansible.builtin.shell: |
        tar xfz /opt/TeamCity-{{ teamcity.version }}.tar.gz
        rm -rf /opt/TeamCity-{{ teamcity.version }}.tar.gz
      args:
        chdir: /opt

    - name: Update permissions
      ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity

    - name: TeamCity | Create service file
      ansible.builtin.template:
        src: teamcity.service.j2
        dest: /etc/systemd/system/teamcityserver.service
      notify:
        - reload systemctl

    - name: TeamCity | Start teamcity
      ansible.builtin.service:
        name: teamcityserver.service
        state: started
        enabled: yes

  # Trigger a reload of systemctl after the service file has been created.
  handlers:
    - name: reload systemctl
      ansible.builtin.command: systemctl daemon-reload
Traefik 3.0 service discovery in Docker Swarm mode
Sat, 11 May 2024
I recently decided to set up a Docker swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s it can be a good choice, however it isn’t a very popular or widespread solution these days.
Anyway, I set up a VM scaling set in Azure with 10 Ubuntu 22.04 vms and wrote some Ansible scripts to automate the process of installing Docker on each machine as well as setting 3 up as swarm managers and the other 7 as worker nodes. I ssh’d into the primary manager node and created a docker compose file for launching an observability stack.
Everything deploys properly but when I view the Traefik logs there is an issue with all the services except for the grafana service. I get errors like this:
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
It drove me crazy for about half a day or so. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track and debug. Ultimately after lots of searching and banging my head against a wall I found the answer in the Traefik docs and thought I would share here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.
Expand the first section on that docs page and you will see the solution.
It turns out I just needed to update my docker-compose.yml and nest the labels under a deploy section, redeploy and everything was working as expected.
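For illustration, here is a minimal sketch of that fix; the service name, hostname and port are made up, not from the original compose file. In Swarm mode Traefik only reads labels declared under the deploy section:

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    deploy:
      labels:
        # In Swarm mode these must live under deploy.labels,
        # not the container-level labels key.
        - traefik.enable=true
        - traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)
        # The "port is missing" error goes away once Traefik is told
        # which port the service listens on.
        - traefik.http.services.prometheus.loadbalancer.server.port=9090
```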
Standing up a Wireguard VPN
Wed, 25 Sep 2024
VPN’s have traditionally been slow, complex and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons for why you should want to use Wireguard in this article, instead I will be focusing on just how easy it is to set up and configure.
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
Create Droplet with Terraform
I have written some very basic Terraform to get us started. It just creates a droplet with a predefined ssh key and a setup script passed as user data. When the droplet gets created, the script is copied to the instance and automatically executed; after a few minutes everything should be ready to go. Feel free to clone the repo above, or if you would rather do everything by hand that’s great too. I will assume that you are doing everything by hand, since the process of deploying from the repo is pretty self explanatory, and doing it by hand helps you better understand the process.
First create our main.tf with the following contents:
# main.tf
# Attach an SSH key to our droplet
resource "digitalocean_ssh_key" "default" {
  name       = "Terraform Example"
  public_key = file("./tf-digitalocean.pub")
}

# Create a new Web Droplet in the nyc1 region
resource "digitalocean_droplet" "web" {
  image     = "ubuntu-22-04-x64"
  name      = "wireguard"
  region    = "nyc1"
  size      = "s-2vcpu-4gb"
  ssh_keys  = [digitalocean_ssh_key.default.fingerprint]
  user_data = file("setup.sh")
}

output "droplet_output" {
  value = digitalocean_droplet.web.ipv4_address
}
Next create a terraform.tf file in the same directory with the following contents:
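The contents of that file aren’t preserved in this archive; a minimal terraform.tf for the DigitalOcean provider would look something like the following (the DigitalOcean provider reads its API token from the DIGITALOCEAN_TOKEN environment variable when no token argument is set):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# Token is taken from the DIGITALOCEAN_TOKEN environment variable.
provider "digitalocean" {}
```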
Now we are ready to initialize our Terraform and apply it:
$ terraform init
$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "ubuntu-22-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "wireguard"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "nyc1"
      + resize_disk          = true
      + size                 = "s-2vcpu-4gb"
      + ssh_keys             = (known after apply)
      + status               = (known after apply)
      + urn                  = (known after apply)
      + user_data            = "69d130f386b262b136863be5fcffc32bff055ac0"
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_ssh_key.default will be created
  + resource "digitalocean_ssh_key" "default" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "Terraform Example"
      + public_key  = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
        EOT
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + droplet_output = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

digitalocean_ssh_key.default: Creating...
digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
digitalocean_droplet.web: Creating...
digitalocean_droplet.web: Still creating... [10s elapsed]
digitalocean_droplet.web: Still creating... [20s elapsed]
digitalocean_droplet.web: Still creating... [30s elapsed]
digitalocean_droplet.web: Creation complete after 31s [id=447469336]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

droplet_output = "159.223.113.207"
All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.
#!/usr/bin/env sh
set -e
set -u
# Set the listen port used by Wireguard, this is the default so feel free to change it.
LISTENPORT=51820
CONFIG_DIR=/root/wireguard-conf
umask 077
mkdir -p $CONFIG_DIR/client

# Install wireguard
apt update && apt install -y wireguard

# Generate public/private key for the "server".
wg genkey > $CONFIG_DIR/privatekey
wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey

# Generate public/private key for the "client"
wg genkey > $CONFIG_DIR/client/privatekey
wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey


# Generate server config
echo "[Interface]
Address = 10.66.66.1/24,fd42:42:42::1/64
ListenPort = $LISTENPORT
PrivateKey = $(cat $CONFIG_DIR/privatekey)

### Client config
[Peer]
PublicKey = $(cat $CONFIG_DIR/client/publickey)
AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
" > /etc/wireguard/do.conf


# Generate client config. This will need to be copied to your machine.
# Note: paths are fully qualified with $CONFIG_DIR so the script works
# no matter which directory cloud-init executes it from.
echo "[Interface]
PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
Address = 10.66.66.2/32,fd42:42:42::2/128
DNS = 1.1.1.1,1.0.0.1

[Peer]
PublicKey = $(cat $CONFIG_DIR/publickey)
Endpoint = $(curl icanhazip.com):$LISTENPORT
AllowedIPs = 0.0.0.0/0,::/0
" > $CONFIG_DIR/client-config.conf

wg-quick up do

# Add iptables rules to forward internet traffic through this box
# We are assuming our Wireguard interface is called do and our
# primary public facing interface is called eth0.

iptables -I INPUT -p udp --dport 51820 -j ACCEPT
iptables -I FORWARD -i eth0 -o do -j ACCEPT
iptables -I FORWARD -i do -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
ip6tables -I FORWARD -i do -j ACCEPT
ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Enable routing on the server
echo "net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1" > /etc/sysctl.d/wg.conf
sysctl --system
As you can see, it is pretty straightforward. All you really need to do is:
On the “server” side:
- Generate a private key and derive a public key from it for both the "server" and the "client".
- Create a "server" config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
- The "server" config also needs to know what peers or "clients" to accept connections from in the AllowedIPs block. In this case we are just specifying one. The "server" also needs to know the public key of the "client" that will be connecting.
On the “client” side:
- Create a "client" config that tells our machine what address to assign to the wireguard interface (it obviously needs to be on the same subnet as the interface on the server side).
- The client needs to know which private key to use to secure the interface.
- It also needs to know the public key of the server, the public IP address/hostname of the "server" it is connecting to, and the port that server is listening on.
- Finally it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic, but you could restrict this as you see fit.
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:
## Make sure you have Wireguard installed on your local machine as well.
## https://wireguard.com/install

## Copy the client config to our local machine and move it to our wireguard directory.
$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf | sudo tee /etc/wireguard/do.conf
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
$ ssh -i tf-digitalocean root@159.223.113.207
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
 * Management:     https://landscape.canonical.com/
 * Support:        https://ubuntu.com/pro

  System information as of Wed Sep 25 13:19:02 UTC 2024

  System load:  0.03              Processes:             113
  Usage of /:   2.1% of 77.35GB   Users logged in:       0
  Memory usage: 6%                IPv4 address for eth0: 157.230.221.196
  Swap usage:   0%                IPv4 address for eth0: 10.10.0.5

Expanded Security Maintenance for Applications is not enabled.

70 updates can be applied immediately.
40 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

New release '24.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.


Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
root@wireguard:~#
Awesome! We are connected. Now let’s check the wireguard interface using the wg command. If our config was correct, we should see an interface line and one peer line like below. If the peer line is missing then something is wrong with the configuration, most likely a mismatch between public/private key:
root@wireguard:~# wg
interface: do
  public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
  private key: (hidden)
  listening port: 51820

peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
  allowed ips: 10.66.66.2/32, fd42:42:42::2/128
root@wireguard:~#
So now we should be ready to go! On your local machine go ahead and try it out:
## Start the interface with wg-quick up [interface_name]
$ sudo wg-quick up do
[sudo] password for mikeconrad:
[#] ip link add do type wireguard
[#] wg setconf do /dev/fd/63
[#] ip -4 address add 10.66.66.2/32 dev do
[#] ip -6 address add fd42:42:42::2/128 dev do
[#] ip link set mtu 1420 up dev do
[#] resolvconf -a do -m 0 -x
[#] wg set do fwmark 51820
[#] ip -6 route add ::/0 dev do table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] ip6tables-restore -n
[#] ip -4 route add 0.0.0.0/0 dev do table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n

## Check our config
$ sudo wg
interface: do
  public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
  private key: (hidden)
  listening port: 34596
  fwmark: 0xca6c

peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
  endpoint: 157.230.177.54:51820
  allowed ips: 0.0.0.0/0, ::/0
  latest handshake: 1 second ago
  transfer: 1.82 KiB received, 2.89 KiB sent

## Make sure we can ping the outside world
mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
^C
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms

## Verify our traffic is actually going over the tunnel.
$ curl icanhazip.com
157.230.177.54
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
$ ssh -i tf-digitalocean root@10.66.66.1
The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:130: [hashed name]
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
 * Management:     https://landscape.canonical.com/
 * Support:        https://ubuntu.com/pro

  System information as of Wed Sep 25 13:32:12 UTC 2024

  System load:  0.02              Processes:             109
  Usage of /:   2.1% of 77.35GB   Users logged in:       0
  Memory usage: 6%                IPv4 address for eth0: 157.230.177.54
  Swap usage:   0%                IPv4 address for eth0: 10.10.0.5

Expanded Security Maintenance for Applications is not enabled.

73 updates can be applied immediately.
40 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

New release '24.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.


root@wireguard:~#
Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
-]]>
-
-
-
-
-
- Traefik 3.0 service discovery in Docker Swarm mode
- /traefik-3-0-service-discovery-in-docker-swarm-mode/
-
-
- Sat, 11 May 2024 13:44:01 +0000
-
-
-
-
-
-
- /?p=564
-
-
- I recently decided to set up a Docker swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s it can be a good choice, however it isn’t a very popular or widespread solution these days.
-
-
-
-
Anyway, I set up a VM scale set in Azure with 10 Ubuntu 22.04 VMs and wrote some Ansible scripts to automate the process of installing Docker on each machine, as well as setting 3 up as swarm managers and the other 7 as worker nodes. I SSH'd into the primary manager node and created a Docker Compose file for launching an observability stack.
Everything deploys properly but when I view the Traefik logs there is an issue with all the services except for the grafana service. I get errors like this:
-
-
-
-
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
-
-
-
-
-
It drove me crazy for about half a day or so. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track down and debug. Ultimately, after lots of searching and banging my head against a wall, I found the answer in the Traefik docs and thought I would share it here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.
Expand that first section and you will see the solution:
-
-
-
-
-
-
-
-
It turns out I just needed to update my docker-compose.yml to nest the labels under a deploy section; after redeploying, everything worked as expected.
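As a sketch of the fix (the service name, hostname, and port here are illustrative, not taken from my actual stack), the working layout looks like this. In Swarm mode Traefik reads labels from the service definition rather than the container, so they must live under `deploy`, and since Swarm provides no port detection, the target port has to be declared explicitly:

```yaml
version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    deploy:
      # In Swarm mode Traefik only reads labels nested under `deploy`,
      # not container-level labels.
      labels:
        - traefik.enable=true
        - traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)
        # Swarm gives Traefik no port information, so omitting this
        # label produces the "port is missing" error from the logs above.
        - traefik.http.services.prometheus.loadbalancer.server.port=9090
```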
-]]>
-
-
-
-
-
- Stop all running containers with Docker
- /stop-all-running-containers-with-docker/
-
-
- Wed, 03 Apr 2024 13:12:41 +0000
-
-
-
-
- /?p=557
-
-
- These are some handy snippets I use on a regular basis when managing containers. I have one server in particular that can sometimes end up with 50 to 100 orphaned containers for various reasons. The easiest/quickest way to stop all of them is to do something like this:
-
-
-
-
docker container stop $(docker container ps -q)
-
-
-
-
Let me break this down in case you are not familiar with the syntax. Basically, we are passing the output of docker container ps -q into docker container stop. This works because the stop command can take a list of container IDs, which is exactly what we get when passing the -q (quiet, IDs only) flag to docker container ps.
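A related pattern (my own addition, not from the snippet above) cleans up the stopped containers as well; `-a` includes exited containers in the listing:

```shell
# Stop everything that is running, then remove all containers,
# running or stopped (-a lists exited containers, -q prints only IDs)
docker container stop $(docker container ps -q)
docker container rm $(docker container ps -aq)
```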
-]]>
-
-
-
-
-
- Automating CI/CD with TeamCity and Ansible
- /automating-ci-cd-with-teamcity-ansible/
-
-
- Mon, 11 Mar 2024 13:37:47 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=393
-
-
- In part one of this series we are going to explore a CI/CD option you may not be familiar with but should definitely be on your radar. I used Jetbrains TeamCity for several months at my last company and really enjoyed my time with it. A couple of the things I like most about it are:
-
-
-
-
-
Ability to declare global variables and have them be passed down to all projects
-
-
-
-
Ability to declare variables that are made up of other variables
-
-
-
-
-
I like to use private or self-hosted Docker registries for a lot of my projects, and one of the pain points I have had with some other solutions (well, mostly Bitbucket) is that they don’t integrate well with these private registries; when I run into a situation where I am pushing an image to or pulling an image from a private registry, it gets a little messy. TeamCity is nice in that I can add a connection to my private registry in my root project and then simply add that as a build feature to any projects that may need it. Essentially, now I only have one place where I have to keep those credentials and manage that connection.
-
-
-
-
-
-
-
-
Another reason I love it is that you can create really powerful build templates that you can reuse. This became very valuable when we were trying to standardize our build processes. For example, most of the apps we build are .NET backends and React frontends. We built Docker images for every project and pushed them to our private registry. TeamCity gave us the ability to standardize the naming convention and really streamline the build process. Enough about that though; the rest of this series will assume that you are using TeamCity. This post will focus on getting up and running using Ansible.
-
-
-
-
-
-
-
-
Installation and Setup
-
-
-
-
For this I will assume that you already have Ansible on your machine and that you will be installing TeamCity locally. You can simply follow along with the installation guide here. We will be creating an Ansible playbook based on the following steps. If you just want the finished code, you can find it on my Gitea instance here:
-
-
-
-
Step 1 : Create project and initial playbook
-
-
-
-
To get started go ahead and create a new directory to hold our configuration:
Now open up install-teamcity-server.yml and add a task to install Java 17, as it is a prerequisite. You will need sudo for this task. Note: as of this writing, TeamCity does not support Java 18 or 19. If you try to use one of these you will get an error when starting TeamCity.
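This task appears in the finished playbook at the end of the post; shown here with the version inlined for clarity:

```yaml
- name: Install Java
  ansible.builtin.apt:
    name: openjdk-17-jdk # TeamCity will fail to start on Java 18 or 19
    update_cache: yes
    state: latest
    install_recommends: no
```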
The next step is to create a dedicated user account. Add the following task to install-teamcity-server.yml
-
-
-
-
- name: Add Teamcity User
- ansible.builtin.user:
- name: teamcity
-
-
-
-
Next we will need to download the latest version of TeamCity. 2023.11.4 is the latest as of this writing. Add the following task to your install-teamcity-server.yml
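This matches the download task from the finished playbook below, using get_url to fetch the tarball into /opt:

```yaml
- name: Download TeamCity Server
  ansible.builtin.get_url:
    url: https://download.jetbrains.com/teamcity/TeamCity-{{ teamcity.version }}.tar.gz
    dest: /opt/TeamCity-{{ teamcity.version }}.tar.gz
    mode: '0770'
```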
Now that we have everything set up and installed we want to make sure that our new teamcity user has access to everything they need to get up and running. We will add the following lines:
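From the finished playbook, this is the task that hands ownership of the extracted install over to the teamcity user (ansible.builtin.file with owner/group and recurse: yes would be the more idiomatic alternative to shelling out):

```yaml
- name: Update permissions
  ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
```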
This gives us a pretty nice setup. We have TeamCity server installed with a dedicated user account. The last thing we will do is create a systemd service so that we can easily start/stop the server. For this we will need to add a few things.
-
-
-
-
-
A service file that tells our system how to manage TeamCity
-
-
-
-
A j2 template file that is used to create this service file
-
-
-
-
A handler that tells the system to run systemctl daemon-reload once the service has been installed.
-
-
-
-
-
Go ahead and create a new templates folder with the following teamcity.service.j2 file
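The template itself is not shown in the post, so the unit below is a plausible sketch rather than the original file. It assumes the tarball layout TeamCity ships (a bin/runAll.sh start/stop script under the installation path) and the teamcity.installation_path variable defined in the playbook's vars:

```ini
# templates/teamcity.service.j2 -- illustrative sketch; adjust paths to taste
[Unit]
Description=JetBrains TeamCity Server
After=network.target

[Service]
Type=forking
User=teamcity
ExecStart={{ teamcity.installation_path }}/bin/runAll.sh start
ExecStop={{ teamcity.installation_path }}/bin/runAll.sh stop

[Install]
WantedBy=multi-user.target
```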
That’s it! Now you should have a fully automated install of TeamCity Server ready to be deployed wherever you need it. Here is the final playbook file; you can also find the most up-to-date version in my repo:
-
-
-
-
---
-- name: Install Teamcity
- hosts: localhost
- become: true
- become_method: sudo
-
- vars:
- java_version: "17"
- teamcity:
- installation_path: /opt/TeamCity
- version: "2023.11.4"
-
- tasks:
- - name: Install Java
- ansible.builtin.apt:
- name: openjdk-{{ java_version }}-jdk # This is important because TeamCity will fail to start if we try to use 18 or 19
- update_cache: yes
- state: latest
- install_recommends: no
-
- - name: Add TeamCity User
- ansible.builtin.user:
- name: teamcity
-
- - name: Download TeamCity Server
- ansible.builtin.get_url:
- url: https://download.jetbrains.com/teamcity/TeamCity-{{teamcity.version}}.tar.gz
- dest: /opt/TeamCity-{{teamcity.version}}.tar.gz
- mode: '0770'
-
- - name: Install TeamCity Server
- ansible.builtin.shell: |
- tar xfz /opt/TeamCity-{{teamcity.version}}.tar.gz
- rm -rf /opt/TeamCity-{{teamcity.version}}.tar.gz
- args:
- chdir: /opt
-
- - name: Update permissions
- ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
-
- - name: TeamCity | Create environment file
- template: src=teamcity.service.j2 dest=/etc/systemd/system/teamcityserver.service
- notify:
- - reload systemctl
- - name: TeamCity | Start teamcity
- service: name=teamcityserver.service state=started enabled=yes
-
- # Trigger a reload of systemctl after the service file has been created.
- handlers:
- - name: reload systemctl
- command: systemctl daemon-reload
-]]>
-
-
-
-
-
- Self hosted package registries with Gitea
- /self-hosted-package-registries-with-gitea/
-
-
- Thu, 07 Mar 2024 15:07:07 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=413
-
-
- I am a big proponent of open source technologies. I have been using Gitea for a couple years now in my homelab. A few years ago I moved most of my code off of Github and onto my self hosted instance. I recently came across a really handy feature that I didn’t know Gitea had and was pleasantly surprised by: Package Registry. You are no doubt familiar with what a package registry is in the broad context. Here are some examples of package registries you probably use on a regular basis:
-
-
-
-
-
npm
-
-
-
-
cargo
-
-
-
-
docker
-
-
-
-
composer
-
-
-
-
nuget
-
-
-
-
helm
-
-
-
-
-
-There are a number of reasons why you would want to self-host a registry. For example, in my home lab I have some Docker images that are specific to my use cases and I don’t necessarily want them on a public registry. I’m also not concerned about losing the artifacts, as I can easily recreate them from code. Gitea makes this really easy to set up; in fact, it comes baked in with the installation. For the sake of this post I will just assume that you already have Gitea installed and set up.
-
-
-
-
Since the package registry is baked in and enabled by default, I will demonstrate how easy it is to push a docker image. We will pull the default alpine image, re-tag it and push it to our internal registry:
-
-
-
-
# Pull the official Alpine image
-docker pull alpine:latest
-
-# Re tag the image with our local registry information
-docker tag alpine:latest git.hackanooga.com/mikeconrad/alpine:latest
-
-# Login using your gitea user account
-docker login git.hackanooga.com
-
-# Push the image to our registry
-docker push git.hackanooga.com/mikeconrad/alpine:latest
-
-
-
-
-
-
Now log into your Gitea instance, navigate to your user account and look for packages. You should see the newly uploaded alpine image.
-
-
-
-
-
-
-
-
You can see that the package type is container. Clicking on it will give you more information:
-
-
-
-
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 2)
- /traefik-with-lets-encrypt-and-cloudflare-pt-2/
-
-
- Thu, 15 Feb 2024 20:19:12 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=425
-
-
- In this article we are going to get into setting up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running, and the documentation is a little fuzzy. In my case I decided to go with the DNS challenge route. Really, the only reason I went with this option is that I was having issues with the TLS and HTTP challenges. As it turns out, my issues didn’t have as much to do with my configuration as they did with my router.
-
-
-
-
-Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self-hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and will use their own instead. That is, of course, unless you force DNS redirection, but that is another post for another day.
-
-
-
-
Let’s revisit our current configuration:
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --providers.file.filename=/config.yml
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
-# - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'
-
- environment:
- - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- - ./traefik/config.yml:/config.yml:ro
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- # Optional, see "Customization" in README
- #CUSTOMIZE: '/etc/ots/customize.yaml'
- # See README for details
- REDIS_URL: redis://redis:6379/0
- # 168h = 1w
- SECRET_EXPIRY: "604800"
- # "mem" or "redis" (See README)
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:
-
-
-
-
Step 1: Setup wildcard DNS entry
-
-
-
-
This is pretty straightforward. Follow the Cloudflare documentation if you aren’t familiar with setting this up.
-
-
-
-
Step 2: Create API Token
-
-
-
-
-This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this documentation, which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens and create a new token. It should have the following permissions:
-
-
-
-
Zone.Zone.Read
-Zone.DNS.Edit
-
-
-
-
-
-
-
-
Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go:
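One way to supply the token (an illustrative sketch; any secret store works) is a .env file next to the compose file, which docker compose reads automatically and substitutes into the `CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}` line in the environment section:

```shell
# .env (placeholder value, never commit the real token to git)
CF_DNS_API_TOKEN=your-cloudflare-api-token
```

Then a plain docker compose up -d brings up the stack with the token in place.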
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/ci-cd/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/ci-cd/feed/index.xml
deleted file mode 100644
index fde3aa7..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/ci-cd/feed/index.xml
+++ /dev/null
@@ -1,285 +0,0 @@
-
-
-
- CI/CD – hackanooga
-
- /
- Confessions of a homelab hacker
- Tue, 12 Mar 2024 19:11:24 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- CI/CD – hackanooga
- /
- 32
- 32
-
-
- Automating CI/CD with TeamCity and Ansible
- /automating-ci-cd-with-teamcity-ansible/
-
-
- Mon, 11 Mar 2024 13:37:47 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=393
-
-
- In part one of this series we are going to explore a CI/CD option you may not be familiar with but should definitely be on your radar. I used Jetbrains TeamCity for several months at my last company and really enjoyed my time with it. A couple of the things I like most about it are:
-
-
-
-
-
Ability to declare global variables and have them be passed down to all projects
-
-
-
-
Ability to declare variables that are made up of other variables
-
-
-
-
-
-I like to use private or self-hosted Docker registries for a lot of my projects, and one of the pain points I have had with some other solutions (well, mostly Bitbucket) is that they don’t integrate well with these private registries; when I run into a situation where I am pushing an image to or pulling an image from a private registry, it gets a little messy. TeamCity is nice in that I can add a connection to my private registry in my root project and then simply add that as a build feature to any projects that may need it. Essentially, now I only have one place where I have to keep those credentials and manage that connection.
-
-
-
-
-
-
-
-
Another reason I love it is that you can create really powerful build templates that you can reuse. This became very valuable when we were trying to standardize our build processes. For example, most of the apps we build are .NET backends and React frontends. We built Docker images for every project and pushed them to our private registry. TeamCity gave us the ability to standardize the naming convention and really streamline the build process. Enough about that though; the rest of this series will assume that you are using TeamCity. This post will focus on getting up and running using Ansible.
-
-
-
-
-
-
-
-
Installation and Setup
-
-
-
-
For this I will assume that you already have Ansible on your machine and that you will be installing TeamCity locally. You can simply follow along with the installation guide here. We will be creating an Ansible playbook based on the following steps. If you just want the finished code, you can find it on my Gitea instance here:
-
-
-
-
Step 1 : Create project and initial playbook
-
-
-
-
To get started go ahead and create a new directory to hold our configuration:
Now open up install-teamcity-server.yml and add a task to install Java 17, as it is a prerequisite. You will need sudo for this task. Note: as of this writing, TeamCity does not support Java 18 or 19. If you try to use one of these you will get an error when starting TeamCity.
The next step is to create a dedicated user account. Add the following task to install-teamcity-server.yml
-
-
-
-
- name: Add Teamcity User
- ansible.builtin.user:
- name: teamcity
-
-
-
-
Next we will need to download the latest version of TeamCity. 2023.11.4 is the latest as of this writing. Add the following task to your install-teamcity-server.yml
Now that we have everything set up and installed we want to make sure that our new teamcity user has access to everything they need to get up and running. We will add the following lines:
This gives us a pretty nice setup. We have TeamCity server installed with a dedicated user account. The last thing we will do is create a systemd service so that we can easily start/stop the server. For this we will need to add a few things.
-
-
-
-
-
A service file that tells our system how to manage TeamCity
-
-
-
-
A j2 template file that is used to create this service file
-
-
-
-
A handler that tells the system to run systemctl daemon-reload once the service has been installed.
-
-
-
-
-
Go ahead and create a new templates folder with the following teamcity.service.j2 file
That’s it! Now you should have a fully automated install of TeamCity Server ready to be deployed wherever you need it. Here is the final playbook file; you can also find the most up-to-date version in my repo:
-
-
-
-
---
-- name: Install Teamcity
- hosts: localhost
- become: true
- become_method: sudo
-
- vars:
- java_version: "17"
- teamcity:
- installation_path: /opt/TeamCity
- version: "2023.11.4"
-
- tasks:
- - name: Install Java
- ansible.builtin.apt:
- name: openjdk-{{ java_version }}-jdk # This is important because TeamCity will fail to start if we try to use 18 or 19
- update_cache: yes
- state: latest
- install_recommends: no
-
- - name: Add TeamCity User
- ansible.builtin.user:
- name: teamcity
-
- - name: Download TeamCity Server
- ansible.builtin.get_url:
- url: https://download.jetbrains.com/teamcity/TeamCity-{{teamcity.version}}.tar.gz
- dest: /opt/TeamCity-{{teamcity.version}}.tar.gz
- mode: '0770'
-
- - name: Install TeamCity Server
- ansible.builtin.shell: |
- tar xfz /opt/TeamCity-{{teamcity.version}}.tar.gz
- rm -rf /opt/TeamCity-{{teamcity.version}}.tar.gz
- args:
- chdir: /opt
-
- - name: Update permissions
- ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
-
- - name: TeamCity | Create environment file
- template: src=teamcity.service.j2 dest=/etc/systemd/system/teamcityserver.service
- notify:
- - reload systemctl
- - name: TeamCity | Start teamcity
- service: name=teamcityserver.service state=started enabled=yes
-
- # Trigger a reload of systemctl after the service file has been created.
- handlers:
- - name: reload systemctl
- command: systemctl daemon-reload
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/cloudflare/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/cloudflare/feed/index.xml
deleted file mode 100644
index d4f5b16..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/cloudflare/feed/index.xml
+++ /dev/null
@@ -1,760 +0,0 @@
-
-
-
- Cloudflare – hackanooga
-
- /
- Confessions of a homelab hacker
- Mon, 16 Sep 2024 13:07:16 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- Cloudflare – hackanooga
- /
- 32
- 32
-
-
- Hardening your web server by only allowing traffic from Cloudflare
- /hardening-your-web-server-by-only-allowing-traffic-from-cloudflare/
-
-
- Thu, 01 Aug 2024 21:02:29 +0000
-
-
-
-
-
- /?p=607
-
-
- TDLR:
-
-
-
-
If you just want the code you can find a convenient script on my Gitea server here. This version has been slightly modified so that it will work on more systems.
-
-
-
-
-
-
-
-
-I have been using Cloudflare for several years for both personal and professional projects. The free plan has fairly generous limits and it’s a great way to clear out some low-hanging fruit and improve the security of your application. If you’re not familiar with how it works, Cloudflare basically has two modes for DNS records: DNS Only and Proxied. The only way to get the advantages of Cloudflare is to use Proxied mode. Cloudflare has some great documentation on how all of their services work, but basically you point your domain to Cloudflare and Cloudflare provisions their network of proxy servers to handle requests for your domain.
-
-
-
-
These proxy servers allow you to secure your domain by implementing things like WAF and Rate limiting. You can also enforce HTTPS only mode and modify/add custom request/response headers. You will notice that once you turn this mode on, your webserver will log requests as coming from Cloudflare IP addresses. They have great documentation on how to configure your webserver to restore these IP addresses in your log files.
-
-
-
-
-This is a very easy step to start securing your origin server, but it still allows attackers to access your servers directly if they know the IP address. We can take our security one step further by only allowing requests from IP addresses originating within Cloudflare, meaning we will only accept a request if it comes through a Cloudflare proxy server. The setup is fairly straightforward. In this example I will be using a Linux server.
-
-
-
-
-We can achieve this pretty easily because Cloudflare provides a sort of API where they regularly publish their network blocks. Here is the basic script we will use:
-
-
-
-
for ip in $(curl https://www.cloudflare.com/ips-v4/); do iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-for ip in $(curl https://www.cloudflare.com/ips-v6/); do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-
-
-
-
-
This will pull down the latest network addresses from Cloudflare and create iptables rules for us. These IP addresses do change from time to time so you may want to put this in a script and run it via a cronjob to have it update on a regular basis.
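For example (the path and schedule here are illustrative), saving the loop above as a script and refreshing the rules weekly via a cron.d entry:

```shell
# /etc/cron.d/cloudflare-allowlist
# Re-sync the Cloudflare IP allowlist every Sunday at 03:00
0 3 * * 0 root /usr/local/sbin/update-cloudflare-ips.sh
```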
-
-
-
-
-Now with this in place, here are the results:
-
-
-
-
-
-
-
-
This should cut down on some of the noise from attackers and script kiddies trying to find holes in your security.
We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:
-
-
-
-
– SFTP server setup and configuration
– Network security protocols
– Troubleshooting and problem-solving skills
-
-
-
-
If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.
-
-
-
-
-
-
-
-
-
-
-
-
-
My Role
-
-
-
-
-I walked the client through the process of setting up a Digital Ocean account. I created a Ubuntu 22.04 VM and installed SFTPGo. I set the client up with an administrator user so that they could easily log in and manage users and shares. I implemented some basic security practices as well and set the client up with a custom domain and a free TLS/SSL certificate from LetsEncrypt. With the documentation and screenshots I provided, the client was able to get everything up and running and add users and connect other systems easily and securely.
-
-
-
-
-
-
-
-
-
-
-
-
Client Feedback
-
-
-
-
-
Rating is 5 out of 5.
-
-
-
-
Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.
-
-]]>
-
-
-
-
-
- Fun with bots – SSH tarpitting
- /fun-with-bots-ssh-tarpitting/
-
-
- Mon, 24 Jun 2024 13:37:43 +0000
-
-
-
-
-
-
- /?p=576
-
-
- For those of you who aren’t familiar with the concept of a network tarpit it is a fairly simple concept. Wikipedia defines it like this:
-
-
-
-
-
A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against a computer worm, and the idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a tar pit, in which animals can get bogged down and slowly sink under the surface, like in a swamp.
If you run any sort of service on the internet then you know that as soon as your server has a public IP address and open ports, there are scanners and bots trying to get in constantly. If you take decent steps toward security then it is little more than an annoyance, but annoying nonetheless. One day when I had some extra time on my hands, I started researching ways to mess with the bots trying to scan/attack my site.
-
-
-
-
-It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting SSH connections is endlessh. The way it works is actually pretty simple. The SSH RFC states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the language that allows this behavior:
-
-
-
-
-
The server MAY send other lines of data before sending the version
- string. Each line SHOULD be terminated by a Carriage Return and Line
- Feed. Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
- in ISO-10646 UTF-8 [RFC3629] (language is not specified). Clients
- MUST be able to process such lines. Such lines MAY be silently
- ignored, or MAY be displayed to the client user. If they are
- displayed, control character filtering, as discussed in [SSH-ARCH],
- SHOULD be used. The primary use of this feature is to allow TCP-
- wrappers to display an error message before disconnecting.
-SSH RFC
-
-
-
-
Essentially this means that there is no limit to the amount of data that a server can send back to the client, and the client must be able to wait for and process all of this data. Now let’s see it in action.
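Getting endlessh running locally is straightforward; this sketch builds it from source (see the project README for packaged alternatives):

```shell
# Clone and build endlessh, then run it in verbose mode
# (it listens on port 2222 by default)
git clone https://github.com/skeeto/endlessh.git
cd endlessh
make
./endlessh -v
```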
By default this fake server listens on port 2222. I have a port forward set up that forwards all SSH traffic from port 22 to 2222. Now try to connect via ssh:
-
-
-
-
ssh -vvv localhost -p 2222
-
-
-
-
If you wait a few seconds you will see the server send back the version string and then start sending a random banner:
-
-
-
-
$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
-2024-06-24T13:05:59.488Z Delay 10000
-2024-06-24T13:05:59.488Z MaxLineLength 32
-2024-06-24T13:05:59.488Z MaxClients 4096
-2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
-2024-06-24T13:05:59.488Z socket() = 3
-2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
-2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
-2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
-2024-06-24T13:05:59.488Z listen(3) = 0
-2024-06-24T13:05:59.488Z poll(1, -1)
-ssh -vvv localhost -p 2222
-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
-debug1: Reading configuration data /home/mikeconrad/.ssh/config
-debug1: Reading configuration data /etc/ssh/ssh_config
-debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
-debug1: /etc/ssh/ssh_config line 21: Applying options for *
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
-debug2: resolving "localhost" port 2222
-debug3: resolve_host: lookup localhost:2222
-debug3: ssh_connect_direct: entering
-debug1: Connecting to localhost [::1] port 2222.
-debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
-debug1: Connection established.
-2024-06-24T13:06:08.635Z = 1
-2024-06-24T13:06:08.635Z accept() = 4
-2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
-2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
-2024-06-24T13:06:08.635Z poll(1, 10000)
-debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
-debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
-debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
-2024-06-24T13:06:18.684Z = 0
-2024-06-24T13:06:18.684Z write(4) = 3
-2024-06-24T13:06:18.684Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 0: V
-2024-06-24T13:06:28.734Z = 0
-2024-06-24T13:06:28.734Z write(4) = 25
-2024-06-24T13:06:28.734Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]
-
-
-
-
-
-
This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program, fakessh. While fakessh isn’t technically a tarpit (it’s more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. Essentially it allows any username/password combination to connect and gives the attacker a fake shell prompt. There is no actual access to any file system and all of their commands basically return gibberish.
-
-
-
-
Here are some logs from an actual server of mine running fakessh
Those are mostly connections and disconnections. They probably connected, realized it was fake and disconnected. There are a couple that tried to execute some commands though:
Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:
-
-
-
-
-
Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for Bandwidth Alliance partners, we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 2)
- /traefik-with-lets-encrypt-and-cloudflare-pt-2/
-
-
- Thu, 15 Feb 2024 20:19:12 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=425
-
-
- In this article we are going to get into setting up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running, and the documentation is a little fuzzy. In my case I decided to go with the DNS challenge route, mostly because I was having issues with the TLS and HTTP challenges. As it turns out, my issues didn’t have as much to do with my configuration as they did with my router.
-
-
-
-
Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and use their own instead. That is, of course, unless you force DNS redirection, but that is another post for another day.
-
-
-
-
Let’s revisit our current configuration:
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --providers.file.filename=/config.yml
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
-# - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'
-
- environment:
- - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- - ./traefik/config.yml:/config.yml:ro
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- # Optional, see "Customization" in README
- #CUSTOMIZE: '/etc/ots/customize.yaml'
- # See README for details
- REDIS_URL: redis://redis:6379/0
- # 168h = 1w
- SECRET_EXPIRY: "604800"
- # "mem" or "redis" (See README)
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:
-
-
-
-
Step 1: Setup wildcard DNS entry
-
-
-
-
This is pretty straightforward. Follow the Cloudflare documentation if you aren’t familiar with setting this up.
-
-
-
-
Step 2: Create API Token
-
-
-
-
This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this documentation, which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens and create a new token. It should have the following permissions:
-
-
-
-
Zone.Zone.Read
-Zone.DNS.Edit
-
-
-
-
-
-
-
-
Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go:
-
-
-
-
CF_DNS_API_TOKEN=[redacted] docker compose up -d
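Passing the token inline works, but it ends up in your shell history. One alternative (a sketch, assuming your compose file references `${CF_DNS_API_TOKEN}` as above) is to keep the token in a `.env` file next to `docker-compose.yml`, which docker compose reads automatically when interpolating variables:

```shell
# Keep the token out of shell history; .env sits next to docker-compose.yml
cat > .env <<'EOF'
CF_DNS_API_TOKEN=replace-with-your-token
EOF
chmod 600 .env           # token readable only by the owner

# docker compose up -d   # compose substitutes ${CF_DNS_API_TOKEN} from .env
```

Just remember to add `.env` to your `.gitignore` so the token never lands in version control.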
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 1)
- /traefik-with-lets-encrypt-and-cloudflare-pt-1/
-
-
- Thu, 01 Feb 2024 19:35:00 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=422
-
-
- Recently I decided to rebuild one of my homelab servers. Previously I was using Nginx as my reverse proxy, but I decided to switch to Traefik since I have been using it professionally for some time now. One of the reasons I like Traefik is that it is stupid simple to set up certificates, and when I am using it with Docker I don’t have to worry about a bunch of configuration files. If you aren’t familiar with how Traefik works with Docker, here is a brief example of a docker-compose.yaml:
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=user@example.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- # Redirect all http requests to https
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
- # Needed to request certs via lets encrypt
- environment:
- - CF_DNS_API_TOKEN=[redacted]
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- # Used for storing letsencrypt certificates
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- REDIS_URL: redis://redis:6379/0
- SECRET_EXPIRY: "604800"
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.example.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- - traefik.http.services.ots.loadbalancer.server.port=3000
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
-
In part one of this series I will be going over some of the basics of Traefik and how dynamic routing works. If you want to skip to the good stuff and get everything configured with Cloudflare, you can skip to part 2.
-
-
-
-
This example sets up the primary Traefik container, which acts as the ingress controller, as well as a handy One Time Secret sharing service I use. Traefik handles routing in Docker via labels. For this to work properly, the services that Traefik is trying to route to all need to be on the same Docker network. For this example we created a network called traefik by running the following:
-
-
-
-
docker network create traefik
-
-
-
-
-
Let’s take a closer look at the labels we applied to the ots container:
traefik.enable=true – This should be pretty self-explanatory, but it tells Traefik that we want it to know about this service.
-
-
-
-
traefik.http.routers.ots.rule=Host(`ots.example.com`) – This is where some of the magic comes in. Here we are defining a router called ots. The name is arbitrary in that it doesn’t have to match the name of the service, but for our example it does. There are many rules that you can specify, but the easiest for this example is Host. Basically we are saying that any request coming in for ots.example.com should be picked up by this router. You can find more options for routers in the Traefik docs.
We are using these three labels to tell our router that we want it to use the websecure entrypoint and that it should use the letsencrypt certresolver to grab its certificates. websecure is an arbitrary name that we assigned to our :443 interface. There are multiple ways to configure this; I chose to use the CLI format in my Traefik config:
-
-
-
-
-
-
-
-
command:
- - --api.insecure=true
- - --providers.docker=true
- # Our entrypoint names are arbitrary but these are convention.
- # The important part is the port binding that we associate.
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
-
-
-
-
-
-
This last label is optional depending on your setup, but it is important to understand since the documentation is a little fuzzy.
Here’s how it works. Suppose you have a container that exposes multiple ports. Maybe one of those is a web UI and another is something that you don’t want exposed. By default Traefik will try to guess which port to route requests to. My understanding is that it will try to use the first exposed port. However, you can override this by using the label above, which tells Traefik specifically which port you want to route to inside the container.
The service name is derived automatically from the definition in the docker compose file:
-
-
-
-
ots: # This will become the service name
-  image: luzifer/ots
-  container_name: ots
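To make the port override concrete, here is a sketch of a service that exposes two ports (the image, ports and hostname are hypothetical), pinned so Traefik only routes to the UI:

```yaml
webapp:
    image: example/webapp:latest   # hypothetical image exposing 3000 (UI) and 9090 (metrics)
    labels:
      - traefik.enable=true
      - traefik.http.routers.webapp.rule=Host(`webapp.example.com`)
      # Without this, Traefik may guess the wrong exposed port;
      # pin the UI port explicitly:
      - traefik.http.services.webapp.loadbalancer.server.port=3000
```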
We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:
-
-
-
-
– SFTP server setup and configuration
– Network security protocols
– Troubleshooting and problem-solving skills
-
-
-
-
If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.
-
-
-
-
-
-
-
-
-
-
-
-
-
My Role
-
-
-
-
I walked the client through the process of setting up a Digital Ocean account. I created an Ubuntu 22.04 VM and installed SFTPGo. I set the client up with an administrator user so that they could easily log in and manage users and shares. I implemented some basic security practices as well and set the client up with a custom domain and a free TLS/SSL certificate from LetsEncrypt. With the documentation and screenshots I provided, the client was able to get everything up and running and add users and connect other systems easily and securely.
-
-
-
-
-
-
-
-
-
-
-
-
Client Feedback
-
-
-
-
-
Rating is 5 out of 5.
-
-
-
-
Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.
-
-]]>
-
-
-
-
-
- Debugging running Nginx config
- /debugging-running-nginx-config/
-
-
- Wed, 17 Jul 2024 01:42:43 +0000
-
-
-
-
- /?p=596
-
-
- I was recently working on a project where a client had cPanel/WHM with Nginx and Apache. They had a large number of sites managed by Nginx, with many includes. I created a custom config to override a location block and needed to be certain that my changes were actually being picked up. Anytime I make changes to an Nginx config, I try to be vigilant about running:
-
-
-
-
nginx -t
-
-
-
-
to test my configuration and ensure I don’t have any syntax errors. I was looking for an easy way to view the actual compiled config and found the -T flag which will test the configuration and dump it to standard out. This is pretty handy if you have a large number of includes in various locations. Here is an example from a fresh Nginx Docker container:
As you can see from the output above, we get all of the various Nginx config files in use printed to the console, perfect for grepping or searching/filtering with other tools.
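For example, you can find every file that went into the compiled config by grepping the dump for the marker lines nginx prints. (The dump below is a shortened stand-in for real `nginx -T` output so the pipeline is reproducible; on a real server you would generate it with the commented command.)

```shell
# On the server: nginx -T > /tmp/nginx-full.conf
# Shortened stand-in for that dump, for illustration:
cat > /tmp/nginx-full.conf <<'EOF'
# configuration file /etc/nginx/nginx.conf:
worker_processes auto;
# configuration file /etc/nginx/conf.d/default.conf:
server { listen 80; }
EOF

# List every config file nginx actually loaded
grep '^# configuration file' /tmp/nginx-full.conf
```

This prints one line per included file, which makes it quick to confirm whether your custom override was picked up at all.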
-]]>
-
-
-
-
-
- Traefik 3.0 service discovery in Docker Swarm mode
- /traefik-3-0-service-discovery-in-docker-swarm-mode/
-
-
- Sat, 11 May 2024 13:44:01 +0000
-
-
-
-
-
-
- /?p=564
-
-
- I recently decided to set up a Docker swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity, and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s, it can be a good choice; however, it isn’t a very popular or widespread solution these days.
-
-
-
-
Anyway, I set up a VM scaling set in Azure with 10 Ubuntu 22.04 VMs and wrote some Ansible scripts to automate the process of installing Docker on each machine, as well as setting 3 up as swarm managers and the other 7 as worker nodes. I SSH’d into the primary manager node and created a Docker compose file for launching an observability stack.
Everything deploys properly but when I view the Traefik logs there is an issue with all the services except for the grafana service. I get errors like this:
-
-
-
-
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
-
-
-
-
-
It drove me crazy for about half a day or so. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track and debug. Ultimately after lots of searching and banging my head against a wall I found the answer in the Traefik docs and thought I would share here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.
Expand that first section and you will see the solution:
-
-
-
-
-
-
-
-
It turns out I just needed to update my docker-compose.yml and nest the labels under a deploy section. After redeploying, everything was working as expected.
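For reference, the working shape looks something like this (service name, image and hostname are illustrative; the key points are nesting the labels under deploy and setting the port explicitly):

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    deploy:
      labels:   # In swarm mode, Traefik reads labels from deploy:, not the service level
        - traefik.enable=true
        - traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)
        # The swarm provider cannot guess the container port, hence
        # the "port is missing" error; declare it explicitly:
        - traefik.http.services.prometheus.loadbalancer.server.port=9090
```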
-]]>
-
-
-
-
-
- Stop all running containers with Docker
- /stop-all-running-containers-with-docker/
-
-
- Wed, 03 Apr 2024 13:12:41 +0000
-
-
-
-
- /?p=557
-
-
- These are some handy snippets I use on a regular basis when managing containers. I have one server in particular that can sometimes end up with 50 to 100 orphaned containers for various reasons. The easiest/quickest way to stop all of them is to do something like this:
-
-
-
-
docker container stop $(docker container ps -q)
-
-
-
-
Let me break this down in case you are not familiar with the syntax. Basically, we are passing the output of docker container ps -q into docker container stop. This works because the stop command can take a list of container IDs, which is what we get when passing the -q flag to docker container ps.
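One edge case worth noting: if nothing is running, the substitution expands to nothing and docker container stop exits with a usage error. A sketch of a guard for that (using a stand-in variable so the example runs without Docker) looks like:

```shell
ids=""    # stand-in for: ids=$(docker container ps -q)
if [ -n "$ids" ]; then
  echo "stopping: $ids"      # stand-in for: docker container stop $ids
else
  echo "no running containers"
fi
```

With no running containers this prints "no running containers" instead of erroring out, which is friendlier in cron jobs and scripts.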
-]]>
-
-
-
-
-
- Self hosted package registries with Gitea
- /self-hosted-package-registries-with-gitea/
-
-
- Thu, 07 Mar 2024 15:07:07 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=413
-
-
- I am a big proponent of open source technologies. I have been using Gitea for a couple years now in my homelab. A few years ago I moved most of my code off of Github and onto my self hosted instance. I recently came across a really handy feature that I didn’t know Gitea had and was pleasantly surprised by: Package Registry. You are no doubt familiar with what a package registry is in the broad context. Here are some examples of package registries you probably use on a regular basis:
-
-
-
-
-
npm
-
-
-
-
cargo
-
-
-
-
docker
-
-
-
-
composer
-
-
-
-
nuget
-
-
-
-
helm
-
-
-
-
-
There are a number of reasons why you would want to self host a registry. For example, in my home lab I have some Docker images that are specific to my use cases and I don’t necessarily want them on a public registry. I’m also not concerned about losing the artifacts, as I can easily recreate them from code. Gitea makes this really easy to set up; in fact, it comes baked in with the installation. For the sake of this post I will assume that you already have Gitea installed and set up.
-
-
-
-
Since the package registry is baked in and enabled by default, I will demonstrate how easy it is to push a docker image. We will pull the default alpine image, re-tag it and push it to our internal registry:
-
-
-
-
# Pull the official Alpine image
-docker pull alpine:latest
-
-# Re tag the image with our local registry information
-docker tag alpine:latest git.hackanooga.com/mikeconrad/alpine:latest
-
-# Login using your gitea user account
-docker login git.hackanooga.com
-
-# Push the image to our registry
-docker push git.hackanooga.com/mikeconrad/alpine:latest
-
-
-
-
-
-
Now log into your Gitea instance, navigate to your user account and look for packages. You should see the newly uploaded alpine image.
-
-
-
-
-
-
-
-
You can see that the package type is container. Clicking on it will give you more information:
-
-
-
-
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 2)
- /traefik-with-lets-encrypt-and-cloudflare-pt-2/
-
-
- Thu, 15 Feb 2024 20:19:12 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=425
-
-
- In this article we are going to get into setting up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running, and the documentation is a little fuzzy. In my case I decided to go with the DNS challenge route, mostly because I was having issues with the TLS and HTTP challenges. As it turns out, my issues didn’t have as much to do with my configuration as they did with my router.
-
-
-
-
Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and use their own instead. That is, of course, unless you force DNS redirection, but that is another post for another day.
-
-
-
-
Let’s revisit our current configuration:
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --providers.file.filename=/config.yml
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
-# - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'
-
- environment:
- - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- - ./traefik/config.yml:/config.yml:ro
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- # Optional, see "Customization" in README
- #CUSTOMIZE: '/etc/ots/customize.yaml'
- # See README for details
- REDIS_URL: redis://redis:6379/0
- # 168h = 1w
- SECRET_EXPIRY: "604800"
- # "mem" or "redis" (See README)
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:
-
-
-
-
Step 1: Setup wildcard DNS entry
-
-
-
-
This is pretty straightforward. Follow the Cloudflare documentation if you aren’t familiar with setting this up.
-
-
-
-
Step 2: Create API Token
-
-
-
-
This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this documentation, which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens and create a new token. It should have the following permissions:
-
-
-
-
Zone.Zone.Read
-Zone.DNS.Edit
-
-
-
-
-
-
-
-
Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go:
-
-
-
-
CF_DNS_API_TOKEN=[redacted] docker compose up -d
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 1)
- /traefik-with-lets-encrypt-and-cloudflare-pt-1/
-
-
- Thu, 01 Feb 2024 19:35:00 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=422
-
-
- Recently I decided to rebuild one of my homelab servers. Previously I was using Nginx as my reverse proxy, but I decided to switch to Traefik since I have been using it professionally for some time now. One of the reasons I like Traefik is that it is stupid simple to set up certificates, and when I am using it with Docker I don’t have to worry about a bunch of configuration files. If you aren’t familiar with how Traefik works with Docker, here is a brief example of a docker-compose.yaml:
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=user@example.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- # Redirect all http requests to https
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
- # Needed to request certs via lets encrypt
- environment:
- - CF_DNS_API_TOKEN=[redacted]
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- # Used for storing letsencrypt certificates
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- REDIS_URL: redis://redis:6379/0
- SECRET_EXPIRY: "604800"
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.example.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- - traefik.http.services.ots.loadbalancer.server.port=3000
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
-
In part one of this series I will be going over some of the basics of Traefik and how dynamic routing works. If you want to skip to the good stuff and get everything configured with Cloudflare, you can skip to part 2.
-
-
-
-
This example sets up the primary Traefik container, which acts as the ingress controller, as well as a handy One Time Secret sharing service I use. Traefik handles routing in Docker via labels. For this to work properly, the services that Traefik is trying to route to all need to be on the same Docker network. For this example we created a network called traefik by running the following:
-
-
-
-
docker network create traefik
-
-
-
-
-
Let’s take a closer look at the labels we applied to the ots container:
traefik.enable=true – This should be pretty self-explanatory, but it tells Traefik that we want it to know about this service.
-
-
-
-
traefik.http.routers.ots.rule=Host(`ots.example.com`) – This is where some of the magic comes in. Here we are defining a router called ots. The name is arbitrary in that it doesn’t have to match the name of the service, but for our example it does. There are many rules that you can specify, but the easiest for this example is Host. Basically we are saying that any request coming in for ots.example.com should be picked up by this router. You can find more options for routers in the Traefik docs.
We are using these three labels to tell our router that we want it to use the websecure entrypoint and that it should use the letsencrypt certresolver to grab its certificates. websecure is an arbitrary name that we assigned to our :443 interface. There are multiple ways to configure this; I chose to use the CLI format in my Traefik config:
-
-
-
-
-
-
-
-
command:
- - --api.insecure=true
- - --providers.docker=true
- # Our entrypoint names are arbitrary but these are convention.
- # The important part is the port binding that we associate.
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
-
-
-
-
-
-
This last label is optional depending on your setup, but it is important to understand since the documentation is a little fuzzy.
Here’s how it works. Suppose you have a container that exposes multiple ports. Maybe one of those is a web UI and another is something that you don’t want exposed. By default Traefik will try to guess which port to route requests to. My understanding is that it will try to use the first exposed port. However, you can override this by using the label above, which tells Traefik specifically which port you want to route to inside the container.
The service name is derived automatically from the definition in the docker compose file:
-
-
-
-
ots: # This will become the service name
-  image: luzifer/ots
-  container_name: ots
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/iac/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/iac/feed/index.xml
deleted file mode 100644
index 5857a25..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/iac/feed/index.xml
+++ /dev/null
@@ -1,492 +0,0 @@
-
-
-
- IaC – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 25 Sep 2024 13:56:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- IaC – hackanooga
- /
- 32
- 32
-
-
- Standing up a Wireguard VPN
- /standing-up-a-wireguard-vpn/
-
-
- Wed, 25 Sep 2024 13:56:04 +0000
-
-
-
-
-
-
-
-
- /?p=619
-
-
- VPNs have traditionally been slow, complex, and hard to set up and configure. That all changed several years ago when WireGuard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons why you should want to use WireGuard in this article; instead I will be focusing on just how easy it is to set up and configure.
-
-
-
-
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
-
-
-
-
Create Droplet with Terraform
-
-
-
-
I have written some very basic Terraform to get us started. It just creates a droplet with a predefined SSH key and a setup script passed as user data. When the droplet gets created, the script will be copied to the instance and automatically executed. After a few minutes everything should be ready to go. If you want to clone the repo above, feel free to, or if you would rather do everything by hand that’s great too. I will assume that you are doing everything by hand; the process of deploying from the repo should be pretty self-explanatory. My reasoning for doing it this way is because I wanted to better understand the process.
-
-
-
-
First create our main.tf with the following contents:
-
-
-
-
# main.tf
-# Attach an SSH key to our droplet
-resource "digitalocean_ssh_key" "default" {
- name = "Terraform Example"
- public_key = file("./tf-digitalocean.pub")
-}
-
-# Create a new Web Droplet in the nyc1 region
-resource "digitalocean_droplet" "web" {
- image = "ubuntu-22-04-x64"
- name = "wireguard"
- region = "nyc1"
- size = "s-2vcpu-4gb"
- ssh_keys = [digitalocean_ssh_key.default.fingerprint]
- user_data = file("setup.sh")
-}
-
-output "droplet_output" {
- value = digitalocean_droplet.web.ipv4_address
-}
-
-
-
-
Next create a terraform.tf file in the same directory with the following contents:
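A minimal terraform.tf might look like the following sketch (it assumes the digitalocean/digitalocean provider and a do_token variable; adjust the version pin and variable wiring to your setup):

```terraform
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# Supply via TF_VAR_do_token or a tfvars file; never commit the token
variable "do_token" {
  sensitive = true
}

provider "digitalocean" {
  token = var.do_token
}
```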
Now we are ready to initialize our Terraform and apply it:
-
-
-
-
$ terraform init
-$ terraform apply
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- # digitalocean_droplet.web will be created
- + resource "digitalocean_droplet" "web" {
- + backups = false
- + created_at = (known after apply)
- + disk = (known after apply)
- + graceful_shutdown = false
- + id = (known after apply)
- + image = "ubuntu-22-04-x64"
- + ipv4_address = (known after apply)
- + ipv4_address_private = (known after apply)
- + ipv6 = false
- + ipv6_address = (known after apply)
- + locked = (known after apply)
- + memory = (known after apply)
- + monitoring = false
- + name = "wireguard"
- + price_hourly = (known after apply)
- + price_monthly = (known after apply)
- + private_networking = (known after apply)
- + region = "nyc1"
- + resize_disk = true
- + size = "s-2vcpu-4gb"
- + ssh_keys = (known after apply)
- + status = (known after apply)
- + urn = (known after apply)
- + user_data = "69d130f386b262b136863be5fcffc32bff055ac0"
- + vcpus = (known after apply)
- + volume_ids = (known after apply)
- + vpc_uuid = (known after apply)
- }
-
- # digitalocean_ssh_key.default will be created
- + resource "digitalocean_ssh_key" "default" {
- + fingerprint = (known after apply)
- + id = (known after apply)
- + name = "Terraform Example"
- + public_key = <<-EOT
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
- EOT
- }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-Changes to Outputs:
- + droplet_output = (known after apply)
-
-Do you want to perform these actions?
- Terraform will perform the actions described above.
- Only 'yes' will be accepted to approve.
-
- Enter a value: yes
-
-digitalocean_ssh_key.default: Creating...
-digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
-digitalocean_droplet.web: Creating...
-digitalocean_droplet.web: Still creating... [10s elapsed]
-digitalocean_droplet.web: Still creating... [20s elapsed]
-digitalocean_droplet.web: Still creating... [30s elapsed]
-digitalocean_droplet.web: Creation complete after 31s [id=447469336]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-droplet_output = "159.223.113.207"
-
-
-
-
-
All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.
-
-
-
-
#!/usr/bin/env sh
-set -e
-set -u
-# Set the listen port used by Wireguard, this is the default so feel free to change it.
-LISTENPORT=51820
-CONFIG_DIR=/root/wireguard-conf
-umask 077
-mkdir -p $CONFIG_DIR/client
-
-# Install wireguard
-apt update && apt install -y wireguard
-
-# Generate public/private key for the "server".
-wg genkey > $CONFIG_DIR/privatekey
-wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey
-
-# Generate public/private key for the "client"
-wg genkey > $CONFIG_DIR/client/privatekey
-wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey
-
-
-# Generate server config
-echo "[Interface]
-Address = 10.66.66.1/24,fd42:42:42::1/64
-ListenPort = $LISTENPORT
-PrivateKey = $(cat $CONFIG_DIR/privatekey)
-
-### Client config
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/client/publickey)
-AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
-" > /etc/wireguard/do.conf
-
-
-# Generate client config. This will need to be copied to your machine.
-# Note that the [Peer] public key here is the *server's* public key.
-echo "[Interface]
-PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
-Address = 10.66.66.2/32,fd42:42:42::2/128
-DNS = 1.1.1.1,1.0.0.1
-
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/publickey)
-Endpoint = $(curl icanhazip.com):$LISTENPORT
-AllowedIPs = 0.0.0.0/0,::/0
-" > $CONFIG_DIR/client-config.conf
-
-wg-quick up do
-
-# Add iptables rules to forward internet traffic through this box
-# We are assuming our Wireguard interface is called do and our
-# primary public facing interface is called eth0.
-
-iptables -I INPUT -p udp --dport 51820 -j ACCEPT
-iptables -I FORWARD -i eth0 -o do -j ACCEPT
-iptables -I FORWARD -i do -j ACCEPT
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-ip6tables -I FORWARD -i do -j ACCEPT
-ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
-# Enable routing on the server
-echo "net.ipv4.ip_forward = 1
- net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
-sysctl --system
-
-
-
-
As you can see, it is pretty straightforward. All you really need to do is:
-
-
-
-
On the “server” side:
-
-
-
-
-
Generate a private key and derive a public key from it for both the “server” and the “client”.
-
-
-
-
Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
-
-
-
-
The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.
-
-
-
-
-
On the “client” side:
-
-
-
-
-
Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
-
-
-
-
The client needs to know which private key to use to secure the interface.
-
-
-
-
It also needs to know the public key of the “server”, the public IP address or hostname it is connecting to, and the port the “server” is listening on.
-
-
-
-
Finally it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic but you could restrict this as you see fit.
-
-
-
-
-
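The same key-and-peer pattern extends to additional clients. As a rough sketch (assuming the same /root/wireguard-conf layout from the setup script, and that the do interface is already up on the server):

```sh
# Sketch: add a second client to the running "do" interface.
# Assumes the /root/wireguard-conf layout used by the setup script.
CONFIG_DIR=/root/wireguard-conf
umask 077
mkdir -p $CONFIG_DIR/client2

# Generate a keypair for the new client
wg genkey > $CONFIG_DIR/client2/privatekey
wg pubkey < $CONFIG_DIR/client2/privatekey > $CONFIG_DIR/client2/publickey

# Register the peer with the next free tunnel addresses
wg set do peer "$(cat $CONFIG_DIR/client2/publickey)" \
  allowed-ips 10.66.66.3/32,fd42:42:42::3/128

# Persist the runtime change back into /etc/wireguard/do.conf
wg-quick save do
```

The new client's config then follows the same shape as client-config.conf above, with its own private key and the .3 addresses.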
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you replace the IP address with that of your newly created droplet:
-
-
-
-
## Make sure you have Wireguard installed on your local machine as well.
-## https://wireguard.com/install
-
-## Copy the client config to our local machine and move it to our wireguard directory.
-$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf | sudo tee /etc/wireguard/do.conf
-
-
-
-
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
-
-
-
-
$ ssh -i tf-digitalocean root@159.223.113.207
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:19:02 UTC 2024
-
- System load: 0.03 Processes: 113
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.221.196
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-70 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
-root@wireguard:~#
-
-
-
-
-
-
Awesome! We are connected. Now let’s check the wireguard interface using the wg command. If our config was correct, we should see an interface line and one peer line like the following. If the peer line is missing, something is wrong with the configuration, most likely a mismatch between the public/private keys:
-
-
-
-
root@wireguard:~# wg
-interface: do
- public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
- private key: (hidden)
- listening port: 51820
-
-peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
- allowed ips: 10.66.66.2/32, fd42:42:42::2/128
-root@wireguard:~#
-
-
-
-
So now we should be ready to go! On your local machine go ahead and try it out:
-
-
-
-
## Start the interface with wg-quick up [interface_name]
-$ sudo wg-quick up do
-[sudo] password for mikeconrad:
-[#] ip link add do type wireguard
-[#] wg setconf do /dev/fd/63
-[#] ip -4 address add 10.66.66.2/32 dev do
-[#] ip -6 address add fd42:42:42::2/128 dev do
-[#] ip link set mtu 1420 up dev do
-[#] resolvconf -a do -m 0 -x
-[#] wg set do fwmark 51820
-[#] ip -6 route add ::/0 dev do table 51820
-[#] ip -6 rule add not fwmark 51820 table 51820
-[#] ip -6 rule add table main suppress_prefixlength 0
-[#] ip6tables-restore -n
-[#] ip -4 route add 0.0.0.0/0 dev do table 51820
-[#] ip -4 rule add not fwmark 51820 table 51820
-[#] ip -4 rule add table main suppress_prefixlength 0
-[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
-[#] iptables-restore -n
-
-## Check our config
-$ sudo wg
-interface: do
- public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
- private key: (hidden)
- listening port: 34596
- fwmark: 0xca6c
-
-peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
- endpoint: 157.230.177.54:51820
- allowed ips: 0.0.0.0/0, ::/0
- latest handshake: 1 second ago
- transfer: 1.82 KiB received, 2.89 KiB sent
-
-## Make sure we can ping the outside world
-mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
-PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
-64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
-^C
---- 1.1.1.1 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
-rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms
-
-## Verify our traffic is actually going over the tunnel.
-$ curl icanhazip.com
-157.230.177.54
-
-
-
-
-
-
-
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
-
-
-
-
$ ssh -i tf-digitalocean root@10.66.66.1
-The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
-ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
-This host key is known by the following other names/addresses:
- ~/.ssh/known_hosts:130: [hashed name]
-Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
-Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:32:12 UTC 2024
-
- System load: 0.02 Processes: 109
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.177.54
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-73 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-root@wireguard:~#
-
-
-
-
-
Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
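One last housekeeping note: the droplet bills for as long as it runs, so when you are done experimenting you can tear everything down with Terraform:

```sh
# Run from the directory containing your Terraform files;
# type 'yes' when prompted to destroy both resources.
$ terraform destroy
```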
-
-
-
-
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/open-source/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/open-source/feed/index.xml
deleted file mode 100644
index 98a3838..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/open-source/feed/index.xml
+++ /dev/null
@@ -1,842 +0,0 @@
-
-
-
- Open Source – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 25 Sep 2024 13:56:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- Open Source – hackanooga
- /
- 32
- 32
-
-
- Standing up a Wireguard VPN
- /standing-up-a-wireguard-vpn/
-
-
- Wed, 25 Sep 2024 13:56:04 +0000
-
-
-
-
-
-
-
-
- /?p=619
-
-
- VPN’s have traditionally been slow, complex and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons for why you should want to use Wireguard in this article, instead I will be focusing on just how easy it is to set up and configure.
-
-
-
-
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
-
-
-
-
Create Droplet with Terraform
-
-
-
-
I have written some very basic Terraform to get us started. The Terraform is very basic and just creates a droplet with a predefined ssh key and a setup script passed as user data. When the droplet gets created, the script will get copied to the instance and automatically executed. After a few minutes everything should be ready to go. If you want to clone the repo above, feel free to, or if you would rather do everything by hand that’s great too. I will assume that you are doing everything by hand. The process of deploying from the repo should be pretty self explainitory. My reasoning for doing it this way is because I wanted to better understand the process.
-
-
-
-
First create our main.tf with the following contents:
-
-
-
-
# main.tf
-# Attach an SSH key to our droplet
-resource "digitalocean_ssh_key" "default" {
- name = "Terraform Example"
- public_key = file("./tf-digitalocean.pub")
-}
-
-# Create a new Web Droplet in the nyc1 region
-resource "digitalocean_droplet" "web" {
- image = "ubuntu-22-04-x64"
- name = "wireguard"
- region = "nyc1"
- size = "s-2vcpu-4gb"
- ssh_keys = [digitalocean_ssh_key.default.fingerprint]
- user_data = file("setup.sh")
-}
-
-output "droplet_output" {
- value = digitalocean_droplet.web.ipv4_address
-}
-
-
-
-
Next create a terraform.tf file in the same directory with the following contents:
Now we are ready to initialize our Terraform and apply it:
-
-
-
-
$ terraform init
-$ terraform apply
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- # digitalocean_droplet.web will be created
- + resource "digitalocean_droplet" "web" {
- + backups = false
- + created_at = (known after apply)
- + disk = (known after apply)
- + graceful_shutdown = false
- + id = (known after apply)
- + image = "ubuntu-22-04-x64"
- + ipv4_address = (known after apply)
- + ipv4_address_private = (known after apply)
- + ipv6 = false
- + ipv6_address = (known after apply)
- + locked = (known after apply)
- + memory = (known after apply)
- + monitoring = false
- + name = "wireguard"
- + price_hourly = (known after apply)
- + price_monthly = (known after apply)
- + private_networking = (known after apply)
- + region = "nyc1"
- + resize_disk = true
- + size = "s-2vcpu-4gb"
- + ssh_keys = (known after apply)
- + status = (known after apply)
- + urn = (known after apply)
- + user_data = "69d130f386b262b136863be5fcffc32bff055ac0"
- + vcpus = (known after apply)
- + volume_ids = (known after apply)
- + vpc_uuid = (known after apply)
- }
-
- # digitalocean_ssh_key.default will be created
- + resource "digitalocean_ssh_key" "default" {
- + fingerprint = (known after apply)
- + id = (known after apply)
- + name = "Terraform Example"
- + public_key = <<-EOT
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
- EOT
- }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-Changes to Outputs:
- + droplet_output = (known after apply)
-
-Do you want to perform these actions?
- Terraform will perform the actions described above.
- Only 'yes' will be accepted to approve.
-
- Enter a value: yes
-
-digitalocean_ssh_key.default: Creating...
-digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
-digitalocean_droplet.web: Creating...
-digitalocean_droplet.web: Still creating... [10s elapsed]
-digitalocean_droplet.web: Still creating... [20s elapsed]
-digitalocean_droplet.web: Still creating... [30s elapsed]
-digitalocean_droplet.web: Creation complete after 31s [id=447469336]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-droplet_output = "159.223.113.207"
-
-
-
-
-
All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.
-
-
-
-
#!/usr/bin/env sh
-set -e
-set -u
-# Set the listen port used by Wireguard, this is the default so feel free to change it.
-LISTENPORT=51820
-CONFIG_DIR=/root/wireguard-conf
-umask 077
-mkdir -p $CONFIG_DIR/client
-
-# Install wireguard
-apt update && apt install -y wireguard
-
-# Generate public/private key for the "server".
-wg genkey > $CONFIG_DIR/privatekey
-wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey
-
-# Generate public/private key for the "client"
-wg genkey > $CONFIG_DIR/client/privatekey
-wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey
-
-
-# Generate server config
-echo "[Interface]
-Address = 10.66.66.1/24,fd42:42:42::1/64
-ListenPort = $LISTENPORT
-PrivateKey = $(cat $CONFIG_DIR/privatekey)
-
-### Client config
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/client/publickey)
-AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
-" > /etc/wireguard/do.conf
-
-
-# Generate client config. This will need to be copied to your machine.
-echo "[Interface]
-PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
-Address = 10.66.66.2/32,fd42:42:42::2/128
-DNS = 1.1.1.1,1.0.0.1
-
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/publickey)
-Endpoint = $(curl icanhazip.com):$LISTENPORT
-AllowedIPs = 0.0.0.0/0,::/0
-" > $CONFIG_DIR/client-config.conf
-
-wg-quick up do
-
-# Add iptables rules to forward internet traffic through this box
-# We are assuming our Wireguard interface is called do and our
-# primary public facing interface is called eth0.
-
-iptables -I INPUT -p udp --dport 51820 -j ACCEPT
-iptables -I FORWARD -i eth0 -o do -j ACCEPT
-iptables -I FORWARD -i do -j ACCEPT
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-ip6tables -I FORWARD -i do -j ACCEPT
-ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
-# Enable routing on the server
-echo "net.ipv4.ip_forward = 1
- net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
-sysctl --system
-
-
-
-
As you can see, it is pretty straightforward. All you really need to do is:
-
-
-
-
On the “server” side:
-
-
-
-
-
Generate a private key and derive a public key from it for both the “server” and the “client”.
-
-
-
-
Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
-
-
-
-
The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.
-
-
-
-
-
On the “client” side:
-
-
-
-
-
Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
-
-
-
-
The client needs to know which private key to use to secure the interface.
-
-
-
-
It also needs to know the public key of the server, along with the public IP address/hostname of the “server” it is connecting to and the port it is listening on.
-
-
-
-
Finally, it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic, but you could restrict this as you see fit.
-
-
-
-
-
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:
-
-
-
-
## Make sure you have Wireguard installed on your local machine as well.
-## https://wireguard.com/install
-
-## Copy the client config to our local machine and move it to our wireguard directory.
-$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf | sudo tee /etc/wireguard/do.conf
-
-
-
-
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
-
-
-
-
$ ssh -i tf-digitalocean root@159.223.113.207
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:19:02 UTC 2024
-
- System load: 0.03 Processes: 113
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.221.196
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-70 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
-root@wireguard:~#
-
-
-
-
-
-
Awesome! We are connected. Now let’s check the wireguard interface using the wg command. If our config was correct, we should see an interface line and one peer line, like so. If the peer line is missing, something is wrong with the configuration, most likely a public/private key mismatch:
-
-
-
-
root@wireguard:~# wg
-interface: do
- public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
- private key: (hidden)
- listening port: 51820
-
-peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
- allowed ips: 10.66.66.2/32, fd42:42:42::2/128
-root@wireguard:~#
-
-
-
-
So now we should be ready to go! On your local machine go ahead and try it out:
-
-
-
-
## Start the interface with wg-quick up [interface_name]
-$ sudo wg-quick up do
-[sudo] password for mikeconrad:
-[#] ip link add do type wireguard
-[#] wg setconf do /dev/fd/63
-[#] ip -4 address add 10.66.66.2/32 dev do
-[#] ip -6 address add fd42:42:42::2/128 dev do
-[#] ip link set mtu 1420 up dev do
-[#] resolvconf -a do -m 0 -x
-[#] wg set do fwmark 51820
-[#] ip -6 route add ::/0 dev do table 51820
-[#] ip -6 rule add not fwmark 51820 table 51820
-[#] ip -6 rule add table main suppress_prefixlength 0
-[#] ip6tables-restore -n
-[#] ip -4 route add 0.0.0.0/0 dev do table 51820
-[#] ip -4 rule add not fwmark 51820 table 51820
-[#] ip -4 rule add table main suppress_prefixlength 0
-[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
-[#] iptables-restore -n
-
-## Check our config
-$ sudo wg
-interface: do
- public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
- private key: (hidden)
- listening port: 34596
- fwmark: 0xca6c
-
-peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
- endpoint: 157.230.177.54:51820
- allowed ips: 0.0.0.0/0, ::/0
- latest handshake: 1 second ago
- transfer: 1.82 KiB received, 2.89 KiB sent
-
-## Make sure we can ping the outside world
-mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
-PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
-64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
-^C
---- 1.1.1.1 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
-rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms
-
-## Verify our traffic is actually going over the tunnel.
-$ curl icanhazip.com
-157.230.177.54
-
-
-
-
-
-
-
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
-
-
-
-
$ ssh -i tf-digitalocean root@10.66.66.1
-The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
-ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
-This host key is known by the following other names/addresses:
- ~/.ssh/known_hosts:130: [hashed name]
-Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
-Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:32:12 UTC 2024
-
- System load: 0.02 Processes: 109
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.177.54
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-73 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-root@wireguard:~#
-
-
-
-
-
Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:
-
-
-
-
– SFTP server setup and configuration
– Network security protocols
– Troubleshooting and problem-solving skills
-
-
-
-
If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.
-
-
-
-
-
-
-
-
-
-
-
-
-
My Role
-
-
-
-
I walked the client through the process of setting up a Digital Ocean account. I created an Ubuntu 22.04 VM and installed SFTPGo, and I set the client up with an administrator user so that they could easily log in and manage users and shares. I implemented some basic security practices as well and set the client up with a custom domain and a free TLS/SSL certificate from LetsEncrypt. With the documentation and screenshots I provided, the client was able to get everything up and running and add users and connect other systems easily and securely.
-
-
-
-
-
-
-
-
-
-
-
-
Client Feedback
-
-
-
-
-
Rating is 5 out of 5.
-
-
-
-
Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.
-
-]]>
-
-
-
-
-
- Fun with bots – SSH tarpitting
- /fun-with-bots-ssh-tarpitting/
-
-
- Mon, 24 Jun 2024 13:37:43 +0000
-
-
-
-
-
-
- /?p=576
-
-
- For those of you who aren’t familiar with network tarpits, the concept is fairly simple. Wikipedia defines it like this:
-
-
-
-
-
A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against a computer worm, and the idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a tar pit, in which animals can get bogged down and slowly sink under the surface, like in a swamp.
If you run any sort of service on the internet, you know that as soon as your server has a public IP address and open ports, there are scanners and bots constantly trying to get in. If you take decent steps towards security it is little more than an annoyance, but annoying nonetheless. One day when I had some extra time on my hands, I started researching ways to mess with the bots trying to scan/attack my site.
-
-
-
-
It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting ssh connections is endlessh. The way it works is actually pretty simple. The SSH RFC states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the line that allows this behavior:
-
-
-
-
-
The server MAY send other lines of data before sending the version
- string. Each line SHOULD be terminated by a Carriage Return and Line
- Feed. Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
- in ISO-10646 UTF-8 [RFC3629] (language is not specified). Clients
- MUST be able to process such lines. Such lines MAY be silently
- ignored, or MAY be displayed to the client user. If they are
- displayed, control character filtering, as discussed in [SSH-ARCH],
- SHOULD be used. The primary use of this feature is to allow TCP-
- wrappers to display an error message before disconnecting.
-SSH RFC
-
-
-
-
Essentially this means that there is no limit to the amount of data a server can send back to the client, and the client must be able to wait for and process all of it. Now let’s see it in action.
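The trick can be sketched in a few lines of Python. This is a toy illustration of the idea, not endlessh itself; the port, delay, and line length here are arbitrary choices:

```python
import random
import socket
import time

def tarpit(port=2222, delay=10.0):
    """Toy SSH tarpit: never send a version string; instead slowly emit
    random banner lines, which the RFC allows and clients must tolerate.
    Handles one victim at a time (real endlessh multiplexes with poll)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen()
    while True:
        client, addr = srv.accept()
        try:
            while True:
                time.sleep(delay)
                # Random printable junk for each banner line
                line = bytes(random.randint(0x21, 0x7E) for _ in range(24))
                if line.startswith(b"SSH-"):  # banner lines must not begin with "SSH-"
                    line = b"." + line[1:]
                client.sendall(line + b"\r\n")
        except OSError:
            # Client finally gave up (or was killed); wait for the next one.
            client.close()
```

Point an ssh client at it and the handshake never starts; the client just sits there accumulating banner lines.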
By default endlessh listens on port 2222. I have a port forward set up that forwards all ssh traffic from port 22 to 2222. Now try to connect via ssh:
-
-
-
-
ssh -vvv localhost -p 2222
-
-
-
-
If you wait a few seconds you will see the server send back the version string and then start sending a random banner:
-
-
-
-
$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
-2024-06-24T13:05:59.488Z Delay 10000
-2024-06-24T13:05:59.488Z MaxLineLength 32
-2024-06-24T13:05:59.488Z MaxClients 4096
-2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
-2024-06-24T13:05:59.488Z socket() = 3
-2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
-2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
-2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
-2024-06-24T13:05:59.488Z listen(3) = 0
-2024-06-24T13:05:59.488Z poll(1, -1)
-ssh -vvv localhost -p 2222
-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
-debug1: Reading configuration data /home/mikeconrad/.ssh/config
-debug1: Reading configuration data /etc/ssh/ssh_config
-debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
-debug1: /etc/ssh/ssh_config line 21: Applying options for *
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
-debug2: resolving "localhost" port 2222
-debug3: resolve_host: lookup localhost:2222
-debug3: ssh_connect_direct: entering
-debug1: Connecting to localhost [::1] port 2222.
-debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
-debug1: Connection established.
-2024-06-24T13:06:08.635Z = 1
-2024-06-24T13:06:08.635Z accept() = 4
-2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
-2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
-2024-06-24T13:06:08.635Z poll(1, 10000)
-debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
-debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
-debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
-2024-06-24T13:06:18.684Z = 0
-2024-06-24T13:06:18.684Z write(4) = 3
-2024-06-24T13:06:18.684Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 0: V
-2024-06-24T13:06:28.734Z = 0
-2024-06-24T13:06:28.734Z write(4) = 25
-2024-06-24T13:06:28.734Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]
-
-
-
-
-
-
This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program fakessh. While fakessh isn’t technically a tarpit (it’s more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. Essentially it allows any username/password combination to connect and gives the attacker a fake shell prompt. There is no actual access to any file system, and all of their commands basically return gibberish.
-
-
-
-
Here are some logs from an actual server of mine running fakessh
Those are mostly connections and disconnections. They probably connected, realized it was fake and disconnected. There are a couple that tried to execute some commands though:
Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:
-
-
-
-
-
Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for Bandwidth Alliance partners, we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/security/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/security/feed/index.xml
deleted file mode 100644
index 4f2b1a5..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/security/feed/index.xml
+++ /dev/null
@@ -1,913 +0,0 @@
-
-
-
- Security – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 25 Sep 2024 13:56:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- Security – hackanooga
- /
- 32
- 32
-
-
- Standing up a Wireguard VPN
- /standing-up-a-wireguard-vpn/
-
-
- Wed, 25 Sep 2024 13:56:04 +0000
-
-
-
-
-
-
-
-
- /?p=619
-
-
- VPNs have traditionally been slow, complex and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons for why you should want to use Wireguard in this article; instead I will be focusing on just how easy it is to set up and configure.
-
-
-
-
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
-
-
-
-
Create Droplet with Terraform
-
-
-
-
I have written some very basic Terraform to get us started. It just creates a droplet with a predefined ssh key and a setup script passed as user data. When the droplet gets created, the script is copied to the instance and automatically executed, and after a few minutes everything should be ready to go. Feel free to clone the repo above, or if you would rather do everything by hand, that’s great too. I will assume that you are doing everything by hand; the process of deploying from the repo should be pretty self-explanatory. My reasoning for doing it this way is that I wanted to better understand the process.
-
-
-
-
First create our main.tf with the following contents:
-
-
-
-
# main.tf
-# Attach an SSH key to our droplet
-resource "digitalocean_ssh_key" "default" {
- name = "Terraform Example"
- public_key = file("./tf-digitalocean.pub")
-}
-
-# Create a new Web Droplet in the nyc1 region
-resource "digitalocean_droplet" "web" {
- image = "ubuntu-22-04-x64"
- name = "wireguard"
- region = "nyc1"
- size = "s-2vcpu-4gb"
- ssh_keys = [digitalocean_ssh_key.default.fingerprint]
- user_data = file("setup.sh")
-}
-
-output "droplet_output" {
- value = digitalocean_droplet.web.ipv4_address
-}
-
-
-
-
Next create a terraform.tf file in the same directory with the following contents:
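A minimal terraform.tf, assuming the official digitalocean/digitalocean provider and a `do_token` variable for your API token (both assumptions on my part, so adjust to match your setup), would look something like this:

```hcl
# terraform.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# DigitalOcean API token, e.g. passed via TF_VAR_do_token
variable "do_token" {
  sensitive = true
}

provider "digitalocean" {
  token = var.do_token
}
```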
Now we are ready to initialize our Terraform and apply it:
-
-
-
-
$ terraform init
-$ terraform apply
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- # digitalocean_droplet.web will be created
- + resource "digitalocean_droplet" "web" {
- + backups = false
- + created_at = (known after apply)
- + disk = (known after apply)
- + graceful_shutdown = false
- + id = (known after apply)
- + image = "ubuntu-22-04-x64"
- + ipv4_address = (known after apply)
- + ipv4_address_private = (known after apply)
- + ipv6 = false
- + ipv6_address = (known after apply)
- + locked = (known after apply)
- + memory = (known after apply)
- + monitoring = false
- + name = "wireguard"
- + price_hourly = (known after apply)
- + price_monthly = (known after apply)
- + private_networking = (known after apply)
- + region = "nyc1"
- + resize_disk = true
- + size = "s-2vcpu-4gb"
- + ssh_keys = (known after apply)
- + status = (known after apply)
- + urn = (known after apply)
- + user_data = "69d130f386b262b136863be5fcffc32bff055ac0"
- + vcpus = (known after apply)
- + volume_ids = (known after apply)
- + vpc_uuid = (known after apply)
- }
-
- # digitalocean_ssh_key.default will be created
- + resource "digitalocean_ssh_key" "default" {
- + fingerprint = (known after apply)
- + id = (known after apply)
- + name = "Terraform Example"
- + public_key = <<-EOT
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
- EOT
- }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-Changes to Outputs:
- + droplet_output = (known after apply)
-
-Do you want to perform these actions?
- Terraform will perform the actions described above.
- Only 'yes' will be accepted to approve.
-
- Enter a value: yes
-
-digitalocean_ssh_key.default: Creating...
-digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
-digitalocean_droplet.web: Creating...
-digitalocean_droplet.web: Still creating... [10s elapsed]
-digitalocean_droplet.web: Still creating... [20s elapsed]
-digitalocean_droplet.web: Still creating... [30s elapsed]
-digitalocean_droplet.web: Creation complete after 31s [id=447469336]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-droplet_output = "159.223.113.207"
-
-
-
-
-
-]]>
-
-
-
-
-
- Hardening your web server by only allowing traffic from Cloudflare
- /hardening-your-web-server-by-only-allowing-traffic-from-cloudflare/
-
-
- Thu, 01 Aug 2024 21:02:29 +0000
-
-
-
-
-
- /?p=607
-
-
- TL;DR:
-
-
-
-
If you just want the code you can find a convenient script on my Gitea server here. This version has been slightly modified so that it will work on more systems.
-
-
-
-
-
-
-
-
I have been using Cloudflare for several years for both personal and professional projects. The free plan has some very generous limits and it's a great way to clear out some low hanging fruit and improve the security of your application. If you're not familiar with how it works, Cloudflare basically has two modes for DNS records: DNS Only and Proxied. The only way to get the advantages of Cloudflare is to use Proxied mode. Cloudflare has some great documentation on how all of their services work, but essentially you point your domain at Cloudflare and Cloudflare provisions their network of proxy servers to handle requests for your domain.
-
-
-
-
These proxy servers allow you to secure your domain by implementing things like WAF and Rate limiting. You can also enforce HTTPS only mode and modify/add custom request/response headers. You will notice that once you turn this mode on, your webserver will log requests as coming from Cloudflare IP addresses. They have great documentation on how to configure your webserver to restore these IP addresses in your log files.
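With Nginx, for example, this is typically done with the real_ip module (a sketch only; in practice you would generate the full `set_real_ip_from` list from Cloudflare's published IP ranges rather than hardcoding two of them):

```nginx
# /etc/nginx/conf.d/cloudflare-real-ip.conf (illustrative, not the full list)
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
# ...one line per published Cloudflare range, IPv4 and IPv6...
real_ip_header CF-Connecting-IP;
```

With this in place, `$remote_addr` in your logs reflects the original visitor rather than the Cloudflare proxy.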
-
-
-
-
This is a very easy first step towards securing your origin server, but it still allows attackers to access your servers directly if they know the IP address. We can take our security one step further by only allowing requests from IP addresses originating within Cloudflare's network, meaning that we will only accept requests coming from a Cloudflare proxy server. The setup is fairly straightforward. In this example I will be using a Linux server.
-
-
-
-
We can achieve this pretty easily because Cloudflare provides an API of sorts where they regularly publish their network blocks. Here is the basic script we will use:
-
-
-
-
for ip in $(curl https://www.cloudflare.com/ips-v4/); do iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-for ip in $(curl https://www.cloudflare.com/ips-v6/); do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-
-
-
-
-
This will pull down the latest network addresses from Cloudflare and create iptables rules for us. These IP addresses do change from time to time so you may want to put this in a script and run it via a cronjob to have it update on a regular basis.
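For example (the filename, script path, and schedule here are just assumptions), you could save the commands above as a script and refresh the rules nightly with a cron entry:

```
# /etc/cron.d/cloudflare-allowlist (example path)
# Re-apply the Cloudflare IP allow-list every night at 04:00 as root
0 4 * * * root /usr/local/bin/update-cloudflare-rules.sh >> /var/log/cloudflare-rules.log 2>&1
```

A production version of the script should also flush the previously added Cloudflare rules before re-inserting them, otherwise duplicates accumulate on every run.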
-
-
-
-
Now with this in place, here are the results:
-
-
-
-
-
-
-
-
This should cut down on some of the noise from attackers and script kiddies trying to find holes in your security.
We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:
-
-
-
-
– SFTP server setup and configuration
– Network security protocols
– Troubleshooting and problem-solving skills
-
-
-
-
If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.
-
-
-
-
-
-
-
-
-
-
-
-
-
My Role
-
-
-
-
I walked the client through the process of setting up a Digital Ocean account. I created an Ubuntu 22.04 VM and installed SFTPGo. I set the client up with an administrator user so that they could easily log in and manage users and shares. I implemented some basic security practices as well and set the client up with a custom domain and a free TLS/SSL certificate from Let's Encrypt. With the documentation and screenshots I provided, the client was able to get everything up and running and to add users and connect other systems easily and securely.
-
-
-
-
-
-
-
-
-
-
-
-
Client Feedback
-
-
-
-
-
Rating is 5 out of 5.
-
-
-
-
Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.
-
-]]>
-
-
-
-
-
- Fun with bots – SSH tarpitting
- /fun-with-bots-ssh-tarpitting/
-
-
- Mon, 24 Jun 2024 13:37:43 +0000
-
-
-
-
-
-
- /?p=576
-
-
- For those of you who aren't familiar with a network tarpit, the idea is fairly simple. Wikipedia defines it like this:
-
-
-
-
-
A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against a computer worm, and the idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a tar pit, in which animals can get bogged down and slowly sink under the surface, like in a swamp.
If you run any sort of service on the internet then you know that as soon as your server has a public IP address and open ports, there are scanners and bots constantly trying to get in. If you take decent steps towards security then it is little more than an annoyance, but annoying nonetheless. One day when I had some extra time on my hands I started researching ways to mess with the bots trying to scan/attack my site.
-
-
-
-
It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting ssh connections is endlessh. The way it works is actually pretty simple. The SSH RFC states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the line that allows this behavior:
-
-
-
-
-
The server MAY send other lines of data before sending the version
- string. Each line SHOULD be terminated by a Carriage Return and Line
- Feed. Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
- in ISO-10646 UTF-8 [RFC3629] (language is not specified). Clients
- MUST be able to process such lines. Such lines MAY be silently
- ignored, or MAY be displayed to the client user. If they are
- displayed, control character filtering, as discussed in [SSH-ARCH],
- SHOULD be used. The primary use of this feature is to allow TCP-
- wrappers to display an error message before disconnecting.
-SSH RFC
-
-
-
-
Essentially this means that there is no limit to the amount of data that a server can send back to the client before the version string, and the client must be able to wait for and process all of it. Now let's see it in action.
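The core trick can be sketched in a few lines of shell (a toy illustration of the idea, not the actual endlessh code):

```shell
# Before the version string, a server MAY send banner lines, each ending in
# CRLF and never starting with "SSH-". A client must read them all, so a
# "server" that emits junk lines forever keeps a bot stuck in the banner
# phase. Pipe this into e.g. `nc -l -p 2222` to try it out.
send_banner_line() {
  # Random printable junk terminated by CRLF; the leading 'x' guarantees
  # the line never starts with "SSH-"
  printf 'x%s\r\n' "$(head -c 12 /dev/urandom | base64)"
}

i=0
while [ "$i" -lt 3 ]; do   # a real tarpit loops forever
  send_banner_line
  sleep 1                  # endlessh waits 10 seconds between lines by default
  i=$((i+1))
done
```

Each connected client ties up almost no resources on the server side, while the client dutifully waits for a version string that never comes.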
By default this fake server listens on port 2222. I have a port forward set up that forwards all ssh traffic from port 22 to 2222. Now try to connect via ssh:
-
-
-
-
ssh -vvv localhost -p 2222
-
-
-
-
If you wait a few seconds you will see the server send back the version string and then start sending a random banner:
-
-
-
-
$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
-2024-06-24T13:05:59.488Z Delay 10000
-2024-06-24T13:05:59.488Z MaxLineLength 32
-2024-06-24T13:05:59.488Z MaxClients 4096
-2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
-2024-06-24T13:05:59.488Z socket() = 3
-2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
-2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
-2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
-2024-06-24T13:05:59.488Z listen(3) = 0
-2024-06-24T13:05:59.488Z poll(1, -1)
-ssh -vvv localhost -p 2222
-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
-debug1: Reading configuration data /home/mikeconrad/.ssh/config
-debug1: Reading configuration data /etc/ssh/ssh_config
-debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
-debug1: /etc/ssh/ssh_config line 21: Applying options for *
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
-debug2: resolving "localhost" port 2222
-debug3: resolve_host: lookup localhost:2222
-debug3: ssh_connect_direct: entering
-debug1: Connecting to localhost [::1] port 2222.
-debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
-debug1: Connection established.
-2024-06-24T13:06:08.635Z = 1
-2024-06-24T13:06:08.635Z accept() = 4
-2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
-2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
-2024-06-24T13:06:08.635Z poll(1, 10000)
-debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
-debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
-debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
-2024-06-24T13:06:18.684Z = 0
-2024-06-24T13:06:18.684Z write(4) = 3
-2024-06-24T13:06:18.684Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 0: V
-2024-06-24T13:06:28.734Z = 0
-2024-06-24T13:06:28.734Z write(4) = 25
-2024-06-24T13:06:28.734Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]
-
-
-
-
-
-
This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program fakessh. While fakessh isn't technically a tarpit (it's more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. Essentially it allows any username/password combination to connect and gives them a fake shell prompt. There is no actual access to any file system and all of their commands basically return gibberish.
-
-
-
-
Here are some logs from an actual server of mine running fakessh
Those are mostly connections and disconnections. They probably connected, realized it was fake and disconnected. There are a couple that tried to execute some commands though:
Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:
-
-
-
-
-
Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for Bandwidth Alliance partners, we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/self-hosted/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/self-hosted/feed/index.xml
deleted file mode 100644
index ff69563..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/self-hosted/feed/index.xml
+++ /dev/null
@@ -1,1034 +0,0 @@
-
-
-
- Self Hosted – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 25 Sep 2024 13:56:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- Self Hosted – hackanooga
- /
- 32
- 32
-
-
- Standing up a Wireguard VPN
- /standing-up-a-wireguard-vpn/
-
-
- Wed, 25 Sep 2024 13:56:04 +0000
-
-
-
-
-
-
-
-
- /?p=619
-
-
- VPNs have traditionally been slow, complex, and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won't go over all the reasons why you should want to use Wireguard in this article; instead I will focus on just how easy it is to set up and configure.
-
-
-
-
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
-
-
-
-
Create Droplet with Terraform
-
-
-
-
I have written some very basic Terraform to get us started. It just creates a droplet with a predefined SSH key and a setup script passed as user data. When the droplet gets created, the script will be copied to the instance and automatically executed. After a few minutes everything should be ready to go. If you want to clone the repo above, feel free to, or if you would rather do everything by hand that's great too. I will assume that you are doing everything by hand; the process of deploying from the repo should be pretty self-explanatory. My reasoning for doing it this way is that I wanted to better understand the process.
-
-
-
-
First create our main.tf with the following contents:
-
-
-
-
# main.tf
-# Attach an SSH key to our droplet
-resource "digitalocean_ssh_key" "default" {
- name = "Terraform Example"
- public_key = file("./tf-digitalocean.pub")
-}
-
-# Create a new Web Droplet in the nyc1 region
-resource "digitalocean_droplet" "web" {
- image = "ubuntu-22-04-x64"
- name = "wireguard"
- region = "nyc1"
- size = "s-2vcpu-4gb"
- ssh_keys = [digitalocean_ssh_key.default.fingerprint]
- user_data = file("setup.sh")
-}
-
-output "droplet_output" {
- value = digitalocean_droplet.web.ipv4_address
-}
-
-
-
-
Next create a terraform.tf file in the same directory with the following contents:
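A minimal terraform.tf that pins the DigitalOcean provider would look something like this (the version constraint and the token variable name here are assumptions, not from the original repo):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# The provider can also read the token from the DIGITALOCEAN_TOKEN
# environment variable instead of an explicit variable.
provider "digitalocean" {
  token = var.do_token
}

variable "do_token" {
  type      = string
  sensitive = true
}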
Now we are ready to initialize our Terraform and apply it:
-
-
-
-
$ terraform init
-$ terraform apply
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- # digitalocean_droplet.web will be created
- + resource "digitalocean_droplet" "web" {
- + backups = false
- + created_at = (known after apply)
- + disk = (known after apply)
- + graceful_shutdown = false
- + id = (known after apply)
- + image = "ubuntu-22-04-x64"
- + ipv4_address = (known after apply)
- + ipv4_address_private = (known after apply)
- + ipv6 = false
- + ipv6_address = (known after apply)
- + locked = (known after apply)
- + memory = (known after apply)
- + monitoring = false
- + name = "wireguard"
- + price_hourly = (known after apply)
- + price_monthly = (known after apply)
- + private_networking = (known after apply)
- + region = "nyc1"
- + resize_disk = true
- + size = "s-2vcpu-4gb"
- + ssh_keys = (known after apply)
- + status = (known after apply)
- + urn = (known after apply)
- + user_data = "69d130f386b262b136863be5fcffc32bff055ac0"
- + vcpus = (known after apply)
- + volume_ids = (known after apply)
- + vpc_uuid = (known after apply)
- }
-
- # digitalocean_ssh_key.default will be created
- + resource "digitalocean_ssh_key" "default" {
- + fingerprint = (known after apply)
- + id = (known after apply)
- + name = "Terraform Example"
- + public_key = <<-EOT
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
- EOT
- }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-Changes to Outputs:
- + droplet_output = (known after apply)
-
-Do you want to perform these actions?
- Terraform will perform the actions described above.
- Only 'yes' will be accepted to approve.
-
- Enter a value: yes
-
-digitalocean_ssh_key.default: Creating...
-digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
-digitalocean_droplet.web: Creating...
-digitalocean_droplet.web: Still creating... [10s elapsed]
-digitalocean_droplet.web: Still creating... [20s elapsed]
-digitalocean_droplet.web: Still creating... [30s elapsed]
-digitalocean_droplet.web: Creation complete after 31s [id=447469336]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-droplet_output = "159.223.113.207"
-
-
-
-
-
All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.
-
-
-
-
#!/usr/bin/env sh
-set -e
-set -u
-# Set the listen port used by Wireguard, this is the default so feel free to change it.
-LISTENPORT=51820
-CONFIG_DIR=/root/wireguard-conf
-umask 077
-mkdir -p $CONFIG_DIR/client
-
-# Install wireguard
-apt update && apt install -y wireguard
-
-# Generate public/private key for the "server".
-wg genkey > $CONFIG_DIR/privatekey
-wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey
-
-# Generate public/private key for the "client"
-wg genkey > $CONFIG_DIR/client/privatekey
-wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey
-
-
-# Generate server config
-echo "[Interface]
-Address = 10.66.66.1/24,fd42:42:42::1/64
-ListenPort = $LISTENPORT
-PrivateKey = $(cat $CONFIG_DIR/privatekey)
-
-### Client config
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/client/publickey)
-AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
-" > /etc/wireguard/do.conf
-
-
-# Generate client config. This will need to be copied to your machine.
-echo "[Interface]
-PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
-Address = 10.66.66.2/32,fd42:42:42::2/128
-DNS = 1.1.1.1,1.0.0.1
-
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/publickey)
-Endpoint = $(curl icanhazip.com):$LISTENPORT
-AllowedIPs = 0.0.0.0/0,::/0
-" > $CONFIG_DIR/client-config.conf
-
-wg-quick up do
-
-# Add iptables rules to forward internet traffic through this box
-# We are assuming our Wireguard interface is called do and our
-# primary public facing interface is called eth0.
-
-iptables -I INPUT -p udp --dport 51820 -j ACCEPT
-iptables -I FORWARD -i eth0 -o do -j ACCEPT
-iptables -I FORWARD -i do -j ACCEPT
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-ip6tables -I FORWARD -i do -j ACCEPT
-ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
-# Enable routing on the server
-echo "net.ipv4.ip_forward = 1
- net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
-sysctl --system
-
-
-
-
As you can see, it is pretty straightforward. All you really need to do is:
-
-
-
-
On the “server” side:
-
-
-
-
-
Generate a private key and derive a public key from it for both the “server” and the “client”.
-
-
-
-
Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
-
-
-
-
The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.
-
-
-
-
-
On the “client” side:
-
-
-
-
-
Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
-
-
-
-
The client needs to know which private key to use to secure the interface.
-
-
-
-
It also needs to know the public key of the server as well as the public IP address/hostname of the “server” it is connecting to as well as the port it is listening on.
-
-
-
-
Finally it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic but you could restrict this as you see fit.
-
-
-
-
-
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:
-
-
-
-
## Make sure you have Wireguard installed on your local machine as well.
-## https://wireguard.com/install
-
-## Copy the client config to our local machine and move it to our wireguard directory.
-$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf| sudo tee /etc/wireguard/do.conf
-
-
-
-
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
-
-
-
-
$ ssh -i tf-digitalocean root@159.223.113.207
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:19:02 UTC 2024
-
- System load: 0.03 Processes: 113
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.221.196
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-70 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
-root@wireguard:~#
-
-
-
-
-
-
Awesome! We are connected. Now let's check the Wireguard interface using the wg command. If our config was correct, we should see an interface line and one peer line like so. If the peer line is missing then something is wrong with the configuration, most likely a mismatch between the public/private keys:
-
-
-
-
root@wireguard:~# wg
-interface: do
- public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
- private key: (hidden)
- listening port: 51820
-
-peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
- allowed ips: 10.66.66.2/32, fd42:42:42::2/128
-root@wireguard:~#
-
-
-
-
So now we should be ready to go! On your local machine go ahead and try it out:
-
-
-
-
## Start the interface with wg-quick up [interface_name]
-$ sudo wg-quick up do
-[sudo] password for mikeconrad:
-[#] ip link add do type wireguard
-[#] wg setconf do /dev/fd/63
-[#] ip -4 address add 10.66.66.2/32 dev do
-[#] ip -6 address add fd42:42:42::2/128 dev do
-[#] ip link set mtu 1420 up dev do
-[#] resolvconf -a do -m 0 -x
-[#] wg set do fwmark 51820
-[#] ip -6 route add ::/0 dev do table 51820
-[#] ip -6 rule add not fwmark 51820 table 51820
-[#] ip -6 rule add table main suppress_prefixlength 0
-[#] ip6tables-restore -n
-[#] ip -4 route add 0.0.0.0/0 dev do table 51820
-[#] ip -4 rule add not fwmark 51820 table 51820
-[#] ip -4 rule add table main suppress_prefixlength 0
-[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
-[#] iptables-restore -n
-
-## Check our config
-$ sudo wg
-interface: do
- public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
- private key: (hidden)
- listening port: 34596
- fwmark: 0xca6c
-
-peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
- endpoint: 157.230.177.54:51820
- allowed ips: 0.0.0.0/0, ::/0
- latest handshake: 1 second ago
- transfer: 1.82 KiB received, 2.89 KiB sent
-
-## Make sure we can ping the outside world
-mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
-PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
-64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
-^C
---- 1.1.1.1 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
-rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms
-
-## Verify our traffic is actually going over the tunnel.
-$ curl icanhazip.com
-157.230.177.54
-
-
-
-
-
-
-
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
-
-
-
-
$ ssh -i tf-digitalocean root@10.66.66.1
-The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
-ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
-This host key is known by the following other names/addresses:
- ~/.ssh/known_hosts:130: [hashed name]
-Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
-Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:32:12 UTC 2024
-
- System load: 0.02 Processes: 109
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.177.54
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-73 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-root@wireguard:~#
-
-
-
-
-
Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
-]]>
-
-
-
-
-
- Debugging running Nginx config
- /debugging-running-nginx-config/
-
-
- Wed, 17 Jul 2024 01:42:43 +0000
-
-
-
-
- /?p=596
-
-
- I was recently working on a project where a client had cPanel/WHM with Nginx and Apache. They had a large number of sites managed by Nginx with a large number of includes. I created a custom config to override a location block and needed to be certain that my changes were actually being picked up. Anytime I make changes to an Nginx config, I try to be vigilant about running:
-
-
-
-
nginx -t
-
-
-
-
to test my configuration and ensure I don’t have any syntax errors. I was looking for an easy way to view the actual compiled config and found the -T flag which will test the configuration and dump it to standard out. This is pretty handy if you have a large number of includes in various locations. Here is an example from a fresh Nginx Docker container:
As you can see from the output above, we get all of the various Nginx config files in use printed to the console, perfect for grepping or searching/filtering with other tools.
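One handy pattern: `nginx -T` prefixes every dumped file with a marker comment, so a little awk can tell you exactly which include defines a given directive. Here is a sketch using sample output inline (the `location /api` pattern and file paths are just examples; on a real box you would pipe `nginx -T` itself):

```shell
# Stand-in for `nginx -T` output; -T prints each loaded file preceded by
# a line like "# configuration file /etc/nginx/conf.d/default.conf:"
nginx_dump() {
  cat <<'EOF'
# configuration file /etc/nginx/nginx.conf:
user  nginx;
# configuration file /etc/nginx/conf.d/default.conf:
server {
    location /api {
    }
}
EOF
}

# Track the current "# configuration file" marker and print it whenever a
# matching directive appears, revealing which file the block lives in
nginx_dump | awk '/^# configuration file/{f=$4} /location \/api/{print f}'
```

Running this prints the file containing the matching location block, which is exactly what you need when dozens of includes could be overriding each other.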
-]]>
-
-
-
-
-
- Self hosted package registries with Gitea
- /self-hosted-package-registries-with-gitea/
-
-
- Thu, 07 Mar 2024 15:07:07 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=413
-
-
- I am a big proponent of open source technologies. I have been using Gitea for a couple of years now in my homelab. A few years ago I moved most of my code off of GitHub and onto my self hosted instance. I recently came across a really handy feature that I didn't know Gitea had and was pleasantly surprised by: the Package Registry. You are no doubt familiar with what a package registry is in the broad context. Here are some examples of package registries you probably use on a regular basis:
-
-
-
-
-
npm
-
-
-
-
cargo
-
-
-
-
docker
-
-
-
-
composer
-
-
-
-
nuget
-
-
-
-
helm
-
-
-
-
-
There are a number of reasons why you would want to self host a registry. For example, in my home lab I have some Docker images that are specific to my use cases and I don’t necessarily want them on a public registry. I’m also not concerned about losing the artifacts as I can easily recreate them from code. Gitea makes this really easy to setup, in fact it comes baked in with the installation. For the sake of this post I will just assume that you already have Gitea installed and setup.
-
-
-
-
Since the package registry is baked in and enabled by default, I will demonstrate how easy it is to push a docker image. We will pull the default alpine image, re-tag it and push it to our internal registry:
-
-
-
-
# Pull the official Alpine image
-docker pull alpine:latest
-
-# Re tag the image with our local registry information
-docker tag alpine:latest git.hackanooga.com/mikeconrad/alpine:latest
-
-# Login using your gitea user account
-docker login git.hackanooga.com
-
-# Push the image to our registry
-docker push git.hackanooga.com/mikeconrad/alpine:latest
-
-
-
-
-
-
Now log into your Gitea instance, navigate to your user account and look for packages. You should see the newly uploaded alpine image.
-
-
-
-
-
-
-
-
You can see that the package type is container. Clicking on it will give you more information:
-
-
-
-
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 1)
- /traefik-with-lets-encrypt-and-cloudflare-pt-1/
-
-
- Thu, 01 Feb 2024 19:35:00 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=422
-
-
- Recently I decided to rebuild one of my homelab servers. Previously I was using Nginx as my reverse proxy but I decided to switch to Traefik since I have been using it professionally for some time now. One of the reasons I like Traefik is that it is stupid simple to set up certificates and when I am using it with Docker I don’t have to worry about a bunch of configuration files. If you aren’t familiar with how Traefik works with Docker, here is a brief example of a docker-compose.yaml
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=user@example.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- # Redirect all http requests to https
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
- # Needed to request certs via lets encrypt
- environment:
- - CF_DNS_API_TOKEN=[redacted]
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- # Used for storing letsencrypt certificates
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- REDIS_URL: redis://redis:6379/0
- SECRET_EXPIRY: "604800"
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.example.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- - traefik.http.services.ots.loadbalancer.server.port=3000
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
-
In part one of this series I will be going over some of the basics of Traefik and how dynamic routing works. If you want to skip to the good stuff and get everything configured with Cloudflare, you can skip to part 2.
-
-
-
-
This example sets up the primary Traefik container, which acts as the ingress controller, as well as a handy One Time Secret sharing service I use. Traefik handles routing in Docker via labels. For this to work properly, the services that Traefik routes to all need to be on the same Docker network. For this example we created a network called traefik by running the following:
-
-
-
-
docker network create traefik
-
-
-
-
-
Let’s take a closer look at the labels we applied to the ots container:
traefik.enable=true – This should be pretty self-explanatory, but it tells Traefik that we want it to know about this service.
-
-
-
-
traefik.http.routers.ots.rule=Host(`ots.example.com`) - This is where some of the magic comes in. Here we are defining a router called ots. The name is arbitrary in that it doesn’t have to match the name of the service, but in our example it does. There are many rules you can specify, but the easiest for this example is Host. Basically we are saying that any request coming in for ots.example.com should be picked up by this router. You can find more options for routers in the Traefik docs.
We are using these three labels to tell our router that we want it to use the websecure entrypoint and that it should use the letsencrypt certresolver to grab its certificates. websecure is an arbitrary name that we assigned to our :443 entrypoint. There are multiple ways to configure this; I chose to use the CLI format in my Traefik config:
-
-
-
-
-
-
-
-
command:
- - --api.insecure=true
- - --providers.docker=true
- # Our entrypoint names are arbitrary but these are convention.
- # The important part is the port binding that we associate.
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
-
-
-
-
-
-
This last label is optional depending on your setup, but it is important to understand because the documentation is a little fuzzy.
Here’s how it works. Suppose you have a container that exposes multiple ports. Maybe one of those is a web UI and another is something that you don’t want exposed. By default Traefik will try to guess which port to route requests to; my understanding is that it uses the first exposed port. You can override this behavior with the label above, which tells Traefik specifically which port you want to route to inside the container.
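For reference, the label being discussed is the port label from the compose file earlier:

```yaml
labels:
  - traefik.http.services.ots.loadbalancer.server.port=3000
```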
The service name is derived automatically from the definition in the docker compose file:
-
-
-
-
ots: # This will become the service name
    image: luzifer/ots
    container_name: ots
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/software-engineering/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/software-engineering/feed/index.xml
deleted file mode 100644
index 95865be..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/software-engineering/feed/index.xml
+++ /dev/null
@@ -1,1696 +0,0 @@
-
-
-
- Software Engineering – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 25 Sep 2024 13:56:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- Software Engineering – hackanooga
- /
- 32
- 32
-
-
- Standing up a Wireguard VPN
- /standing-up-a-wireguard-vpn/
-
-
- Wed, 25 Sep 2024 13:56:04 +0000
-
-
-
-
-
-
-
-
- /?p=619
-
-
- VPNs have traditionally been slow, complex, and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons why you should want to use Wireguard in this article; instead I will focus on just how easy it is to set up and configure.
-
-
-
-
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard on it. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform; you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
-
-
-
-
Create Droplet with Terraform
-
-
-
-
I have written some very basic Terraform to get us started. It just creates a droplet with a predefined SSH key and a setup script passed as user data. When the droplet gets created, the script will be copied to the instance and automatically executed. After a few minutes everything should be ready to go. Feel free to clone the repo above, or if you would rather do everything by hand that’s great too; I will assume that you are doing everything by hand. The process of deploying from the repo should be pretty self-explanatory. My reasoning for doing it this way is that I wanted to better understand the process.
-
-
-
-
First create our main.tf with the following contents:
-
-
-
-
# main.tf
-# Attach an SSH key to our droplet
-resource "digitalocean_ssh_key" "default" {
- name = "Terraform Example"
- public_key = file("./tf-digitalocean.pub")
-}
-
-# Create a new Web Droplet in the nyc1 region
-resource "digitalocean_droplet" "web" {
- image = "ubuntu-22-04-x64"
- name = "wireguard"
- region = "nyc1"
- size = "s-2vcpu-4gb"
- ssh_keys = [digitalocean_ssh_key.default.fingerprint]
- user_data = file("setup.sh")
-}
-
-output "droplet_output" {
- value = digitalocean_droplet.web.ipv4_address
-}
-
-
-
-
Next create a terraform.tf file in the same directory with the following contents:
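A minimal terraform.tf for DigitalOcean just pins the provider. Something like this sketch should work, where the version constraint is an assumption and the API token is supplied via the DIGITALOCEAN_TOKEN environment variable:

```hcl
# terraform.tf -- minimal sketch; provider version is an assumption
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  # Reads the API token from the DIGITALOCEAN_TOKEN environment
  # variable when no token argument is set here.
}
```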
Now we are ready to initialize our Terraform and apply it:
-
-
-
-
$ terraform init
-$ terraform apply
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- # digitalocean_droplet.web will be created
- + resource "digitalocean_droplet" "web" {
- + backups = false
- + created_at = (known after apply)
- + disk = (known after apply)
- + graceful_shutdown = false
- + id = (known after apply)
- + image = "ubuntu-22-04-x64"
- + ipv4_address = (known after apply)
- + ipv4_address_private = (known after apply)
- + ipv6 = false
- + ipv6_address = (known after apply)
- + locked = (known after apply)
- + memory = (known after apply)
- + monitoring = false
- + name = "wireguard"
- + price_hourly = (known after apply)
- + price_monthly = (known after apply)
- + private_networking = (known after apply)
- + region = "nyc1"
- + resize_disk = true
- + size = "s-2vcpu-4gb"
- + ssh_keys = (known after apply)
- + status = (known after apply)
- + urn = (known after apply)
- + user_data = "69d130f386b262b136863be5fcffc32bff055ac0"
- + vcpus = (known after apply)
- + volume_ids = (known after apply)
- + vpc_uuid = (known after apply)
- }
-
- # digitalocean_ssh_key.default will be created
- + resource "digitalocean_ssh_key" "default" {
- + fingerprint = (known after apply)
- + id = (known after apply)
- + name = "Terraform Example"
- + public_key = <<-EOT
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
- EOT
- }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-Changes to Outputs:
- + droplet_output = (known after apply)
-
-Do you want to perform these actions?
- Terraform will perform the actions described above.
- Only 'yes' will be accepted to approve.
-
- Enter a value: yes
-
-digitalocean_ssh_key.default: Creating...
-digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
-digitalocean_droplet.web: Creating...
-digitalocean_droplet.web: Still creating... [10s elapsed]
-digitalocean_droplet.web: Still creating... [20s elapsed]
-digitalocean_droplet.web: Still creating... [30s elapsed]
-digitalocean_droplet.web: Creation complete after 31s [id=447469336]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-droplet_output = "159.223.113.207"
-
-
-
-
-
All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.
-
-
-
-
#!/usr/bin/env sh
-set -e
-set -u
-# Set the listen port used by Wireguard, this is the default so feel free to change it.
-LISTENPORT=51820
-CONFIG_DIR=/root/wireguard-conf
-umask 077
-mkdir -p $CONFIG_DIR/client
-
-# Install wireguard
-apt update && apt install -y wireguard
-
-# Generate public/private key for the "server".
-wg genkey > $CONFIG_DIR/privatekey
-wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey
-
-# Generate public/private key for the "client"
-wg genkey > $CONFIG_DIR/client/privatekey
-wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey
-
-
-# Generate server config
-echo "[Interface]
-Address = 10.66.66.1/24,fd42:42:42::1/64
-ListenPort = $LISTENPORT
-PrivateKey = $(cat $CONFIG_DIR/privatekey)
-
-### Client config
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/client/publickey)
-AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
-" > /etc/wireguard/do.conf
-
-
-# Generate client config. This will need to be copied to your machine.
-echo "[Interface]
-PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
-Address = 10.66.66.2/32,fd42:42:42::2/128
-DNS = 1.1.1.1,1.0.0.1
-
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/publickey)
-Endpoint = $(curl icanhazip.com):$LISTENPORT
-AllowedIPs = 0.0.0.0/0,::/0
-" > $CONFIG_DIR/client-config.conf
-
-wg-quick up do
-
-# Add iptables rules to forward internet traffic through this box
-# We are assuming our Wireguard interface is called do and our
-# primary public facing interface is called eth0.
-
-iptables -I INPUT -p udp --dport 51820 -j ACCEPT
-iptables -I FORWARD -i eth0 -o do -j ACCEPT
-iptables -I FORWARD -i do -j ACCEPT
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-ip6tables -I FORWARD -i do -j ACCEPT
-ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
-# Enable routing on the server
-echo "net.ipv4.ip_forward = 1
- net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
-sysctl --system
-
-
-
-
As you can see, it is pretty straightforward. All you really need to do is:
-
-
-
-
On the “server” side:
-
-
-
-
-
Generate a private key and derive a public key from it for both the “server” and the “client”.
-
-
-
-
Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
-
-
-
-
The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.
-
-
-
-
-
On the “client” side:
-
-
-
-
-
Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
-
-
-
-
The client needs to know which private key to use to secure the interface.
-
-
-
-
It also needs to know the public key of the server as well as the public IP address/hostname of the “server” it is connecting to as well as the port it is listening on.
-
-
-
-
Finally it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic but you could restrict this as you see fit.
-
-
-
-
-
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:
-
-
-
-
## Make sure you have Wireguard installed on your local machine as well.
-## https://wireguard.com/install
-
-## Copy the client config to our local machine and move it to our wireguard directory.
-$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf| sudo tee /etc/wireguard/do.conf
-
-
-
-
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
-
-
-
-
$ ssh -i tf-digitalocean root@159.223.113.207
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:19:02 UTC 2024
-
- System load: 0.03 Processes: 113
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.221.196
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-70 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
-root@wireguard:~#
-
-
-
-
-
-
Awesome! We are connected. Now let’s check the Wireguard interface using the wg command. If our config is correct, we should see an interface line and one peer line like so. If the peer line is missing, then something is wrong with the configuration, most likely a mismatch between the public/private keys:
-
-
-
-
root@wireguard:~# wg
-interface: do
- public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
- private key: (hidden)
- listening port: 51820
-
-peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
- allowed ips: 10.66.66.2/32, fd42:42:42::2/128
-root@wireguard:~#
-
-
-
-
So now we should be ready to go! On your local machine go ahead and try it out:
-
-
-
-
## Start the interface with wg-quick up [interface_name]
-$ sudo wg-quick up do
-[sudo] password for mikeconrad:
-[#] ip link add do type wireguard
-[#] wg setconf do /dev/fd/63
-[#] ip -4 address add 10.66.66.2/32 dev do
-[#] ip -6 address add fd42:42:42::2/128 dev do
-[#] ip link set mtu 1420 up dev do
-[#] resolvconf -a do -m 0 -x
-[#] wg set do fwmark 51820
-[#] ip -6 route add ::/0 dev do table 51820
-[#] ip -6 rule add not fwmark 51820 table 51820
-[#] ip -6 rule add table main suppress_prefixlength 0
-[#] ip6tables-restore -n
-[#] ip -4 route add 0.0.0.0/0 dev do table 51820
-[#] ip -4 rule add not fwmark 51820 table 51820
-[#] ip -4 rule add table main suppress_prefixlength 0
-[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
-[#] iptables-restore -n
-
-## Check our config
-$ sudo wg
-interface: do
- public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
- private key: (hidden)
- listening port: 34596
- fwmark: 0xca6c
-
-peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
- endpoint: 157.230.177.54:51820
- allowed ips: 0.0.0.0/0, ::/0
- latest handshake: 1 second ago
- transfer: 1.82 KiB received, 2.89 KiB sent
-
-## Make sure we can ping the outside world
-mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
-PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
-64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
-^C
---- 1.1.1.1 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
-rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms
-
-## Verify our traffic is actually going over the tunnel.
-$ curl icanhazip.com
-157.230.177.54
-
-
-
-
-
-
-
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
-
-
-
-
$ ssh -i tf-digitalocean root@10.66.66.1
-The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
-ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
-This host key is known by the following other names/addresses:
- ~/.ssh/known_hosts:130: [hashed name]
-Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
-Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:32:12 UTC 2024
-
- System load: 0.02 Processes: 109
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.177.54
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-73 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-root@wireguard:~#
-
-
-
-
-
Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
-]]>
-
-
-
-
-
- Hardening your web server by only allowing traffic from Cloudflare
- /hardening-your-web-server-by-only-allowing-traffic-from-cloudflare/
-
-
- Thu, 01 Aug 2024 21:02:29 +0000
-
-
-
-
-
- /?p=607
-
-
- TDLR:
-
-
-
-
If you just want the code you can find a convenient script on my Gitea server here. This version has been slightly modified so that it will work on more systems.
-
-
-
-
-
-
-
-
- I have been using Cloudflare for several years for both personal and professional projects. The free plan has fairly generous limits and it’s a great way to clear out some low-hanging fruit and improve the security of your application. If you’re not familiar with how it works, Cloudflare basically has two modes for DNS records: DNS Only and Proxied. The only way to get the advantages of Cloudflare is to use Proxied mode. Cloudflare has some great documentation on how all of their services work, but essentially you point your domain to Cloudflare and Cloudflare provisions their network of proxy servers to handle requests for your domain.
-
-
-
-
These proxy servers allow you to secure your domain by implementing things like WAF and Rate limiting. You can also enforce HTTPS only mode and modify/add custom request/response headers. You will notice that once you turn this mode on, your webserver will log requests as coming from Cloudflare IP addresses. They have great documentation on how to configure your webserver to restore these IP addresses in your log files.
-
-
-
-
This is a very easy first step toward securing your origin server, but it still allows attackers to access your server directly if they know its IP address. We can take our security one step further by only allowing requests from IP addresses originating within Cloudflare, meaning we will only accept requests coming from a Cloudflare proxy server. The setup is fairly straightforward. In this example I will be using a Linux server.
-
-
-
-
We can achieve this pretty easily because Cloudflare provides a sort of API where they regularly publish their network blocks. Here is the basic script we will use:
-
-
-
-
for ip in $(curl https://www.cloudflare.com/ips-v4/); do iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-for ip in $(curl https://www.cloudflare.com/ips-v6/); do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-
-
-
-
-
This will pull down the latest network addresses from Cloudflare and create iptables rules for us. These IP addresses do change from time to time so you may want to put this in a script and run it via a cronjob to have it update on a regular basis.
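For example, a root crontab entry like the following would refresh the rules weekly (the script path here is hypothetical):

```
# Refresh the Cloudflare allowlist every Sunday at 3am
0 3 * * 0 /usr/local/bin/update-cloudflare-ips.sh
```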
-
-
-
-
Now with this in place, here are the results:
-
-
-
-
-
-
-
-
This should cut down on some of the noise from attackers and script kiddies trying to find holes in your security.
-]]>
-
-
-
-
-
- Traefik 3.0 service discovery in Docker Swarm mode
- /traefik-3-0-service-discovery-in-docker-swarm-mode/
-
-
- Sat, 11 May 2024 13:44:01 +0000
-
-
-
-
-
-
- /?p=564
-
-
- I recently decided to set up a Docker swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s it can be a good choice, however it isn’t a very popular or widespread solution these days.
-
-
-
-
Anyway, I set up a VM scaling set in Azure with 10 Ubuntu 22.04 vms and wrote some Ansible scripts to automate the process of installing Docker on each machine as well as setting 3 up as swarm managers and the other 7 as worker nodes. I ssh’d into the primary manager node and created a docker compose file for launching an observability stack.
Everything deploys properly but when I view the Traefik logs there is an issue with all the services except for the grafana service. I get errors like this:
-
-
-
-
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
-
-
-
-
-
It drove me crazy for about half a day. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track down and debug. Ultimately, after lots of searching and banging my head against a wall, I found the answer in the Traefik docs and thought I would share it here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.
Expand that first section and you will see the solution:
-
-
-
-
-
-
-
-
It turns out I just needed to update my docker-compose.yml and nest the labels under a deploy section, redeploy and everything was working as expected.
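A sketch of the fix follows; the service name, hostname, and port here are assumptions based on my observability stack. In Swarm mode Traefik reads labels from the service’s deploy section, so they need to be nested like this:

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    networks:
      - traefik
    deploy:
      # Swarm mode: labels must live under deploy,
      # not at the service level as in plain Docker
      labels:
        - traefik.enable=true
        - traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)
        # Fixes the "port is missing" error from the logs above
        - traefik.http.services.prometheus.loadbalancer.server.port=9090
```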
-]]>
-
-
-
-
-
- Stop all running containers with Docker
- /stop-all-running-containers-with-docker/
-
-
- Wed, 03 Apr 2024 13:12:41 +0000
-
-
-
-
- /?p=557
-
-
- These are some handy snippets I use on a regular basis when managing containers. I have one server in particular that can sometimes end up with 50 to 100 orphaned containers for various reasons. The easiest/quickest way to stop all of them is to do something like this:
-
-
-
-
docker container stop $(docker container ps -q)
-
-
-
-
Let me break this down in case you are not familiar with the syntax. Basically we are passing the output of docker container ps -q into docker container stop. This works because the stop command can take a list of container ids which is what we get when passing the -q flag to docker container ps.
-]]>
-
-
-
-
-
- Roll your own authenticator app with KeystoneJS and React – pt 3
- /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-3/
-
-
- Wed, 17 Jan 2024 17:11:00 +0000
-
-
-
-
-
- /?p=546
-
-
- In our previous post we got to the point of displaying an OTP in our card component. Now it is time to refactor a bit and implement a countdown so we can see when the token will expire. For now we will add this logic to our Card component. In order to build this countdown timer we first need to understand how the TOTP counter is calculated.
-
-
-
-
In other words, we know that a TOTP token is derived from a secret key and the current time. If we dig into the spec we can find that time is a reference to Unix epoch time, or the number of seconds that have elapsed since January 1st 1970. For a little more clarification check out this Stackexchange article.
-
-
-
-
Since the time is based on epoch time, we also need to know that most TOTP tokens have a validity period of either 30 or 60 seconds. 30 seconds is the most common standard, so we will use that for our implementation. If we put all that together, then all we really need is 2 variables:
-
-
-
-
-
Number of seconds since epoch
-
-
-
-
How many seconds until this token expires
-
-
-
-
-
The first one is easy:
-
-
-
-
let secondsSinceEpoch;
-secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
-
-// This gives us a time like so: 1710338609
-
-
-
-
For the second one we need to do a little math, but it’s pretty straightforward: take secondsSinceEpoch modulo our 30-second period and subtract the remainder from 30. Here is what that looks like:
-
-
-
-
let secondsSinceEpoch;
-let secondsRemaining;
-const period = 30;
-
-secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
-secondsRemaining = period - (secondsSinceEpoch % period);
-
-
-
-
Now let’s put all of that together into a function that we can test out to make sure we are getting the results we expect.
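Here is one possible version (the function names are my own): a pure helper that computes the remaining seconds, plus a setInterval loop that drives the logging and stops at 1 second for demonstration.

```typescript
const period = 30;

// How many seconds remain in the current 30-second TOTP window.
function getSecondsRemaining(nowMs: number = Date.now()): number {
  const secondsSinceEpoch = Math.ceil(nowMs / 1000) - 1;
  return period - (secondsSinceEpoch % period);
}

// Log the countdown once per second, stopping at 1 for demonstration.
// In the real app this would keep running forever.
function startCountdown(): void {
  const timer = setInterval(() => {
    const secondsRemaining = getSecondsRemaining();
    console.log(secondsRemaining);
    if (secondsRemaining === 1) clearInterval(timer);
  }, 1000);
}
```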
Running this function should give you output similar to the following. In this example we are stopping the timer once it hits 1 second just to show that everything is working as we expect. In our application we will want this time to keep going forever:
Here is a JSfiddle that shows it in action: https://jsfiddle.net/561vg3k7/
-
-
-
-
We can go ahead and add this function to our Card component and get it wired up. I am going to skip ahead a bit and add a progress bar to our card that is synced with our countdown timer and changes colors as it drops below 10 seconds. For now we will be using a setInterval function to accomplish this.
-
-
-
-
Here is what my updated src/Components/Card.tsx looks like:
Pretty straightforward. I also updated my src/index.css and added a style for our progress bar:
-
-
-
-
.progressBar {
- height: 10px;
- position: absolute;
- top: 0;
- left: 0;
- right: inherit;
-}
-/* Also be sure to add position: relative to .card, which is the parent of this. */
-
-
-
-
Here is what it all looks like in action:
-
-
-
-
-
-
-
-
If you look closely you will notice a few interesting things. First, the color of the progress bar changes from green to red. This is handled by our timerStyle variable. That part is pretty simple: if the timer is less than 10 seconds we set the background color to salmon, otherwise we use light green. The width of the progress bar is controlled by `${100 - (100 / 30 * (30 - secondsRemaining))}%`
-
-
-
-
The other interesting thing to note is that when the timer runs out it automatically restarts at 30 seconds with a new OTP. This is due to the fact that this component is re-rendering every 1/4 second and every time it re-renders it is running the entire function body including: const { otp } = TOTP.generate(token.token);.
-
-
-
-
This is to be expected since we are using React and we are just using a setInterval. It may be a little unexpected though if you aren’t as familiar with React render cycles. For our purposes this will work just fine for now. Stay tuned for pt 4 of this series where we wire up the backend API.
-]]>
-
-
-
-
-
-
- Roll your own authenticator app with KeystoneJS and React – pt 2
- /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-2/
-
-
- Thu, 11 Jan 2024 01:41:00 +0000
-
-
-
-
-
-
-
- /?p=539
-
-
- In part 1 of this series we built out a basic backend using KeystoneJS. In this part we will go ahead and start a new React frontend that will interact with our backend. We will be using Vite. Let’s get started. Make sure you are in the authenticator folder and run the following:
-
-
-
-
$ yarn create vite@latest
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-vite@5.2.2" with binaries:
- - create-vite
- - cva
-✔ Project name: … frontend
-✔ Select a framework: › React
-✔ Select a variant: › TypeScript
-
-Scaffolding project in /home/mikeconrad/projects/authenticator/frontend...
-
-Done. Now run:
-
- cd frontend
- yarn
- yarn dev
-
-Done in 10.20s.
-
-
-
-
Let’s go ahead and go into our frontend directory and get started:
-
-
-
-
$ cd frontend
-$ yarn
-yarn install v1.22.21
-info No lockfile found.
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-success Saved lockfile.
-Done in 10.21s.
-
-$ yarn dev
-yarn run v1.22.21
-$ vite
-Port 5173 is in use, trying another one...
-
- VITE v5.1.6 ready in 218 ms
-
- ➜ Local: http://localhost:5174/
- ➜ Network: use --host to expose
- ➜ press h + enter to show help
-
-
-
-
-
Next go ahead and open the project up in your IDE of choice. I prefer VSCodium:
-
-
-
-
codium frontend
-
-
-
-
Go ahead and open up src/App.tsx and remove all the boilerplate so it looks like this:
Now you should have something that looks like this:
-
-
-
-
-
-
-
-
Alright, we have some of the boring stuff out of the way, now let’s start making some magic. If you aren’t familiar with how TOTP tokens work, I would encourage you to read the RFC for a detailed explanation. In short, an algorithm generates a one-time password using the current time as a source of uniqueness along with a secret key.
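To demystify it a little, the whole algorithm fits in a few dozen lines. Here is a rough sketch of RFC 4226 dynamic truncation plus the RFC 6238 time counter, using Node’s built-in crypto. This is just for illustration; it skips input validation and only handles SHA-1 with 6 digits:

```typescript
import { createHmac } from "node:crypto";

// Decode a base32 secret (RFC 4648) into a Buffer.
function base32Decode(input: string): Buffer {
  const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
  let bits = 0;
  let value = 0;
  const bytes: number[] = [];
  for (const ch of input.replace(/=+$/, "").toUpperCase()) {
    const idx = alphabet.indexOf(ch);
    if (idx === -1) continue; // skip padding/whitespace
    value = (value << 5) | idx;
    bits += 5;
    if (bits >= 8) {
      bytes.push((value >>> (bits - 8)) & 0xff);
      bits -= 8;
    }
  }
  return Buffer.from(bytes);
}

// Generate a 6-digit TOTP for a base32 secret at the given Unix time (seconds).
export function totp(
  secret: string,
  unixTime: number = Math.floor(Date.now() / 1000),
  step = 30
): string {
  // The "moving factor" is just the number of 30-second steps since the epoch.
  const counter = Math.floor(unixTime / step);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));
  const hmac = createHmac("sha1", base32Decode(secret)).update(msg).digest();
  // RFC 4226 dynamic truncation: low nibble of last byte picks an offset.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code =
    ((hmac[offset] & 0x7f) << 24) |
    (hmac[offset + 1] << 16) |
    (hmac[offset + 2] << 8) |
    hmac[offset + 3];
  return (code % 1_000_000).toString().padStart(6, "0");
}
```

Running it against the RFC 6238 test secret (`GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`, the base32 form of the ASCII key `12345678901234567890`) reproduces the published test vectors.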
-
-
-
-
If we really wanted to we could implement this algorithm ourselves but thankfully there are some really simple libraries that do it for us. For our project we will be using one called totp-generator. Let’s go ahead and install it and check it out:
-
-
-
-
$ yarn add totp-generator
-
-
-
-
Now let’s add it to our card component and see what happens. Using it is really simple: we just need to import it and pass it our secret key:
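The code block here was lost in the export, but based on how the library is used elsewhere in this series (`const { otp } = TOTP.generate(token.token);`), the core of the card component looks something like this. The `Token` type and markup are assumptions for illustration:

```typescript
import { TOTP } from "totp-generator";

// Shape of a token record from our backend (assumed for this sketch).
interface Token {
  issuer: string;
  account: string;
  token: string; // the base32 secret
}

export function Card({ token }: { token: Token }) {
  // totp-generator does the RFC 6238 math for us.
  const { otp } = TOTP.generate(token.token);
  return (
    <div>
      <h3>{token.issuer}</h3>
      <span>{token.account}</span>
      <strong>{otp}</strong>
    </div>
  );
}
```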
Now save and go back to your browser and you should see that our secret keys are now being displayed as tokens:
-
-
-
-
-
-
-
-
That is pretty cool, the only problem is you need to refresh the page to refresh the token. We will take care of that in part 3 of this series as well as handling fetching tokens from our backend.
-]]>
-
-
-
-
-
- Roll your own authenticator app with KeystoneJS and React
- /roll-your-own-authenticator-app-with-keystonejs-and-react/
-
-
- Thu, 04 Jan 2024 00:59:49 +0000
-
-
-
-
-
-
-
- /?p=533
-
-
- In this series of articles we are going to be building an authenticator app using KeystoneJS for the backend and React for the frontend. The concept is pretty simple and yes there are a bunch out there already but I recently had a need to learn some of the ins and outs of TOTP tokens and thought this project would be a fun idea. Let’s get started.
-
-
-
-
Step 1: Init keystone app
-
-
-
-
Open up a terminal and create a blank keystone project. We are going to call our app authenticator to keep things simple.
-
-
-
-
$ yarn create keystone-app
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-keystone-app@9.0.1" with binaries:
- - create-keystone-app
-[###################################################################################################################################################################################] 273/273
-✨ You're about to generate a project using Keystone 6 packages.
-
-✔ What directory should create-keystone-app generate your app into? · authenticator
-
-⠸ Installing dependencies with yarn. This may take a few minutes.
-⚠ Failed to install with yarn.
-✔ Installed dependencies with npm.
-
-
-🎉 Keystone created a starter project in: authenticator
-
- To launch your app, run:
-
- - cd authenticator
- - npm run dev
-
- Next steps:
-
- - Read authenticator/README.md for additional getting started details.
- - Edit authenticator/keystone.ts to customize your app.
- - Open the Admin UI
- - Open the Graphql API
- - Read the docs
- - Star Keystone on GitHub
-
-Done in 84.06s.
-
-
-
-
-
After a few minutes you should be ready to go. Ignore the error about yarn not being able to install dependencies; it’s an issue with my setup. Next go ahead and open up the project folder with your editor of choice. I use VSCodium:
-
-
-
-
codium authenticator
-
-
-
-
Let’s go ahead and remove all the comments from the schema.ts file and clean it up some:
-
-
-
-
sed -i '/\/\//d' schema.ts
-
-
-
-
Also, go ahead and delete the Post and Tag lists as we won’t be using them. Our cleaned up schema.ts should look like this:
Next we will define the schema for our tokens. We will need 3 basic things to start with:
-
-
-
-
-
Issuer
-
-
-
-
Secret Key
-
-
-
-
Account
-
-
-
-
-
The only thing that really matters for generating a TOTP is actually the secret key. The other two fields are mostly for identifying and differentiating tokens. Go ahead and add the following to our schema.ts underneath the User list:
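The code block for the Token list was lost in this export; a sketch based on the three fields above might look like the following. The exact field names and validation are assumptions, and the Keystone 6 imports for `list`, `text`, and `allowAll` are assumed to already be at the top of schema.ts:

```typescript
// Hypothetical Token list, added underneath the User list in schema.ts.
Token: list({
  access: allowAll,
  fields: {
    issuer: text({ validation: { isRequired: true } }),
    // The secret key is the only field the TOTP algorithm actually needs.
    secret: text({ validation: { isRequired: true } }),
    account: text({ validation: { isRequired: true } }),
  },
}),
```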
Now that we have defined our Token, we should probably link it to a user. KeystoneJS makes this really easy: we simply need to add a relationship field to our User list. Add the following field to the User list:
-
-
-
-
tokens: relationship({ ref:'Token', many: true })
-
-
-
-
We are defining a tokens field on the User list and tying it to our Token list. We are also passing many: true saying that a user can have one or more tokens. Now that we have the basics set up, let’s go ahead and spin up our app and see what we have:
-
-
-
-
$ yarn dev
-yarn run v1.22.21
-$ keystone dev
-✨ Starting Keystone
-⭐ Server listening on :3000 (http://localhost:3000/)
-⭐ GraphQL API available at /api/graphql
-✨ Generating GraphQL and Prisma schemas
-✨ The database is already in sync with the Prisma schema
-✨ Connecting to the database
-✨ Creating server
-✅ GraphQL API ready
-✨ Generating Admin UI code
-✨ Preparing Admin UI app
-✅ Admin UI ready
-
-
-
-
-
Our server should be running on localhost:3000 so let’s check it out! The first time we open it up we will be greeted with the initialization screen. Go ahead and create an account to login:
-
-
-
-
-
-
-
-
Once you login you should see a dashboard similar to this:
-
-
-
-
-
-
-
-
You can see we have Users and Tokens that we can manage. The beauty of KeystoneJS is that you get full CRUD functionality out of the box just by defining our schema! Go ahead and click on Tokens to add a token:
-
-
-
-
-
-
-
-
For this example I just entered some random text. This is enough to start testing out our TOTP functionality. Click ‘Create Token’ and you should see a list displaying existing tokens:
-
-
-
-
-
-
-
-
We are now ready to jump into the frontend. Stay tuned for part 2 of this series.
-]]>
-
-
-
-
-
- Hoots Wings
- /hoots-wings/
-
-
- Wed, 12 Jul 2023 15:32:44 +0000
-
-
- /?p=495
-
-
- While working for Morrison I had the pleasure of building a website for Hoots Wings. The CMS was Perch, and the frontend was mostly HTML, CSS, PHP and JavaScript; however, I also built out a custom store locator using NodeJS and VueJS.
-
-
-
-
-
-
-
-
I was the sole frontend developer responsible for taking the designs from SketchUp and translating them to the site you see now. Most of the blocks and templates are built using a mix of PHP and HTML/SCSS. There was also some JavaScript for things like getting the users location and rendering popups/modals.
-
-
-
-
The store locator was a separate piece that was built in Vue2.0 with a NodeJS backend. For the backend I used KeystoneJS to hold all of the store information. There was also some custom development that was done in order to sync the stores added via the CMS with Yext and vice versa.
-
-
-
-
-
-
-
-
For that piece I ended up having to write a custom integration in Perch that would connect to the NodeJS backend and pull the stores but also make sure that those were in sync with Yext. This required diving into the Yext API some and examining a similar integration that we had for another client site.
-
-
-
-
Unfortunately I don’t have any screen grabs of the admin side of things since that is proprietary but the system I built allowed a site admin to go in and add/edit store locations that would show up on the site and also show up in Yext with the appropriate information.
-
-
-
-
Screenshots
-
-
-
-
Here are some full screenshots of the site.
-
-
-
-
Homepage
-
-
-
-
-
-
-
-
-
-
-
-
Menu Page
-
-
-
-
-
-
-
-
-
-
-
-
Locations Page
-
-
-
-
-]]>
-
-
-
-
-
- Hilger Grading Portal
- /hilger-grading-portal/
-
-
- Sun, 21 May 2023 20:07:50 +0000
-
-
- /?p=509
-
-
- Back around 2014 I took on my first freelance development project for a Homeschool Co-op here in Chattanooga called Hilger Higher Learning. The problem they were trying to solve involved managing grades and report cards for their students. In the past, they had a developer build a rudimentary web application that allowed them to enter grades for students; however, it lacked any sort of concurrency control, meaning that if two teachers were making changes to the same student at the same time, teacher B’s changes would overwrite teacher A’s. This was obviously a huge headache.
-
-
-
-
I built out the first version of the app using PHP, HTML, CSS and DataTables, with lots of jQuery sprinkled in. It included custom functionality that allowed them to compile and print the report cards for all students with the simple click of a button. It was a game changer for them and streamlined the process significantly.
-
-
-
-
That system was in production for 5 years or so with minimal updates and maintenance. I recently rebuilt it using React and ChakraUI on the frontend and KeystoneJS on the backend. I also modernized the deployment by building Docker images for the frontend/backend. I actually ended up keeping parts of it in PHP because I couldn’t find a JavaScript library that would solve the challenges I had. Here are some screenshots of it in action:
-
-
-
-
-
-
-
-
This is the page listing all teachers in the system and whether or not they have admin privileges. Any admin user can grant other admin users this privilege. There is also a button to send the teacher a password reset email (via Postmark API integration) and an option that allows admin users to impersonate other users for troubleshooting and diagnostic purposes.
-
-
-
-
-
-
-
-
-
-
-
-
The data is all coming from the KeystoneJS backend GraphQL API. I am using urql for fetching the data and handling mutations. This is the page that displays students. It is filterable and searchable. Teachers also have the ability to mark a student as active or inactive for the semester as well as delete them from the system.
-
-
-
-
-
-
-
-
Clicking on a student takes the teacher/admin to an edit course screen where they can add and remove courses for each student. A teacher can add as many courses as they need. If multiple teachers have added courses for this student, the user will only see the courses they have entered.
-
-
-
-
-
-
-
-
There is another page that allows admin users to view and manage all of the parents in the system. It allows them to easily send a password reset email to the parents as well as to view the parent portal.
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/ssh/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/ssh/feed/index.xml
deleted file mode 100644
index de6ee7a..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/ssh/feed/index.xml
+++ /dev/null
@@ -1,768 +0,0 @@
-
-
-
- SSH – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 25 Sep 2024 13:56:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- SSH – hackanooga
- /
- 32
- 32
-
-
- Standing up a Wireguard VPN
- /standing-up-a-wireguard-vpn/
-
-
- Wed, 25 Sep 2024 13:56:04 +0000
-
-
-
-
-
-
-
-
- /?p=619
-
-
- VPNs have traditionally been slow, complex, and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons why you should want to use Wireguard in this article; instead I will focus on just how easy it is to set up and configure.
-
-
-
-
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
-
-
-
-
Create Droplet with Terraform
-
-
-
-
I have written some very basic Terraform to get us started. It just creates a droplet with a predefined SSH key and a setup script passed as user data. When the droplet gets created, the script will be copied to the instance and automatically executed, and after a few minutes everything should be ready to go. Feel free to clone the repo above, or if you would rather do everything by hand, that’s great too. I will assume that you are doing everything by hand; the process of deploying from the repo should be pretty self-explanatory. My reasoning for doing it this way is that I wanted to better understand the process.
-
-
-
-
First create our main.tf with the following contents:
-
-
-
-
# main.tf
-# Attach an SSH key to our droplet
-resource "digitalocean_ssh_key" "default" {
- name = "Terraform Example"
- public_key = file("./tf-digitalocean.pub")
-}
-
-# Create a new Web Droplet in the nyc1 region
-resource "digitalocean_droplet" "web" {
- image = "ubuntu-22-04-x64"
- name = "wireguard"
- region = "nyc1"
- size = "s-2vcpu-4gb"
- ssh_keys = [digitalocean_ssh_key.default.fingerprint]
- user_data = file("setup.sh")
-}
-
-output "droplet_output" {
- value = digitalocean_droplet.web.ipv4_address
-}
-
-
-
-
Next create a terraform.tf file in the same directory with the following contents:
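The contents of terraform.tf didn’t survive this export; a minimal version just pins the official DigitalOcean provider (the version constraint here is an assumption):

```hcl
# terraform.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# With no token set here, the provider reads it from the
# DIGITALOCEAN_TOKEN environment variable.
provider "digitalocean" {}
```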
Now we are ready to initialize our Terraform and apply it:
-
-
-
-
$ terraform init
-$ terraform apply
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- # digitalocean_droplet.web will be created
- + resource "digitalocean_droplet" "web" {
- + backups = false
- + created_at = (known after apply)
- + disk = (known after apply)
- + graceful_shutdown = false
- + id = (known after apply)
- + image = "ubuntu-22-04-x64"
- + ipv4_address = (known after apply)
- + ipv4_address_private = (known after apply)
- + ipv6 = false
- + ipv6_address = (known after apply)
- + locked = (known after apply)
- + memory = (known after apply)
- + monitoring = false
- + name = "wireguard"
- + price_hourly = (known after apply)
- + price_monthly = (known after apply)
- + private_networking = (known after apply)
- + region = "nyc1"
- + resize_disk = true
- + size = "s-2vcpu-4gb"
- + ssh_keys = (known after apply)
- + status = (known after apply)
- + urn = (known after apply)
- + user_data = "69d130f386b262b136863be5fcffc32bff055ac0"
- + vcpus = (known after apply)
- + volume_ids = (known after apply)
- + vpc_uuid = (known after apply)
- }
-
- # digitalocean_ssh_key.default will be created
- + resource "digitalocean_ssh_key" "default" {
- + fingerprint = (known after apply)
- + id = (known after apply)
- + name = "Terraform Example"
- + public_key = <<-EOT
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
- EOT
- }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-Changes to Outputs:
- + droplet_output = (known after apply)
-
-Do you want to perform these actions?
- Terraform will perform the actions described above.
- Only 'yes' will be accepted to approve.
-
- Enter a value: yes
-
-digitalocean_ssh_key.default: Creating...
-digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
-digitalocean_droplet.web: Creating...
-digitalocean_droplet.web: Still creating... [10s elapsed]
-digitalocean_droplet.web: Still creating... [20s elapsed]
-digitalocean_droplet.web: Still creating... [30s elapsed]
-digitalocean_droplet.web: Creation complete after 31s [id=447469336]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-droplet_output = "159.223.113.207"
-
-
-
-
-
All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.
-
-
-
-
#!/usr/bin/env sh
-set -e
-set -u
-# Set the listen port used by Wireguard, this is the default so feel free to change it.
-LISTENPORT=51820
-CONFIG_DIR=/root/wireguard-conf
-umask 077
-mkdir -p $CONFIG_DIR/client
-
-# Install wireguard
-apt update && apt install -y wireguard
-
-# Generate public/private key for the "server".
-wg genkey > $CONFIG_DIR/privatekey
-wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey
-
-# Generate public/private key for the "client"
-wg genkey > $CONFIG_DIR/client/privatekey
-wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey
-
-
-# Generate server config
-echo "[Interface]
-Address = 10.66.66.1/24,fd42:42:42::1/64
-ListenPort = $LISTENPORT
-PrivateKey = $(cat $CONFIG_DIR/privatekey)
-
-### Client config
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/client/publickey)
-AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
-" > /etc/wireguard/do.conf
-
-
-# Generate client config. This will need to be copied to your machine.
-echo "[Interface]
-PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
-Address = 10.66.66.2/32,fd42:42:42::2/128
-DNS = 1.1.1.1,1.0.0.1
-
-[Peer]
-PublicKey = $(cat publickey)
-Endpoint = $(curl icanhazip.com):$LISTENPORT
-AllowedIPs = 0.0.0.0/0,::/0
-" > client-config.conf
-
-wg-quick up do
-
-# Add iptables rules to forward internet traffic through this box
-# We are assuming our Wireguard interface is called do and our
-# primary public facing interface is called eth0.
-
-iptables -I INPUT -p udp --dport 51820 -j ACCEPT
-iptables -I FORWARD -i eth0 -o do -j ACCEPT
-iptables -I FORWARD -i do -j ACCEPT
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-ip6tables -I FORWARD -i do -j ACCEPT
-ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
-# Enable routing on the server
-echo "net.ipv4.ip_forward = 1
- net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
-sysctl --system
-
-
-
-
As you can see, it is pretty straightforward. All you really need to do is:
-
-
-
-
On the “server” side:
-
-
-
-
-
Generate a private key and derive a public key from it for both the “server” and the “client”.
-
-
-
-
Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
-
-
-
-
The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.
-
-
-
-
-
On the “client” side:
-
-
-
-
-
Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
-
-
-
-
The client needs to know which private key to use to secure the interface.
-
-
-
-
It also needs to know the public key of the server, the public IP address or hostname of the “server” it is connecting to, and the port it is listening on.
-
-
-
-
Finally it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic but you could restrict this as you see fit.
-
-
-
-
-
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:
-
-
-
-
## Make sure you have Wireguard installed on your local machine as well.
-## https://wireguard.com/install
-
-## Copy the client config to our local machine and move it to our wireguard directory.
-$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf| sudo tee /etc/wireguard/do.conf
-
-
-
-
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
-
-
-
-
$ ssh -i tf-digitalocean root@159.223.113.207
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:19:02 UTC 2024
-
- System load: 0.03 Processes: 113
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.221.196
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-70 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
-root@wireguard:~#
-
-
-
-
-
-
Awesome! We are connected. Now let’s check the wireguard interface using the wg command. If our config was correct, we should see an interface line and one peer line, like so. If the peer line is missing, something is wrong with the configuration, most likely a mismatch between the public/private keys:
-
-
-
-
root@wireguard:~# wg
-interface: do
- public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
- private key: (hidden)
- listening port: 51820
-
-peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
- allowed ips: 10.66.66.2/32, fd42:42:42::2/128
-root@wireguard:~#
-
-
-
-
So now we should be ready to go! On your local machine go ahead and try it out:
-
-
-
-
## Start the interface with wg-quick up [interface_name]
-$ sudo wg-quick up do
-[sudo] password for mikeconrad:
-[#] ip link add do type wireguard
-[#] wg setconf do /dev/fd/63
-[#] ip -4 address add 10.66.66.2/32 dev do
-[#] ip -6 address add fd42:42:42::2/128 dev do
-[#] ip link set mtu 1420 up dev do
-[#] resolvconf -a do -m 0 -x
-[#] wg set do fwmark 51820
-[#] ip -6 route add ::/0 dev do table 51820
-[#] ip -6 rule add not fwmark 51820 table 51820
-[#] ip -6 rule add table main suppress_prefixlength 0
-[#] ip6tables-restore -n
-[#] ip -4 route add 0.0.0.0/0 dev do table 51820
-[#] ip -4 rule add not fwmark 51820 table 51820
-[#] ip -4 rule add table main suppress_prefixlength 0
-[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
-[#] iptables-restore -n
-
-## Check our config
-$ sudo wg
-interface: do
- public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
- private key: (hidden)
- listening port: 34596
- fwmark: 0xca6c
-
-peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
- endpoint: 157.230.177.54:51820
- allowed ips: 0.0.0.0/0, ::/0
- latest handshake: 1 second ago
- transfer: 1.82 KiB received, 2.89 KiB sent
-
-## Make sure we can ping the outside world
-mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
-PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
-64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
-^C
---- 1.1.1.1 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
-rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms
-
-## Verify our traffic is actually going over the tunnel.
-$ curl icanhazip.com
-157.230.177.54
-
-
-
-
-
-
-
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
-
-
-
-
$ ssh -i tf-digitalocean root@10.66.66.1
-The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
-ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
-This host key is known by the following other names/addresses:
- ~/.ssh/known_hosts:130: [hashed name]
-Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
-Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:32:12 UTC 2024
-
- System load: 0.02 Processes: 109
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.177.54
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-73 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-root@wireguard:~#
-
-
-
-
-
Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
-]]>
-
-
-
-
-
- Fun with bots – SSH tarpitting
- /fun-with-bots-ssh-tarpitting/
-
-
- Mon, 24 Jun 2024 13:37:43 +0000
-
-
-
-
-
-
- /?p=576
-
-
- For those of you who aren’t familiar with the concept of a network tarpit it is a fairly simple concept. Wikipedia defines it like this:
-
-
-
-
-
A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against a computer worm, and the idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a tar pit, in which animals can get bogged down and slowly sink under the surface, like in a swamp.
If you run any sort of service on the internet then you know that as soon as your server has a public IP address and open ports, there are scanners and bots trying to get in constantly. If you take decent steps towards security it is little more than an annoyance, but annoying nonetheless. One day when I had some extra time on my hands, I started researching ways to mess with the bots trying to scan/attack my site.
-
-
-
-
It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting ssh connections is endlessh. The way it works is actually pretty simple. The SSH RFC states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the line that allows this behavior:
-
-
-
-
-
The server MAY send other lines of data before sending the version
- string. Each line SHOULD be terminated by a Carriage Return and Line
- Feed. Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
- in ISO-10646 UTF-8 [RFC3629] (language is not specified). Clients
- MUST be able to process such lines. Such lines MAY be silently
- ignored, or MAY be displayed to the client user. If they are
- displayed, control character filtering, as discussed in [SSH-ARCH],
- SHOULD be used. The primary use of this feature is to allow TCP-
- wrappers to display an error message before disconnecting.
-SSH RFC
-
-
-
-
Essentially this means that there is no limit to the amount of data that a server can send back to the client, and the client must be able to wait for and process all of this data. Now let’s see it in action.
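To make the trick concrete: endlessh (written in C) just drips out an endless stream of short banner lines, none of which may begin with "SSH-", so the client never sees a version string and waits forever. A minimal TypeScript sketch of the banner generation (function name and defaults are my own, mirroring endlessh’s MaxLineLength of 32):

```typescript
import { randomInt } from "node:crypto";

// Produce one random banner line of printable ASCII, CRLF-terminated.
// A line beginning with "SSH-" would be treated as the version string
// and end the stall, so we make sure that never happens.
export function randomBannerLine(maxLen = 32): string {
  const len = 3 + randomInt(maxLen - 2); // body length in [3, maxLen]
  let line = "";
  for (let i = 0; i < len; i++) {
    line += String.fromCharCode(32 + randomInt(95)); // printable ASCII
  }
  if (line.startsWith("SSH-")) line = "x" + line.slice(1);
  return line + "\r\n";
}
```

A real tarpit would write one of these to each connected socket every ten seconds or so, which is exactly what endlessh’s Delay setting controls.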
By default endlessh listens on port 2222. I have a port forward set up that forwards all SSH traffic from port 22 to 2222. Now try to connect via ssh:
-
-
-
-
ssh -vvv localhost -p 2222
-
-
-
-
If you wait a few seconds you will see the server start slowly trickling out random banner lines (it never sends the version string, which is what keeps the client waiting):
-
-
-
-
$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
-2024-06-24T13:05:59.488Z Delay 10000
-2024-06-24T13:05:59.488Z MaxLineLength 32
-2024-06-24T13:05:59.488Z MaxClients 4096
-2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
-2024-06-24T13:05:59.488Z socket() = 3
-2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
-2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
-2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
-2024-06-24T13:05:59.488Z listen(3) = 0
-2024-06-24T13:05:59.488Z poll(1, -1)
-ssh -vvv localhost -p 2222
-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
-debug1: Reading configuration data /home/mikeconrad/.ssh/config
-debug1: Reading configuration data /etc/ssh/ssh_config
-debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
-debug1: /etc/ssh/ssh_config line 21: Applying options for *
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
-debug2: resolving "localhost" port 2222
-debug3: resolve_host: lookup localhost:2222
-debug3: ssh_connect_direct: entering
-debug1: Connecting to localhost [::1] port 2222.
-debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
-debug1: Connection established.
-2024-06-24T13:06:08.635Z = 1
-2024-06-24T13:06:08.635Z accept() = 4
-2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
-2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
-2024-06-24T13:06:08.635Z poll(1, 10000)
-debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
-debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
-debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
-2024-06-24T13:06:18.684Z = 0
-2024-06-24T13:06:18.684Z write(4) = 3
-2024-06-24T13:06:18.684Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 0: V
-2024-06-24T13:06:28.734Z = 0
-2024-06-24T13:06:28.734Z write(4) = 25
-2024-06-24T13:06:28.734Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]
-
-
-
-
-
-
This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program fakessh. While fakessh isn’t technically a tarpit (it’s more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. Essentially it allows any username/password combination to connect and gives them a fake shell prompt. There is no actual access to any file system, and all of their commands basically return gibberish.
-
-
-
-
Here are some logs from an actual server of mine running fakessh:
Those are mostly connections and disconnections. They probably connected, realized it was fake, and disconnected. There are a couple that tried to execute some commands though:
Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:
-
-
-
-
-
Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for Bandwidth Alliance partners, we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/teamcity/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/teamcity/feed/index.xml
deleted file mode 100644
index 66c3694..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/teamcity/feed/index.xml
+++ /dev/null
@@ -1,285 +0,0 @@
-
-
-
- TeamCity – hackanooga
-
- /
- Confessions of a homelab hacker
- Tue, 12 Mar 2024 19:11:24 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- TeamCity – hackanooga
- /
- 32
- 32
-
-
- Automating CI/CD with TeamCity and Ansible
- /automating-ci-cd-with-teamcity-ansible/
-
-
- Mon, 11 Mar 2024 13:37:47 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=393
-
-
- In part one of this series we are going to explore a CI/CD option you may not be familiar with but should definitely be on your radar. I used Jetbrains TeamCity for several months at my last company and really enjoyed my time with it. A couple of the things I like most about it are:
-
-
-
-
-
Ability to declare global variables and have them be passed down to all projects
-
-
-
-
Ability to declare variables that are made up of other variables
-
-
-
-
-
I like to use private or self-hosted Docker registries for a lot of my projects, and one of the pain points I have had with some other solutions (well, mostly Bitbucket) is that they don’t integrate well with these private registries; when I run into a situation where I am pushing an image to or pulling an image from a private registry, it gets a little messy. TeamCity is nice in that I can add a connection to my private registry in my root project and then simply add that as a build feature to any projects that may need it. Essentially, now I only have one place where I have to keep those credentials and manage that connection.
-
-
-
-
-
-
-
-
Another reason I love it is that you can create really powerful build templates that you can reuse. This became especially valuable when we were trying to standardize our build processes. For example, most of the apps we build are .NET backends and React frontends. We built Docker images for every project and pushed them to our private registry. TeamCity gave us the ability to standardize the naming convention and really streamline the build process. Enough about that though; the rest of this series will assume that you are using TeamCity. This post will focus on getting up and running using Ansible.
-
-
-
-
-
-
-
-
Installation and Setup
-
-
-
-
For this I will assume that you already have Ansible on your machine and that you will be installing TeamCity locally. You can simply follow along with the installation guide here. We will be creating an Ansible playbook based on the following steps. If you just want the finished code, you can find it on my Gitea instance here:
-
-
-
-
Step 1: Create project and initial playbook
-
-
-
-
To get started go ahead and create a new directory to hold our configuration:
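For example (the directory name here is just an assumption; use whatever fits your setup):

```shell
# Create a project directory and an empty playbook to work in.
# install-teamcity-server.yml is the playbook referenced throughout this post.
mkdir -p teamcity-ansible
touch teamcity-ansible/install-teamcity-server.yml
```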
Now open up install-teamcity-server.yml and add a task to install Java 17, as it is a prerequisite. You will need sudo for this task. Note: as of this writing TeamCity does not support Java 18 or 19. If you try to install one of those you will get an error when trying to start TeamCity.
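Mirroring the finished playbook at the end of this post (which parameterizes the version as {{ java_version }}; it is hardcoded here for clarity), the task looks like this:

```yaml
- name: Install Java
  ansible.builtin.apt:
    name: openjdk-17-jdk # TeamCity will fail to start on Java 18 or 19
    update_cache: yes
    state: latest
    install_recommends: no
```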
The next step is to create a dedicated user account. Add the following task to install-teamcity-server.yml
-
-
-
-
- name: Add Teamcity User
- ansible.builtin.user:
- name: teamcity
-
-
-
-
Next we will need to download the latest version of TeamCity. 2023.11.4 is the latest as of this writing. Add the following task to your install-teamcity-server.yml
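Mirroring the finished playbook, with the version hardcoded in place of {{ teamcity.version }}:

```yaml
- name: Download TeamCity Server
  ansible.builtin.get_url:
    url: https://download.jetbrains.com/teamcity/TeamCity-2023.11.4.tar.gz
    dest: /opt/TeamCity-2023.11.4.tar.gz
    mode: '0770'
```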
Now that we have everything set up and installed we want to make sure that our new teamcity user has access to everything they need to get up and running. We will add the following lines:
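From the finished playbook, the extract and ownership tasks look like this:

```yaml
- name: Install TeamCity Server
  ansible.builtin.shell: |
    tar xfz /opt/TeamCity-2023.11.4.tar.gz
    rm -rf /opt/TeamCity-2023.11.4.tar.gz
  args:
    chdir: /opt

- name: Update permissions
  ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
```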
This gives us a pretty nice setup. We have TeamCity server installed with a dedicated user account. The last thing we will do is create a systemd service so that we can easily start/stop the server. For this we will need to add a few things.
-
-
-
-
-
A service file that tells our system how to manage TeamCity
-
-
-
-
A j2 template file that is used to create this service file
-
-
-
-
A handler that tells the system to run systemctl daemon-reload once the service has been installed.
-
-
-
-
-
Go ahead and create a new templates folder with the following teamcity.service.j2 file
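Here is a minimal sketch of what that template could contain; the ExecStart/ExecStop paths assume the installation_path variable from the playbook and TeamCity's bundled bin/teamcity-server.sh control script:

```ini
[Unit]
Description=TeamCity Server
After=network.target

[Service]
Type=forking
User=teamcity
ExecStart={{ teamcity.installation_path }}/bin/teamcity-server.sh start
ExecStop={{ teamcity.installation_path }}/bin/teamcity-server.sh stop

[Install]
WantedBy=multi-user.target
```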
That’s it! Now you should have a fully automated install of TeamCity Server, ready to be deployed wherever you need it. Here is the final playbook file; you can also find the most up to date version in my repo:
-
-
-
-
---
-- name: Install Teamcity
- hosts: localhost
- become: true
- become_method: sudo
-
- vars:
- java_version: "17"
- teamcity:
- installation_path: /opt/TeamCity
- version: "2023.11.4"
-
- tasks:
- - name: Install Java
- ansible.builtin.apt:
- name: openjdk-{{ java_version }}-jdk # This is important because TeamCity will fail to start if we try to use 18 or 19
- update_cache: yes
- state: latest
- install_recommends: no
-
- - name: Add TeamCity User
- ansible.builtin.user:
- name: teamcity
-
- - name: Download TeamCity Server
- ansible.builtin.get_url:
- url: https://download.jetbrains.com/teamcity/TeamCity-{{teamcity.version}}.tar.gz
- dest: /opt/TeamCity-{{teamcity.version}}.tar.gz
- mode: '0770'
-
- - name: Install TeamCity Server
- ansible.builtin.shell: |
- tar xfz /opt/TeamCity-{{teamcity.version}}.tar.gz
- rm -rf /opt/TeamCity-{{teamcity.version}}.tar.gz
- args:
- chdir: /opt
-
- - name: Update permissions
- ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
-
- - name: TeamCity | Create environment file
- template: src=teamcity.service.j2 dest=/etc/systemd/system/teamcityserver.service
- notify:
- - reload systemctl
- - name: TeamCity | Start teamcity
- service: name=teamcityserver.service state=started enabled=yes
-
- # Trigger a reload of systemctl after the service file has been created.
- handlers:
- - name: reload systemctl
- command: systemctl daemon-reload
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/traefik/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/traefik/feed/index.xml
deleted file mode 100644
index 3251953..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/category/traefik/feed/index.xml
+++ /dev/null
@@ -1,529 +0,0 @@
-
-
-
- Traefik – hackanooga
-
- /
- Confessions of a homelab hacker
- Sat, 11 May 2024 13:44:01 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- Traefik – hackanooga
- /
- 32
- 32
-
-
- Traefik 3.0 service discovery in Docker Swarm mode
- /traefik-3-0-service-discovery-in-docker-swarm-mode/
-
-
- Sat, 11 May 2024 13:44:01 +0000
-
-
-
-
-
-
- /?p=564
-
-
- I recently decided to set up a Docker Swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity, and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s, it can be a good choice; however, it isn’t a very popular or widespread solution these days.
-
-
-
-
Anyway, I set up a VM scaling set in Azure with 10 Ubuntu 22.04 VMs and wrote some Ansible scripts to automate the process of installing Docker on each machine, as well as setting up 3 as swarm managers and the other 7 as worker nodes. I ssh’d into the primary manager node and created a Docker Compose file for launching an observability stack.
Everything deploys properly but when I view the Traefik logs there is an issue with all the services except for the grafana service. I get errors like this:
-
-
-
-
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
-
-
-
-
-
It drove me crazy for about half a day or so. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track and debug. Ultimately after lots of searching and banging my head against a wall I found the answer in the Traefik docs and thought I would share here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.
Expand that first section and you will see the solution:
-
-
-
-
-
-
-
-
It turns out I just needed to update my docker-compose.yml and nest the labels under a deploy section, redeploy and everything was working as expected.
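A minimal sketch of the corrected compose service (the service name, hostname, and port here are illustrative, not from my actual stack; the key points are the deploy-nested labels and the explicit loadbalancer port):

```yaml
services:
  prometheus:
    image: prom/prometheus
    networks:
      - traefik
    deploy:
      # In Swarm mode Traefik reads labels from deploy.labels,
      # not from the service-level labels key.
      labels:
        - traefik.enable=true
        - traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)
        # Swarm mode cannot infer the port, so it must be set explicitly,
        # otherwise you get the "port is missing" error above.
        - traefik.http.services.prometheus.loadbalancer.server.port=9090
```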
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 2)
- /traefik-with-lets-encrypt-and-cloudflare-pt-2/
-
-
- Thu, 15 Feb 2024 20:19:12 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=425
-
-
- In this article we are going to get into setting up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running, and the documentation is a little fuzzy. In my case I decided to go with the DNS challenge route. Really, the only reason I went with this option is because I was having issues with the TLS and HTTP challenges. Well, as it turns out, my issues didn’t have as much to do with my configuration as they did with my router.
-
-
-
-
Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self-hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and will use their own instead. That is, of course, unless you force DNS redirection, but that is another post for another day.
-
-
-
-
Let’s revisit our current configuration:
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --providers.file.filename=/config.yml
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
-# - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'
-
- environment:
- - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- - ./traefik/config.yml:/config.yml:ro
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- # Optional, see "Customization" in README
- #CUSTOMIZE: '/etc/ots/customize.yaml'
- # See README for details
- REDIS_URL: redis://redis:6379/0
- # 168h = 1w
- SECRET_EXPIRY: "604800"
- # "mem" or "redis" (See README)
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:
-
-
-
-
Step 1: Setup wildcard DNS entry
-
-
-
-
This is pretty straightforward. Follow the Cloudflare documentation if you aren’t familiar with setting this up.
-
-
-
-
Step 2: Create API Token
-
-
-
-
- This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this documentation which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens, and create a new token. It should have the following permissions:
-
-
-
-
Zone.Zone.Read
-Zone.DNS.Edit
-
-
-
-
-
-
-
-
Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go:
-
-
-
-
CF_DNS_API_TOKEN=[redacted] docker compose up -d
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 1)
- /traefik-with-lets-encrypt-and-cloudflare-pt-1/
-
-
- Thu, 01 Feb 2024 19:35:00 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=422
-
-
- Recently I decided to rebuild one of my homelab servers. Previously I was using Nginx as my reverse proxy but I decided to switch to Traefik since I have been using it professionally for some time now. One of the reasons I like Traefik is that it is stupid simple to set up certificates and when I am using it with Docker I don’t have to worry about a bunch of configuration files. If you aren’t familiar with how Traefik works with Docker, here is a brief example of a docker-compose.yaml
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=user@example.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- # Redirect all http requests to https
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
- # Needed to request certs via lets encrypt
- environment:
- - CF_DNS_API_TOKEN=[redacted]
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- # Used for storing letsencrypt certificates
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- REDIS_URL: redis://redis:6379/0
- SECRET_EXPIRY: "604800"
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.example.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- - traefik.http.services.ots.loadbalancer.server.port=3000
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
-
In part one of this series I will be going over some of the basics of Traefik and how dynamic routing works. If you want to skip to the good stuff and get everything configured with Cloudflare, you can skip to part 2.
-
-
-
-
This example sets up the primary Traefik container, which acts as the ingress controller, as well as a handy One Time Secret sharing service I use. Traefik handles routing in Docker via labels. For this to work properly, the services that Traefik is trying to route to all need to be on the same Docker network. For this example we created a network called traefik by running the following:
-
-
-
-
docker network create traefik
-
-
-
-
-
Let’s take a look at the labels we applied to the ots container a little closer:
traefik.enable=true – This should be pretty self-explanatory: it tells Traefik that we want it to know about this service.
-
-
-
-
traefik.http.routers.ots.rule=Host(`ots.example.com`) – This is where some of the magic comes in. Here we are defining a router called ots. The name is arbitrary in that it doesn’t have to match the name of the service, but for our example it does. There are many rules that you can specify, but the easiest for this example is Host. Basically, we are saying that any request coming in for ots.example.com should be picked up by this router. You can find more options for routers in the Traefik docs.
We are using these three labels to tell our router that we want it to use the websecure entrypoint and that it should use the letsencrypt certresolver to grab its certificates. websecure is an arbitrary name that we assigned to our :443 interface. There are multiple ways to configure this; I chose to use the CLI format in my Traefik config:
-
-
-
-
-
-
-
-
command:
- - --api.insecure=true
- - --providers.docker=true
- # Our entrypoint names are arbitrary but these are convention.
- # The important part is the port binding that we associate.
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
-
-
-
-
-
-
This last label (traefik.http.services.ots.loadbalancer.server.port=3000) is optional depending on your setup, but it is important to understand, as the documentation is a little fuzzy.
Here’s how it works. Suppose you have a container that exposes multiple ports. Maybe one of those is a web UI and another is something that you don’t want exposed. By default, Traefik will try to guess which port to route requests to; my understanding is that it will use the first exposed port. However, you can override this behavior with the label above, which tells Traefik specifically which port you want to route to inside the container.
The service name is derived automatically from the definition in the docker compose file:
-
-
-
-
ots: # This will become the service name
  image: luzifer/ots
  container_name: ots
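To make the connection explicit, a label sketch under that service (port 3000 being the port used in the compose file earlier in this post):

```yaml
labels:
  # "ots" in the label path refers to the Traefik service,
  # which is derived from the compose service name above.
  - traefik.http.services.ots.loadbalancer.server.port=3000
```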
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/browser-bar.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/browser-bar.png
deleted file mode 100644
index 781cf16..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/browser-bar.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-browser.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-browser.png
deleted file mode 100644
index de23104..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-browser.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-cloud.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-cloud.png
deleted file mode 100644
index d8ed395..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-cloud.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-error.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-error.png
deleted file mode 100644
index 3a7c8be..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-error.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-horizontal-arrow.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-horizontal-arrow.png
deleted file mode 100644
index 16da67a..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-horizontal-arrow.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-ok.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-ok.png
deleted file mode 100644
index df246db..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-ok.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-server.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-server.png
deleted file mode 100644
index 8b32fbc..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-icon-server.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-no-screenshot-error.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-no-screenshot-error.png
deleted file mode 100644
index e3eeeeb..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-no-screenshot-error.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-no-screenshot-warn.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-no-screenshot-warn.png
deleted file mode 100644
index 0bc13a0..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/cf-no-screenshot-warn.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/icon-exclamation.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/icon-exclamation.png
deleted file mode 100644
index 56c413c..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/images/icon-exclamation.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/l/email-protection/index.html b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/l/email-protection/index.html
deleted file mode 100644
index 7c895f6..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/cdn-cgi/l/email-protection/index.html
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
-
I was recently working on a project where a client had cPanel/WHM with Nginx and Apache. They had a large number of sites managed by Nginx with a large number of includes. I created a custom config to override a location block and needed to be certain that my changes were actually being picked up. Anytime I make changes to an Nginx config, I try to be vigilant about running:
-
-
-
-
nginx -t
-
-
-
-
to test my configuration and ensure I don’t have any syntax errors. I was looking for an easy way to view the actual compiled config and found the -T flag, which will test the configuration and dump it to standard out. This is pretty handy if you have a large number of includes in various locations. Here is an example from a fresh Nginx Docker container:
As you can see from the output above, we get all of the various Nginx config files in use printed to the console, perfect for grepping or searching/filtering with other tools.
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/feed/index.xml
deleted file mode 100644
index 59b6f54..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/feed/index.xml
+++ /dev/null
@@ -1,1835 +0,0 @@
-
-
-
- hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 25 Sep 2024 13:56:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- hackanooga
- /
- 32
- 32
-
-
- Standing up a Wireguard VPN
- /standing-up-a-wireguard-vpn/
-
-
- Wed, 25 Sep 2024 13:56:04 +0000
-
-
-
-
-
-
-
-
- /?p=619
-
-
- VPNs have traditionally been slow, complex, and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons why you should want to use Wireguard in this article; instead I will be focusing on just how easy it is to set up and configure.
-
-
-
-
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
-
-
-
-
Create Droplet with Terraform
-
-
-
-
I have written some very basic Terraform to get us started. It just creates a droplet with a predefined SSH key and a setup script passed as user data. When the droplet gets created, the script will be copied to the instance and automatically executed. After a few minutes everything should be ready to go. If you want to clone the repo above, feel free to, or if you would rather do everything by hand, that’s great too. I will assume that you are doing everything by hand; the process of deploying from the repo should be pretty self-explanatory. My reasoning for doing it this way is that I wanted to better understand the process.
-
-
-
-
First create our main.tf with the following contents:
-
-
-
-
# main.tf
-# Attach an SSH key to our droplet
-resource "digitalocean_ssh_key" "default" {
- name = "Terraform Example"
- public_key = file("./tf-digitalocean.pub")
-}
-
-# Create a new Web Droplet in the nyc1 region
-resource "digitalocean_droplet" "web" {
- image = "ubuntu-22-04-x64"
- name = "wireguard"
- region = "nyc1"
- size = "s-2vcpu-4gb"
- ssh_keys = [digitalocean_ssh_key.default.fingerprint]
- user_data = file("setup.sh")
-}
-
-output "droplet_output" {
- value = digitalocean_droplet.web.ipv4_address
-}
-
-
-
-
Next create a terraform.tf file in the same directory with the following contents:
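A minimal sketch of that terraform.tf, assuming the official digitalocean/digitalocean provider and a token passed in as a variable:

```hcl
# terraform.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# The API token is supplied at plan/apply time, e.g. via TF_VAR_do_token.
variable "do_token" {
  sensitive = true
}

provider "digitalocean" {
  token = var.do_token
}
```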
Now we are ready to initialize our Terraform and apply it:
-
-
-
-
$ terraform init
-$ terraform apply
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- # digitalocean_droplet.web will be created
- + resource "digitalocean_droplet" "web" {
- + backups = false
- + created_at = (known after apply)
- + disk = (known after apply)
- + graceful_shutdown = false
- + id = (known after apply)
- + image = "ubuntu-22-04-x64"
- + ipv4_address = (known after apply)
- + ipv4_address_private = (known after apply)
- + ipv6 = false
- + ipv6_address = (known after apply)
- + locked = (known after apply)
- + memory = (known after apply)
- + monitoring = false
- + name = "wireguard"
- + price_hourly = (known after apply)
- + price_monthly = (known after apply)
- + private_networking = (known after apply)
- + region = "nyc1"
- + resize_disk = true
- + size = "s-2vcpu-4gb"
- + ssh_keys = (known after apply)
- + status = (known after apply)
- + urn = (known after apply)
- + user_data = "69d130f386b262b136863be5fcffc32bff055ac0"
- + vcpus = (known after apply)
- + volume_ids = (known after apply)
- + vpc_uuid = (known after apply)
- }
-
- # digitalocean_ssh_key.default will be created
- + resource "digitalocean_ssh_key" "default" {
- + fingerprint = (known after apply)
- + id = (known after apply)
- + name = "Terraform Example"
- + public_key = <<-EOT
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
- EOT
- }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-Changes to Outputs:
- + droplet_output = (known after apply)
-
-Do you want to perform these actions?
- Terraform will perform the actions described above.
- Only 'yes' will be accepted to approve.
-
- Enter a value: yes
-
-digitalocean_ssh_key.default: Creating...
-digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
-digitalocean_droplet.web: Creating...
-digitalocean_droplet.web: Still creating... [10s elapsed]
-digitalocean_droplet.web: Still creating... [20s elapsed]
-digitalocean_droplet.web: Still creating... [30s elapsed]
-digitalocean_droplet.web: Creation complete after 31s [id=447469336]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-droplet_output = "159.223.113.207"
-
-
-
-
-
All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.
-
-
-
-
#!/usr/bin/env sh
-set -e
-set -u
-# Set the listen port used by Wireguard, this is the default so feel free to change it.
-LISTENPORT=51820
-CONFIG_DIR=/root/wireguard-conf
-umask 077
-mkdir -p $CONFIG_DIR/client
-
-# Install wireguard
-apt update && apt install -y wireguard
-
-# Generate public/private key for the "server".
-wg genkey > $CONFIG_DIR/privatekey
-wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey
-
-# Generate public/private key for the "client"
-wg genkey > $CONFIG_DIR/client/privatekey
-wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey
-
-
-# Generate server config
-echo "[Interface]
-Address = 10.66.66.1/24,fd42:42:42::1/64
-ListenPort = $LISTENPORT
-PrivateKey = $(cat $CONFIG_DIR/privatekey)
-
-### Client config
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/client/publickey)
-AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
-" > /etc/wireguard/do.conf
-
-
-# Generate client config. This will need to be copied to your machine.
-echo "[Interface]
-PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
-Address = 10.66.66.2/32,fd42:42:42::2/128
-DNS = 1.1.1.1,1.0.0.1
-
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/publickey)
-Endpoint = $(curl icanhazip.com):$LISTENPORT
-AllowedIPs = 0.0.0.0/0,::/0
-" > $CONFIG_DIR/client-config.conf
-
-wg-quick up do
-
-# Add iptables rules to forward internet traffic through this box
-# We are assuming our Wireguard interface is called do and our
-# primary public facing interface is called eth0.
-
-iptables -I INPUT -p udp --dport 51820 -j ACCEPT
-iptables -I FORWARD -i eth0 -o do -j ACCEPT
-iptables -I FORWARD -i do -j ACCEPT
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-ip6tables -I FORWARD -i do -j ACCEPT
-ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
-# Enable routing on the server
-echo "net.ipv4.ip_forward = 1
- net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
-sysctl --system
-
-
-
-
As you can see, it is pretty straightforward. All you really need to do is:
-
-
-
-
On the “server” side:
-
-
-
-
-
Generate a private key and derive a public key from it for both the “server” and the “client”.
-
-
-
-
Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
-
-
-
-
The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.
-
-
-
-
-
On the “client” side:
-
-
-
-
-
Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
-
-
-
-
The client needs to know which private key to use to secure the interface.
-
-
-
-
It also needs to know the public key of the server as well as the public IP address/hostname of the “server” it is connecting to as well as the port it is listening on.
-
-
-
-
Finally it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic but you could restrict this as you see fit.
-
-
-
-
-
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:
-
-
-
-
## Make sure you have Wireguard installed on your local machine as well.
-## https://wireguard.com/install
-
-## Copy the client config to our local machine and move it to our wireguard directory.
-$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf | sudo tee /etc/wireguard/do.conf
-
-
-
-
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
-
-
-
-
$ ssh -i tf-digitalocean root@159.223.113.207
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:19:02 UTC 2024
-
- System load: 0.03 Processes: 113
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.221.196
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-70 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
-root@wireguard:~#
-
-
-
-
-
-
Awesome! We are connected. Now let’s check the wireguard interface using the wg command. If our config is correct, we should see an interface section and one peer section, like so. If the peer section is missing, something is wrong with the configuration, most likely a mismatch between the public/private keys:
-
-
-
-
root@wireguard:~# wg
-interface: do
- public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
- private key: (hidden)
- listening port: 51820
-
-peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
- allowed ips: 10.66.66.2/32, fd42:42:42::2/128
-root@wireguard:~#
-
-
-
-
So now we should be ready to go! On your local machine go ahead and try it out:
-
-
-
-
## Start the interface with wg-quick up [interface_name]
-$ sudo wg-quick up do
-[sudo] password for mikeconrad:
-[#] ip link add do type wireguard
-[#] wg setconf do /dev/fd/63
-[#] ip -4 address add 10.66.66.2/32 dev do
-[#] ip -6 address add fd42:42:42::2/128 dev do
-[#] ip link set mtu 1420 up dev do
-[#] resolvconf -a do -m 0 -x
-[#] wg set do fwmark 51820
-[#] ip -6 route add ::/0 dev do table 51820
-[#] ip -6 rule add not fwmark 51820 table 51820
-[#] ip -6 rule add table main suppress_prefixlength 0
-[#] ip6tables-restore -n
-[#] ip -4 route add 0.0.0.0/0 dev do table 51820
-[#] ip -4 rule add not fwmark 51820 table 51820
-[#] ip -4 rule add table main suppress_prefixlength 0
-[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
-[#] iptables-restore -n
-
-## Check our config
-$ sudo wg
-interface: do
- public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
- private key: (hidden)
- listening port: 34596
- fwmark: 0xca6c
-
-peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
- endpoint: 157.230.177.54:51820
- allowed ips: 0.0.0.0/0, ::/0
- latest handshake: 1 second ago
- transfer: 1.82 KiB received, 2.89 KiB sent
-
-## Make sure we can ping the outside world
-mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
-PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
-64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
-^C
---- 1.1.1.1 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
-rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms
-
-## Verify our traffic is actually going over the tunnel.
-$ curl icanhazip.com
-157.230.177.54
-
-
-
-
-
-
-
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
-
-
-
-
$ ssh -i tf-digitalocean root@10.66.66.1
-The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
-ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
-This host key is known by the following other names/addresses:
- ~/.ssh/known_hosts:130: [hashed name]
-Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
-Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:32:12 UTC 2024
-
- System load: 0.02 Processes: 109
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.177.54
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-73 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-root@wireguard:~#
-
-
-
-
-
Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
-]]>
-
-
-
-
-
- Hardening your web server by only allowing traffic from Cloudflare
- /hardening-your-web-server-by-only-allowing-traffic-from-cloudflare/
-
-
- Thu, 01 Aug 2024 21:02:29 +0000
-
-
-
-
-
- /?p=607
-
-
- TL;DR:
-
-
-
-
If you just want the code you can find a convenient script on my Gitea server here. This version has been slightly modified so that it will work on more systems.
-
-
-
-
-
-
-
-
I have been using Cloudflare for several years for both personal and professional projects. The free plan has fairly generous limits and it’s a great way to clear out some low hanging fruit and improve the security of your application. If you’re not familiar with how it works, Cloudflare has two modes for DNS records: DNS Only and Proxied. The only way to get the advantages of Cloudflare is to use Proxied mode. Cloudflare has some great documentation on how all of their services work, but essentially you point your domain at Cloudflare and Cloudflare provisions their network of proxy servers to handle requests for your domain.
-
-
-
-
These proxy servers allow you to secure your domain by implementing things like WAF and Rate limiting. You can also enforce HTTPS only mode and modify/add custom request/response headers. You will notice that once you turn this mode on, your webserver will log requests as coming from Cloudflare IP addresses. They have great documentation on how to configure your webserver to restore these IP addresses in your log files.
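For Nginx, the relevant piece is the ngx_http_realip_module; a minimal sketch might look like this (the file path is my own choice, only two of Cloudflare's ranges are shown as examples, and the full list should come from their published IP pages):

```nginx
# cloudflare-real-ip.conf (illustrative path) - include this from your http block.
# One set_real_ip_from entry per Cloudflare range; two examples shown here.
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 2400:cb00::/32;

# Cloudflare passes the original visitor address in this header.
real_ip_header CF-Connecting-IP;
```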
-
-
-
-
This is a very easy step to start securing your origin server, but it still allows attackers to access your servers directly if they know the IP address. We can take our security one step further by only allowing requests from IP addresses originating within Cloudflare, meaning we only accept requests that come through a Cloudflare proxy server. The setup is fairly straightforward. In this example I will be using a Linux server.
-
-
-
-
We can achieve this pretty easily because Cloudflare provides a sort of API where they regularly publish their network blocks. Here is the basic script we will use:
-
-
-
-
for ip in $(curl https://www.cloudflare.com/ips-v4/); do iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-for ip in $(curl https://www.cloudflare.com/ips-v6/); do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-
-
-
-
-
This will pull down the latest network addresses from Cloudflare and create iptables rules for us. These IP addresses do change from time to time so you may want to put this in a script and run it via a cronjob to have it update on a regular basis.
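As a sketch of what that cron script could look like, the version below separates rule generation from execution so it can be dry-run first; the function name and structure are my own, not from the original script:

```shell
#!/usr/bin/env sh
# update_cf_rules CMD: read CIDRs on stdin and print the firewall commands
# that allow HTTP/HTTPS only from those ranges. CMD is iptables or ip6tables.
update_cf_rules() {
  cmd=$1
  while IFS= read -r ip; do
    [ -n "$ip" ] || continue
    echo "$cmd -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT"
  done
  # Final catch-all: drop web traffic from everything else.
  echo "$cmd -A INPUT -p tcp -m multiport --dports http,https -j DROP"
}

# From cron you would feed the live lists and pipe the output to sh as root:
#   curl -s https://www.cloudflare.com/ips-v4/ | update_cf_rules iptables | sh
# Dry run with two example ranges:
printf '103.21.244.0/22\n173.245.48.0/20\n' | update_cf_rules iptables
```

Note that on repeated runs you would want to flush the previously added rules (for example by keeping them in a dedicated chain) so they do not stack up.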
-
-
-
-
Now with this in place, here are the results:
-
-
-
-
-
-
-
-
This should cut down on some of the noise from attackers and script kiddies trying to find holes in your security.
We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:
-
-
-
-
– SFTP server setup and configuration
– Network security protocols
– Troubleshooting and problem-solving skills
-
-
-
-
If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.
-
-
-
-
-
-
-
-
-
-
-
-
-
My Role
-
-
-
-
I walked the client through the process of setting up a Digital Ocean account. I created an Ubuntu 22.04 VM and installed SFTPGo. I set the client up with an administrator user so that they could easily log in and manage users and shares. I implemented some basic security practices as well and set the client up with a custom domain and a free TLS/SSL certificate from LetsEncrypt. With the documentation and screenshots I provided, the client was able to get everything up and running and add users and connect other systems easily and securely.
-
-
-
-
-
-
-
-
-
-
-
-
Client Feedback
-
-
-
-
-
Rating is 5 out of 5.
-
-
-
-
Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.
-
-]]>
-
-
-
-
-
- Debugging running Nginx config
- /debugging-running-nginx-config/
-
-
- Wed, 17 Jul 2024 01:42:43 +0000
-
-
-
-
- /?p=596
-
-
- I was recently working on a project where a client had cPanel/WHM with Nginx and Apache. They had a large number of sites managed by Nginx with a large number of includes. I created a custom config to override a location block and needed to be certain that my changes were actually being picked up. Anytime I make changes to an Nginx config, I try to be vigilant about running:
-
-
-
-
nginx -t
-
-
-
-
to test my configuration and ensure I don’t have any syntax errors. I was looking for an easy way to view the actual compiled config and found the -T flag which will test the configuration and dump it to standard out. This is pretty handy if you have a large number of includes in various locations. Here is an example from a fresh Nginx Docker container:
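For instance, every file that Nginx actually loaded shows up in the dump as a `# configuration file ...` marker, which makes it easy to list the includes in play (the dump below is a trimmed, illustrative sample of `nginx -T` output, not the full thing):

```shell
# A trimmed sample of what `nginx -T` prints; a marker precedes each file's contents.
dump='# configuration file /etc/nginx/nginx.conf:
user  nginx;
# configuration file /etc/nginx/conf.d/default.conf:
server { listen 80; }'

# List just the files that were loaded, stripping the trailing colon.
printf '%s\n' "$dump" | sed -n 's/^# configuration file \(.*\):$/\1/p'
```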
As you can see from the output above, we get all of the various Nginx config files in use printed to the console, perfect for grepping or searching/filtering with other tools.
-]]>
-
-
-
-
-
- Fun with bots – SSH tarpitting
- /fun-with-bots-ssh-tarpitting/
-
-
- Mon, 24 Jun 2024 13:37:43 +0000
-
-
-
-
-
-
- /?p=576
-
-
- For those of you who aren’t familiar, a network tarpit is a fairly simple concept. Wikipedia defines it like this:
-
-
-
-
-
A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against a computer worm, and the idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a tar pit, in which animals can get bogged down and slowly sink under the surface, like in a swamp.
If you run any sort of service on the internet then you know that as soon as your server has a public IP address and open ports, there are scanners and bots trying to get in constantly. If you take decent steps towards security then it is little more than an annoyance, but annoying nonetheless. One day when I had some extra time on my hands I started researching ways to mess with the bots trying to scan/attack my site.
-
-
-
-
It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting ssh connections is endlessh. The way it works is actually pretty simple. The SSH RFC states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the line that allows this behavior:
-
-
-
-
-
The server MAY send other lines of data before sending the version
- string. Each line SHOULD be terminated by a Carriage Return and Line
- Feed. Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
- in ISO-10646 UTF-8 [RFC3629] (language is not specified). Clients
- MUST be able to process such lines. Such lines MAY be silently
- ignored, or MAY be displayed to the client user. If they are
- displayed, control character filtering, as discussed in [SSH-ARCH],
- SHOULD be used. The primary use of this feature is to allow TCP-
- wrappers to display an error message before disconnecting.
-SSH RFC
-
-
-
-
Essentially this means that there is no limit to the amount of data that a server can send back to the client, and the client must be able to wait for and process all of this data. Now let’s see it in action.
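The server side of the trick is tiny: emit an endless stream of short lines, none of which begin with "SSH-", pausing between each. Here is a toy generator in shell (my own sketch, not endlessh itself; a real tarpit would loop forever, sleep between lines, and serve the output on a socket, e.g. with `nc -l`):

```shell
# banner_lines N: print N short random lines, none of which can begin with
# "SSH-" (the character set below contains no "-"). A real tarpit like
# endlessh loops forever; this sketch takes a count so it terminates.
banner_lines() {
  i=0
  while [ "$i" -lt "$1" ]; do
    line=$(head -c 64 /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 16)
    printf '%s\r\n' "$line"
    i=$((i + 1))
    # sleep 10   # uncomment for an endlessh-style delay between lines
  done
}

banner_lines 3
```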
By default this fake server listens on port 2222. I have a port forward set up that forwards all ssh traffic from port 22 to 2222. Now try to connect via ssh:
-
-
-
-
ssh -vvv localhost -p 2222
-
-
-
-
If you wait a few seconds you will see the server send back the version string and then start sending a random banner:
-
-
-
-
$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
-2024-06-24T13:05:59.488Z Delay 10000
-2024-06-24T13:05:59.488Z MaxLineLength 32
-2024-06-24T13:05:59.488Z MaxClients 4096
-2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
-2024-06-24T13:05:59.488Z socket() = 3
-2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
-2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
-2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
-2024-06-24T13:05:59.488Z listen(3) = 0
-2024-06-24T13:05:59.488Z poll(1, -1)
-ssh -vvv localhost -p 2222
-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
-debug1: Reading configuration data /home/mikeconrad/.ssh/config
-debug1: Reading configuration data /etc/ssh/ssh_config
-debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
-debug1: /etc/ssh/ssh_config line 21: Applying options for *
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
-debug2: resolving "localhost" port 2222
-debug3: resolve_host: lookup localhost:2222
-debug3: ssh_connect_direct: entering
-debug1: Connecting to localhost [::1] port 2222.
-debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
-debug1: Connection established.
-2024-06-24T13:06:08.635Z = 1
-2024-06-24T13:06:08.635Z accept() = 4
-2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
-2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
-2024-06-24T13:06:08.635Z poll(1, 10000)
-debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
-debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
-debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
-2024-06-24T13:06:18.684Z = 0
-2024-06-24T13:06:18.684Z write(4) = 3
-2024-06-24T13:06:18.684Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 0: V
-2024-06-24T13:06:28.734Z = 0
-2024-06-24T13:06:28.734Z write(4) = 25
-2024-06-24T13:06:28.734Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]
-
-
-
-
-
-
This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program fakessh. While fakessh isn’t technically a tarpit (it’s more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. Essentially it allows any username/password combination to connect and gives them a fake shell prompt. There is no actual access to any file system and all of their commands basically return gibberish.
-
-
-
-
Here are some logs from an actual server of mine running fakessh
Those are mostly connections and disconnections. They probably connected, realized it was fake and disconnected. There are a couple that tried to execute some commands though:
Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:
-
-
-
-
-
Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for Bandwidth Alliance partners, we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.
-]]>
-
-
-
-
-
- Traefik 3.0 service discovery in Docker Swarm mode
- /traefik-3-0-service-discovery-in-docker-swarm-mode/
-
-
- Sat, 11 May 2024 13:44:01 +0000
-
-
-
-
-
-
- /?p=564
-
-
- I recently decided to set up a Docker swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s it can be a good choice, however it isn’t a very popular or widespread solution these days.
-
-
-
-
Anyway, I set up a VM scaling set in Azure with 10 Ubuntu 22.04 vms and wrote some Ansible scripts to automate the process of installing Docker on each machine as well as setting 3 up as swarm managers and the other 7 as worker nodes. I ssh’d into the primary manager node and created a docker compose file for launching an observability stack.
Everything deploys properly but when I view the Traefik logs there is an issue with all the services except for the grafana service. I get errors like this:
-
-
-
-
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
-
-
-
-
-
It drove me crazy for about half a day or so. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track and debug. Ultimately after lots of searching and banging my head against a wall I found the answer in the Traefik docs and thought I would share here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.
Expand that first section and you will see the solution:
-
-
-
-
-
-
-
-
It turns out I just needed to update my docker-compose.yml and nest the labels under a deploy section, redeploy and everything was working as expected.
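In docker-compose terms the fix looks like this (the service name and port here are illustrative): in Swarm mode Traefik only reads labels nested under `deploy`, and the `loadbalancer.server.port` label is required because the swarm provider cannot auto-detect container ports, which is exactly what the "port is missing" error is complaining about.

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    deploy:
      labels:
        # Labels must live under deploy: for the swarm provider to see them.
        - traefik.enable=true
        - traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)
        # Required in swarm mode; Traefik cannot detect the port itself.
        - traefik.http.services.prometheus.loadbalancer.server.port=9090
```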
-]]>
-
-
-
-
-
- Stop all running containers with Docker
- /stop-all-running-containers-with-docker/
-
-
- Wed, 03 Apr 2024 13:12:41 +0000
-
-
-
-
- /?p=557
-
-
- These are some handy snippets I use on a regular basis when managing containers. I have one server in particular that can sometimes end up with 50 to 100 orphaned containers for various reasons. The easiest/quickest way to stop all of them is to do something like this:
-
-
-
-
docker container stop $(docker container ps -q)
-
-
-
-
Let me break this down in case you are not familiar with the syntax. Basically we are passing the output of docker container ps -q into docker container stop. This works because the stop command can take a list of container ids which is what we get when passing the -q flag to docker container ps.
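To see the expansion at work without touching a real daemon, here is the same pattern with a throwaway stand-in for the docker CLI (a hypothetical stub purely for illustration):

```shell
# Stub that mimics the two subcommands we care about. Not the real CLI.
docker() {
  if [ "$2" = "ps" ]; then
    printf 'abc123\ndef456\n'      # pretend these are running container ids
  elif [ "$2" = "stop" ]; then
    shift 2
    echo "stopping: $*"            # show what the real command would receive
  fi
}

# The substitution expands to the id list before `stop` runs:
docker container stop $(docker container ps -q)
# prints: stopping: abc123 def456
```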
-]]>
-
-
-
-
-
- Automating CI/CD with TeamCity and Ansible
- /automating-ci-cd-with-teamcity-ansible/
-
-
- Mon, 11 Mar 2024 13:37:47 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=393
-
-
- In part one of this series we are going to explore a CI/CD option you may not be familiar with but should definitely be on your radar. I used Jetbrains TeamCity for several months at my last company and really enjoyed my time with it. A couple of the things I like most about it are:
-
-
-
-
-
Ability to declare global variables and have them be passed down to all projects
-
-
-
-
Ability to declare variables that are made up of other variables
-
-
-
-
-
- I like to use private or self hosted Docker registries for a lot of my projects and one of the pain points I have had with some other solutions (well, mostly Bitbucket) is that they don’t integrate well with these private registries, and when I run into a situation where I am pushing an image to or pulling an image from a private registry it gets a little messy. TeamCity is nice in that I can add a connection to my private registry in my root project and then simply add that as a build feature to any projects that may need it. Essentially, now I only have one place where I have to keep those credentials and manage that connection.
-
-
-
-
-
-
-
-
Another reason I love it is the fact that you can create really powerful build templates that you can reuse. This became very powerful when we were trying to standardize our build processes. For example, most of the apps we build are .NET backends and React frontends. We built docker images for every project and pushed them to our private registry. TeamCity gave us the ability to standardize the naming convention and really streamline the build process. Enough about that though, the rest of this series will assume that you are using TeamCity. This post will focus on getting up and running using Ansible.
-
-
-
-
-
-
-
-
Installation and Setup
-
-
-
-
For this I will assume that you already have Ansible on your machine and that you will be installing TeamCity locally. You can simply follow along with the installation guide here. We will be creating an Ansible playbook based on the following steps. If you just want the finished code, you can find it on my Gitea instance here:
-
-
-
-
Step 1 : Create project and initial playbook
-
-
-
-
To get started go ahead and create a new directory to hold our configuration:
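For example (the directory name here is my own choice):

```shell
# Create a project directory and an empty playbook to work in.
mkdir -p install-teamcity
cd install-teamcity
touch install-teamcity-server.yml
```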
Now open up install-teamcity-server.yml and add a task to install Java 17, as it is a prerequisite. You will need sudo for this task. Note: as of this writing TeamCity does not support Java 18 or 19; if you try to install one of these you will get an error when trying to start TeamCity.
The next step is to create a dedicated user account. Add the following task to install-teamcity-server.yml
-
-
-
-
- name: Add Teamcity User
- ansible.builtin.user:
- name: teamcity
-
-
-
-
Next we will need to download the latest version of TeamCity. 2023.11.4 is the latest as of this writing. Add the following task to your install-teamcity-server.yml
Now that we have everything set up and installed we want to make sure that our new teamcity user has access to everything they need to get up and running. We will add the following lines:
This gives us a pretty nice setup. We have TeamCity server installed with a dedicated user account. The last thing we will do is create a systemd service so that we can easily start/stop the server. For this we will need to add a few things.
-
-
-
-
-
A service file that tells our system how to manage TeamCity
-
-
-
-
A j2 template file that is used to create this service file
-
-
-
-
A handler that tells the system to run systemctl daemon-reload once the service has been installed.
-
-
-
-
-
Go ahead and create a new templates folder with the following teamcity.service.j2 file
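The template contents aren't shown here, but given the paths and user from the playbook, a typical unit might look something like this (a sketch only; check the repo for the exact version):

```ini
[Unit]
Description=JetBrains TeamCity Server
After=network.target

[Service]
Type=forking
User=teamcity
ExecStart={{ teamcity.installation_path }}/bin/teamcity-server.sh start
ExecStop={{ teamcity.installation_path }}/bin/teamcity-server.sh stop

[Install]
WantedBy=multi-user.target
```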
That’s it! Now you should have a fully automated install of TeamCity Server ready to be deployed wherever you need it. Here is the final playbook file; you can also find the most up to date version in my repo:
-
-
-
-
---
-- name: Install Teamcity
- hosts: localhost
- become: true
- become_method: sudo
-
- vars:
- java_version: "17"
- teamcity:
- installation_path: /opt/TeamCity
- version: "2023.11.4"
-
- tasks:
- - name: Install Java
- ansible.builtin.apt:
- name: openjdk-{{ java_version }}-jdk # This is important because TeamCity will fail to start if we try to use 18 or 19
- update_cache: yes
- state: latest
- install_recommends: no
-
- - name: Add TeamCity User
- ansible.builtin.user:
- name: teamcity
-
- - name: Download TeamCity Server
- ansible.builtin.get_url:
- url: https://download.jetbrains.com/teamcity/TeamCity-{{teamcity.version}}.tar.gz
- dest: /opt/TeamCity-{{teamcity.version}}.tar.gz
- mode: '0770'
-
- - name: Install TeamCity Server
- ansible.builtin.shell: |
- tar xfz /opt/TeamCity-{{teamcity.version}}.tar.gz
- rm -rf /opt/TeamCity-{{teamcity.version}}.tar.gz
- args:
- chdir: /opt
-
- - name: Update permissions
- ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
-
- - name: TeamCity | Create environment file
- template: src=teamcity.service.j2 dest=/etc/systemd/system/teamcityserver.service
- notify:
- - reload systemctl
- - name: TeamCity | Start teamcity
- service: name=teamcityserver.service state=started enabled=yes
-
- # Trigger a reload of systemctl after the service file has been created.
- handlers:
- - name: reload systemctl
- command: systemctl daemon-reload
-]]>
-
-
-
-
-
- Self hosted package registries with Gitea
- /self-hosted-package-registries-with-gitea/
-
-
- Thu, 07 Mar 2024 15:07:07 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=413
-
-
- I am a big proponent of open source technologies. I have been using Gitea for a couple years now in my homelab. A few years ago I moved most of my code off of Github and onto my self hosted instance. I recently came across a really handy feature that I didn’t know Gitea had and was pleasantly surprised by: Package Registry. You are no doubt familiar with what a package registry is in the broad context. Here are some examples of package registries you probably use on a regular basis:
-
-
-
-
-
npm
-
-
-
-
cargo
-
-
-
-
docker
-
-
-
-
composer
-
-
-
-
nuget
-
-
-
-
helm
-
-
-
-
-
There are a number of reasons why you would want to self host a registry. For example, in my home lab I have some Docker images that are specific to my use cases and I don’t necessarily want them on a public registry. I’m also not concerned about losing the artifacts as I can easily recreate them from code. Gitea makes this really easy to setup, in fact it comes baked in with the installation. For the sake of this post I will just assume that you already have Gitea installed and setup.
-
-
-
-
Since the package registry is baked in and enabled by default, I will demonstrate how easy it is to push a docker image. We will pull the default alpine image, re-tag it and push it to our internal registry:
-
-
-
-
# Pull the official Alpine image
-docker pull alpine:latest
-
-# Re tag the image with our local registry information
-docker tag alpine:latest git.hackanooga.com/mikeconrad/alpine:latest
-
-# Login using your gitea user account
-docker login git.hackanooga.com
-
-# Push the image to our registry
-docker push git.hackanooga.com/mikeconrad/alpine:latest
-
-
-
-
-
-
Now log into your Gitea instance, navigate to your user account and look for packages. You should see the newly uploaded alpine image.
-
-
-
-
-
-
-
-
You can see that the package type is container. Clicking on it will give you more information:
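If you prefer checking from a script rather than the web UI, Gitea also exposes package listings over its REST API. This is a rough sketch, not code from the post; I believe recent Gitea versions serve the listing at `/api/v1/packages/{owner}`, so treat the path as an assumption and check your instance's API docs:

```typescript
// Build the package-listing URL for a Gitea instance.
// Endpoint path is an assumption based on recent Gitea versions.
export function packagesUrl(base: string, owner: string): string {
  return `${base}/api/v1/packages/${owner}`;
}

// Fetch the owner's packages using a Gitea access token.
export async function listPackages(base: string, owner: string, token: string) {
  const res = await fetch(packagesUrl(base, owner), {
    headers: { Authorization: `token ${token}` },
  });
  if (!res.ok) throw new Error(`Gitea API returned ${res.status}`);
  return res.json();
}
```

Calling `listPackages('https://git.hackanooga.com', 'mikeconrad', myToken)` should include the alpine container package we just pushed.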
-
-
-
-
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 2)
- /traefik-with-lets-encrypt-and-cloudflare-pt-2/
-
-
- Thu, 15 Feb 2024 20:19:12 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=425
-
-
- In this article we are gonna get into setting up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running and the documentation is a little fuzzy. In my case I decided to go with the DNS challenge route. Really the only reason I went with this option is because I was having issues with the TLS and HTTP challenges. As it turns out, my issues didn’t have as much to do with my configuration as they did with my router.
-
-
-
-
Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and will use their own instead. That is, of course, unless you force DNS redirection, but that is another post for another day.
-
-
-
-
Let’s revisit our current configuration:
-
-
-
-
version: '3'

services:
  reverse-proxy:
    # The official v2 Traefik docker image
    image: traefik:v2.11
    # Enables the web UI and tells Traefik to listen to docker
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --providers.file.filename=/config.yml
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Set up LetsEncrypt
      - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
      - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
      - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --entryPoints.web.http.redirections.entryPoint.to=websecure
      - --entryPoints.web.http.redirections.entryPoint.scheme=https
      - --entryPoints.web.http.redirections.entrypoint.permanent=true
      - --log=true
      - --log.level=INFO
#     - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'
    environment:
      - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
    ports:
      # The HTTP port
      - "80:80"
      - "443:443"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
      - ./volumes/traefik/logs:/logs
      - ./traefik/config.yml:/config.yml:ro
    networks:
      - traefik
  ots:
    image: luzifer/ots
    container_name: ots
    restart: always
    environment:
      # Optional, see "Customization" in README
      #CUSTOMIZE: '/etc/ots/customize.yaml'
      # See README for details
      REDIS_URL: redis://redis:6379/0
      # 168h = 1w
      SECRET_EXPIRY: "604800"
      # "mem" or "redis" (See README)
      STORAGE_TYPE: redis
    depends_on:
      - redis
    labels:
      - traefik.enable=true
      - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
      - traefik.http.routers.ots.entrypoints=websecure
      - traefik.http.routers.ots.tls=true
      - traefik.http.routers.ots.tls.certresolver=letsencrypt
    networks:
      - traefik
  redis:
    image: redis:alpine
    restart: always
    volumes:
      - ./redis-data:/data
    networks:
      - traefik
networks:
  traefik:
    external: true
-
-
-
-
-
-
Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:
-
-
-
-
Step 1: Setup wildcard DNS entry
-
-
-
-
This is pretty straightforward. Follow the Cloudflare documentation if you aren’t familiar with setting this up.
-
-
-
-
Step 2: Create API Token
-
-
-
-
This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this documentation, which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens and create a new token. It should have the following permissions:
-
-
-
-
Zone.Zone.Read
Zone.DNS.Edit
-
-
-
-
-
-
-
-
Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go.
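Before handing the token to Traefik, it can be handy to sanity-check it against Cloudflare's documented token-verify endpoint. This is an optional sketch of my own, not part of the original setup:

```typescript
// Cloudflare's token verification endpoint (documented in their API reference).
const CF_VERIFY_URL = 'https://api.cloudflare.com/client/v4/user/tokens/verify';

// Build the auth headers Cloudflare expects for API tokens.
export function authHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  };
}

// Returns true if Cloudflare reports the token as active and valid.
export async function verifyToken(token: string): Promise<boolean> {
  const res = await fetch(CF_VERIFY_URL, { headers: authHeaders(token) });
  const body = await res.json();
  // Cloudflare wraps responses in { success: boolean, ... }
  return res.ok && body.success === true;
}
```

Run it with the same `CF_DNS_API_TOKEN` you pass to the compose stack; if it returns false, fix the token before debugging Traefik.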
For those of you who aren’t familiar with network tarpits, the concept is fairly simple. Wikipedia defines it like this:
-
-
-
-
-
A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against a computer worm, and the idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a tar pit, in which animals can get bogged down and slowly sink under the surface, like in a swamp.
If you run any sort of service on the internet then you know that as soon as your server has a public IP address and open ports, there are scanners and bots trying to get in constantly. If you take decent steps towards security then it is little more than an annoyance, but annoying nonetheless. One day when I had some extra time on my hands I started researching ways to mess with the bots trying to scan/attack my site.
-
-
-
-
It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting ssh connections is endlessh. The way it works is actually pretty simple. The SSH RFC states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the line that allows this behavior:
-
-
-
-
-
The server MAY send other lines of data before sending the version
string.  Each line SHOULD be terminated by a Carriage Return and Line
Feed.  Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
in ISO-10646 UTF-8 [RFC3629] (language is not specified).  Clients
MUST be able to process such lines.  Such lines MAY be silently
ignored, or MAY be displayed to the client user.  If they are
displayed, control character filtering, as discussed in [SSH-ARCH],
SHOULD be used.  The primary use of this feature is to allow TCP-
wrappers to display an error message before disconnecting.

SSH RFC (RFC 4253)
-
-
-
-
-
Essentially this means that there is no limit to the amount of data that a server can send back to the client, and the client must be able to wait for and process all of it. Now let’s see it in action.
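To make the trick concrete, here is a minimal endlessh-style tarpit sketched in TypeScript using Node's net module. This is an illustration of the idea, not the actual endlessh implementation: it accepts connections and slowly drips random banner lines that never begin with "SSH-", so an SSH client sits there waiting for a version string that never comes.

```typescript
import net from 'node:net';

// Generate one banner line: random printable ASCII, never starting with
// "SSH-", terminated with CRLF as the RFC requires.
export function bannerLine(maxLen = 32): string {
  const len = 3 + Math.floor(Math.random() * (maxLen - 2)); // 3..maxLen chars
  let line = '';
  for (let i = 0; i < len; i++) {
    line += String.fromCharCode(33 + Math.floor(Math.random() * 94)); // '!'..'~'
  }
  // Make sure we never accidentally emit a version string.
  if (line.startsWith('SSH-')) line = 'X' + line.slice(1);
  return line + '\r\n';
}

// Drip one banner line to each client every 10 seconds, forever.
const DELAY_MS = 10_000;

export function startTarpit(port = 2222): net.Server {
  const server = net.createServer((socket) => {
    const timer = setInterval(() => socket.write(bannerLine()), DELAY_MS);
    socket.on('close', () => clearInterval(timer));
    socket.on('error', () => clearInterval(timer));
  });
  return server.listen(port);
}
```

Calling `startTarpit()` and pointing `ssh -p 2222 localhost` at it produces the same hang-forever behavior described below; the 10-second delay and 32-character line limit mirror endlessh's defaults shown in the log output later in this post.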
By default this fake server listens on port 2222. I have a port forward set up that forwards all ssh traffic from port 22 to 2222. Now try to connect via ssh:
-
-
-
-
ssh -vvv localhost -p 2222
-
-
-
-
If you wait a few seconds you will see the server send back the version string and then start sending a random banner:
-
-
-
-
$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
2024-06-24T13:05:59.488Z Delay 10000
2024-06-24T13:05:59.488Z MaxLineLength 32
2024-06-24T13:05:59.488Z MaxClients 4096
2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
2024-06-24T13:05:59.488Z socket() = 3
2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
2024-06-24T13:05:59.488Z listen(3) = 0
2024-06-24T13:05:59.488Z poll(1, -1)
ssh -vvv localhost -p 2222
OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
debug1: Reading configuration data /home/mikeconrad/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
debug2: resolving "localhost" port 2222
debug3: resolve_host: lookup localhost:2222
debug3: ssh_connect_direct: entering
debug1: Connecting to localhost [::1] port 2222.
debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
debug1: Connection established.
2024-06-24T13:06:08.635Z = 1
2024-06-24T13:06:08.635Z accept() = 4
2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
2024-06-24T13:06:08.635Z poll(1, 10000)
debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
2024-06-24T13:06:18.684Z = 0
2024-06-24T13:06:18.684Z write(4) = 3
2024-06-24T13:06:18.684Z poll(1, 10000)
debug1: kex_exchange_identification: banner line 0: V
2024-06-24T13:06:28.734Z = 0
2024-06-24T13:06:28.734Z write(4) = 25
2024-06-24T13:06:28.734Z poll(1, 10000)
debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]
-
-
-
-
-
-
This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program fakessh. While fakessh isn’t technically a tarpit (it’s more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. Essentially it allows any username/password combination to connect and gives them a fake shell prompt. There is no actual access to any file system and all of their commands basically return gibberish.
-
-
-
-
Here are some logs from an actual server of mine running fakessh
Those are mostly connections and disconnections. They probably connected, realized it was fake and disconnected. There are a couple that tried to execute some commands though:
Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:
-
-
-
-
-
Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for Bandwidth Alliance partners, we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.
Hardening your web server by only allowing traffic from Cloudflare
-
-
-
-
-
-
-
TL;DR:
-
-
-
-
If you just want the code you can find a convenient script on my Gitea server here. This version has been slightly modified so that it will work on more systems.
-
-
-
-
-
-
-
-
I have been using Cloudflare for several years for both personal and professional projects. The free plan has some very generous limits and it’s a great way to clear out some low-hanging fruit and improve the security of your application. If you’re not familiar with how it works, Cloudflare has two modes for DNS records: DNS Only and Proxied. The only way to get the advantages of Cloudflare is to use Proxied mode. Cloudflare has some great documentation on how all of their services work, but basically you point your domain to Cloudflare and Cloudflare provisions their network of proxy servers to handle requests for your domain.
-
-
-
-
These proxy servers allow you to secure your domain by implementing things like WAF and Rate limiting. You can also enforce HTTPS only mode and modify/add custom request/response headers. You will notice that once you turn this mode on, your webserver will log requests as coming from Cloudflare IP addresses. They have great documentation on how to configure your webserver to restore these IP addresses in your log files.
-
-
-
-
This is a very easy step to start securing your origin server but it still allows attackers to access your servers directly if they know the IP address. We can take our security one step forward by only allowing requests from IP addresses originating within Cloudflare meaning that we will only allow requests if they are coming from a Cloudflare proxy server. The setup is fairly straightforward. In this example I will be using a Linux server.
-
-
-
-
We can achieve this pretty easily because Cloudflare provides a sort of API where they regularly publish their network blocks. Here is the basic script we will use:
-
-
-
-
for ip in $(curl https://www.cloudflare.com/ips-v4/); do iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done

for ip in $(curl https://www.cloudflare.com/ips-v6/); do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done

iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-
-
-
-
-
This will pull down the latest network addresses from Cloudflare and create iptables rules for us. These IP addresses do change from time to time so you may want to put this in a script and run it via a cronjob to have it update on a regular basis.
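If you'd rather generate the rule list from a scheduled script, here is a rough TypeScript sketch of the same logic. It fetches the published blocks from the same endpoints used above and prints one iptables command per block; running the printed commands still requires root, and the function names are my own:

```typescript
// Build an iptables accept rule for one Cloudflare network block.
// IPv6 blocks (containing ':') go through ip6tables instead.
export function acceptRule(cidr: string): string {
  const bin = cidr.includes(':') ? 'ip6tables' : 'iptables';
  return `${bin} -I INPUT -p tcp -m multiport --dports http,https -s ${cidr} -j ACCEPT`;
}

// Fetch both published lists and print one accept rule per block,
// followed by the default drop rules, as in the shell version above.
export async function printRules(): Promise<void> {
  for (const url of [
    'https://www.cloudflare.com/ips-v4/',
    'https://www.cloudflare.com/ips-v6/',
  ]) {
    const body = await (await fetch(url)).text();
    for (const cidr of body.split('\n').filter(Boolean)) {
      console.log(acceptRule(cidr.trim()));
    }
  }
  console.log('iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP');
  console.log('ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP');
}
```

Piping the output into a shell (or writing it to a script a cron job applies) keeps the allow-list current as Cloudflare's ranges change.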
-
-
-
-
Now with this in place, here are the results:
-
-
-
-
-
-
-
-
This should cut down on some of the noise from attackers and script kiddies trying to find holes in your security.
Back around 2014 I took on my first freelance development project for a homeschool co-op here in Chattanooga called Hilger Higher Learning. The problem that they were trying to solve involved managing grades and report cards for their students. In the past, they had a developer build a rudimentary web application that would allow them to enter grades for students; however, it lacked any sort of concurrency control, meaning that if two teachers were making changes to the same student at the same time, teacher B’s changes would overwrite teacher A’s changes. This was obviously a huge headache.
-
-
-
-
I built out the first version of the app using PHP, HTML, CSS and DataTables, with lots of jQuery sprinkled in. I built in custom functionality that allowed them to easily compile and print all the report cards for all students with the simple click of a button. It was a game changer for them and streamlined the process significantly.
-
-
-
-
That system was in production for 5 years or so with minimal updates and maintenance. I recently rebuilt it using React and ChakraUI on the frontend and KeystoneJS on the backend. I also modernized the deployment by building Docker images for the frontend/backend. I actually ended up keeping parts of it in PHP due to the fact that I couldn’t find a JavaScript library that would solve the challenges I had. Here are some screenshots of it in action:
-
-
-
-
-
-
-
-
This is the page listing all teachers in the system and whether or not they have admin privileges. Any admin user can grant other admin users this privilege. There is also a button to send the teacher a password reset email (via Postmark API integration) and an option that allows admin users to impersonate other users for troubleshooting and diagnostic purposes.
-
-
-
-
-
-
-
-
-
-
-
-
The data is all coming from the KeystoneJS backend GraphQL API. I am using urql for fetching the data and handling mutations. This is the page that displays students. It is filterable and searchable. Teachers also have the ability to mark a student as active or inactive for the semester as well as delete them from the system.
-
-
-
-
-
-
-
-
Clicking on a student takes the teacher/admin to an edit course screen where they can add and remove courses for each student. A teacher can add as many courses as they need. If multiple teachers have added courses for this student, the user will only see the courses they have entered.
-
-
-
-
-
-
-
-
There is another page that allows admin users to view and manage all of the parents in the system. It allows them to easily send a password reset email to the parents as well as to view the parent portal.
While working for Morrison I had the pleasure of building a website for Hoots Wings. The CMS was Perch and it was mostly HTML, CSS, PHP and JavaScript on the frontend; however, I built out a custom store locator using NodeJS and VueJS.
-
-
-
-
-
-
-
-
I was the sole frontend developer responsible for taking the designs from SketchUp and translating them to the site you see now. Most of the blocks and templates are built using a mix of PHP and HTML/SCSS. There was also some JavaScript for things like getting the users location and rendering popups/modals.
-
-
-
-
The store locator was a separate piece that was built in Vue2.0 with a NodeJS backend. For the backend I used KeystoneJS to hold all of the store information. There was also some custom development that was done in order to sync the stores added via the CMS with Yext and vice versa.
-
-
-
-
-
-
-
-
For that piece I ended up having to write a custom integration in Perch that would connect to the NodeJS backend and pull the stores but also make sure that those were in sync with Yext. This required diving into the Yext API some and examining a similar integration that we had for another client site.
-
-
-
-
Unfortunately I don’t have any screen grabs of the admin side of things since that is proprietary but the system I built allowed a site admin to go in and add/edit store locations that would show up on the site and also show up in Yext with the appropriate information.
Mike is a solutions driven professional with 10+ years of experience in the tech industry. He has filled various roles from technical support, frontend and backend development, system administration, networking, IT and DevOps.
-
-
-
-
-
-
I am a problem solver and a change maker who has a unique ability to understand high level concepts and requirements and translate those into actionable, meaningful results.
-
-
-
-
-
-
Things I care about
-
-
-
-
-
-
-
-
-
-
-
Chadev
-
-
-
-
A community for developers by developers right here in Chattanooga TN.
I am a software engineer with 10+ years in the industry. My journey into software development started out in a call center in Tampa Florida doing tech support for Dell Computers. The year was 2010. I was into technology and computers and had studied hard to ace my CompTIA Network+ certificate but I didn’t have much background in programming.
-
-
-
-
I remember some co-workers of mine built some small web applications to help us automate some of the mundane tasks of data entry that we would have to do on support calls. I thought those guys were so cool that they could just make something out of nothing.
-
-
-
-
In 2011 I decided it was time for a change in my life and I quit my job, packed my bags and headed to Chattanooga TN. I got a job working for Comcast as a field service technician since I already had 4 years of experience doing that back in Florida.
-
-
-
-
I continued to mess around with tech and home servers, trying to hone my skills and get my foot in the door in IT. In 2014 I got the opportunity to work full time at my local church in the IT department. At the time we had a staff of around 50 people not including volunteers. My day to day responsibilities included everything from installing/configuring routers and switches to running cabling and setting up servers.
-
-
-
-
It was in that position that my passion for software really started to ignite. As I was involved in managing so many different systems on a daily basis, I began to see the need for automation and software that could make things easier. I was also working part time in a restaurant attached to the church. One of my first software projects was creating a digital menu board with a Raspberry Pi and a TV mounted on a wall outside the restaurant.
-
-
-
-
I remember I struggled to get a WordPress site up and running. I had never touched PHP or JavaScript up to that point but I was pretty comfortable with BASH and some basic scripting. I also have a knack for diving in and trying to figure things out. I finally got that menu board up and running and maintained it for a year or so. It was my first “production” system and sparked something in me that has never left.
-
-
-
-
While in that role, I also designed and built a Kitchen display system that synced with the restaurants POS and pulled in tickets every 30 seconds or so. It was my first full stack development project. HTML, CSS and JavaScript on the frontend and PHP with a MySQL database on the backend. I was pretty proud of that project even though it never really got use in “production”.
-
-
-
-
From there I started learning web proxies and device filtering technologies. I left that role in 2017 and started my own venture, Enxo LLC. My goal was to create parental control software that actually worked: software that was effective and not trivial to get around. I bought a small PowerEdge server, installed Ubuntu Server on it and started playing around with LXC containers.
-
-
-
-
I found an open source proxy solution written in Go that I deployed to my server, and I also set up VPN connections and firewalls. I was using an MDM solution to manage my customers’ devices, and I built a web portal with NodeJS and PHP to help manage everything as well as to generate weekly reports to send out to my customers.
-
-
-
-
I even built an iOS app in Swift but unfortunately was never able to get it into the app store. I learned a lot from that venture but ultimately I felt like I would never be able to take on big tech by myself and I just wouldn’t be able to build it the way that I wanted.
-
-
-
-
Professional Career
-
-
-
-
Fast forward to 2019, the start of my “professional” career. I had been doing the freelance thing for a while and also working in a restaurant part time. It was cool and all, but then my wife got pregnant with our first child and I knew that it was time for me to get a “real” job.
-
-
-
-
I faced the same challenge that a lot of people face in that I had some practical hands on experience but I didn’t have any real world or professional experience. I applied to a number of jobs and eventually got a call back from a company out of Massachusetts called Virtual Inc. They were hiring for a junior and senior web developer position.
-
-
-
-
I applied for the junior position knowing that I didn’t have enough background to get the senior role. The job was OK and the company was pretty cool. I was doing mostly WordPress, a little bit of Salesforce and some other one off CRM’s thrown in the mix as well. It was great to put on my resume and pay the bills but it wasn’t necessarily what I wanted to do long term.
While working for Morrison I had the pleasure of building a website for Hoots Wings. The CMS was Perch and it was mostly HTML, CSS, PHP and JavaScript on the frontend, however I built out a custom store locator using NodeJS and VueJS. I was the sole frontend developer responsible for taking the designs from…
Back around 2014 I took on my first freelance development project for a Homeschool Co-op here in Chattanooga called Hilger Higher Learning. The problem that they were trying to solve involved managing grades and report cards for their students. In the past, they had a developer build a rudimentary web application that would allow them…
Roll your own authenticator app with KeystoneJS and React – pt 2
-
-
-
-
-
-
-
In part 1 of this series we built out a basic backend using KeystoneJS. In this part we will go ahead and start a new React frontend that will interact with our backend. We will be using Vite. Let’s get started. Make sure you are in the authenticator folder and run the following:
-
-
-
-
$ yarn create vite@latest
yarn create v1.22.21
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...

success Installed "[email protected]" with binaries:
  - create-vite
  - cva
✔ Project name: … frontend
✔ Select a framework: › React
✔ Select a variant: › TypeScript

Scaffolding project in /home/mikeconrad/projects/authenticator/frontend...

Done. Now run:

  cd frontend
  yarn
  yarn dev

Done in 10.20s.
-
-
-
-
Let’s go ahead and go into our frontend directory and get started:
-
-
-
-
$ cd frontend
$ yarn
yarn install v1.22.21
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
Done in 10.21s.

$ yarn dev
yarn run v1.22.21
$ vite
Port 5173 is in use, trying another one...

  VITE v5.1.6 ready in 218 ms

  ➜  Local:   http://localhost:5174/
  ➜  Network: use --host to expose
  ➜  press h + enter to show help
-
-
-
-
-
Next go ahead and open the project up in your IDE of choice. I prefer VSCodium:
-
-
-
-
codium frontend
-
-
-
-
Go ahead and open up src/App.tsx and remove all the boilerplate so it looks like this:
Now you should have something that looks like this:
-
-
-
-
-
-
-
-
Alright, we have some of the boring stuff out of the way, now let’s start making some magic. If you aren’t familiar with how TOTP tokens work, I would encourage you to read the RFC for a detailed explanation. In short, an algorithm generates a one-time password using the current time as a source of uniqueness along with the secret key.
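Just to demystify what the library is doing for us, here is a minimal TOTP implementation of my own, following RFC 6238 with SHA-1, 6 digits and a 30-second step. Note that real authenticator secrets are usually base32-encoded; this sketch takes raw key bytes and skips that decoding step:

```typescript
import { createHmac } from 'node:crypto';

// HOTP (RFC 4226): HMAC-SHA1 over an 8-byte big-endian counter,
// then dynamic truncation to a fixed number of decimal digits.
export function hotp(key: Buffer, counter: bigint, digits = 6): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);
  const hash = createHmac('sha1', key).update(msg).digest();
  // Dynamic truncation (RFC 4226 section 5.3)
  const offset = hash[hash.length - 1] & 0x0f;
  const code = (hash.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, '0');
}

// TOTP (RFC 6238): HOTP where the counter is the number of
// 30-second steps since the Unix epoch.
export function totp(key: Buffer, unixSeconds: number, step = 30): string {
  return hotp(key, BigInt(Math.floor(unixSeconds / step)));
}
```

With the RFC 6238 test key `12345678901234567890`, `totp(key, 59)` yields `287082`, matching the published test vectors, which is a nice way to convince yourself the algorithm really is just HMAC plus truncation.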
-
-
-
-
If we really wanted to we could implement this algorithm ourselves but thankfully there are some really simple libraries that do it for us. For our project we will be using one called totp-generator. Let’s go ahead and install it and check it out:
-
-
-
-
$ yarn add totp-generator
-
-
-
-
Now let’s add it to our card component and see what happens. Using it is really simple. We just need to import it, instantiate a new TokenGenerator and pass it our Secret key:
Now save and go back to your browser and you should see that our secret keys are now being displayed as tokens:
-
-
-
-
-
-
-
-
That is pretty cool, the only problem is you need to refresh the page to refresh the token. We will take care of that in part 3 of this series as well as handling fetching tokens from our backend.
Roll your own authenticator app with KeystoneJS and React – pt 3
-
-
-
-
-
-
-
In our previous post we got to the point of displaying an OTP in our card component. Now it is time to refactor a bit and implement a countdown functionality to see when this token will expire. For now we will go ahead and add this logic into our Card component. In order to figure out how to build this countdown timer we first need to understand how the TOTP counter is calculated.
-
-
-
-
In other words, we know that a TOTP token is derived from a secret key and the current time. If we dig into the spec some, we can find that “time” is a reference to Unix epoch time, or the number of seconds that have elapsed since January 1st, 1970. For a little more clarification check out this Stack Exchange article.
-
-
-
-
So if we know that the time is based on epoch time, we also need to know that most TOTP tokens have a validity period of either 30 seconds or 60 seconds. 30 seconds is the most common standard so we will use that for our implementation. If we put all that together then basically all we need is 2 variables:
-
-
-
-
-
Number of seconds since epoch
-
-
-
-
How many seconds until this token expires
-
-
-
-
-
The first one is easy:
-
-
-
-
let secondsSinceEpoch;
secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;

// This gives us a time like: 1710338609
-
-
-
-
For the second one we will need to do a little math, but it’s pretty straightforward: take secondsSinceEpoch modulo 30 and subtract the remainder from 30. Here is what that looks like:
-
-
-
-
let secondsSinceEpoch;
let secondsRemaining;
const period = 30;

secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
secondsRemaining = period - (secondsSinceEpoch % period);
-
-
-
-
Now let’s put all of that together into a function that we can test out to make sure we are getting the results we expect.
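Putting the pieces above together, a version of that function might look like the following sketch (the names here are my own, not necessarily the original code):

```typescript
const period = 30;

// Seconds until the current 30-second TOTP window rolls over.
// Accepts an optional fixed timestamp (in ms) to make it testable.
export function getSecondsRemaining(nowMs: number = Date.now()): number {
  const secondsSinceEpoch = Math.ceil(nowMs / 1000) - 1;
  return period - (secondsSinceEpoch % period);
}

// Simple ticker: log the countdown once per second.
export function startCountdown(): ReturnType<typeof setInterval> {
  return setInterval(() => {
    console.log(`Token expires in ${getSecondsRemaining()}s`);
  }, 1000);
}
```

Calling `startCountdown()` in a scratch file prints a value that walks down from 30 to 1 and then jumps back to 30 as the window rolls over.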
Running this function should give you output similar to the following. In this example we are stopping the timer once it hits 1 second just to show that everything is working as we expect. In our application we will want this time to keep going forever:
Here is a JSfiddle that shows it in action: https://jsfiddle.net/561vg3k7/
-
-
-
-
We can go ahead and add this function to our Card component and get it wired up. I am going to skip ahead a bit and add a progress bar to our card that is synced with our countdown timer and changes colors as it drops below 10 seconds. For now we will be using a setInterval function to accomplish this.
-
-
-
-
Here is what my updated src/Components/Card.tsx looks like:
Pretty straightforward. I also updated my src/index.css and added a style for our progress bar:
-
-
-
-
.progressBar {
  height: 10px;
  position: absolute;
  top: 0;
  left: 0;
  right: inherit;
}
/* Also be sure to add position: relative to .card, which is the parent of this. */
-
-
-
-
Here is what it all looks like in action:
-
-
-
-
-
-
-
-
If you look closely you will notice a few interesting things. First, the color of the progress bar changes from green to red. This is handled by our timerStyle variable. That part is pretty simple: if the timer is less than 10 seconds we set the background color to salmon, otherwise we use light green. The width of the progress bar is controlled by `${100 - (100 / 30 * (30 - secondsRemaining))}%`
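Factoring that width expression into a tiny helper (my own naming) makes it easier to see what it does: it is just the fraction of the 30-second period still remaining, expressed as a percentage.

```typescript
// Width of the progress bar as a percentage of the TOTP period.
// 100 - (100 / period * (period - secondsRemaining)) reduces to
// (secondsRemaining / period) * 100.
export function progressWidth(secondsRemaining: number, period = 30): number {
  return 100 - (100 / period) * (period - secondsRemaining);
}
```

So a full window gives 100%, half a window gives 50%, and an expired window gives 0%, which is exactly the shrink-to-zero effect in the screenshots.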
-
-
-
-
The other interesting thing to note is that when the timer runs out it automatically restarts at 30 seconds with a new OTP. This is due to the fact that this component is re-rendering every 1/4 second and every time it re-renders it is running the entire function body including: const { otp } = TOTP.generate(token.token);.
-
-
-
-
This is to be expected since we are using React and we are just using a setInterval. It may be a little unexpected though if you aren’t as familiar with React render cycles. For our purposes this will work just fine for now. Stay tuned for pt 4 of this series where we wire up the backend API.
Roll your own authenticator app with KeystoneJS and React
-
-
-
-
-
-
-
In this series of articles we are going to be building an authenticator app using KeystoneJS for the backend and React for the frontend. The concept is pretty simple and yes there are a bunch out there already but I recently had a need to learn some of the ins and outs of TOTP tokens and thought this project would be a fun idea. Let’s get started.
-
-
-
-
Step 1: Init keystone app
-
-
-
-
Open up a terminal and create a blank keystone project. We are going to call our app authenticator to keep things simple.
-
-
-
-
$ yarn create keystone-app
yarn create v1.22.21
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...

success Installed "[email protected]" with binaries:
  - create-keystone-app
[###################################################################################################################################################################################] 273/273
✨ You're about to generate a project using Keystone 6 packages.

✔ What directory should create-keystone-app generate your app into? · authenticator

⠸ Installing dependencies with yarn. This may take a few minutes.
⚠ Failed to install with yarn.
✔ Installed dependencies with npm.


🎉 Keystone created a starter project in: authenticator

  To launch your app, run:

  - cd authenticator
  - npm run dev

  Next steps:

  - Read authenticator/README.md for additional getting started details.
  - Edit authenticator/keystone.ts to customize your app.
  - Open the Admin UI
  - Open the Graphql API
  - Read the docs
  - Star Keystone on GitHub

Done in 84.06s.
-
-
-
-
-
After a few minutes you should be ready to go. Ignore the error about yarn not being able to install dependencies; it’s an issue with my setup. Next go ahead and open up the project folder with your editor of choice. I use VSCodium:
-
-
-
-
codium authenticator
-
-
-
-
Let’s go ahead and remove all the comments from the schema.ts file and clean it up some:
-
-
-
-
sed -i '/\/\//d' schema.ts
-
-
-
-
Also, go ahead and delete the Post and Tag lists as we won’t be using them. Our cleaned-up schema.ts should look like this:
Next we will define the schema for our tokens. We will need 3 basic things to start with:
-
-
-
-
-
Issuer
-
-
-
-
Secret Key
-
-
-
-
Account
-
-
-
-
-
The only thing that really matters for generating a TOTP is actually the secret key. The other two fields are mostly for identifying and differentiating tokens. Go ahead and add the following to our schema.ts underneath the User list:
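The code listing was dropped from this export, but a minimal sketch of what the Token list could look like in Keystone 6, using the three fields above (field names and validation options are my assumptions; list, text and allowAll are already imported by the starter schema):

```typescript
// In schema.ts, inside the `lists` object, underneath User:
Token: list({
  access: allowAll, // fine for local experimentation; lock down before deploying
  fields: {
    issuer: text({ validation: { isRequired: true } }),
    account: text({ validation: { isRequired: true } }),
    secretKey: text({ validation: { isRequired: true } }),
  },
}),
```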
Now that we have defined our Token, we should probably link it to a user. KeystoneJS makes this really easy. We simply need to add a relationship field to our User list. Add the following field to the User list:
-
-
-
-
tokens: relationship({ ref:'Token', many: true })
-
-
-
-
We are defining a tokens field on the User list and tying it to our Token list. We are also passing many: true, meaning a user can have multiple tokens. Now that we have the basics set up, let’s go ahead and spin up our app and see what we have:
-
-
-
-
$ yarn dev
-yarn run v1.22.21
-$ keystone dev
-✨ Starting Keystone
-⭐️ Server listening on :3000 (http://localhost:3000/)
-⭐️ GraphQL API available at /api/graphql
-✨ Generating GraphQL and Prisma schemas
-✨ The database is already in sync with the Prisma schema
-✨ Connecting to the database
-✨ Creating server
-✅ GraphQL API ready
-✨ Generating Admin UI code
-✨ Preparing Admin UI app
-✅ Admin UI ready
-
-
-
-
-
Our server should be running on localhost:3000 so let’s check it out! The first time we open it up we will be greeted with the initialization screen. Go ahead and create an account to login:
-
-
-
-
-
-
-
-
Once you login you should see a dashboard similar to this:
-
-
-
-
-
-
-
-
You can see we have Users and Tokens that we can manage. The beauty of KeystoneJS is that you get full CRUD functionality out of the box just by defining your schema! Go ahead and click on Tokens to add a token:
-
-
-
-
-
-
-
-
For this example I just entered some random text. This is enough to start testing out our TOTP functionality. Click ‘Create Token’ and you should see a list displaying existing tokens:
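Tokens can also be created through the GraphQL API that Keystone generates from the same schema, which is handy once the frontend needs to talk to the backend. A hedged example against our Token list (the issuer, account and secret values here are made up):

```graphql
mutation {
  createToken(
    data: {
      issuer: "GitHub"
      account: "mike@example.com"
      secretKey: "JBSWY3DPEHPK3PXP"
    }
  ) {
    id
    issuer
  }
}
```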
-
-
-
-
-
-
-
-
We are now ready to jump into the frontend. Stay tuned for pt 2 of this series.
I am a big proponent of open source technologies. I have been using Gitea for a couple of years now in my homelab. A few years ago I moved most of my code off of GitHub and onto my self-hosted instance. I recently came across a really handy feature that I didn’t know Gitea had and was pleasantly surprised by: Package Registry. You are no doubt familiar with what a package registry is in the broad context. Here are some examples of package registries you probably use on a regular basis:
-
-
-
-
-
npm
-
-
-
-
cargo
-
-
-
-
docker
-
-
-
-
composer
-
-
-
-
nuget
-
-
-
-
helm
-
-
-
-
-
There are a number of reasons why you would want to self host a registry. For example, in my home lab I have some Docker images that are specific to my use cases and I don’t necessarily want them on a public registry. I’m also not concerned about losing the artifacts as I can easily recreate them from code. Gitea makes this really easy to set up; in fact, it comes baked into the installation. For the sake of this post I will just assume that you already have Gitea installed and set up.
-
-
-
-
Since the package registry is baked in and enabled by default, I will demonstrate how easy it is to push a Docker image. We will pull the default Alpine image, re-tag it and push it to our internal registry:
-
-
-
-
# Pull the official Alpine image
-docker pull alpine:latest
-
-# Re tag the image with our local registry information
-docker tag alpine:latest git.hackanooga.com/mikeconrad/alpine:latest
-
-# Login using your gitea user account
-docker login git.hackanooga.com
-
-# Push the image to our registry
-docker push git.hackanooga.com/mikeconrad/alpine:latest
-
-
-
-
-
-
Now log into your Gitea instance, navigate to your user account and look for packages. You should see the newly uploaded alpine image.
-
-
-
-
-
-
-
-
You can see that the package type is container. Clicking on it will give you more information:
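Containers are only one of the supported package types; the same registry can also hold generic artifacts. Assuming the same Gitea instance at git.hackanooga.com and a personal access token in $GITEA_TOKEN, uploading and downloading a generic package is one curl call each (the package name, version and file name here are made up for illustration):

```shell
# Upload backup.tar.gz as version 1.0.0 of a generic package named "backup-scripts"
curl --user mikeconrad:$GITEA_TOKEN \
  --upload-file ./backup.tar.gz \
  https://git.hackanooga.com/api/packages/mikeconrad/generic/backup-scripts/1.0.0/backup.tar.gz

# Download it again later
curl --user mikeconrad:$GITEA_TOKEN \
  -O https://git.hackanooga.com/api/packages/mikeconrad/generic/backup-scripts/1.0.0/backup.tar.gz
```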
SFTP Server Setup for Daily Inventory File Transfers
-
-
-
-
-
-
-
Job Description
-
-
-
-
-
We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:
-
-
-
-
SFTP server setup and configuration
Network security protocols
Troubleshooting and problem-solving skills
-
-
-
-
If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.
-
-
-
-
-
-
-
-
-
-
-
-
-
My Role
-
-
-
-
I walked the client through the process of setting up a Digital Ocean account. I created an Ubuntu 22.04 VM and installed SFTPGo. I set the client up with an administrator user so that they could easily log in and manage users and shares. I implemented some basic security practices as well and set the client up with a custom domain and a free TLS/SSL certificate from Let's Encrypt. With the documentation and screenshots I provided, the client was able to get everything up and running, add users and connect other systems easily and securely.
-
-
-
-
-
-
-
-
-
-
-
-
Client Feedback
-
-
-
-
-
Rating is 5 out of 5.
-
-
-
-
Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.
VPNs have traditionally been slow, complex and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel (src). I won’t go over all the reasons why you should want to use Wireguard in this article; instead I will be focusing on just how easy it is to set up and configure.
-
-
-
-
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server here.
-
-
-
-
Create Droplet with Terraform
-
-
-
-
I have written some very basic Terraform to get us started. It simply creates a droplet with a predefined SSH key and a setup script passed as user data. When the droplet gets created, the script will get copied to the instance and automatically executed. After a few minutes everything should be ready to go. Feel free to clone the repo above, or if you would rather do everything by hand that’s great too. I will assume that you are doing everything by hand; the process of deploying from the repo should be pretty self-explanatory. My reasoning for doing it this way is that I wanted to better understand the process.
-
-
-
-
First create our main.tf with the following contents:
-
-
-
-
# main.tf
-# Attach an SSH key to our droplet
-resource "digitalocean_ssh_key" "default" {
- name = "Terraform Example"
- public_key = file("./tf-digitalocean.pub")
-}
-
-# Create a new Web Droplet in the nyc1 region
-resource "digitalocean_droplet" "web" {
- image = "ubuntu-22-04-x64"
- name = "wireguard"
- region = "nyc1"
- size = "s-2vcpu-4gb"
- ssh_keys = [digitalocean_ssh_key.default.fingerprint]
- user_data = file("setup.sh")
-}
-
-output "droplet_output" {
- value = digitalocean_droplet.web.ipv4_address
-}
-
-
-
-
Next create a terraform.tf file in the same directory with the following contents:
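The listing for this file was lost in the export; a typical terraform.tf for the Digital Ocean provider looks roughly like this (the provider version constraint and variable name are my assumptions, not from the original repo):

```terraform
# terraform.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# Pass your API token via -var or a TF_VAR_do_token environment variable.
variable "do_token" {
  sensitive = true
}

provider "digitalocean" {
  token = var.do_token
}
```

Note that the Digital Ocean provider can also read a DIGITALOCEAN_TOKEN environment variable directly, in which case the variable and token argument can be omitted.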
Now we are ready to initialize our Terraform and apply it:
-
-
-
-
$ terraform init
-$ terraform apply
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- # digitalocean_droplet.web will be created
- + resource "digitalocean_droplet" "web" {
- + backups = false
- + created_at = (known after apply)
- + disk = (known after apply)
- + graceful_shutdown = false
- + id = (known after apply)
- + image = "ubuntu-22-04-x64"
- + ipv4_address = (known after apply)
- + ipv4_address_private = (known after apply)
- + ipv6 = false
- + ipv6_address = (known after apply)
- + locked = (known after apply)
- + memory = (known after apply)
- + monitoring = false
- + name = "wireguard"
- + price_hourly = (known after apply)
- + price_monthly = (known after apply)
- + private_networking = (known after apply)
- + region = "nyc1"
- + resize_disk = true
- + size = "s-2vcpu-4gb"
- + ssh_keys = (known after apply)
- + status = (known after apply)
- + urn = (known after apply)
- + user_data = "69d130f386b262b136863be5fcffc32bff055ac0"
- + vcpus = (known after apply)
- + volume_ids = (known after apply)
- + vpc_uuid = (known after apply)
- }
-
- # digitalocean_ssh_key.default will be created
- + resource "digitalocean_ssh_key" "default" {
- + fingerprint = (known after apply)
- + id = (known after apply)
- + name = "Terraform Example"
- + public_key = <<-EOT
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
- EOT
- }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-Changes to Outputs:
- + droplet_output = (known after apply)
-
-Do you want to perform these actions?
- Terraform will perform the actions described above.
- Only 'yes' will be accepted to approve.
-
- Enter a value: yes
-
-digitalocean_ssh_key.default: Creating...
-digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
-digitalocean_droplet.web: Creating...
-digitalocean_droplet.web: Still creating... [10s elapsed]
-digitalocean_droplet.web: Still creating... [20s elapsed]
-digitalocean_droplet.web: Still creating... [30s elapsed]
-digitalocean_droplet.web: Creation complete after 31s [id=447469336]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-droplet_output = "159.223.113.207"
-
-
-
-
-
All pretty standard stuff. Nice! It only took about 30 seconds or so on my machine to spin up a droplet and start provisioning it. It is worth noting that the setup script will take a few minutes to run. Before we log into our new droplet, let’s take a quick look at the setup script that we are running.
-
-
-
-
#!/usr/bin/env sh
-set -e
-set -u
-# Set the listen port used by Wireguard, this is the default so feel free to change it.
-LISTENPORT=51820
-CONFIG_DIR=/root/wireguard-conf
-umask 077
-mkdir -p $CONFIG_DIR/client
-
-# Install wireguard
-apt update && apt install -y wireguard
-
-# Generate public/private key for the "server".
-wg genkey > $CONFIG_DIR/privatekey
-wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey
-
-# Generate public/private key for the "client"
-wg genkey > $CONFIG_DIR/client/privatekey
-wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey
-
-
-# Generate server config
-echo "[Interface]
-Address = 10.66.66.1/24,fd42:42:42::1/64
-ListenPort = $LISTENPORT
-PrivateKey = $(cat $CONFIG_DIR/privatekey)
-
-### Client config
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/client/publickey)
-AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
-" > /etc/wireguard/do.conf
-
-
-# Generate client config. This will need to be copied to your machine.
-echo "[Interface]
-PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
-Address = 10.66.66.2/32,fd42:42:42::2/128
-DNS = 1.1.1.1,1.0.0.1
-
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/publickey)
-Endpoint = $(curl icanhazip.com):$LISTENPORT
-AllowedIPs = 0.0.0.0/0,::/0
-" > client-config.conf
-
-wg-quick up do
-
-# Add iptables rules to forward internet traffic through this box
-# We are assuming our Wireguard interface is called do and our
-# primary public facing interface is called eth0.
-
-iptables -I INPUT -p udp --dport 51820 -j ACCEPT
-iptables -I FORWARD -i eth0 -o do -j ACCEPT
-iptables -I FORWARD -i do -j ACCEPT
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-ip6tables -I FORWARD -i do -j ACCEPT
-ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
-# Enable routing on the server
-echo "net.ipv4.ip_forward = 1
- net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
-sysctl --system
-
-
-
-
As you can see, it is pretty straightforward. All you really need to do is:
-
-
-
-
On the “server” side:
-
-
-
-
-
Generate a private key and derive a public key from it for both the “server” and the “client”.
-
-
-
-
Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
-
-
-
-
The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.
-
-
-
-
-
On the “client” side:
-
-
-
-
-
Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
-
-
-
-
The client needs to know which private key to use to secure the interface.
-
-
-
-
It also needs to know the public key of the server as well as the public IP address/hostname of the “server” it is connecting to as well as the port it is listening on.
-
-
-
-
Finally it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic but you could restrict this as you see fit.
-
-
-
-
-
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:
-
-
-
-
## Make sure you have Wireguard installed on your local machine as well.
-## https://wireguard.com/install
-
-## Copy the client config to our local machine and move it to our wireguard directory.
-$ ssh -i tf-digitalocean [email protected] -- cat /root/wireguard-conf/client-config.conf| sudo tee /etc/wireguard/do.conf
-
-
-
-
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
-
-
-
-
$ ssh -i tf-digitalocean [email protected]
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com
- * Management: https://landscape.canonical.com
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:19:02 UTC 2024
-
- System load: 0.03 Processes: 113
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.221.196
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-70 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
-root@wireguard:~#
-
-
-
-
-
-
Awesome! We are connected. Now let’s check the Wireguard interface using the wg command. If our config was correct, we should see an interface line and one peer line like so. If the peer line is missing, something is wrong with the configuration, most likely a mismatch between the public and private keys:
-
-
-
-
root@wireguard:~# wg
-interface: do
- public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
- private key: (hidden)
- listening port: 51820
-
-peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
- allowed ips: 10.66.66.2/32, fd42:42:42::2/128
-root@wireguard:~#
-
-
-
-
So now we should be ready to go! On your local machine go ahead and try it out:
-
-
-
-
## Start the interface with wg-quick up [interface_name]
-$ sudo wg-quick up do
-[sudo] password for mikeconrad:
-[#] ip link add do type wireguard
-[#] wg setconf do /dev/fd/63
-[#] ip -4 address add 10.66.66.2/32 dev do
-[#] ip -6 address add fd42:42:42::2/128 dev do
-[#] ip link set mtu 1420 up dev do
-[#] resolvconf -a do -m 0 -x
-[#] wg set do fwmark 51820
-[#] ip -6 route add ::/0 dev do table 51820
-[#] ip -6 rule add not fwmark 51820 table 51820
-[#] ip -6 rule add table main suppress_prefixlength 0
-[#] ip6tables-restore -n
-[#] ip -4 route add 0.0.0.0/0 dev do table 51820
-[#] ip -4 rule add not fwmark 51820 table 51820
-[#] ip -4 rule add table main suppress_prefixlength 0
-[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
-[#] iptables-restore -n
-
-## Check our config
-$ sudo wg
-interface: do
- public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
- private key: (hidden)
- listening port: 34596
- fwmark: 0xca6c
-
-peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
- endpoint: 157.230.177.54:51820
- allowed ips: 0.0.0.0/0, ::/0
- latest handshake: 1 second ago
- transfer: 1.82 KiB received, 2.89 KiB sent
-
-## Make sure we can ping the outside world
-mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
-PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
-64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
-^C
---- 1.1.1.1 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
-rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms
-
-## Verify our traffic is actually going over the tunnel.
-$ curl icanhazip.com
-157.230.177.54
-
-
-
-
-
-
-
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
-
-
-
-
$ ssh -i tf-digitalocean [email protected]
-The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
-ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
-This host key is known by the following other names/addresses:
- ~/.ssh/known_hosts:130: [hashed name]
-Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
-Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com
- * Management: https://landscape.canonical.com
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:32:12 UTC 2024
-
- System load: 0.02 Processes: 109
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.177.54
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-73 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-root@wireguard:~#
-
-
-
-
-
Looks like everything is working! If you run the script from the repo you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive but instead a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
These are some handy snippets I use on a regular basis when managing containers. I have one server in particular that can sometimes end up with 50 to 100 orphaned containers for various reasons. The easiest/quickest way to stop all of them is to do something like this:
-
-
-
-
docker container stop $(docker container ps -q)
-
-
-
-
Let me break this down in case you are not familiar with the syntax. Basically we are passing the output of docker container ps -q into docker container stop. This works because the stop command can take a list of container IDs, which is what we get when passing the -q flag to docker container ps.
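Stopping is usually only half the job; the same substitution pattern works for removing the stopped containers afterwards, or you can let Docker clean them up for you (both are standard Docker CLI commands):

```shell
# Remove all containers once they are stopped (-a includes non-running ones)
docker container rm $(docker container ps -aq)

# Or the one-liner Docker provides for exactly this:
docker container prune -f
```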
-
-
-
-
-
-
-[Peer]
-PublicKey = $(cat $CONFIG_DIR/publickey)
-Endpoint = $(curl icanhazip.com):$LISTENPORT
-AllowedIPs = 0.0.0.0/0,::/0
-" > client-config.conf
-
-wg-quick up do
-
-# Add iptables rules to forward internet traffic through this box
-# We are assuming our Wireguard interface is called do and our
-# primary public facing interface is called eth0.
-
-iptables -I INPUT -p udp --dport 51820 -j ACCEPT
-iptables -I FORWARD -i eth0 -o do -j ACCEPT
-iptables -I FORWARD -i do -j ACCEPT
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-ip6tables -I FORWARD -i do -j ACCEPT
-ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
-# Enable routing on the server
-echo "net.ipv4.ip_forward = 1
- net.ipv6.conf.all.forwarding = 1" >/etc/sysctl.d/wg.conf
-sysctl --system
-
-
-
-
As you can see, it is pretty straightforward. All you really need to do is:
-
-
-
-
On the “server” side:
-
-
-
-
-
Generate a private key and derive a public key from it for both the “server” and the “client”.
-
-
-
-
Create a “server” config that tells the droplet what address to bind to for the wireguard interface, which private key to use to secure that interface and what port to listen on.
-
-
-
-
The “server” config also needs to know what peers or “clients” to accept connections from in the AllowedIPs block. In this case we are just specifying one. The “server” also needs to know the public key of the “client” that will be connecting.
-
-
-
-
-
On the “client” side:
-
-
-
-
-
Create a “client” config that tells our machine what address to assign to the wireguard interface (obviously needs to be on the same subnet as the interface on the server side).
-
-
-
-
The client needs to know which private key to use to secure the interface.
-
-
-
-
It also needs to know the public key of the “server”, the public IP address or hostname of the “server” it is connecting to, and the port it is listening on.
-
-
-
-
Finally, it needs to know what traffic to route over the wireguard interface. In this example we are simply routing all traffic, but you could restrict this as you see fit.
-
-
-
-
-
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you make sure to replace the IP address with the IP address of your newly created droplet:
-
-
-
-
## Make sure you have Wireguard installed on your local machine as well.
-## https://wireguard.com/install
-
-## Copy the client config to our local machine and move it to our wireguard directory.
-$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf | sudo tee /etc/wireguard/do.conf
-
-
-
-
Before we try to connect, let’s log into the server and make sure everything is set up correctly:
-
-
-
-
$ ssh -i tf-digitalocean root@159.223.113.207
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:19:02 UTC 2024
-
- System load: 0.03 Processes: 113
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.221.196
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-70 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
-root@wireguard:~#
-
-
-
-
-
-
Awesome! We are connected. Now let’s check the wireguard interface using the wg command. If our config is correct, we should see one interface block and one peer block, like so. If the peer block is missing, something is wrong with the configuration, most likely a mismatch between public/private keys:
-
-
-
-
root@wireguard:~# wg
-interface: do
- public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
- private key: (hidden)
- listening port: 51820
-
-peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
- allowed ips: 10.66.66.2/32, fd42:42:42::2/128
-root@wireguard:~#
-
-
-
-
So now we should be ready to go! On your local machine go ahead and try it out:
-
-
-
-
## Start the interface with wg-quick up [interface_name]
-$ sudo wg-quick up do
-[sudo] password for mikeconrad:
-[#] ip link add do type wireguard
-[#] wg setconf do /dev/fd/63
-[#] ip -4 address add 10.66.66.2/32 dev do
-[#] ip -6 address add fd42:42:42::2/128 dev do
-[#] ip link set mtu 1420 up dev do
-[#] resolvconf -a do -m 0 -x
-[#] wg set do fwmark 51820
-[#] ip -6 route add ::/0 dev do table 51820
-[#] ip -6 rule add not fwmark 51820 table 51820
-[#] ip -6 rule add table main suppress_prefixlength 0
-[#] ip6tables-restore -n
-[#] ip -4 route add 0.0.0.0/0 dev do table 51820
-[#] ip -4 rule add not fwmark 51820 table 51820
-[#] ip -4 rule add table main suppress_prefixlength 0
-[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
-[#] iptables-restore -n
-
-## Check our config
-$ sudo wg
-interface: do
- public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
- private key: (hidden)
- listening port: 34596
- fwmark: 0xca6c
-
-peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
- endpoint: 157.230.177.54:51820
- allowed ips: 0.0.0.0/0, ::/0
- latest handshake: 1 second ago
- transfer: 1.82 KiB received, 2.89 KiB sent
-
-## Make sure we can ping the outside world
-mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
-PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
-64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
-^C
---- 1.1.1.1 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
-rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms
-
-## Verify our traffic is actually going over the tunnel.
-$ curl icanhazip.com
-157.230.177.54
-
-
-
-
-
-
-
We should also be able to ssh into our instance over the VPN using the 10.66.66.1 address:
-
-
-
-
$ ssh -i tf-digitalocean root@10.66.66.1
-The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
-ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
-This host key is known by the following other names/addresses:
- ~/.ssh/known_hosts:130: [hashed name]
-Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
-Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
-Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
- * Management: https://landscape.canonical.com/
- * Support: https://ubuntu.com/pro
-
- System information as of Wed Sep 25 13:32:12 UTC 2024
-
- System load: 0.02 Processes: 109
- Usage of /: 2.1% of 77.35GB Users logged in: 0
- Memory usage: 6% IPv4 address for eth0: 157.230.177.54
- Swap usage: 0% IPv4 address for eth0: 10.10.0.5
-
-Expanded Security Maintenance for Applications is not enabled.
-
-73 updates can be applied immediately.
-40 of these updates are standard security updates.
-To see these additional updates run: apt list --upgradable
-
-Enable ESM Apps to receive additional future security updates.
-See https://ubuntu.com/esm or run: sudo pro status
-
-New release '24.04.1 LTS' available.
-Run 'do-release-upgrade' to upgrade to it.
-
-
-root@wireguard:~#
-
-
-
-
-
Looks like everything is working! If you run the script from the repo you will have a fully functioning WireGuard VPN in less than 5 minutes. Pretty cool stuff! This article was not meant to be exhaustive, but rather a simple primer to get your feet wet. The setup script I used is heavily inspired by angristan/wireguard-install. Another great resource is the Unofficial docs repo.
-]]>
-
-
-
-
-
- Hardening your web server by only allowing traffic from Cloudflare
- /hardening-your-web-server-by-only-allowing-traffic-from-cloudflare/
-
-
- Thu, 01 Aug 2024 21:02:29 +0000
-
-
-
-
-
- /?p=607
-
-
 - TL;DR:
-
-
-
-
If you just want the code you can find a convenient script on my Gitea server here. This version has been slightly modified so that it will work on more systems.
-
-
-
-
-
-
-
-
I have been using Cloudflare for several years for both personal and professional projects. The free plan has fairly generous limits, and it’s a great way to clear out some low hanging fruit and improve the security of your application. If you’re not familiar with how it works: Cloudflare has two modes for DNS records, DNS Only and Proxied. The only way to get the advantages of Cloudflare is to use Proxied mode. Cloudflare has some great documentation on how all of their services work, but essentially you point your domain at Cloudflare, and Cloudflare routes requests for your domain through its network of proxy servers.
-
-
-
-
These proxy servers allow you to secure your domain by implementing things like a WAF and rate limiting. You can also enforce HTTPS-only mode and modify or add custom request/response headers. You will notice that once you turn this mode on, your webserver will log requests as coming from Cloudflare IP addresses. They have great documentation on how to configure your webserver to restore the original visitor IP addresses in your log files.
-
-
-
-
This is a very easy first step toward securing your origin server, but it still allows attackers to access your server directly if they know its IP address. We can take our security one step further by only allowing requests from IP addresses originating within Cloudflare, meaning we will only accept requests that come through a Cloudflare proxy server. The setup is fairly straightforward. In this example I will be using a Linux server.
-
-
-
-
We can achieve this pretty easily because Cloudflare provides a sort of API where they regularly publish their network blocks. Here is the basic script we will use:
-
-
-
-
for ip in $(curl https://www.cloudflare.com/ips-v4/); do iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-for ip in $(curl https://www.cloudflare.com/ips-v6/); do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
-
-iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP
-
-
-
-
-
This will pull down the latest network addresses from Cloudflare and create iptables rules for us. These IP addresses do change from time to time so you may want to put this in a script and run it via a cronjob to have it update on a regular basis.
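To keep the rules current, a refresh script along these lines could be dropped into cron. This is a sketch on my part (the path `/usr/local/sbin/update-cf-rules.sh` and the schedule are my own suggestions, not from the original):

```shell
#!/usr/bin/env sh
# Hypothetical cron-driven refresh, e.g. in root's crontab:
#   0 4 * * * /usr/local/sbin/update-cf-rules.sh
set -u

# Build the rule arguments for one CIDR block. Kept in a helper so the
# rule shape is defined in exactly one place.
rule_args() {
    printf 'INPUT -p tcp -m multiport --dports http,https -s %s -j ACCEPT' "$1"
}

# Only touch the firewall when running as root with iptables available;
# otherwise the script is a harmless dry run.
if [ "$(id -u)" -eq 0 ] && command -v iptables >/dev/null 2>&1; then
    for ip in $(curl -s https://www.cloudflare.com/ips-v4/ 2>/dev/null); do
        iptables -I $(rule_args "$ip")
    done
    for ip in $(curl -s https://www.cloudflare.com/ips-v6/ 2>/dev/null); do
        ip6tables -I $(rule_args "$ip")
    done
fi
```

Note that this keeps prepending rules on every run; in practice you would want to flush the previously added Cloudflare rules first (for example by putting them in a dedicated chain) before re-adding the fresh list.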
-
-
-
-
Now with this in place, here are the results:
-
-
-
-
-
-
-
-
This should cut down on some of the noise from attackers and script kiddies trying to find holes in your security.
-]]>
-
-
-
-
-
- Debugging running Nginx config
- /debugging-running-nginx-config/
-
-
- Wed, 17 Jul 2024 01:42:43 +0000
-
-
-
-
- /?p=596
-
-
 - I was recently working on a project where a client had cPanel/WHM with Nginx and Apache. They had a large number of sites managed by Nginx with a large number of includes. I created a custom config to override a location block and needed to be certain that my changes were actually being picked up. Anytime I make changes to an Nginx config, I try to be vigilant about running:
-
-
-
-
nginx -t
-
-
-
-
-to test my configuration and ensure I don’t have any syntax errors. I was looking for an easy way to view the actual compiled config and found the -T flag, which will test the configuration and dump it to standard out. This is pretty handy if you have a large number of includes in various locations. Here is an example from a fresh Nginx Docker container:
As you can see from the output above, we get all of the various Nginx config files in use printed to the console, perfect for grepping or searching/filtering with other tools.
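Since the container output was trimmed here, a self-contained sketch of the grep workflow follows; the `sample` variable is made-up stand-in text for real `nginx -T` output, which prefixes each included file with a `# configuration file` marker:

```shell
# On a real server: nginx -T 2>/dev/null | grep '^# configuration file'
# The sample below stands in for actual `nginx -T` output.
sample='# configuration file /etc/nginx/nginx.conf:
user  nginx;
# configuration file /etc/nginx/conf.d/default.conf:
location / {
    proxy_pass http://127.0.0.1:3000;
}'

# List every file that contributes to the compiled config:
printf '%s\n' "$sample" | grep '^# configuration file'

# Or count them:
files=$(printf '%s\n' "$sample" | grep -c '^# configuration file')
echo "$files files in the compiled config"
```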
-]]>
-
-
-
-
-
- Fun with bots – SSH tarpitting
- /fun-with-bots-ssh-tarpitting/
-
-
- Mon, 24 Jun 2024 13:37:43 +0000
-
-
-
-
-
-
- /?p=576
-
-
- For those of you who aren’t familiar with the concept of a network tarpit it is a fairly simple concept. Wikipedia defines it like this:
-
-
-
-
-
A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against a computer worm, and the idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a tar pit, in which animals can get bogged down and slowly sink under the surface, like in a swamp.
If you run any sort of service on the internet, then you know that as soon as your server has a public IP address and open ports, there are scanners and bots trying to get in constantly. If you take decent steps towards security, then it is little more than an annoyance, but annoying nonetheless. One day when I had some extra time on my hands, I started researching ways to mess with the bots trying to scan/attack my site.
-
-
-
-
It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting ssh connections is endlessh. The way it works is actually pretty simple. The SSH RFC states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the line that allows this behavior:
-
-
-
-
-
The server MAY send other lines of data before sending the version
- string. Each line SHOULD be terminated by a Carriage Return and Line
- Feed. Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
- in ISO-10646 UTF-8 [RFC3629] (language is not specified). Clients
- MUST be able to process such lines. Such lines MAY be silently
- ignored, or MAY be displayed to the client user. If they are
- displayed, control character filtering, as discussed in [SSH-ARCH],
- SHOULD be used. The primary use of this feature is to allow TCP-
- wrappers to display an error message before disconnecting.
-SSH RFC
-
-
-
-
Essentially this means that there is no limit to the amount of data that a server can send back to the client, and the client must be able to wait for and process all of this data. Now let’s see it in action.
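Endlessh reads a small config file; the values below mirror the settings visible in the log output further down (the path is the project's conventional default, an assumption on my part):

```
# /etc/endlessh/config
Port 2222
Delay 10000          # milliseconds to wait between banner lines
MaxLineLength 32     # maximum length of each random banner line
MaxClients 4096
```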
By default this fake server listens on port 2222. I have a port forward set up that forwards all ssh traffic from port 22 to 2222. Now try to connect via ssh:
-
-
-
-
ssh -vvv localhost -p 2222
-
-
-
-
If you wait a few seconds you will see the server send back the version string and then start sending a random banner:
-
-
-
-
$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
-2024-06-24T13:05:59.488Z Delay 10000
-2024-06-24T13:05:59.488Z MaxLineLength 32
-2024-06-24T13:05:59.488Z MaxClients 4096
-2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
-2024-06-24T13:05:59.488Z socket() = 3
-2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
-2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
-2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
-2024-06-24T13:05:59.488Z listen(3) = 0
-2024-06-24T13:05:59.488Z poll(1, -1)
-ssh -vvv localhost -p 2222
-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
-debug1: Reading configuration data /home/mikeconrad/.ssh/config
-debug1: Reading configuration data /etc/ssh/ssh_config
-debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
-debug1: /etc/ssh/ssh_config line 21: Applying options for *
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
-debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
-debug2: resolving "localhost" port 2222
-debug3: resolve_host: lookup localhost:2222
-debug3: ssh_connect_direct: entering
-debug1: Connecting to localhost [::1] port 2222.
-debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
-debug1: Connection established.
-2024-06-24T13:06:08.635Z = 1
-2024-06-24T13:06:08.635Z accept() = 4
-2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
-2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
-2024-06-24T13:06:08.635Z poll(1, 10000)
-debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
-debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
-debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
-debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
-debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
-debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
-2024-06-24T13:06:18.684Z = 0
-2024-06-24T13:06:18.684Z write(4) = 3
-2024-06-24T13:06:18.684Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 0: V
-2024-06-24T13:06:28.734Z = 0
-2024-06-24T13:06:28.734Z write(4) = 25
-2024-06-24T13:06:28.734Z poll(1, 10000)
-debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]
-
-
-
-
-
-
This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program fakessh. While fakessh isn’t technically a tarpit (it’s more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. Essentially it allows any username/password combination to connect and gives the attacker a fake shell prompt. There is no actual access to any file system, and all of their commands basically return gibberish.
-
-
-
-
Here are some logs from an actual server of mine running fakessh
Those are mostly connections and disconnections. They probably connected, realized it was fake and disconnected. There are a couple that tried to execute some commands though:
Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:
-
-
-
-
-
Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for Bandwidth Alliance partners, we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.
-]]>
-
-
-
-
-
- Traefik 3.0 service discovery in Docker Swarm mode
- /traefik-3-0-service-discovery-in-docker-swarm-mode/
-
-
- Sat, 11 May 2024 13:44:01 +0000
-
-
-
-
-
-
- /?p=564
-
-
 - I recently decided to set up a Docker swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity, and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s, it can be a good choice; however, it isn’t a very popular or widespread solution these days.
-
-
-
-
Anyway, I set up a VM scale set in Azure with 10 Ubuntu 22.04 VMs and wrote some Ansible scripts to automate the process of installing Docker on each machine, as well as setting 3 up as swarm managers and the other 7 as worker nodes. I ssh’d into the primary manager node and created a Docker Compose file for launching an observability stack.
Everything deploys properly but when I view the Traefik logs there is an issue with all the services except for the grafana service. I get errors like this:
-
-
-
-
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
-
-
-
-
-
It drove me crazy for about half a day or so. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track down and debug. Ultimately, after lots of searching and banging my head against a wall, I found the answer in the Traefik docs and thought I would share it here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.
Expand that first section and you will see the solution:
-
-
-
-
-
-
-
-
It turns out I just needed to update my docker-compose.yml to nest the labels under a deploy section; after redeploying, everything was working as expected.
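For reference, here is a minimal sketch of the shape that fixed it (the service name and hostname are made up; the key points are that the labels sit under `deploy:` and that swarm mode needs the backend port spelled out):

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    deploy:
      # In swarm mode Traefik only reads labels nested under `deploy:`
      labels:
        - traefik.enable=true
        - traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)
        # Swarm mode cannot infer the container port, hence the
        # "port is missing" error when this label is absent:
        - traefik.http.services.prometheus.loadbalancer.server.port=9090
```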
-]]>
-
-
-
-
-
- Stop all running containers with Docker
- /stop-all-running-containers-with-docker/
-
-
- Wed, 03 Apr 2024 13:12:41 +0000
-
-
-
-
- /?p=557
-
-
- These are some handy snippets I use on a regular basis when managing containers. I have one server in particular that can sometimes end up with 50 to 100 orphaned containers for various reasons. The easiest/quickest way to stop all of them is to do something like this:
-
-
-
-
docker container stop $(docker container ps -q)
-
-
-
-
Let me break this down in case you are not familiar with the syntax. Basically we are passing the output of docker container ps -q into docker container stop. This works because the stop command can take a list of container ids, which is what we get when passing the -q flag to docker container ps.
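The same substitution pattern works anywhere a command accepts a list of arguments. Here is a self-contained demonstration of the mechanics (the fake IDs are made up and stand in for `docker container ps -q` output), plus a couple of related real-world variants in the comments:

```shell
# Stand-in for the output of `docker container ps -q` (one ID per line):
fake_ids='4f5e6d7c8b9a
1a2b3c4d5e6f'

# Unquoted expansion word-splits the lines into separate arguments,
# which is exactly what `docker container stop` receives:
set -- $fake_ids
count=$#
first=$1
echo "would stop $count containers, starting with $first"

# Real-world variants (assume Docker is installed):
#   docker container stop $(docker container ps -q)     # stop all running
#   docker container rm   $(docker container ps -aq)    # remove all containers
```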
-]]>
-
-
-
-
-
- Automating CI/CD with TeamCity and Ansible
- /automating-ci-cd-with-teamcity-ansible/
-
-
- Mon, 11 Mar 2024 13:37:47 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=393
-
-
- In part one of this series we are going to explore a CI/CD option you may not be familiar with but should definitely be on your radar. I used Jetbrains TeamCity for several months at my last company and really enjoyed my time with it. A couple of the things I like most about it are:
-
-
-
-
-
Ability to declare global variables and have them be passed down to all projects
-
-
-
-
Ability to declare variables that are made up of other variables
-
-
-
-
-
I like to use private or self-hosted Docker registries for a lot of my projects, and one of the pain points I have had with some other solutions (well, mostly Bitbucket) is that they don’t integrate well with these private registries; when I run into a situation where I am pushing an image to or pulling an image from a private registry, it gets a little messy. TeamCity is nice in that I can add a connection to my private registry in my root project and then simply add that as a build feature to any projects that may need it. Essentially, now I only have one place where I have to keep those credentials and manage that connection.
-
-
-
-
-
-
-
-
Another reason I love it is the fact that you can create really powerful build templates that you can reuse. This became especially valuable when we were trying to standardize our build processes. For example, most of the apps we build are .NET backends and React frontends. We built Docker images for every project and pushed them to our private registry. TeamCity gave us the ability to standardize the naming convention and really streamline the build process. Enough about that though; the rest of this series will assume that you are using TeamCity. This post will focus on getting up and running using Ansible.
-
-
-
-
-
-
-
-
Installation and Setup
-
-
-
-
For this I will assume that you already have Ansible on your machine and that you will be installing TeamCity locally. You can simply follow along with the installation guide here. We will be creating an Ansible playbook based on the following steps. If you just want the finished code, you can find it on my Gitea instance here:
-
-
-
-
Step 1: Create project and initial playbook
-
-
-
-
To get started go ahead and create a new directory to hold our configuration:
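Something like the following works; the directory name is my own choice, while the playbook filename is the one used throughout the post:

```shell
# Create a project directory and an empty playbook to start from.
mkdir -p teamcity-ansible
cd teamcity-ansible
touch install-teamcity-server.yml
```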
Now open up install-teamcity-server.yml and add a task to install Java 17, as it is a prerequisite; you will need sudo for this task. Note: as of this writing, TeamCity does not support Java 18 or 19. If you try to install one of these, you will get an error when trying to start TeamCity.
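That task ends up looking like the snippet below; it matches the Install Java task in the full playbook shown at the end of the post (`java_version` is defined as a playbook var):

```yaml
- name: Install Java
  ansible.builtin.apt:
    name: openjdk-{{ java_version }}-jdk # TeamCity will fail to start on 18 or 19
    update_cache: yes
    state: latest
    install_recommends: no
```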
The next step is to create a dedicated user account. Add the following task to install-teamcity-server.yml
-
-
-
-
- name: Add Teamcity User
- ansible.builtin.user:
- name: teamcity
-
-
-
-
Next we will need to download the latest version of TeamCity. 2023.11.4 is the latest as of this writing. Add the following task to your install-teamcity-server.yml
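The download task can use `ansible.builtin.get_url`, as in the full playbook at the end of the post:

```yaml
- name: Download TeamCity Server
  ansible.builtin.get_url:
    url: https://download.jetbrains.com/teamcity/TeamCity-{{teamcity.version}}.tar.gz
    dest: /opt/TeamCity-{{teamcity.version}}.tar.gz
    mode: '0770'
```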
Now that we have everything set up and installed we want to make sure that our new teamcity user has access to everything they need to get up and running. We will add the following lines:
This gives us a pretty nice setup. We have TeamCity server installed with a dedicated user account. The last thing we will do is create a systemd service so that we can easily start/stop the server. For this we will need to add a few things.
-
-
-
-
-
A service file that tells our system how to manage TeamCity
-
-
-
-
A j2 template file that is used to create this service file
-
-
-
-
A handler that tells the system to run systemctl daemon-reload once the service has been installed.
-
-
-
-
-
Go ahead and create a new templates folder with the following teamcity.service.j2 file
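A minimal unit along these lines should work. This is a sketch on my part (the original template file isn't reproduced here), using TeamCity's bundled `teamcity-server.sh` launcher, the playbook's `teamcity` user, and its `teamcity.installation_path` variable:

```ini
# templates/teamcity.service.j2
[Unit]
Description=JetBrains TeamCity Server
After=network.target

[Service]
Type=forking
User=teamcity
ExecStart={{ teamcity.installation_path }}/bin/teamcity-server.sh start
ExecStop={{ teamcity.installation_path }}/bin/teamcity-server.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```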
That’s it! Now you should have a fully automated install of TeamCity Server, ready to be deployed wherever you need it. Here is the final playbook file; you can also find the most up to date version in my repo:
-
-
-
-
---
-- name: Install Teamcity
- hosts: localhost
- become: true
- become_method: sudo
-
- vars:
- java_version: "17"
- teamcity:
- installation_path: /opt/TeamCity
- version: "2023.11.4"
-
- tasks:
- - name: Install Java
- ansible.builtin.apt:
- name: openjdk-{{ java_version }}-jdk # This is important because TeamCity will fail to start if we try to use 18 or 19
- update_cache: yes
- state: latest
- install_recommends: no
-
- - name: Add TeamCity User
- ansible.builtin.user:
- name: teamcity
-
- - name: Download TeamCity Server
- ansible.builtin.get_url:
- url: https://download.jetbrains.com/teamcity/TeamCity-{{teamcity.version}}.tar.gz
- dest: /opt/TeamCity-{{teamcity.version}}.tar.gz
- mode: '0770'
-
- - name: Install TeamCity Server
- ansible.builtin.shell: |
- tar xfz /opt/TeamCity-{{teamcity.version}}.tar.gz
- rm -rf /opt/TeamCity-{{teamcity.version}}.tar.gz
- args:
- chdir: /opt
-
- - name: Update permissions
- ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
-
- - name: TeamCity | Create environment file
- template: src=teamcity.service.j2 dest=/etc/systemd/system/teamcityserver.service
- notify:
- - reload systemctl
- - name: TeamCity | Start teamcity
- service: name=teamcityserver.service state=started enabled=yes
-
- # Trigger a reload of systemctl after the service file has been created.
- handlers:
- - name: reload systemctl
- command: systemctl daemon-reload
-]]>
-
-
-
-
-
- Self hosted package registries with Gitea
- /self-hosted-package-registries-with-gitea/
-
-
- Thu, 07 Mar 2024 15:07:07 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=413
-
-
- I am a big proponent of open source technologies. I have been using Gitea for a couple years now in my homelab. A few years ago I moved most of my code off of Github and onto my self hosted instance. I recently came across a really handy feature that I didn’t know Gitea had and was pleasantly surprised by: Package Registry. You are no doubt familiar with what a package registry is in the broad context. Here are some examples of package registries you probably use on a regular basis:
-
-
-
-
-
npm
-
-
-
-
cargo
-
-
-
-
docker
-
-
-
-
composer
-
-
-
-
nuget
-
-
-
-
helm
-
-
-
-
-
There are a number of reasons why you would want to self host a registry. For example, in my home lab I have some Docker images that are specific to my use cases, and I don’t necessarily want them on a public registry. I’m also not concerned about losing the artifacts, as I can easily recreate them from code. Gitea makes this really easy to set up; in fact, it comes baked in with the installation. For the sake of this post I will just assume that you already have Gitea installed and configured.
-
-
-
-
Since the package registry is baked in and enabled by default, I will demonstrate how easy it is to push a docker image. We will pull the default alpine image, re-tag it and push it to our internal registry:
-
-
-
-
# Pull the official Alpine image
-docker pull alpine:latest
-
-# Re-tag the image with our local registry information
-docker tag alpine:latest git.hackanooga.com/mikeconrad/alpine:latest
-
-# Login using your gitea user account
-docker login git.hackanooga.com
-
-# Push the image to our registry
-docker push git.hackanooga.com/mikeconrad/alpine:latest
-
-
-
-
-
-
Now log into your Gitea instance, navigate to your user account and look for packages. You should see the newly uploaded alpine image.
-
-
-
-
-
-
-
-
You can see that the package type is container. Clicking on it will give you more information:
-
-
-
-
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 2)
- /traefik-with-lets-encrypt-and-cloudflare-pt-2/
-
-
- Thu, 15 Feb 2024 20:19:12 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=425
-
-
 - In this article we are going to get into setting up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running, and the documentation is a little fuzzy. In my case I decided to go with the DNS challenge route. Really the only reason I went with this option is that I was having issues with the TLS and HTTP challenges. Well, as it turns out, my issues didn’t have as much to do with my configuration as they did with my router.
-
-
-
-
Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and use their own instead. That is, of course, unless you force DNS redirection, but that is another post for another day.
-
-
-
-
Let’s revisit our current configuration:
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --providers.file.filename=/config.yml
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
-# - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'
-
- environment:
- - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- - ./traefik/config.yml:/config.yml:ro
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- # Optional, see "Customization" in README
- #CUSTOMIZE: '/etc/ots/customize.yaml'
- # See README for details
- REDIS_URL: redis://redis:6379/0
- # 168h = 1w
- SECRET_EXPIRY: "604800"
- # "mem" or "redis" (See README)
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:
-
-
-
-
Step 1: Setup wildcard DNS entry
-
-
-
-
This is pretty straightforward. Follow the Cloudflare documentation if you aren’t familiar with setting this up.
-
-
-
-
Step 2: Create API Token
-
-
-
-
This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this documentation which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens and create a new token. It should have the following permissions:
-
-
-
-
Zone.Zone.Read
-Zone.DNS.Edit
-
-
-
-
-
-
-
-
Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go:
-
-
-
-
CF_DNS_API_TOKEN=[redacted] docker compose up -d
-]]>
-
-
-
-
-
- Traefik with Let’s Encrypt and Cloudflare (pt 1)
- /traefik-with-lets-encrypt-and-cloudflare-pt-1/
-
-
- Thu, 01 Feb 2024 19:35:00 +0000
-
-
-
-
-
- https://wordpress.hackanooga.com/?p=422
-
-
- Recently I decided to rebuild one of my homelab servers. Previously I was using Nginx as my reverse proxy but I decided to switch to Traefik since I have been using it professionally for some time now. One of the reasons I like Traefik is that it is stupid simple to set up certificates and when I am using it with Docker I don’t have to worry about a bunch of configuration files. If you aren’t familiar with how Traefik works with Docker, here is a brief example of a docker-compose.yaml
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=user@example.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- # Redirect all http requests to https
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
- # Needed to request certs via lets encrypt
- environment:
- - CF_DNS_API_TOKEN=[redacted]
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- # Used for storing letsencrypt certificates
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- REDIS_URL: redis://redis:6379/0
- SECRET_EXPIRY: "604800"
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.example.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- - traefik.http.services.ots.loadbalancer.server.port=3000
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
-
In part one of this series I will be going over some of the basics of Traefik and how dynamic routing works. If you want to skip to the good stuff and get everything configured with Cloudflare, you can skip to part 2.
-
-
-
-
This example sets up the primary Traefik container, which acts as the ingress controller, as well as a handy One Time Secret sharing service I use. Traefik handles routing in Docker via labels. For this to work properly, the services that Traefik is routing to all need to be on the same Docker network. For this example we created a network called traefik by running the following:
-
-
-
-
docker network create traefik
-
-
-
-
-
Let’s take a closer look at the labels we applied to the ots container:
traefik.enable=true – This should be pretty self-explanatory, but it tells Traefik that we want it to know about this service.
-
-
-
-
traefik.http.routers.ots.rule=Host(`ots.example.com`) - This is where some of the magic comes in. Here we are defining a router called ots. The name is arbitrary in that it doesn’t have to match the name of the service, but for our example it does. There are many rules you can specify, but the easiest for this example is Host. Basically we are saying that any request coming in for ots.example.com should be picked up by this router. You can find more options for routers in the Traefik docs.
We are using these three labels to tell our router that we want it to use the websecure entrypoint and that it should use the letsencrypt certresolver to obtain its certificates. websecure is an arbitrary name that we assigned to our :443 interface. There are multiple ways to configure this; I chose to use the CLI format in my Traefik config:
-
-
-
-
-
-
-
-
command:
- - --api.insecure=true
- - --providers.docker=true
- # Our entrypoint names are arbitrary but these are convention.
- # The important part is the port binding that we associate.
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
-
-
-
-
-
-
This last label is optional depending on your setup, but it is important to understand, as the documentation is a little fuzzy.
Here’s how it works. Suppose you have a container that exposes multiple ports. Maybe one of those is a web UI and another is something that you don’t want exposed. By default, Traefik will try to guess which port to route requests to; my understanding is that it will use the first exposed port. You can override this behavior with the traefik.http.services.ots.loadbalancer.server.port label, which tells Traefik exactly which port inside the container to route requests to.
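Concretely, that override is just another label on the service. Here it is alongside the router labels already shown in the compose file above (the hostname is an example):

```yaml
labels:
  - traefik.enable=true
  - traefik.http.routers.ots.rule=Host(`ots.example.com`)
  # Without this, Traefik guesses which exposed port to use; with it,
  # requests are always routed to port 3000 inside the container.
  - traefik.http.services.ots.loadbalancer.server.port=3000
```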
The service name is derived automatically from the definition in the docker compose file:
-
-
-
-
ots: # This will become the service name
  image: luzifer/ots
  container_name: ots
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/graphql/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/graphql/feed/index.xml
deleted file mode 100644
index b71980a..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/graphql/feed/index.xml
+++ /dev/null
@@ -1,544 +0,0 @@
-
-
-
- GraphQL – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 13 Mar 2024 12:54:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- GraphQL – hackanooga
- /
- 32
- 32
-
-
- Roll your own authenticator app with KeystoneJS and React – pt 2
- /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-2/
-
-
- Thu, 11 Jan 2024 01:41:00 +0000
-
-
-
-
-
-
-
- /?p=539
-
-
- In part 1 of this series we built out a basic backend using KeystoneJS. In this part we will go ahead and start a new React frontend that will interact with our backend. We will be using Vite. Let’s get started. Make sure you are in the authenticator folder and run the following:
-
-
-
-
$ yarn create vite@latest
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-vite@5.2.2" with binaries:
- - create-vite
- - cva
-✔ Project name: … frontend
-✔ Select a framework: › React
-✔ Select a variant: › TypeScript
-
-Scaffolding project in /home/mikeconrad/projects/authenticator/frontend...
-
-Done. Now run:
-
- cd frontend
- yarn
- yarn dev
-
-Done in 10.20s.
-
-
-
-
Let’s go ahead and go into our frontend directory and get started:
-
-
-
-
$ cd frontend
-$ yarn
-yarn install v1.22.21
-info No lockfile found.
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-success Saved lockfile.
-Done in 10.21s.
-
-$ yarn dev
-yarn run v1.22.21
-$ vite
-Port 5173 is in use, trying another one...
-
- VITE v5.1.6 ready in 218 ms
-
- ➜ Local: http://localhost:5174/
- ➜ Network: use --host to expose
- ➜ press h + enter to show help
-
-
-
-
-
Next go ahead and open the project up in your IDE of choice. I prefer VSCodium:
-
-
-
-
codium frontend
-
-
-
-
Go ahead and open up src/App.tsx and remove all the boilerplate so it looks like this:
Now you should have something that looks like this:
-
-
-
-
-
-
-
-
Alright, we have some of the boring stuff out of the way, now let’s start making some magic. If you aren’t familiar with how TOTP tokens work, basically there is an algorithm that generates them. I would encourage you to read the RFC for a detailed explanation. In short, it generates a one time password using the current time as a source of uniqueness along with the secret key.
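For the curious, the core of the algorithm fits in a few lines. Here is a minimal RFC 6238 sketch in TypeScript using Node’s built-in crypto module. This is illustrative only, not the source of any particular library; the base32 step is there because authenticator secrets are conventionally base32 encoded:

```typescript
import { createHmac } from 'node:crypto';

// Decode an RFC 4648 base32 string (authenticator secrets use this encoding)
function base32Decode(input: string): Buffer {
  const alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
  let bits = 0;
  let value = 0;
  const bytes: number[] = [];
  for (const char of input.replace(/=+$/, '').toUpperCase()) {
    value = (value << 5) | alphabet.indexOf(char);
    bits += 5;
    if (bits >= 8) {
      bytes.push((value >>> (bits - 8)) & 0xff);
      bits -= 8;
    }
  }
  return Buffer.from(bytes);
}

// RFC 6238: HMAC the time-step counter with the secret, then dynamically truncate
function totp(secret: string, nowMs = Date.now(), stepSeconds = 30, digits = 6): string {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(nowMs / 1000 / stepSeconds)));
  const digest = createHmac('sha1', base32Decode(secret)).update(counter).digest();
  const offset = digest[digest.length - 1] & 0x0f; // dynamic truncation (RFC 4226)
  const code = digest.readUInt32BE(offset) & 0x7fffffff;
  return (code % 10 ** digits).toString().padStart(digits, '0');
}
```

With the RFC 6238 test secret (ASCII 12345678901234567890, i.e. GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ in base32) and a timestamp of 59 seconds, this produces the documented 8-digit value 94287082.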
-
-
-
-
If we really wanted to we could implement this algorithm ourselves but thankfully there are some really simple libraries that do it for us. For our project we will be using one called totp-generator. Let’s go ahead and install it and check it out:
-
-
-
-
$ yarn add totp-generator
-
-
-
-
Now let’s add it to our card component and see what happens. Using it is really simple. We just need to import it, instantiate a new TokenGenerator and pass it our Secret key:
Now save and go back to your browser and you should see that our secret keys are now being displayed as tokens:
-
-
-
-
-
-
-
-
That is pretty cool, the only problem is you need to refresh the page to refresh the token. We will take care of that in part 3 of this series as well as handling fetching tokens from our backend.
-]]>
-
-
-
-
-
- Roll your own authenticator app with KeystoneJS and React
- /roll-your-own-authenticator-app-with-keystonejs-and-react/
-
-
- Thu, 04 Jan 2024 00:59:49 +0000
-
-
-
-
-
-
-
- /?p=533
-
-
- In this series of articles we are going to be building an authenticator app using KeystoneJS for the backend and React for the frontend. The concept is pretty simple and yes there are a bunch out there already but I recently had a need to learn some of the ins and outs of TOTP tokens and thought this project would be a fun idea. Let’s get started.
-
-
-
-
Step 1: Init keystone app
-
-
-
-
Open up a terminal and create a blank keystone project. We are going to call our app authenticator to keep things simple.
-
-
-
-
$ yarn create keystone-app
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-keystone-app@9.0.1" with binaries:
- - create-keystone-app
-[###################################################################################################################################################################################] 273/273
-✨ You're about to generate a project using Keystone 6 packages.
-
-✔ What directory should create-keystone-app generate your app into? · authenticator
-
-⠸ Installing dependencies with yarn. This may take a few minutes.
-⚠ Failed to install with yarn.
-✔ Installed dependencies with npm.
-
-
-🎉 Keystone created a starter project in: authenticator
-
- To launch your app, run:
-
- - cd authenticator
- - npm run dev
-
- Next steps:
-
- - Read authenticator/README.md for additional getting started details.
- - Edit authenticator/keystone.ts to customize your app.
- - Open the Admin UI
- - Open the Graphql API
- - Read the docs
- - Star Keystone on GitHub
-
-Done in 84.06s.
-
-
-
-
-
After a few minutes you should be ready to go. Ignore the error about yarn not being able to install dependencies, it’s an issue with my setup. Next go ahead and open up the project folder with your editor of choice. I use VSCodium:
-
-
-
-
codium authenticator
-
-
-
-
Let’s go ahead and remove all the comments from the schema.ts file and clean it up some:
-
-
-
-
sed -i '/\/\//d' schema.ts
-
-
-
-
Also, go ahead and delete the Post and Tag lists as we won’t be using them. Our cleaned up schema.ts should look like this:
Next we will define the schema for our tokens. We will need 3 basic things to start with:
-
-
-
-
-
Issuer
-
-
-
-
Secret Key
-
-
-
-
Account
-
-
-
-
-
The only thing that really matters for generating a TOTP is actually the secret key. The other two fields are mostly for identifying and differentiating tokens. Go ahead and add the following to our schema.ts underneath the User list:
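Since we only need those three fields, a minimal sketch of the Token list using Keystone 6’s field APIs might look like this (the field names are my own choice, and access control is left wide open purely to keep the example short):

```typescript
import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';
import { text } from '@keystone-6/core/fields';

export const Token = list({
  access: allowAll, // demo only: no access control
  fields: {
    // Used to identify and differentiate tokens in the UI
    issuer: text({ validation: { isRequired: true } }),
    account: text({ validation: { isRequired: true } }),
    // The only field the TOTP algorithm actually needs
    secretKey: text({ validation: { isRequired: true } }),
  },
});
```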
Now that we have defined our Token, we should probably link it to a user. KeystoneJS makes this really easy. We simply need to add a relationship field to our User list. Add the following field to the user list:
-
-
-
-
tokens: relationship({ ref:'Token', many: true })
-
-
-
-
We are defining a tokens field on the User list and tying it to our Token list. We are also passing many: true saying that a user can have one or more tokens. Now that we have the basics set up, let’s go ahead and spin up our app and see what we have:
-
-
-
-
$ yarn dev
-yarn run v1.22.21
-$ keystone dev
-✨ Starting Keystone
-⭐ Server listening on :3000 (http://localhost:3000/)
-⭐ GraphQL API available at /api/graphql
-✨ Generating GraphQL and Prisma schemas
-✨ The database is already in sync with the Prisma schema
-✨ Connecting to the database
-✨ Creating server
-✅ GraphQL API ready
-✨ Generating Admin UI code
-✨ Preparing Admin UI app
-✅ Admin UI ready
-
-
-
-
-
Our server should be running on localhost:3000 so let’s check it out! The first time we open it up we will be greeted with the initialization screen. Go ahead and create an account to login:
-
-
-
-
-
-
-
-
Once you login you should see a dashboard similar to this:
-
-
-
-
-
-
-
-
You can see we have Users and Tokens that we can manage. The beauty of KeystoneJS is that you get full CRUD functionality out of the box just by defining our schema! Go ahead and click on Tokens to add a token:
-
-
-
-
-
-
-
-
For this example I just entered some random text. This is enough to start testing out our TOTP functionality. Click ‘Create Token’ and you should see a list displaying existing tokens:
-
-
-
-
-
-
-
-
We are now ready to jump into the frontend. Stay tuned for pt 2 of this series.
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/keystonejs/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/keystonejs/feed/index.xml
deleted file mode 100644
index f0acb04..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/keystonejs/feed/index.xml
+++ /dev/null
@@ -1,544 +0,0 @@
-
-
-
- KeystoneJS – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 13 Mar 2024 12:54:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- KeystoneJS – hackanooga
- /
- 32
- 32
-
-
- Roll your own authenticator app with KeystoneJS and React – pt 2
- /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-2/
-
-
- Thu, 11 Jan 2024 01:41:00 +0000
-
-
-
-
-
-
-
- /?p=539
-
-
- In part 1 of this series we built out a basic backend using KeystoneJS. In this part we will go ahead and start a new React frontend that will interact with our backend. We will be using Vite. Let’s get started. Make sure you are in the authenticator folder and run the following:
-
-
-
-
$ yarn create vite@latest
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-vite@5.2.2" with binaries:
- - create-vite
- - cva
-✔ Project name: … frontend
-✔ Select a framework: › React
-✔ Select a variant: › TypeScript
-
-Scaffolding project in /home/mikeconrad/projects/authenticator/frontend...
-
-Done. Now run:
-
- cd frontend
- yarn
- yarn dev
-
-Done in 10.20s.
-
-
-
-
Let’s go ahead and go into our frontend directory and get started:
-
-
-
-
$ cd frontend
-$ yarn
-yarn install v1.22.21
-info No lockfile found.
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-success Saved lockfile.
-Done in 10.21s.
-
-$ yarn dev
-yarn run v1.22.21
-$ vite
-Port 5173 is in use, trying another one...
-
- VITE v5.1.6 ready in 218 ms
-
- ➜ Local: http://localhost:5174/
- ➜ Network: use --host to expose
- ➜ press h + enter to show help
-
-
-
-
-
Next go ahead and open the project up in your IDE of choice. I prefer VSCodium:
-
-
-
-
codium frontend
-
-
-
-
Go ahead and open up src/App.tsx and remove all the boilerplate so it looks like this:
Now you should have something that looks like this:
-
-
-
-
-
-
-
-
Alright, we have some of the boring stuff out of the way, now let’s start making some magic. If you aren’t familiar with how TOTP tokens work, basically there is an Algorithm that generates them. I would encourage you to read the RFC for a detailed explanation. Basically it is an algorithm that generates a one time password using the current time as a source of uniqueness along with the secret key.
-
-
-
-
If we really wanted to we could implement this algorithm ourselves but thankfully there are some really simple libraries that do it for us. For our project we will be using one called totp-generator. Let’s go ahead and install it and check it out:
-
-
-
-
$ yarn add totp-generator
-
-
-
-
Now let’s add it to our card component and see what happens. Using it is really simple. We just need to import it, instantiate a new TokenGenerator and pass it our Secret key:
Now save and go back to your browser and you should see that our secret keys are now being displayed as tokens:
-
-
-
-
-
-
-
-
That is pretty cool, the only problem is you need to refresh the page to refresh the token. We will take care of that in part 3 of this series as well as handling fetching tokens from our backend.
-]]>
-
-
-
-
-
- Roll your own authenticator app with KeystoneJS and React
- /roll-your-own-authenticator-app-with-keystonejs-and-react/
-
-
- Thu, 04 Jan 2024 00:59:49 +0000
-
-
-
-
-
-
-
- /?p=533
-
-
- In this series of articles we are going to be building an authenticator app using KeystoneJS for the backend and React for the frontend. The concept is pretty simple and yes there are a bunch out there already but I recently had a need to learn some of the ins and outs of TOTP tokens and thought this project would be a fun idea. Let’s get started.
-
-
-
-
Step 1: Init keystone app
-
-
-
-
Open up a terminal and create a blank keystone project. We are going to call our app authenticator to keep things simple.
-
-
-
-
$ yarn create keystone-app
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-keystone-app@9.0.1" with binaries:
- - create-keystone-app
-[###################################################################################################################################################################################] 273/273
-✨ You're about to generate a project using Keystone 6 packages.
-
-✔ What directory should create-keystone-app generate your app into? · authenticator
-
-⠸ Installing dependencies with yarn. This may take a few minutes.
-⚠ Failed to install with yarn.
-✔ Installed dependencies with npm.
-
-
-🎉 Keystone created a starter project in: authenticator
-
- To launch your app, run:
-
- - cd authenticator
- - npm run dev
-
- Next steps:
-
- - Read authenticator/README.md for additional getting started details.
- - Edit authenticator/keystone.ts to customize your app.
- - Open the Admin UI
- - Open the Graphql API
- - Read the docs
- - Star Keystone on GitHub
-
-Done in 84.06s.
-
-
-
-
-
After a few minutes you should be ready to go. Ignore the error about yarn not being able to install dependencies, it’s an issue with my setup. Next go ahead and open up the project folder with your editor of choice. I use VSCodium:
-
-
-
-
codium authenticator
-
-
-
-
Let’s go ahead and remove all the comments from the schema.ts file and clean it up some:
-
-
-
-
sed -i '/\/\//d' schema.ts
-
-
-
-
Also, go ahead and delete the Post and Tag list as we won’t be using them. Our cleaned up schema.ts should look like this:
Next we will define the schema for our tokens. We will need 3 basic things to start with:
-
-
-
-
-
Issuer
-
-
-
-
Secret Key
-
-
-
-
Account
-
-
-
-
-
The only thing that really matters for generating a TOTP is actually the secret key. The other two fields are mostly for identifying and differentiating tokens. Go ahead and add the following to our schema.ts underneath the User list:
Now that we have defined our Token, we should probably link it to a user. KeystoneJS makes this really easily. We simply need to add a relationship field to our User list. Add the following field to the user list:
-
-
-
-
tokens: relationship({ ref:'Token', many: true })
-
-
-
-
We are defining a tokens field on the User list and tying it to our Token list. We are also passing many: true saying that a user can have one or more tokens. Now that we have the basics set up, let’s go ahead and spin up our app and see what we have:
-
-
-
-
$ yarn dev
-yarn run v1.22.21
-$ keystone dev
-✨ Starting Keystone
-⭐ Server listening on :3000 (http://localhost:3000/)
-⭐ GraphQL API available at /api/graphql
-✨ Generating GraphQL and Prisma schemas
-✨ The database is already in sync with the Prisma schema
-✨ Connecting to the database
-✨ Creating server
-✅ GraphQL API ready
-✨ Generating Admin UI code
-✨ Preparing Admin UI app
-✅ Admin UI ready
-
-
-
-
-
Our server should be running on localhost:3000 so let’s check it out! The first time we open it up we will be greeted with the initialization screen. Go ahead and create an account to login:
-
-
-
-
-
-
-
-
Once you login you should see a dashboard similar to this:
-
-
-
-
-
-
-
-
You can see we have Users and Tokens that we can manage. The beauty of KeystoneJS is that you get full CRUD functionality out of the box just by defining our schema! Go ahead and click on Tokens to add a token:
-
-
-
-
-
-
-
-
For this example I just entered some random text as an example. This is enough to start testing out our TOTP functionality. Click ‘Create Token’ and you should see a list displaying existing tokens:
-
-
-
-
-
-
-
-
We are now ready to jump into the frontend. Stay tuned for pt 2 of this series.
-
-
-
-
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/nodejs/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/nodejs/feed/index.xml
deleted file mode 100644
index ff24d40..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/nodejs/feed/index.xml
+++ /dev/null
@@ -1,544 +0,0 @@
-
-
-
- NodeJS – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 13 Mar 2024 12:54:04 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- NodeJS – hackanooga
- /
- 32
- 32
-
-
- Roll your own authenticator app with KeystoneJS and React – pt 2
- /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-2/
-
-
- Thu, 11 Jan 2024 01:41:00 +0000
-
-
-
-
-
-
-
- /?p=539
-
-
- In part 1 of this series we built out a basic backend using KeystoneJS. In this part we will go ahead and start a new React frontend that will interact with our backend. We will be using Vite. Let’s get started. Make sure you are in the authenticator folder and run the following:
-
-
-
-
$ yarn create vite@latest
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-vite@5.2.2" with binaries:
- - create-vite
- - cva
-✔ Project name: … frontend
-✔ Select a framework: › React
-✔ Select a variant: › TypeScript
-
-Scaffolding project in /home/mikeconrad/projects/authenticator/frontend...
-
-Done. Now run:
-
- cd frontend
- yarn
- yarn dev
-
-Done in 10.20s.
-
-
-
-
Let’s go ahead and go into our frontend directory and get started:
-
-
-
-
$ cd frontend
-$ yarn
-yarn install v1.22.21
-info No lockfile found.
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-success Saved lockfile.
-Done in 10.21s.
-
-$ yarn dev
-yarn run v1.22.21
-$ vite
-Port 5173 is in use, trying another one...
-
- VITE v5.1.6 ready in 218 ms
-
- ➜ Local: http://localhost:5174/
- ➜ Network: use --host to expose
- ➜ press h + enter to show help
-
-
-
-
-
Next go ahead and open the project up in your IDE of choice. I prefer VSCodium:
-
-
-
-
codium frontend
-
-
-
-
Go ahead and open up src/App.tsx and remove all the boilerplate so it looks like this:
Now you should have something that looks like this:
-
-
-
-
-
-
-
-
Alright, we have some of the boring stuff out of the way, now let’s start making some magic. If you aren’t familiar with how TOTP tokens work, basically there is an Algorithm that generates them. I would encourage you to read the RFC for a detailed explanation. Basically it is an algorithm that generates a one time password using the current time as a source of uniqueness along with the secret key.
-
-
-
-
If we really wanted to we could implement this algorithm ourselves but thankfully there are some really simple libraries that do it for us. For our project we will be using one called totp-generator. Let’s go ahead and install it and check it out:
-
-
-
-
$ yarn add totp-generator
-
-
-
-
Now let’s add it to our card component and see what happens. Using it is really simple. We just need to import it, instantiate a new TokenGenerator and pass it our Secret key:
Now save and go back to your browser and you should see that our secret keys are now being displayed as tokens:
-
-
-
-
-
-
-
-
That is pretty cool, the only problem is you need to refresh the page to refresh the token. We will take care of that in part 3 of this series as well as handling fetching tokens from our backend.
-]]>
-
-
-
-
-
- Roll your own authenticator app with KeystoneJS and React
- /roll-your-own-authenticator-app-with-keystonejs-and-react/
-
-
- Thu, 04 Jan 2024 00:59:49 +0000
-
-
-
-
-
-
-
- /?p=533
-
-
- In this series of articles we are going to be building an authenticator app using KeystoneJS for the backend and React for the frontend. The concept is pretty simple and yes there are a bunch out there already but I recently had a need to learn some of the ins and outs of TOTP tokens and thought this project would be a fun idea. Let’s get started.
-
-
-
-
Step 1: Init keystone app
-
-
-
-
Open up a terminal and create a blank keystone project. We are going to call our app authenticator to keep things simple.
-
-
-
-
$ yarn create keystone-app
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-keystone-app@9.0.1" with binaries:
- - create-keystone-app
-[###################################################################################################################################################################################] 273/273
-✨ You're about to generate a project using Keystone 6 packages.
-
-✔ What directory should create-keystone-app generate your app into? · authenticator
-
-⠸ Installing dependencies with yarn. This may take a few minutes.
-⚠ Failed to install with yarn.
-✔ Installed dependencies with npm.
-
-
-🎉 Keystone created a starter project in: authenticator
-
- To launch your app, run:
-
- - cd authenticator
- - npm run dev
-
- Next steps:
-
- - Read authenticator/README.md for additional getting started details.
- - Edit authenticator/keystone.ts to customize your app.
- - Open the Admin UI
- - Open the Graphql API
- - Read the docs
- - Star Keystone on GitHub
-
-Done in 84.06s.
-
-
-
-
-
After a few minutes you should be ready to go. Ignore the error about yarn not being able to install dependencies, it’s an issue with my setup. Next go ahead and open up the project folder with your editor of choice. I use VSCodium:
-
-
-
-
codium authenticator
-
-
-
-
Let’s go ahead and remove all the comments from the schema.ts file and clean it up some:
-
-
-
-
sed -i '/\/\//d' schema.ts
-
-
-
-
Also, go ahead and delete the Post and Tag list as we won’t be using them. Our cleaned up schema.ts should look like this:
Next we will define the schema for our tokens. We will need 3 basic things to start with:
-
-
-
-
-
Issuer
-
-
-
-
Secret Key
-
-
-
-
Account
-
-
-
-
-
The only thing that really matters for generating a TOTP is actually the secret key. The other two fields are mostly for identifying and differentiating tokens. Go ahead and add the following to our schema.ts underneath the User list:
Now that we have defined our Token, we should probably link it to a user. KeystoneJS makes this really easy: we simply add a relationship field to our User list. Add the following field to the User list:
tokens: relationship({ ref: 'Token', many: true })
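For context, here is roughly where that field lands in the User list scaffolded by create-keystone-app (the surrounding fields are abbreviated from the starter template and may differ slightly in your version):

```typescript
import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';
import { text, password, relationship } from '@keystone-6/core/fields';

export const User = list({
  access: allowAll,
  fields: {
    name: text({ validation: { isRequired: true } }),
    email: text({ validation: { isRequired: true }, isIndexed: 'unique' }),
    password: password({ validation: { isRequired: true } }),
    // One user can own many tokens.
    tokens: relationship({ ref: 'Token', many: true }),
  },
});
```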
We are defining a tokens field on the User list and tying it to our Token list. We are also passing many: true to say that a user can have one or more tokens. Now that we have the basics set up, let's spin up our app and see what we have:
$ yarn dev
yarn run v1.22.21
$ keystone dev
✨ Starting Keystone
⭐ Server listening on :3000 (http://localhost:3000/)
⭐ GraphQL API available at /api/graphql
✨ Generating GraphQL and Prisma schemas
✨ The database is already in sync with the Prisma schema
✨ Connecting to the database
✨ Creating server
✅ GraphQL API ready
✨ Generating Admin UI code
✨ Preparing Admin UI app
✅ Admin UI ready
Our server should be running on localhost:3000 so let’s check it out! The first time we open it up we will be greeted with the initialization screen. Go ahead and create an account to login:
Once you login you should see a dashboard similar to this:
You can see we have Users and Tokens that we can manage. The beauty of KeystoneJS is that you get full CRUD functionality out of the box just by defining your schema! Go ahead and click on Tokens to add a token:
For this example I just entered some random text. That is enough to start testing our TOTP functionality. Click ‘Create Token’ and you should see a list displaying existing tokens:
We are now ready to jump into the frontend. Stay tuned for part 2 of this series.
Hoots Wings
While working for Morrison I had the pleasure of building a website for Hoots Wings. The CMS was Perch and the frontend was mostly HTML, CSS, PHP, and JavaScript; however, I built out a custom store locator using NodeJS and VueJS.

I was the sole frontend developer responsible for taking the designs from SketchUp and translating them into the site you see now. Most of the blocks and templates are built using a mix of PHP and HTML/SCSS, with some JavaScript for things like getting the user's location and rendering popups/modals.
The store locator was a separate piece that was built in Vue2.0 with a NodeJS backend. For the backend I used KeystoneJS to hold all of the store information. There was also some custom development that was done in order to sync the stores added via the CMS with Yext and vice versa.
For that piece I ended up having to write a custom integration in Perch that would connect to the NodeJS backend and pull the stores but also make sure that those were in sync with Yext. This required diving into the Yext API some and examining a similar integration that we had for another client site.
Unfortunately I don't have any screen grabs of the admin side of things since that is proprietary, but the system I built allowed a site admin to go in and add/edit store locations that would show up on the site and in Yext with the appropriate information.
Screenshots
Here are some full screenshots of the site.
Homepage
Menu Page
Locations Page
Hilger Grading Portal
Back around 2014 I took on my first freelance development project for a homeschool co-op here in Chattanooga called Hilger Higher Learning. The problem they were trying to solve involved managing grades and report cards for their students. In the past, they had a developer build a rudimentary web application that allowed them to enter grades, but it lacked any concurrency handling: if two teachers were making changes to the same student at the same time, teacher B's changes would overwrite teacher A's. This was obviously a huge headache.
I built out the first version of the app using PHP, HTML, CSS, and DataTables, with lots of jQuery sprinkled in. I built custom functionality that allowed them to easily compile and print the report cards for all students with the click of a button. It was a game changer for them and streamlined the process significantly.
That system was in production for 5 years or so with minimal updates and maintenance. I recently rebuilt it using React and ChakraUI on the frontend and KeystoneJS on the backend. I also modernized the deployment by building Docker images for the frontend and backend. I ended up keeping parts of it in PHP because I couldn't find a JavaScript library that solved the challenges I had. Here are some screenshots of it in action:
This is the page listing all teachers in the system and whether or not they have admin privileges. Any admin user can grant other admin users this privilege. There is also a button to send the teacher a password reset email (via Postmark API integration) and an option that allows admin users to impersonate other users for troubleshooting and diagnostic purposes.
The data is all coming from the KeystoneJS backend GraphQL API. I am using urql for fetching the data and handling mutations. This is the page that displays students. It is filterable and searchable. Teachers also have the ability to mark a student as active or inactive for the semester as well as delete them from the system.
Clicking on a student takes the teacher/admin to an edit course screen where they can add and remove courses for each student. A teacher can add as many courses as they need. If multiple teachers have added courses for this student, the user will only see the courses they have entered.
There is another page that allows admin users to view and manage all of the parents in the system. It allows them to easily send a password reset email to the parents as well as to view the parent portal.
Roll your own authenticator app with KeystoneJS and React – pt 2
In part 1 of this series we built out a basic backend using KeystoneJS. In this part we will start a new React frontend that will interact with our backend, using Vite. Let's get started. Make sure you are in the authenticator folder and run the following:
$ yarn create vite@latest
yarn create v1.22.21
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...

success Installed "create-vite@5.2.2" with binaries:
 - create-vite
 - cva
✔ Project name: … frontend
✔ Select a framework: › React
✔ Select a variant: › TypeScript

Scaffolding project in /home/mikeconrad/projects/authenticator/frontend...

Done. Now run:

 cd frontend
 yarn
 yarn dev

Done in 10.20s.
Let’s go ahead and go into our frontend directory and get started:
$ cd frontend
$ yarn
yarn install v1.22.21
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
Done in 10.21s.

$ yarn dev
yarn run v1.22.21
$ vite
Port 5173 is in use, trying another one...

  VITE v5.1.6  ready in 218 ms

  ➜  Local:   http://localhost:5174/
  ➜  Network: use --host to expose
  ➜  press h + enter to show help
Next go ahead and open the project up in your IDE of choice. I prefer VSCodium:
codium frontend
Go ahead and open up src/App.tsx and remove all the boilerplate so it looks like this:
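As a starting point, a stripped-down App.tsx might look something like this; the hardcoded token list and the markup are placeholders of my own until we build the Card component and wire up the backend:

```typescript
import './App.css';

// Placeholder data; in a later part these will come from the KeystoneJS backend.
const tokens = [
  { issuer: 'Example', account: 'user@example.com', token: 'JBSWY3DPEHPK3PXP' },
];

function App() {
  return (
    <div className="cardList">
      {tokens.map((t) => (
        <div className="card" key={t.account}>
          <h3>{t.issuer}</h3>
          <p>{t.token}</p>
        </div>
      ))}
    </div>
  );
}

export default App;
```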
Now you should have something that looks like this:
Alright, we have some of the boring stuff out of the way; now let's start making some magic. If you aren't familiar with how TOTP tokens work, I would encourage you to read the RFC for a detailed explanation. In short, an algorithm generates a one-time password using the current time as a source of uniqueness along with the secret key.
If we really wanted to we could implement this algorithm ourselves but thankfully there are some really simple libraries that do it for us. For our project we will be using one called totp-generator. Let’s go ahead and install it and check it out:
$ yarn add totp-generator
Now let's add it to our card component and see what happens. Using it is really simple: we just need to import it and pass our secret key to its generate method:
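A sketch of that usage, with a made-up base32 secret standing in for one of our stored keys:

```typescript
import { TOTP } from 'totp-generator';

// `secret` would come from one of our Token records; this value is just an example.
const secret = 'JBSWY3DPEHPK3PXP';

// generate() returns the current one-time password for this secret.
const { otp } = TOTP.generate(secret);
console.log(otp);
```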
Now save and go back to your browser and you should see that our secret keys are now being displayed as tokens:
That is pretty cool; the only problem is that you need to refresh the page to refresh the token. We will take care of that in part 3 of this series, along with fetching tokens from our backend.
Roll your own authenticator app with KeystoneJS and React – pt 3
In our previous post we got to the point of displaying an OTP in our card component. Now it is time to refactor a bit and implement a countdown so we can see when the token will expire. For now we will add this logic to our Card component. To build the countdown timer, we first need to understand how the TOTP counter is calculated.
In other words, we know that a TOTP token is derived from a secret key and the current time. If we dig into the spec some, we find that "time" here means Unix epoch time: the number of seconds that have elapsed since January 1st, 1970. For a little more clarification check out this Stackexchange article.
Knowing that the time is based on epoch time, we also need to know that most TOTP tokens have a validity period of either 30 or 60 seconds. 30 seconds is the most common standard, so we will use that for our implementation. Putting that together, all we need is 2 variables:
- Number of seconds since epoch
- How many seconds until this token expires

The first one is easy:
let secondsSinceEpoch;
secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;

// This gives us a time like: 1710338609
For the second one we need a little math, but it's straightforward: take secondsSinceEpoch modulo 30 and subtract the result from 30. Here is what that looks like:
let secondsSinceEpoch;
let secondsRemaining;
const period = 30;

secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
secondsRemaining = period - (secondsSinceEpoch % period);
Now let’s put all of that together into a function that we can test out to make sure we are getting the results we expect.
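A sketch of such a function (the name getSecondsRemaining is mine) that we can call from a setInterval:

```typescript
const period = 30;

// Returns how many seconds are left in the current 30-second TOTP window (1..30).
function getSecondsRemaining(): number {
  const secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
  return period - (secondsSinceEpoch % period);
}

console.log(`Token expires in ${getSecondsRemaining()}s`);
```

In the demo this runs inside a setInterval once per second so you can watch the value count down.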
Running this function repeatedly should give you output similar to the following. In this example we stop the timer once it hits 1 second just to show that everything works as expected; in our application we will want it to keep running forever.
Here is a JSfiddle that shows it in action: https://jsfiddle.net/561vg3k7/
We can go ahead and add this function to our Card component and get it wired up. I am going to skip ahead a bit and add a progress bar to our card that is synced with our countdown timer and changes colors as it drops below 10 seconds. For now we will be using a setInterval function to accomplish this.
Here is what my updated src/Components/Card.tsx looks like:
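My actual component is a bit longer, but a sketch of the idea (the Token prop shape and class names are assumptions carried over from earlier parts) looks roughly like this:

```typescript
import { useEffect, useState } from 'react';
import { TOTP } from 'totp-generator';

interface Token {
  issuer: string;
  account: string;
  token: string;
}

const period = 30;

export default function Card({ token }: { token: Token }) {
  const [secondsRemaining, setSecondsRemaining] = useState(period);

  useEffect(() => {
    // Re-render every quarter second so the progress bar moves smoothly.
    const interval = setInterval(() => {
      const secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
      setSecondsRemaining(period - (secondsSinceEpoch % period));
    }, 250);
    return () => clearInterval(interval);
  }, []);

  // Each render regenerates the OTP, so a new code appears when the window rolls over.
  const { otp } = TOTP.generate(token.token);

  const timerStyle = {
    width: `${100 - (100 / 30) * (30 - secondsRemaining)}%`,
    backgroundColor: secondsRemaining < 10 ? 'salmon' : 'lightgreen',
  };

  return (
    <div className="card">
      <div className="progressBar" style={timerStyle}></div>
      <h3>{token.issuer}</h3>
      <p>{otp}</p>
      <small>{token.account}</small>
    </div>
  );
}
```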
Pretty straightforward. I also updated my src/index.css and added a style for our progress bar:
.progressBar {
  height: 10px;
  position: absolute;
  top: 0;
  left: 0;
  right: inherit;
}

/* Also be sure to add position: relative to .card, the parent of this element. */
Here is what it all looks like in action:
If you look closely you will notice a few interesting things. First, the color of the progress bar changes from green to red. This is handled by our timerStyle variable: if the timer is below 10 seconds we set the background color to salmon, otherwise we use light green. The width of the progress bar is controlled by `${100 - (100 / 30 * (30 - secondsRemaining))}%`.
The other interesting thing to note is that when the timer runs out it automatically restarts at 30 seconds with a new OTP. This is due to the fact that this component is re-rendering every 1/4 second and every time it re-renders it is running the entire function body including: const { otp } = TOTP.generate(token.token);.
This is to be expected since we are using React with a plain setInterval. It may be a little surprising if you aren't familiar with React render cycles, but for our purposes it works just fine for now. Stay tuned for part 4 of this series where we wire up the backend API.
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/typescript/feed/index.xml b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/typescript/feed/index.xml
deleted file mode 100644
index 6aa135f..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/tag/typescript/feed/index.xml
+++ /dev/null
@@ -1,740 +0,0 @@
-
-
-
- TypeScript – hackanooga
-
- /
- Confessions of a homelab hacker
- Wed, 13 Mar 2024 16:14:24 +0000
- en-US
-
- hourly
-
- 1
- https://wordpress.org/?v=6.6.2
-
-
- /wp-content/uploads/2024/03/cropped-cropped-avatar-32x32.png
- TypeScript – hackanooga
- /
- 32
- 32
-
-
- Roll your own authenticator app with KeystoneJS and React – pt 3
- /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-3/
-
-
- Wed, 17 Jan 2024 17:11:00 +0000
-
-
-
-
-
- /?p=546
-
-
- In our previous post we got to the point of displaying an OTP in our card component. Now it is time to refactor a bit and implement a countdown functionality to see when this token will expire. For now we will go ahead and add this logic into our Card component. In order to figure out how to build this countdown timer we first need to understand how the TOTP counter is calculated.
-
-
-
-
In other words, we know that at TOTP token is derived from a secret key and the current time. If we dig into the spec some we can find that time is a reference to Linux epoch time or the number of seconds that have elapsed since January 1st 1970. For a little more clarification check out this Stackexchange article.
-
-
-
-
So if we know that the time is based on epoch time, we also need to know that most TOTP tokens have a validity period of either 30 seconds or 60 seconds. 30 seconds is the most common standard so we will use that for our implementation. If we put all that together then basically all we need is 2 variables:
-
-
-
-
-
Number of seconds since epoch
-
-
-
-
How many seconds until this token expires
-
-
-
-
-
The first one is easy:
-
-
-
-
let secondsSinceEpoch;
-secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
-
-# This gives us a time like so: 1710338609
-
-
-
-
For the second one we will need to do a little math but it’s pretty straightforward. We need to divide secondsSinceEpoch by 30 seconds and then subtract this number from 30. Here is what that looks like:
-
-
-
-
let secondsSinceEpoch;
-let secondsRemaining;
-const period = 30;
-
-secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
-secondsRemaining = period - (secondsSinceEpoch % period);
-
-
-
-
Now let’s put all of that together into a function that we can test out to make sure we are getting the results we expect.
Running this function should give you output similar to the following. In this example we are stopping the timer once it hits 1 second just to show that everything is working as we expect. In our application we will want this time to keep going forever:
Here is a JSfiddle that shows it in action: https://jsfiddle.net/561vg3k7/
-
-
-
-
We can go ahead and add this function to our Card component and get it wired up. I am going to skip ahead a bit and add a progress bar to our card that is synced with our countdown timer and changes colors as it drops below 10 seconds. For now we will be using a setInterval function to accomplish this.
-
-
-
-
Here is what my updated src/Components/Card.tsx looks like:
Pretty straightforward. I also updated my src/index.css and added a style for our progress bar:
-
-
-
-
.progressBar {
- height: 10px;
- position: absolute;
- top: 0;
- left: 0;
- right: inherit;
-}
-// Also be sure to add position:relative to .card which is the parent of this.
-
-
-
-
Here is what it all looks like in action:
-
-
-
-
-
-
-
-
If you look closely you will notice a few interesting things. First is that the color of the progress bar changes from green to red. This is handled by our timerStyle variable. That part is pretty simple, if the timer is less than 10 seconds we set the background color as salmon otherwise we use light green. The width of the progress bar is controlled by `${100 – (100 / 30 * (30 – secondsRemaining))}%`
-
-
-
-
The other interesting thing to note is that when the timer runs out it automatically restarts at 30 seconds with a new OTP. This is due to the fact that this component is re-rendering every 1/4 second and every time it re-renders it is running the entire function body including: const { otp } = TOTP.generate(token.token);.
-
-
-
-
This is to be expected since we are using React and we are just using a setInterval. It may be a little unexpected though if you aren’t as familiar with React render cycles. For our purposes this will work just fine for now. Stay tuned for pt 4 of this series where we wire up the backend API.
-]]>
-
-
-
-
-
-
- Roll your own authenticator app with KeystoneJS and React – pt 2
- /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-2/
-
-
- Thu, 11 Jan 2024 01:41:00 +0000
-
-
-
-
-
-
-
- /?p=539
-
-
- In part 1 of this series we built out a basic backend using KeystoneJS. In this part we will go ahead and start a new React frontend that will interact with our backend. We will be using Vite. Let’s get started. Make sure you are in the authenticator folder and run the following:
-
-
-
-
$ yarn create vite@latest
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-vite@5.2.2" with binaries:
- - create-vite
- - cva
-✔ Project name: … frontend
-✔ Select a framework: › React
-✔ Select a variant: › TypeScript
-
-Scaffolding project in /home/mikeconrad/projects/authenticator/frontend...
-
-Done. Now run:
-
- cd frontend
- yarn
- yarn dev
-
-Done in 10.20s.
-
-
-
-
Let’s go ahead and go into our frontend directory and get started:
-
-
-
-
$ cd frontend
-$ yarn
-yarn install v1.22.21
-info No lockfile found.
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-success Saved lockfile.
-Done in 10.21s.
-
-$ yarn dev
-yarn run v1.22.21
-$ vite
-Port 5173 is in use, trying another one...
-
- VITE v5.1.6 ready in 218 ms
-
- ➜ Local: http://localhost:5174/
- ➜ Network: use --host to expose
- ➜ press h + enter to show help
-
-
-
-
-
Next go ahead and open the project up in your IDE of choice. I prefer VSCodium:
-
-
-
-
codium frontend
-
-
-
-
Go ahead and open up src/App.tsx and remove all the boilerplate so it looks like this:
Now you should have something that looks like this:
-
-
-
-
-
-
-
-
Alright, we have some of the boring stuff out of the way, now let’s start making some magic. If you aren’t familiar with how TOTP tokens work, basically there is an Algorithm that generates them. I would encourage you to read the RFC for a detailed explanation. Basically it is an algorithm that generates a one time password using the current time as a source of uniqueness along with the secret key.
-
-
-
-
If we really wanted to we could implement this algorithm ourselves but thankfully there are some really simple libraries that do it for us. For our project we will be using one called totp-generator. Let’s go ahead and install it and check it out:
-
-
-
-
$ yarn add totp-generator
-
-
-
-
Now let’s add it to our card component and see what happens. Using it is really simple. We just need to import it, instantiate a new TokenGenerator and pass it our Secret key:
Now save and go back to your browser and you should see that our secret keys are now being displayed as tokens:
-
-
-
-
-
-
-
-
That is pretty cool, the only problem is you need to refresh the page to refresh the token. We will take care of that in part 3 of this series as well as handling fetching tokens from our backend.
-]]>
-
-
-
-
-
- Roll your own authenticator app with KeystoneJS and React
- /roll-your-own-authenticator-app-with-keystonejs-and-react/
-
-
- Thu, 04 Jan 2024 00:59:49 +0000
-
-
-
-
-
-
-
- /?p=533
-
-
- In this series of articles we are going to be building an authenticator app using KeystoneJS for the backend and React for the frontend. The concept is pretty simple and yes there are a bunch out there already but I recently had a need to learn some of the ins and outs of TOTP tokens and thought this project would be a fun idea. Let’s get started.
-
-
-
-
Step 1: Init keystone app
-
-
-
-
Open up a terminal and create a blank keystone project. We are going to call our app authenticator to keep things simple.
-
-
-
-
$ yarn create keystone-app
-yarn create v1.22.21
-[1/4] Resolving packages...
-[2/4] Fetching packages...
-[3/4] Linking dependencies...
-[4/4] Building fresh packages...
-
-success Installed "create-keystone-app@9.0.1" with binaries:
- - create-keystone-app
-[###################################################################################################################################################################################] 273/273
-✨ You're about to generate a project using Keystone 6 packages.
-
-✔ What directory should create-keystone-app generate your app into? · authenticator
-
-⠸ Installing dependencies with yarn. This may take a few minutes.
-⚠ Failed to install with yarn.
-✔ Installed dependencies with npm.
-
-
-🎉 Keystone created a starter project in: authenticator
-
- To launch your app, run:
-
- - cd authenticator
- - npm run dev
-
- Next steps:
-
- - Read authenticator/README.md for additional getting started details.
- - Edit authenticator/keystone.ts to customize your app.
- - Open the Admin UI
- - Open the Graphql API
- - Read the docs
- - Star Keystone on GitHub
-
-Done in 84.06s.
-
-
-
-
-
After a few minutes you should be ready to go. Ignore the error about yarn not being able to install dependencies, it’s an issue with my setup. Next go ahead and open up the project folder with your editor of choice. I use VSCodium:
-
-
-
-
codium authenticator
-
-
-
-
Let’s go ahead and remove all the comments from the schema.ts file and clean it up some:
-
-
-
-
sed -i '/\/\//d' schema.ts
-
-
-
-
Also, go ahead and delete the Post and Tag list as we won’t be using them. Our cleaned up schema.ts should look like this:
Next we will define the schema for our tokens. We will need 3 basic things to start with:
-
-
-
-
-
Issuer
-
-
-
-
Secret Key
-
-
-
-
Account
-
-
-
-
-
The only thing that really matters for generating a TOTP is actually the secret key. The other two fields are mostly for identifying and differentiating tokens. Go ahead and add the following to our schema.ts underneath the User list:
Now that we have defined our Token, we should probably link it to a user. KeystoneJS makes this really easily. We simply need to add a relationship field to our User list. Add the following field to the user list:
-
-
-
-
tokens: relationship({ ref:'Token', many: true })
-
-
-
-
We are defining a tokens field on the User list and tying it to our Token list. We are also passing many: true saying that a user can have one or more tokens. Now that we have the basics set up, let’s go ahead and spin up our app and see what we have:
-
-
-
-
$ yarn dev
-yarn run v1.22.21
-$ keystone dev
-✨ Starting Keystone
-⭐ Server listening on :3000 (http://localhost:3000/)
-⭐ GraphQL API available at /api/graphql
-✨ Generating GraphQL and Prisma schemas
-✨ The database is already in sync with the Prisma schema
-✨ Connecting to the database
-✨ Creating server
-✅ GraphQL API ready
-✨ Generating Admin UI code
-✨ Preparing Admin UI app
-✅ Admin UI ready
-
-
-
-
-
Our server should be running on localhost:3000 so let’s check it out! The first time we open it up we will be greeted with the initialization screen. Go ahead and create an account to login:
-
-
-
-
-
-
-
-
Once you login you should see a dashboard similar to this:
-
-
-
-
-
-
-
-
You can see we have Users and Tokens that we can manage. The beauty of KeystoneJS is that you get full CRUD functionality out of the box just by defining our schema! Go ahead and click on Tokens to add a token:
-
-
-
-
-
-
-
-
For this example I just entered some random text. This is enough to start testing out our TOTP functionality. Click ‘Create Token’ and you should see a list displaying existing tokens:
-
-
-
-
-
-
-
-
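As a preview of what the frontend will eventually do with that secret key, here is a hypothetical sketch of the TOTP math itself (RFC 6238, which builds on HOTP from RFC 4226), using only Node’s built-in crypto module and the RFC test secret rather than a real token:

```typescript
import { createHmac } from 'node:crypto';

// HOTP (RFC 4226): HMAC-SHA1 the counter, then dynamically truncate to N digits.
function hotp(secret: Buffer, counter: bigint, digits = 6): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);
  const mac = createHmac('sha1', secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f;
  const bin =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];
  return String(bin % 10 ** digits).padStart(digits, '0');
}

// TOTP (RFC 6238) is just HOTP with a time-based counter (30 second steps).
function totp(secret: Buffer, unixSeconds: number, step = 30, digits = 6): string {
  return hotp(secret, BigInt(Math.floor(unixSeconds / step)), digits);
}

// RFC 6238 test vector: secret "12345678901234567890" at T=59 seconds.
console.log(totp(Buffer.from('12345678901234567890'), 59)); // 287082
```

With the RFC test secret at T=59 the 8-digit code is 94287082, so the 6-digit code the sketch prints is 287082.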
We are now ready to jump into the frontend. Stay tuned for part 2 of this series.
We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:
-
-
-
-
– SFTP server setup and configuration
– Network security protocols
– Troubleshooting and problem-solving skills
-
-
-
-
If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.
-
-
-
-
-
-
-
-
-
-
-
-
-
My Role
-
-
-
-
I walked the client through the process of setting up a DigitalOcean account. I created an Ubuntu 22.04 VM and installed SFTPGo. I set the client up with an administrator user so that they could easily log in and manage users and shares. I implemented some basic security practices as well and set the client up with a custom domain and a free TLS/SSL certificate from Let’s Encrypt. With the documentation and screenshots I provided, the client was able to get everything up and running, add users, and connect other systems easily and securely.
-
-
-
-
-
-
-
-
-
-
-
-
Client Feedback
-
-
-
-
-
Rating is 5 out of 5.
-
-
-
-
Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.
Traefik 3.0 service discovery in Docker Swarm mode
-
-
-
-
-
-
-
I recently decided to set up a Docker Swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity, and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s, it can be a good choice; however, it isn’t a very popular or widespread solution these days.
-
-
-
-
Anyway, I set up a VM scale set in Azure with 10 Ubuntu 22.04 VMs and wrote some Ansible scripts to automate the process of installing Docker on each machine, as well as setting 3 up as Swarm managers and the other 7 as worker nodes. I SSH’d into the primary manager node and created a Docker Compose file for launching an observability stack.
Everything deploys properly but when I view the Traefik logs there is an issue with all the services except for the grafana service. I get errors like this:
-
-
-
-
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
-
-
-
-
-
It drove me crazy for about half a day or so. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track and debug. Ultimately after lots of searching and banging my head against a wall I found the answer in the Traefik docs and thought I would share here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.
Expand that first section and you will see the solution:
-
-
-
-
-
-
-
-
It turns out I just needed to update my docker-compose.yml to nest the labels under a deploy section; after redeploying, everything worked as expected.
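In other words, the Swarm provider only reads labels nested under a service’s deploy key, not the service-level labels key. A hypothetical corrected prometheus service would look like this (the hostname is a placeholder; 9090 is Prometheus’ default port):

```yaml
services:
  prometheus:
    image: prom/prometheus
    networks:
      - traefik
    deploy:
      labels: # Swarm mode: Traefik only sees labels nested under deploy
        - traefik.enable=true
        - traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)
        # In Swarm mode Traefik can't auto-detect a port, hence the
        # "port is missing" error -- declare it explicitly:
        - traefik.http.services.prometheus.loadbalancer.server.port=9090
```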
Recently I decided to rebuild one of my homelab servers. Previously I was using Nginx as my reverse proxy but I decided to switch to Traefik since I have been using it professionally for some time now. One of the reasons I like Traefik is that it is stupid simple to set up certificates and when I am using it with Docker I don’t have to worry about a bunch of configuration files. If you aren’t familiar with how Traefik works with Docker, here is a brief example of a docker-compose.yaml
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- # Redirect all http requests to https
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
- # Needed to request certs via lets encrypt
- environment:
- - CF_DNS_API_TOKEN=[redacted]
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- # Used for storing letsencrypt certificates
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- REDIS_URL: redis://redis:6379/0
- SECRET_EXPIRY: "604800"
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.example.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- - traefik.http.services.ots.loadbalancer.server.port=3000
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
-
In part one of this series I will be going over some of the basics of Traefik and how dynamic routing works. If you want to jump straight to the good stuff and get everything configured with Cloudflare, you can skip ahead to part 2.
-
-
-
-
This example sets up the primary Traefik container, which acts as the ingress controller, as well as a handy One Time Secret sharing service I use. Traefik handles routing in Docker via labels. For this to work properly, the services that Traefik is trying to route to all need to be on the same Docker network. For this example we created a network called traefik by running the following:
-
-
-
-
docker network create traefik
-
-
-
-
-
Let’s take a look at the labels we applied to the ots container a little closer:
traefik.enable=true – This should be pretty self-explanatory, but it tells Traefik that we want it to know about this service.
-
-
-
-
traefik.http.routers.ots.rule=Host(`ots.example.com`) – This is where some of the magic comes in. Here we are defining a router called ots. The name is arbitrary in that it doesn’t have to match the name of the service, but for our example it does. There are many rules that you can specify, but the easiest for this example is Host. Basically we are saying that any request coming in for ots.example.com should be picked up by this router. You can find more options for routers in the Traefik docs.
We are using these three labels to tell our router that we want it to use the websecure entrypoint and that it should use the letsencrypt certresolver to grab its certificates. websecure is an arbitrary name that we assigned to our :443 interface. There are multiple ways to configure this; I chose to use the CLI format in my Traefik config:
-
-
-
-
-
-
-
-
command:
- - --api.insecure=true
- - --providers.docker=true
- # Our entrypoint names are arbitrary but these are convention.
- # The important part is the port binding that we associate.
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
-
-
-
-
-
-
This last label is optional depending on your setup, but it is important to understand, as the documentation is a little fuzzy.
Here’s how it works. Suppose you have a container that exposes multiple ports. Maybe one of those is a web UI and another is something that you don’t want exposed. By default Traefik will try to guess which port to route requests to; my understanding is that it will use the first exposed port. However, you can override this behavior with the label above, which tells Traefik specifically which port you want to route to inside the container.
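For example, with a hypothetical service that exposes both a web UI on 8080 and an internal port you would rather not route to, the labels might look like this:

```yaml
labels:
  - traefik.enable=true
  - traefik.http.routers.myapp.rule=Host(`myapp.example.com`)
  # Route to the UI port specifically instead of letting Traefik guess:
  - traefik.http.services.myapp.loadbalancer.server.port=8080
```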
The service name is derived automatically from the definition in the docker compose file:
-
-
-
-
ots: # This will become the service name
  image: luzifer/ots
  container_name: ots
In this article we are gonna get into setting up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running, and the documentation is a little fuzzy. In my case I decided to go the DNS challenge route. Really the only reason I went with this option is because I was having issues with the TLS and HTTP challenges. Well, as it turns out, my issues didn’t have as much to do with my configuration as they did with my router.
-
-
-
-
Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and will use their own instead. That is of course unless you force DNS redirection, but that is another post for another day.
-
-
-
-
Let’s revisit our current configuration:
-
-
-
-
version: '3'
-
-services:
- reverse-proxy:
- # The official v2 Traefik docker image
- image: traefik:v2.11
- # Enables the web UI and tells Traefik to listen to docker
- command:
- - --api.insecure=true
- - --providers.docker=true
- - --providers.file.filename=/config.yml
- - --entrypoints.web.address=:80
- - --entrypoints.websecure.address=:443
- # Set up LetsEncrypt
- - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
- - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
- - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- - --entryPoints.web.http.redirections.entryPoint.to=websecure
- - --entryPoints.web.http.redirections.entryPoint.scheme=https
- - --entryPoints.web.http.redirections.entrypoint.permanent=true
- - --log=true
- - --log.level=INFO
-# - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'
-
- environment:
- - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
- ports:
- # The HTTP port
- - "80:80"
- - "443:443"
- # The Web UI (enabled by --api.insecure=true)
- - "8080:8080"
- volumes:
- # So that Traefik can listen to the Docker events
- - /var/run/docker.sock:/var/run/docker.sock:ro
- - ./letsencrypt:/letsencrypt
- - ./volumes/traefik/logs:/logs
- - ./traefik/config.yml:/config.yml:ro
- networks:
- - traefik
- ots:
- image: luzifer/ots
- container_name: ots
- restart: always
- environment:
- # Optional, see "Customization" in README
- #CUSTOMIZE: '/etc/ots/customize.yaml'
- # See README for details
- REDIS_URL: redis://redis:6379/0
- # 168h = 1w
- SECRET_EXPIRY: "604800"
- # "mem" or "redis" (See README)
- STORAGE_TYPE: redis
- depends_on:
- - redis
- labels:
- - traefik.enable=true
- - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
- - traefik.http.routers.ots.entrypoints=websecure
- - traefik.http.routers.ots.tls=true
- - traefik.http.routers.ots.tls.certresolver=letsencrypt
- networks:
- - traefik
- redis:
- image: redis:alpine
- restart: always
- volumes:
- - ./redis-data:/data
- networks:
- - traefik
-networks:
- traefik:
- external: true
-
-
-
-
-
-
Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:
-
-
-
-
Step 1: Setup wildcard DNS entry
-
-
-
-
This is pretty straightforward. Follow the Cloudflare documentation if you aren’t familiar with setting this up.
-
-
-
-
Step 2: Create API Token
-
-
-
-
This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this documentation, which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens, and create a new token. It should have the following permissions:
-
-
-
-
Zone.Zone.Read
-Zone.DNS.Edit
-
-
-
-
-
-
-
-
Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go:
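One simple way to provide the token (assuming you start the stack with Docker Compose) is an .env file next to your docker-compose.yml, which Compose reads automatically and substitutes into ${CF_DNS_API_TOKEN}; the value below is obviously a placeholder:

```shell
# .env (read automatically by Docker Compose from the same directory)
CF_DNS_API_TOKEN=your-cloudflare-api-token
```

Then a plain docker compose up -d will pass the token through to the Traefik container’s environment.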
-
-
-
-
-
deleted file mode 100644
index 23b0a13..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/display-token-dashboard-min-300x81.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/display-token-dashboard-min-768x206.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/display-token-dashboard-min-768x206.webp
deleted file mode 100644
index d922bb4..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/display-token-dashboard-min-768x206.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/display-token-dashboard-min.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/display-token-dashboard-min.webp
deleted file mode 100644
index 6fade78..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/display-token-dashboard-min.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth-300x103.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth-300x103.webp
deleted file mode 100644
index b59219a..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth-300x103.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth-768x265.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth-768x265.webp
deleted file mode 100644
index bd7f702..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth-768x265.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth.webp
deleted file mode 100644
index 1584e21..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/ente-auth.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-1024x513.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-1024x513.webp
deleted file mode 100644
index bdaf9ea..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-1024x513.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-1536x770.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-1536x770.webp
deleted file mode 100644
index db82a25..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-1536x770.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-300x150.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-300x150.webp
deleted file mode 100644
index fab5777..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-300x150.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-768x385.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-768x385.webp
deleted file mode 100644
index 75ccb20..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1-768x385.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1.webp
deleted file mode 100644
index cf73c6a..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-students-1.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-1024x514.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-1024x514.webp
deleted file mode 100644
index 5209379..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-1024x514.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-1536x770.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-1536x770.webp
deleted file mode 100644
index 2280399..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-1536x770.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-300x150.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-300x150.webp
deleted file mode 100644
index babaa82..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-300x150.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-768x385.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-768x385.webp
deleted file mode 100644
index b3659db..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers-768x385.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers.webp
deleted file mode 100644
index d7a7522..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-all-teachers.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-1024x514.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-1024x514.webp
deleted file mode 100644
index 4f38453..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-1024x514.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-1536x770.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-1536x770.webp
deleted file mode 100644
index de6c04f..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-1536x770.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-300x150.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-300x150.webp
deleted file mode 100644
index 477e8d4..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-300x150.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-768x385.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-768x385.webp
deleted file mode 100644
index b400508..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student-768x385.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student.webp
deleted file mode 100644
index 63e8a9b..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-edit-student.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-1024x514.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-1024x514.webp
deleted file mode 100644
index b2c6e6b..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-1024x514.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-1536x770.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-1536x770.webp
deleted file mode 100644
index f02239d..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-1536x770.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-300x150.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-300x150.webp
deleted file mode 100644
index 548d281..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-300x150.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-768x385.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-768x385.webp
deleted file mode 100644
index 4c8b4ae..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home-768x385.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home.webp
deleted file mode 100644
index b6655d8..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-home.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-1024x513.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-1024x513.webp
deleted file mode 100644
index dcb65ad..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-1024x513.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-1536x770.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-1536x770.webp
deleted file mode 100644
index cb2da7a..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-1536x770.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-300x150.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-300x150.webp
deleted file mode 100644
index ce4d41c..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-300x150.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-768x385.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-768x385.webp
deleted file mode 100644
index cf6d1af..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents-768x385.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents.webp
deleted file mode 100644
index a772a8a..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hilger-portal-parents.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home-300x151.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home-300x151.webp
deleted file mode 100644
index 2048589..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home-300x151.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home-768x386.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home-768x386.webp
deleted file mode 100644
index 7e3d5da..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home-768x386.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home.webp
deleted file mode 100644
index 7842140..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-home.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-1024x522.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-1024x522.webp
deleted file mode 100644
index 5257b9f..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-1024x522.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-1536x783.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-1536x783.webp
deleted file mode 100644
index 6071b71..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-1536x783.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-300x153.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-300x153.webp
deleted file mode 100644
index a05d6a5..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-300x153.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-768x391.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-768x391.webp
deleted file mode 100644
index 8462ea3..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min-768x391.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min.webp
deleted file mode 100644
index 5fde2d4..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hoots-locations-min.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-1270x2048.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-1270x2048.webp
deleted file mode 100644
index 0a49982..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-1270x2048.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-186x300.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-186x300.webp
deleted file mode 100644
index b8b625f..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-186x300.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-635x1024.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-635x1024.webp
deleted file mode 100644
index 7d8555a..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-635x1024.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-768x1238.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-768x1238.webp
deleted file mode 100644
index 38e50c9..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-768x1238.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-953x1536.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-953x1536.webp
deleted file mode 100644
index d5c56ef..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-953x1536.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-scaled.webp b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-scaled.webp
deleted file mode 100644
index 39ac2d0..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-home-full-scaled.webp and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-126x300.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-126x300.png
deleted file mode 100644
index 73a0d5d..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-126x300.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-432x1024.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-432x1024.png
deleted file mode 100644
index b2b73ad..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-432x1024.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-768x1822.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-768x1822.png
deleted file mode 100644
index 4624b36..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-768x1822.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-863x2048.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-863x2048.png
deleted file mode 100644
index bd96cb4..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page-863x2048.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page.png
deleted file mode 100644
index 5b7a453..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-locations-page.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-1100x2048.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-1100x2048.png
deleted file mode 100644
index 205a730..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-1100x2048.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-161x300.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-161x300.png
deleted file mode 100644
index b494c3c..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-161x300.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-550x1024.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-550x1024.png
deleted file mode 100644
index 24c9f44..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-550x1024.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-768x1430.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-768x1430.png
deleted file mode 100644
index ceecd6b..0000000
Binary files a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-768x1430.png and /dev/null differ
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-825x1536.png b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-content/uploads/2024/03/hootswings-menu-page-825x1536.png
deleted file mode 100644
index 1c41ed6..0000000
.wp-block-cover-image-text,section.wp-block-cover-image>h2{font-size:2em;line-height:1.25;margin-bottom:0;max-width:840px;padding:.44em;text-align:center;z-index:1}:where(.wp-block-cover-image:not(.has-text-color)),:where(.wp-block-cover:not(.has-text-color)){color:#fff}:where(.wp-block-cover-image.is-light:not(.has-text-color)),:where(.wp-block-cover.is-light:not(.has-text-color)){color:#000}:root :where(.wp-block-cover h1:not(.has-text-color)),:root :where(.wp-block-cover h2:not(.has-text-color)),:root :where(.wp-block-cover h3:not(.has-text-color)),:root :where(.wp-block-cover h4:not(.has-text-color)),:root :where(.wp-block-cover h5:not(.has-text-color)),:root :where(.wp-block-cover h6:not(.has-text-color)),:root :where(.wp-block-cover p:not(.has-text-color)){color:inherit}
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/image/style.min.css b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/image/style.min.css
deleted file mode 100644
index afd6dbc..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/image/style.min.css
+++ /dev/null
@@ -1 +0,0 @@
-.wp-block-image img{box-sizing:border-box;height:auto;max-width:100%;vertical-align:bottom}.wp-block-image[style*=border-radius] img,.wp-block-image[style*=border-radius]>a{border-radius:inherit}.wp-block-image.has-custom-border img{box-sizing:border-box}.wp-block-image.aligncenter{text-align:center}.wp-block-image.alignfull img,.wp-block-image.alignwide img{height:auto;width:100%}.wp-block-image .aligncenter,.wp-block-image .alignleft,.wp-block-image .alignright,.wp-block-image.aligncenter,.wp-block-image.alignleft,.wp-block-image.alignright{display:table}.wp-block-image .aligncenter>figcaption,.wp-block-image .alignleft>figcaption,.wp-block-image .alignright>figcaption,.wp-block-image.aligncenter>figcaption,.wp-block-image.alignleft>figcaption,.wp-block-image.alignright>figcaption{caption-side:bottom;display:table-caption}.wp-block-image .alignleft{float:left;margin:.5em 1em .5em 0}.wp-block-image .alignright{float:right;margin:.5em 0 .5em 1em}.wp-block-image .aligncenter{margin-left:auto;margin-right:auto}.wp-block-image :where(figcaption){margin-bottom:1em;margin-top:.5em}.wp-block-image.is-style-circle-mask img{border-radius:9999px}@supports ((-webkit-mask-image:none) or (mask-image:none)) or (-webkit-mask-image:none){.wp-block-image.is-style-circle-mask img{border-radius:0;-webkit-mask-image:url('data:image/svg+xml;utf8,');mask-image:url('data:image/svg+xml;utf8,');mask-mode:alpha;-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}}:root :where(.wp-block-image.is-style-rounded img,.wp-block-image .is-style-rounded img){border-radius:9999px}.wp-block-image figure{margin:0}.wp-lightbox-container{display:flex;flex-direction:column;position:relative}.wp-lightbox-container img{cursor:zoom-in}.wp-lightbox-container img:hover+button{opacity:1}.wp-lightbox-container button{align-items:center;-webkit-backdrop-filter:blur(16px) saturate(180%);backdrop-filter:blur(16px) 
saturate(180%);background-color:#5a5a5a40;border:none;border-radius:4px;cursor:zoom-in;display:flex;height:20px;justify-content:center;opacity:0;padding:0;position:absolute;right:16px;text-align:center;top:16px;transition:opacity .2s ease;width:20px;z-index:100}.wp-lightbox-container button:focus-visible{outline:3px auto #5a5a5a40;outline:3px auto -webkit-focus-ring-color;outline-offset:3px}.wp-lightbox-container button:hover{cursor:pointer;opacity:1}.wp-lightbox-container button:focus{opacity:1}.wp-lightbox-container button:focus,.wp-lightbox-container button:hover,.wp-lightbox-container button:not(:hover):not(:active):not(.has-background){background-color:#5a5a5a40;border:none}.wp-lightbox-overlay{box-sizing:border-box;cursor:zoom-out;height:100vh;left:0;overflow:hidden;position:fixed;top:0;visibility:hidden;width:100%;z-index:100000}.wp-lightbox-overlay .close-button{align-items:center;cursor:pointer;display:flex;justify-content:center;min-height:40px;min-width:40px;padding:0;position:absolute;right:calc(env(safe-area-inset-right) + 16px);top:calc(env(safe-area-inset-top) + 16px);z-index:5000000}.wp-lightbox-overlay .close-button:focus,.wp-lightbox-overlay .close-button:hover,.wp-lightbox-overlay .close-button:not(:hover):not(:active):not(.has-background){background:none;border:none}.wp-lightbox-overlay .lightbox-image-container{height:var(--wp--lightbox-container-height);left:50%;overflow:hidden;position:absolute;top:50%;transform:translate(-50%,-50%);transform-origin:top left;width:var(--wp--lightbox-container-width);z-index:9999999999}.wp-lightbox-overlay .wp-block-image{align-items:center;box-sizing:border-box;display:flex;height:100%;justify-content:center;margin:0;position:relative;transform-origin:0 0;width:100%;z-index:3000000}.wp-lightbox-overlay .wp-block-image img{height:var(--wp--lightbox-image-height);min-height:var(--wp--lightbox-image-height);min-width:var(--wp--lightbox-image-width);width:var(--wp--lightbox-image-width)}.wp-lightbox-overlay 
.wp-block-image figcaption{display:none}.wp-lightbox-overlay button{background:none;border:none}.wp-lightbox-overlay .scrim{background-color:#fff;height:100%;opacity:.9;position:absolute;width:100%;z-index:2000000}.wp-lightbox-overlay.active{animation:turn-on-visibility .25s both;visibility:visible}.wp-lightbox-overlay.active img{animation:turn-on-visibility .35s both}.wp-lightbox-overlay.show-closing-animation:not(.active){animation:turn-off-visibility .35s both}.wp-lightbox-overlay.show-closing-animation:not(.active) img{animation:turn-off-visibility .25s both}@media (prefers-reduced-motion:no-preference){.wp-lightbox-overlay.zoom.active{animation:none;opacity:1;visibility:visible}.wp-lightbox-overlay.zoom.active .lightbox-image-container{animation:lightbox-zoom-in .4s}.wp-lightbox-overlay.zoom.active .lightbox-image-container img{animation:none}.wp-lightbox-overlay.zoom.active .scrim{animation:turn-on-visibility .4s forwards}.wp-lightbox-overlay.zoom.show-closing-animation:not(.active){animation:none}.wp-lightbox-overlay.zoom.show-closing-animation:not(.active) .lightbox-image-container{animation:lightbox-zoom-out .4s}.wp-lightbox-overlay.zoom.show-closing-animation:not(.active) .lightbox-image-container img{animation:none}.wp-lightbox-overlay.zoom.show-closing-animation:not(.active) .scrim{animation:turn-off-visibility .4s forwards}}@keyframes turn-on-visibility{0%{opacity:0}to{opacity:1}}@keyframes turn-off-visibility{0%{opacity:1;visibility:visible}99%{opacity:0;visibility:visible}to{opacity:0;visibility:hidden}}@keyframes lightbox-zoom-in{0%{transform:translate(calc((-100vw + var(--wp--lightbox-scrollbar-width))/2 + var(--wp--lightbox-initial-left-position)),calc(-50vh + var(--wp--lightbox-initial-top-position))) scale(var(--wp--lightbox-scale))}to{transform:translate(-50%,-50%) scale(1)}}@keyframes lightbox-zoom-out{0%{transform:translate(-50%,-50%) scale(1);visibility:visible}99%{visibility:visible}to{transform:translate(calc((-100vw + 
var(--wp--lightbox-scrollbar-width))/2 + var(--wp--lightbox-initial-left-position)),calc(-50vh + var(--wp--lightbox-initial-top-position))) scale(var(--wp--lightbox-scale));visibility:hidden}}
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/navigation/style.min.css b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/navigation/style.min.css
deleted file mode 100644
index d9bb119..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/navigation/style.min.css
+++ /dev/null
@@ -1 +0,0 @@
-.wp-block-navigation{position:relative;--navigation-layout-justification-setting:flex-start;--navigation-layout-direction:row;--navigation-layout-wrap:wrap;--navigation-layout-justify:flex-start;--navigation-layout-align:center}.wp-block-navigation ul{margin-bottom:0;margin-left:0;margin-top:0;padding-left:0}.wp-block-navigation ul,.wp-block-navigation ul li{list-style:none;padding:0}.wp-block-navigation .wp-block-navigation-item{align-items:center;background-color:inherit;display:flex;position:relative}.wp-block-navigation .wp-block-navigation-item .wp-block-navigation__submenu-container:empty{display:none}.wp-block-navigation .wp-block-navigation-item__content{display:block}.wp-block-navigation .wp-block-navigation-item__content.wp-block-navigation-item__content{color:inherit}.wp-block-navigation.has-text-decoration-underline .wp-block-navigation-item__content,.wp-block-navigation.has-text-decoration-underline .wp-block-navigation-item__content:active,.wp-block-navigation.has-text-decoration-underline .wp-block-navigation-item__content:focus{text-decoration:underline}.wp-block-navigation.has-text-decoration-line-through .wp-block-navigation-item__content,.wp-block-navigation.has-text-decoration-line-through .wp-block-navigation-item__content:active,.wp-block-navigation.has-text-decoration-line-through .wp-block-navigation-item__content:focus{text-decoration:line-through}.wp-block-navigation :where(a),.wp-block-navigation :where(a:active),.wp-block-navigation :where(a:focus){text-decoration:none}.wp-block-navigation .wp-block-navigation__submenu-icon{align-self:center;background-color:inherit;border:none;color:currentColor;display:inline-block;font-size:inherit;height:.6em;line-height:0;margin-left:.25em;padding:0;width:.6em}.wp-block-navigation .wp-block-navigation__submenu-icon 
svg{display:inline-block;stroke:currentColor;height:inherit;margin-top:.075em;width:inherit}.wp-block-navigation.is-vertical{--navigation-layout-direction:column;--navigation-layout-justify:initial;--navigation-layout-align:flex-start}.wp-block-navigation.no-wrap{--navigation-layout-wrap:nowrap}.wp-block-navigation.items-justified-center{--navigation-layout-justification-setting:center;--navigation-layout-justify:center}.wp-block-navigation.items-justified-center.is-vertical{--navigation-layout-align:center}.wp-block-navigation.items-justified-right{--navigation-layout-justification-setting:flex-end;--navigation-layout-justify:flex-end}.wp-block-navigation.items-justified-right.is-vertical{--navigation-layout-align:flex-end}.wp-block-navigation.items-justified-space-between{--navigation-layout-justification-setting:space-between;--navigation-layout-justify:space-between}.wp-block-navigation .has-child .wp-block-navigation__submenu-container{align-items:normal;background-color:inherit;color:inherit;display:flex;flex-direction:column;height:0;left:-1px;opacity:0;overflow:hidden;position:absolute;top:100%;transition:opacity .1s linear;visibility:hidden;width:0;z-index:2}.wp-block-navigation .has-child .wp-block-navigation__submenu-container>.wp-block-navigation-item>.wp-block-navigation-item__content{display:flex;flex-grow:1}.wp-block-navigation .has-child .wp-block-navigation__submenu-container>.wp-block-navigation-item>.wp-block-navigation-item__content .wp-block-navigation__submenu-icon{margin-left:auto;margin-right:0}.wp-block-navigation .has-child .wp-block-navigation__submenu-container .wp-block-navigation-item__content{margin:0}@media (min-width:782px){.wp-block-navigation .has-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container{left:100%;top:-1px}.wp-block-navigation .has-child .wp-block-navigation__submenu-container 
.wp-block-navigation__submenu-container:before{background:#0000;content:"";display:block;height:100%;position:absolute;right:100%;width:.5em}.wp-block-navigation .has-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-icon{margin-right:.25em}.wp-block-navigation .has-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-icon svg{transform:rotate(-90deg)}}.wp-block-navigation .has-child .wp-block-navigation-submenu__toggle[aria-expanded=true]~.wp-block-navigation__submenu-container,.wp-block-navigation .has-child:not(.open-on-click):hover>.wp-block-navigation__submenu-container,.wp-block-navigation .has-child:not(.open-on-click):not(.open-on-hover-click):focus-within>.wp-block-navigation__submenu-container{height:auto;min-width:200px;opacity:1;overflow:visible;visibility:visible;width:auto}.wp-block-navigation.has-background .has-child .wp-block-navigation__submenu-container{left:0;top:100%}@media (min-width:782px){.wp-block-navigation.has-background .has-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container{left:100%;top:0}}.wp-block-navigation-submenu{display:flex;position:relative}.wp-block-navigation-submenu .wp-block-navigation__submenu-icon svg{stroke:currentColor}button.wp-block-navigation-item__content{background-color:initial;border:none;color:currentColor;font-family:inherit;font-size:inherit;font-style:inherit;font-weight:inherit;letter-spacing:inherit;line-height:inherit;text-align:left;text-transform:inherit}.wp-block-navigation-submenu__toggle{cursor:pointer}.wp-block-navigation-item.open-on-click .wp-block-navigation-submenu__toggle{padding-left:0;padding-right:.85em}.wp-block-navigation-item.open-on-click .wp-block-navigation-submenu__toggle+.wp-block-navigation__submenu-icon{margin-left:-.6em;pointer-events:none}.wp-block-navigation-item.open-on-click button.wp-block-navigation-item__content:not(.wp-block-navigation-submenu__toggle){padding:0}.wp-block-navigation 
.wp-block-page-list,.wp-block-navigation__container,.wp-block-navigation__responsive-close,.wp-block-navigation__responsive-container,.wp-block-navigation__responsive-container-content,.wp-block-navigation__responsive-dialog{gap:inherit}:where(.wp-block-navigation.has-background .wp-block-navigation-item a:not(.wp-element-button)),:where(.wp-block-navigation.has-background .wp-block-navigation-submenu a:not(.wp-element-button)){padding:.5em 1em}:where(.wp-block-navigation .wp-block-navigation__submenu-container .wp-block-navigation-item a:not(.wp-element-button)),:where(.wp-block-navigation .wp-block-navigation__submenu-container .wp-block-navigation-submenu a:not(.wp-element-button)),:where(.wp-block-navigation .wp-block-navigation__submenu-container .wp-block-navigation-submenu button.wp-block-navigation-item__content),:where(.wp-block-navigation .wp-block-navigation__submenu-container .wp-block-pages-list__item button.wp-block-navigation-item__content){padding:.5em 1em}.wp-block-navigation.items-justified-right .wp-block-navigation__container .has-child .wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-right .wp-block-page-list>.has-child .wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-space-between .wp-block-page-list>.has-child:last-child .wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-space-between>.wp-block-navigation__container>.has-child:last-child .wp-block-navigation__submenu-container{left:auto;right:0}.wp-block-navigation.items-justified-right .wp-block-navigation__container .has-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-right .wp-block-page-list>.has-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-space-between .wp-block-page-list>.has-child:last-child .wp-block-navigation__submenu-container 
.wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-space-between>.wp-block-navigation__container>.has-child:last-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container{left:-1px;right:-1px}@media (min-width:782px){.wp-block-navigation.items-justified-right .wp-block-navigation__container .has-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-right .wp-block-page-list>.has-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-space-between .wp-block-page-list>.has-child:last-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container,.wp-block-navigation.items-justified-space-between>.wp-block-navigation__container>.has-child:last-child .wp-block-navigation__submenu-container .wp-block-navigation__submenu-container{left:auto;right:100%}}.wp-block-navigation:not(.has-background) .wp-block-navigation__submenu-container{background-color:#fff;border:1px solid #00000026}.wp-block-navigation.has-background .wp-block-navigation__submenu-container{background-color:inherit}.wp-block-navigation:not(.has-text-color) .wp-block-navigation__submenu-container{color:#000}.wp-block-navigation__container{align-items:var(--navigation-layout-align,initial);display:flex;flex-direction:var(--navigation-layout-direction,initial);flex-wrap:var(--navigation-layout-wrap,wrap);justify-content:var(--navigation-layout-justify,initial);list-style:none;margin:0;padding-left:0}.wp-block-navigation__container .is-responsive{display:none}.wp-block-navigation__container:only-child,.wp-block-page-list:only-child{flex-grow:1}@keyframes 
overlay-menu__fade-in-animation{0%{opacity:0;transform:translateY(.5em)}to{opacity:1;transform:translateY(0)}}.wp-block-navigation__responsive-container{bottom:0;display:none;left:0;position:fixed;right:0;top:0}.wp-block-navigation__responsive-container :where(.wp-block-navigation-item a){color:inherit}.wp-block-navigation__responsive-container .wp-block-navigation__responsive-container-content{align-items:var(--navigation-layout-align,initial);display:flex;flex-direction:var(--navigation-layout-direction,initial);flex-wrap:var(--navigation-layout-wrap,wrap);justify-content:var(--navigation-layout-justify,initial)}.wp-block-navigation__responsive-container:not(.is-menu-open.is-menu-open){background-color:inherit!important;color:inherit!important}.wp-block-navigation__responsive-container.is-menu-open{animation:overlay-menu__fade-in-animation .1s ease-out;animation-fill-mode:forwards;background-color:inherit;display:flex;flex-direction:column;overflow:auto;padding:clamp(1rem,var(--wp--style--root--padding-top),20rem) clamp(1rem,var(--wp--style--root--padding-right),20rem) clamp(1rem,var(--wp--style--root--padding-bottom),20rem) clamp(1rem,var(--wp--style--root--padding-left),20em);z-index:100000}@media (prefers-reduced-motion:reduce){.wp-block-navigation__responsive-container.is-menu-open{animation-delay:0s;animation-duration:1ms}}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content{align-items:var(--navigation-layout-justification-setting,inherit);display:flex;flex-direction:column;flex-wrap:nowrap;overflow:visible;padding-top:calc(2rem + 24px)}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content,.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-navigation__container,.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content 
.wp-block-page-list{justify-content:flex-start}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-navigation__submenu-icon{display:none}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .has-child .wp-block-navigation__submenu-container{border:none;height:auto;min-width:200px;opacity:1;overflow:initial;padding-left:2rem;padding-right:2rem;position:static;visibility:visible;width:auto}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-navigation__container,.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-navigation__submenu-container{gap:inherit}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-navigation__submenu-container{padding-top:var(--wp--style--block-gap,2em)}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-navigation-item__content{padding:0}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-navigation-item,.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-navigation__container,.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__responsive-container-content .wp-block-page-list{align-items:var(--navigation-layout-justification-setting,initial);display:flex;flex-direction:column}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation-item,.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation-item .wp-block-navigation__submenu-container,.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__container,.wp-block-navigation__responsive-container.is-menu-open 
.wp-block-page-list{background:#0000!important;color:inherit!important}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__submenu-container.wp-block-navigation__submenu-container.wp-block-navigation__submenu-container.wp-block-navigation__submenu-container{left:auto;right:auto}@media (min-width:600px){.wp-block-navigation__responsive-container:not(.hidden-by-default):not(.is-menu-open){background-color:inherit;display:block;position:relative;width:100%;z-index:auto}.wp-block-navigation__responsive-container:not(.hidden-by-default):not(.is-menu-open) .wp-block-navigation__responsive-container-close{display:none}.wp-block-navigation__responsive-container.is-menu-open .wp-block-navigation__submenu-container.wp-block-navigation__submenu-container.wp-block-navigation__submenu-container.wp-block-navigation__submenu-container{left:0}}.wp-block-navigation:not(.has-background) .wp-block-navigation__responsive-container.is-menu-open{background-color:#fff}.wp-block-navigation:not(.has-text-color) .wp-block-navigation__responsive-container.is-menu-open{color:#000}.wp-block-navigation__toggle_button_label{font-size:1rem;font-weight:700}.wp-block-navigation__responsive-container-close,.wp-block-navigation__responsive-container-open{background:#0000;border:none;color:currentColor;cursor:pointer;margin:0;padding:0;text-transform:inherit;vertical-align:middle}.wp-block-navigation__responsive-container-close svg,.wp-block-navigation__responsive-container-open svg{fill:currentColor;display:block;height:24px;pointer-events:none;width:24px}.wp-block-navigation__responsive-container-open{display:flex}.wp-block-navigation__responsive-container-open.wp-block-navigation__responsive-container-open.wp-block-navigation__responsive-container-open{font-family:inherit;font-size:inherit;font-weight:inherit}@media 
(min-width:600px){.wp-block-navigation__responsive-container-open:not(.always-shown){display:none}}.wp-block-navigation__responsive-container-close{position:absolute;right:0;top:0;z-index:2}.wp-block-navigation__responsive-container-close.wp-block-navigation__responsive-container-close.wp-block-navigation__responsive-container-close{font-family:inherit;font-size:inherit;font-weight:inherit}.wp-block-navigation__responsive-close{width:100%}.has-modal-open .wp-block-navigation__responsive-close{margin-left:auto;margin-right:auto;max-width:var(--wp--style--global--wide-size,100%)}.wp-block-navigation__responsive-close:focus{outline:none}.is-menu-open .wp-block-navigation__responsive-close,.is-menu-open .wp-block-navigation__responsive-container-content,.is-menu-open .wp-block-navigation__responsive-dialog{box-sizing:border-box}.wp-block-navigation__responsive-dialog{position:relative}.has-modal-open .admin-bar .is-menu-open .wp-block-navigation__responsive-dialog{margin-top:46px}@media (min-width:782px){.has-modal-open .admin-bar .is-menu-open .wp-block-navigation__responsive-dialog{margin-top:32px}}html.has-modal-open{overflow:hidden}
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/navigation/view.min.js b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/navigation/view.min.js
deleted file mode 100644
index 1de9f7a..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/navigation/view.min.js
+++ /dev/null
@@ -1 +0,0 @@
-import*as e from"@wordpress/interactivity";var t={d:(e,n)=>{for(var o in n)t.o(n,o)&&!t.o(e,o)&&Object.defineProperty(e,o,{enumerable:!0,get:n[o]})},o:(e,t)=>Object.prototype.hasOwnProperty.call(e,t)};const n=(e=>{var n={};return t.d(n,e),n})({getContext:()=>e.getContext,getElement:()=>e.getElement,store:()=>e.store}),o=["a[href]",'input:not([disabled]):not([type="hidden"]):not([aria-hidden])',"select:not([disabled]):not([aria-hidden])","textarea:not([disabled]):not([aria-hidden])","button:not([disabled]):not([aria-hidden])","[contenteditable]",'[tabindex]:not([tabindex^="-"])'];document.addEventListener("click",(()=>{}));const{state:l,actions:c}=(0,n.store)("core/navigation",{state:{get roleAttribute(){return"overlay"===(0,n.getContext)().type&&l.isMenuOpen?"dialog":null},get ariaModal(){return"overlay"===(0,n.getContext)().type&&l.isMenuOpen?"true":null},get ariaLabel(){const e=(0,n.getContext)();return"overlay"===e.type&&l.isMenuOpen?e.ariaLabel:null},get isMenuOpen(){return Object.values(l.menuOpenedBy).filter(Boolean).length>0},get menuOpenedBy(){const e=(0,n.getContext)();return"overlay"===e.type?e.overlayOpenedBy:e.submenuOpenedBy}},actions:{openMenuOnHover(){const{type:e,overlayOpenedBy:t}=(0,n.getContext)();"submenu"===e&&0===Object.values(t||{}).filter(Boolean).length&&c.openMenu("hover")},closeMenuOnHover(){const{type:e,overlayOpenedBy:t}=(0,n.getContext)();"submenu"===e&&0===Object.values(t||{}).filter(Boolean).length&&c.closeMenu("hover")},openMenuOnClick(){const e=(0,n.getContext)(),{ref:t}=(0,n.getElement)();e.previousFocus=t,c.openMenu("click")},closeMenuOnClick(){c.closeMenu("click"),c.closeMenu("focus")},openMenuOnFocus(){c.openMenu("focus")},toggleMenuOnClick(){const 
e=(0,n.getContext)(),{ref:t}=(0,n.getElement)();window.document.activeElement!==t&&t.focus();const{menuOpenedBy:o}=l;o.click||o.focus?(c.closeMenu("click"),c.closeMenu("focus")):(e.previousFocus=t,c.openMenu("click"))},handleMenuKeydown(e){const{type:t,firstFocusableElement:o,lastFocusableElement:u}=(0,n.getContext)();if(l.menuOpenedBy.click){if("Escape"===e?.key)return c.closeMenu("click"),void c.closeMenu("focus");"overlay"===t&&"Tab"===e.key&&(e.shiftKey&&window.document.activeElement===o?(e.preventDefault(),u.focus()):e.shiftKey||window.document.activeElement!==u||(e.preventDefault(),o.focus()))}},handleMenuFocusout(e){const{modal:t,type:o}=(0,n.getContext)();(null===e.relatedTarget||!t?.contains(e.relatedTarget)&&e.target!==window.document.activeElement&&"submenu"===o)&&(c.closeMenu("click"),c.closeMenu("focus"))},openMenu(e="click"){const{type:t}=(0,n.getContext)();l.menuOpenedBy[e]=!0,"overlay"===t&&document.documentElement.classList.add("has-modal-open")},closeMenu(e="click"){const t=(0,n.getContext)();l.menuOpenedBy[e]=!1,l.isMenuOpen||(t.modal?.contains(window.document.activeElement)&&t.previousFocus?.focus(),t.modal=null,t.previousFocus=null,"overlay"===t.type&&document.documentElement.classList.remove("has-modal-open"))}},callbacks:{initMenu(){const e=(0,n.getContext)(),{ref:t}=(0,n.getElement)();if(l.isMenuOpen){const n=t.querySelectorAll(o);e.modal=t,e.firstFocusableElement=n[0],e.lastFocusableElement=n[n.length-1]}},focusFirstElement(){const{ref:e}=(0,n.getElement)();if(l.isMenuOpen){const t=e.querySelectorAll(o);t?.[0]?.focus()}}}},{lock:!0});
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/social-links/style.min.css b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/social-links/style.min.css
deleted file mode 100644
index cb9548f..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/blocks/social-links/style.min.css
+++ /dev/null
@@ -1 +0,0 @@
-.wp-block-social-links{background:none;box-sizing:border-box;margin-left:0;padding-left:0;padding-right:0;text-indent:0}.wp-block-social-links .wp-social-link a,.wp-block-social-links .wp-social-link a:hover{border-bottom:0;box-shadow:none;text-decoration:none}.wp-block-social-links .wp-social-link svg{height:1em;width:1em}.wp-block-social-links .wp-social-link span:not(.screen-reader-text){font-size:.65em;margin-left:.5em;margin-right:.5em}.wp-block-social-links.has-small-icon-size{font-size:16px}.wp-block-social-links,.wp-block-social-links.has-normal-icon-size{font-size:24px}.wp-block-social-links.has-large-icon-size{font-size:36px}.wp-block-social-links.has-huge-icon-size{font-size:48px}.wp-block-social-links.aligncenter{display:flex;justify-content:center}.wp-block-social-links.alignright{justify-content:flex-end}.wp-block-social-link{border-radius:9999px;display:block;height:auto;transition:transform .1s ease}@media (prefers-reduced-motion:reduce){.wp-block-social-link{transition-delay:0s;transition-duration:0s}}.wp-block-social-link a{align-items:center;display:flex;line-height:0;transition:transform .1s ease}.wp-block-social-link:hover{transform:scale(1.1)}.wp-block-social-links .wp-block-social-link.wp-social-link{display:inline-block;margin:0;padding:0}.wp-block-social-links .wp-block-social-link.wp-social-link .wp-block-social-link-anchor,.wp-block-social-links .wp-block-social-link.wp-social-link .wp-block-social-link-anchor svg,.wp-block-social-links .wp-block-social-link.wp-social-link .wp-block-social-link-anchor:active,.wp-block-social-links .wp-block-social-link.wp-social-link .wp-block-social-link-anchor:hover,.wp-block-social-links .wp-block-social-link.wp-social-link .wp-block-social-link-anchor:visited{color:currentColor;fill:currentColor}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link{background-color:#f0f0f0;color:#444}:where(.wp-block-social-links:not(.is-style-logos-only)) 
.wp-social-link-amazon{background-color:#f90;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-bandcamp{background-color:#1ea0c3;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-behance{background-color:#0757fe;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-bluesky{background-color:#0a7aff;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-codepen{background-color:#1e1f26;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-deviantart{background-color:#02e49b;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-dribbble{background-color:#e94c89;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-dropbox{background-color:#4280ff;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-etsy{background-color:#f45800;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-facebook{background-color:#1778f2;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-fivehundredpx{background-color:#000;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-flickr{background-color:#0461dd;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-foursquare{background-color:#e65678;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-github{background-color:#24292d;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-goodreads{background-color:#eceadd;color:#382110}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-google{background-color:#ea4434;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-gravatar{background-color:#1d4fc4;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) 
.wp-social-link-instagram{background-color:#f00075;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-lastfm{background-color:#e21b24;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-linkedin{background-color:#0d66c2;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-mastodon{background-color:#3288d4;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-medium{background-color:#000;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-meetup{background-color:#f6405f;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-patreon{background-color:#000;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-pinterest{background-color:#e60122;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-pocket{background-color:#ef4155;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-reddit{background-color:#ff4500;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-skype{background-color:#0478d7;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-snapchat{background-color:#fefc00;color:#fff;stroke:#000}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-soundcloud{background-color:#ff5600;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-spotify{background-color:#1bd760;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-telegram{background-color:#2aabee;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-threads{background-color:#000;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-tiktok{background-color:#000;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) 
.wp-social-link-tumblr{background-color:#011835;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-twitch{background-color:#6440a4;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-twitter{background-color:#1da1f2;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-vimeo{background-color:#1eb7ea;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-vk{background-color:#4680c2;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-wordpress{background-color:#3499cd;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-whatsapp{background-color:#25d366;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-x{background-color:#000;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-yelp{background-color:#d32422;color:#fff}:where(.wp-block-social-links:not(.is-style-logos-only)) .wp-social-link-youtube{background-color:red;color:#fff}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link{background:none}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link svg{height:1.25em;width:1.25em}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-amazon{color:#f90}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-bandcamp{color:#1ea0c3}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-behance{color:#0757fe}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-bluesky{color:#0a7aff}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-codepen{color:#1e1f26}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-deviantart{color:#02e49b}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-dribbble{color:#e94c89}:where(.wp-block-social-links.is-style-logos-only) 
.wp-social-link-dropbox{color:#4280ff}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-etsy{color:#f45800}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-facebook{color:#1778f2}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-fivehundredpx{color:#000}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-flickr{color:#0461dd}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-foursquare{color:#e65678}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-github{color:#24292d}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-goodreads{color:#382110}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-google{color:#ea4434}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-gravatar{color:#1d4fc4}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-instagram{color:#f00075}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-lastfm{color:#e21b24}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-linkedin{color:#0d66c2}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-mastodon{color:#3288d4}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-medium{color:#000}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-meetup{color:#f6405f}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-patreon{color:#000}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-pinterest{color:#e60122}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-pocket{color:#ef4155}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-reddit{color:#ff4500}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-skype{color:#0478d7}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-snapchat{color:#fff;stroke:#000}:where(.wp-block-social-links.is-style-logos-only) 
.wp-social-link-soundcloud{color:#ff5600}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-spotify{color:#1bd760}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-telegram{color:#2aabee}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-threads{color:#000}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-tiktok{color:#000}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-tumblr{color:#011835}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-twitch{color:#6440a4}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-twitter{color:#1da1f2}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-vimeo{color:#1eb7ea}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-vk{color:#4680c2}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-whatsapp{color:#25d366}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-wordpress{color:#3499cd}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-x{color:#000}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-yelp{color:#d32422}:where(.wp-block-social-links.is-style-logos-only) .wp-social-link-youtube{color:red}.wp-block-social-links.is-style-pill-shape .wp-social-link{width:auto}:root :where(.wp-block-social-links .wp-social-link a){padding:.25em}:root :where(.wp-block-social-links.is-style-logos-only .wp-social-link a){padding:0}:root :where(.wp-block-social-links.is-style-pill-shape .wp-social-link a){padding-left:.66667em;padding-right:.66667em}.wp-block-social-links:not(.has-icon-color):not(.has-icon-background-color) .wp-social-link-snapchat .wp-block-social-link-label{color:#000}
\ No newline at end of file
diff --git a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/js/comment-reply.min.js b/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/js/comment-reply.min.js
deleted file mode 100644
index ff42ec6..0000000
--- a/static/wp-content/uploads/simply-static/temp-files/simply-static-1-1728840186/wp-includes/js/comment-reply.min.js
+++ /dev/null
@@ -1,2 +0,0 @@
-/*! This file is auto-generated */
-window.addComment=function(v){var I,C,h,E=v.document,b={commentReplyClass:"comment-reply-link",commentReplyTitleId:"reply-title",cancelReplyId:"cancel-comment-reply-link",commentFormId:"commentform",temporaryFormId:"wp-temp-form-div",parentIdFieldId:"comment_parent",postIdFieldId:"comment_post_ID"},e=v.MutationObserver||v.WebKitMutationObserver||v.MozMutationObserver,r="querySelector"in E&&"addEventListener"in v,n=!!E.documentElement.dataset;function t(){d(),e&&new e(o).observe(E.body,{childList:!0,subtree:!0})}function d(e){if(r&&(I=g(b.cancelReplyId),C=g(b.commentFormId),I)){I.addEventListener("touchstart",l),I.addEventListener("click",l);function t(e){if((e.metaKey||e.ctrlKey)&&13===e.keyCode)return C.removeEventListener("keydown",t),e.preventDefault(),C.submit.click(),!1}C&&C.addEventListener("keydown",t);for(var n,d=function(e){var t=b.commentReplyClass;e&&e.childNodes||(e=E);e=E.getElementsByClassName?e.getElementsByClassName(t):e.querySelectorAll("."+t);return e}(e),o=0,i=d.length;o{for(var r in n)t.o(n,r)&&!t.o(e,r)&&Object.defineProperty(e,r,{enumerable:!0,get:n[r]})},o:(t,e)=>Object.prototype.hasOwnProperty.call(t,e)},e={};t.d(e,{zj:()=>we,SD:()=>je,V6:()=>He,jb:()=>Tn,yT:()=>Ke,M_:()=>ke,hb:()=>en,vJ:()=>Ze,ip:()=>Ye,Nf:()=>tn,Kr:()=>nn,li:()=>_t,J0:()=>it,FH:()=>Xe,v4:()=>Qe});var n,r,o,i,s,u,_,c,a,l,f,p,h={},d=[],v=/acit|ex(?:s|g|n|p|$)|rph|grid|ows|mnc|ntw|ine[ch]|zoo|^ord|itera/i,y=Array.isArray;function g(t,e){for(var n in e)t[n]=e[n];return t}function m(t){var e=t.parentNode;e&&e.removeChild(t)}function w(t,e,r){var o,i,s,u={};for(s in e)"key"==s?o=e[s]:"ref"==s?i=e[s]:u[s]=e[s];if(arguments.length>2&&(u.children=arguments.length>3?n.call(arguments,2):r),"function"==typeof t&&null!=t.defaultProps)for(s in t.defaultProps)void 0===u[s]&&(u[s]=t.defaultProps[s]);return b(t,u,o,i,null)}function b(t,e,n,i,s){var u={type:t,props:e,key:n,ref:i,__k:null,__:null,__b:0,__e:null,__d:void 0,__c:null,constructor:void 0,__v:null==s?++o:s,__i:-1,__u:0};return 
null==s&&null!=r.vnode&&r.vnode(u),u}function k(t){return t.children}function x(t,e){this.props=t,this.context=e}function S(t,e){if(null==e)return t.__?S(t.__,t.__i+1):null;for(var n;ee&&s.sort(c));C.__r=0}function $(t,e,n,r,o,i,s,u,_,c,a){var l,f,p,v,y,g=r&&r.__k||d,m=e.length;for(n.__d=_,M(n,e,g),_=n.__d,l=0;l0?b(o.type,o.props,o.key,o.ref?o.ref:null,o.__v):o)?(o.__=t,o.__b=t.__b+1,u=N(o,n,s,a),o.__i=u,i=null,-1!==u&&(a--,(i=n[u])&&(i.__u|=131072)),null==i||null===i.__v?(-1==u&&l--,"function"!=typeof o.type&&(o.__u|=65536)):u!==s&&(u===s+1?l++:u>s?a>_-s?l+=u-s:l--:u(null!=_&&0==(131072&_.__u)?1:0))for(;s>=0||u=0){if((_=e[s])&&0==(131072&_.__u)&&o==_.key&&i===_.type)return s;s--}if(u2&&(_.children=arguments.length>3?n.call(arguments,2):r),b(t.type,_,o||t.key,i||t.ref,null)}n=d.slice,r={__e:function(t,e,n,r){for(var o,i,s;e=e.__;)if((o=e.__c)&&!o.__)try{if((i=o.constructor)&&null!=i.getDerivedStateFromError&&(o.setState(i.getDerivedStateFromError(t)),s=o.__d),null!=o.componentDidCatch&&(o.componentDidCatch(t,r||{}),s=o.__d),s)return o.__E=o}catch(e){t=e}throw t}},o=0,i=function(t){return null!=t&&null==t.constructor},x.prototype.setState=function(t,e){var n;n=null!=this.__s&&this.__s!==this.state?this.__s:this.__s=g({},this.state),"function"==typeof t&&(t=t(g({},n),this.props)),t&&g(n,t),null!=t&&this.__v&&(e&&this._sb.push(e),P(this))},x.prototype.forceUpdate=function(t){this.__v&&(this.__e=!0,t&&this.__h.push(t),P(this))},x.prototype.render=k,s=[],_="function"==typeof Promise?Promise.prototype.then.bind(Promise.resolve()):setTimeout,c=function(t,e){return t.__v.__b-e.__v.__b},C.__r=0,a=0,l=H(!1),f=H(!0),p=0;var B,z,J,q,K=0,G=[],Q=[],X=r,Y=X.__b,Z=X.__r,tt=X.diffed,et=X.__c,nt=X.unmount,rt=X.__;function ot(t,e){X.__h&&X.__h(z,t,K||e),K=0;var n=z.__H||(z.__H={__:[],__h:[]});return t>=n.__.length&&n.__.push({__V:Q}),n.__[t]}function it(t){return K=1,function(t,e,n){var r=ot(B++,2);if(r.t=t,!r.__c&&(r.__=[n?n(e):gt(void 0,e),function(t){var 
e=r.__N?r.__N[0]:r.__[0],n=r.t(e,t);e!==n&&(r.__N=[n,r.__[1]],r.__c.setState({}))}],r.__c=z,!z.u)){var o=function(t,e,n){if(!r.__c.__H)return!0;var o=r.__c.__H.__.filter((function(t){return!!t.__c}));if(o.every((function(t){return!t.__N})))return!i||i.call(this,t,e,n);var s=!1;return o.forEach((function(t){if(t.__N){var e=t.__[0];t.__=t.__N,t.__N=void 0,e!==t.__[0]&&(s=!0)}})),!(!s&&r.__c.props===t)&&(!i||i.call(this,t,e,n))};z.u=!0;var i=z.shouldComponentUpdate,s=z.componentWillUpdate;z.componentWillUpdate=function(t,e,n){if(this.__e){var r=i;i=void 0,o(t,e,n),i=r}s&&s.call(this,t,e,n)},z.shouldComponentUpdate=o}return r.__N||r.__}(gt,t)}function st(t,e){var n=ot(B++,3);!X.__s&&yt(n.__H,e)&&(n.__=t,n.i=e,z.__H.__h.push(n))}function ut(t,e){var n=ot(B++,4);!X.__s&&yt(n.__H,e)&&(n.__=t,n.i=e,z.__h.push(n))}function _t(t){return K=5,ct((function(){return{current:t}}),[])}function ct(t,e){var n=ot(B++,7);return yt(n.__H,e)?(n.__V=t(),n.i=e,n.__h=t,n.__V):n.__}function at(t,e){return K=8,ct((function(){return t}),e)}function lt(t){var e=z.context[t.__c],n=ot(B++,9);return n.c=t,e?(null==n.__&&(n.__=!0,e.sub(z)),e.props.value):t.__}function ft(){for(var t;t=G.shift();)if(t.__P&&t.__H)try{t.__H.__h.forEach(dt),t.__H.__h.forEach(vt),t.__H.__h=[]}catch(e){t.__H.__h=[],X.__e(e,t.__v)}}X.__b=function(t){z=null,Y&&Y(t)},X.__=function(t,e){t&&e.__k&&e.__k.__m&&(t.__m=e.__k.__m),rt&&rt(t,e)},X.__r=function(t){Z&&Z(t),B=0;var e=(z=t.__c).__H;e&&(J===z?(e.__h=[],z.__h=[],e.__.forEach((function(t){t.__N&&(t.__=t.__N),t.__V=Q,t.__N=t.i=void 0}))):(e.__h.forEach(dt),e.__h.forEach(vt),e.__h=[],B=0)),J=z},X.diffed=function(t){tt&&tt(t);var e=t.__c;e&&e.__H&&(e.__H.__h.length&&(1!==G.push(e)&&q===X.requestAnimationFrame||((q=X.requestAnimationFrame)||ht)(ft)),e.__H.__.forEach((function(t){t.i&&(t.__H=t.i),t.__V!==Q&&(t.__=t.__V),t.i=void 
0,t.__V=Q}))),J=z=null},X.__c=function(t,e){e.some((function(t){try{t.__h.forEach(dt),t.__h=t.__h.filter((function(t){return!t.__||vt(t)}))}catch(n){e.some((function(t){t.__h&&(t.__h=[])})),e=[],X.__e(n,t.__v)}})),et&&et(t,e)},X.unmount=function(t){nt&&nt(t);var e,n=t.__c;n&&n.__H&&(n.__H.__.forEach((function(t){try{dt(t)}catch(t){e=t}})),n.__H=void 0,e&&X.__e(e,n.__v))};var pt="function"==typeof requestAnimationFrame;function ht(t){var e,n=function(){clearTimeout(r),pt&&cancelAnimationFrame(e),setTimeout(t)},r=setTimeout(n,100);pt&&(e=requestAnimationFrame(n))}function dt(t){var e=z,n=t.__c;"function"==typeof n&&(t.__c=void 0,n()),z=e}function vt(t){var e=z;t.__c=t.__(),z=e}function yt(t,e){return!t||t.length!==e.length||e.some((function(e,n){return e!==t[n]}))}function gt(t,e){return"function"==typeof e?e(t):e}var mt=Symbol.for("preact-signals");function wt(){if(Et>1)Et--;else{for(var t,e=!1;void 0!==St;){var n=St;for(St=void 0,Pt++;void 0!==n;){var r=n.o;if(n.o=void 0,n.f&=-3,!(8&n.f)&&Nt(n))try{n.c()}catch(n){e||(t=n,e=!0)}n=r}}if(Pt=0,Et--,e)throw t}}function bt(t){if(Et>0)return t();Et++;try{return t()}finally{wt()}}var kt=void 0;var xt,St=void 0,Et=0,Pt=0,Ct=0;function $t(t){if(void 0!==kt){var e=t.n;if(void 0===e||e.t!==kt)return e={i:0,S:t,p:kt.s,n:void 0,t:kt,e:void 0,x:void 0,r:e},void 0!==kt.s&&(kt.s.n=e),kt.s=e,t.n=e,32&kt.f&&t.S(e),e;if(-1===e.i)return e.i=0,void 0!==e.n&&(e.n.p=e.p,void 0!==e.p&&(e.p.n=e.n),e.p=kt.s,e.n=void 0,kt.s.n=e,kt.s=e),e}}function Mt(t){this.v=t,this.i=0,this.n=void 0,this.t=void 0}function Ot(t){return new Mt(t)}function Nt(t){for(var e=t.s;void 0!==e;e=e.n)if(e.S.i!==e.i||!e.S.h()||e.S.i!==e.i)return!0;return!1}function Tt(t){for(var e=t.s;void 0!==e;e=e.n){var n=e.S.n;if(void 0!==n&&(e.r=n),e.S.n=e,e.i=-1,void 0===e.n){t.s=e;break}}}function jt(t){for(var e=t.s,n=void 0;void 0!==e;){var r=e.p;-1===e.i?(e.S.U(e),void 0!==r&&(r.n=e.n),void 0!==e.n&&(e.n.p=r)):n=e,e.S.n=e.r,void 0!==e.r&&(e.r=void 0),e=r}t.s=n}function 
Ht(t){Mt.call(this,void 0),this.x=t,this.s=void 0,this.g=Ct-1,this.f=4}function Ut(t){return new Ht(t)}function Wt(t){var e=t.u;if(t.u=void 0,"function"==typeof e){Et++;var n=kt;kt=void 0;try{e()}catch(e){throw t.f&=-2,t.f|=8,Lt(t),e}finally{kt=n,wt()}}}function Lt(t){for(var e=t.s;void 0!==e;e=e.n)e.S.U(e);t.x=void 0,t.s=void 0,Wt(t)}function At(t){if(kt!==this)throw new Error("Out-of-order effect");jt(this),kt=t,this.f&=-2,8&this.f&&Lt(this),wt()}function Ft(t){this.x=t,this.u=void 0,this.s=void 0,this.o=void 0,this.f=32}function Rt(t){var e=new Ft(t);try{e.c()}catch(t){throw e.d(),t}return e.d.bind(e)}function Dt(t,e){r[t]=e.bind(null,r[t]||function(){})}function It(t){xt&&xt(),xt=t&&t.S()}function Vt(t){var e=this,n=t.data,r=function(t){return ct((function(){return Ot(t)}),[])}(n);r.value=n;var o=ct((function(){for(var t=e.__v;t=t.__;)if(t.__c){t.__c.__$f|=4;break}return e.__$u.c=function(){var t;i(o.peek())||3!==(null==(t=e.base)?void 0:t.nodeType)?(e.__$f|=1,e.setState({})):e.base.data=o.peek()},Ut((function(){var t=r.value.value;return 0===t?0:!0===t?"":t||""}))}),[]);return o.value}function Bt(t,e,n,r){var o=e in t&&void 0===t.ownerSVGElement,i=Ot(n);return{o:function(t,e){i.value=t,r=e},d:Rt((function(){var n=i.value.value;r[e]!==n&&(r[e]=n,o?t[e]=n:n?t.setAttribute(e,n):t.removeAttribute(e))}))}}Mt.prototype.brand=mt,Mt.prototype.h=function(){return!0},Mt.prototype.S=function(t){this.t!==t&&void 0===t.e&&(t.x=this.t,void 0!==this.t&&(this.t.e=t),this.t=t)},Mt.prototype.U=function(t){if(void 0!==this.t){var e=t.e,n=t.x;void 0!==e&&(e.x=n,t.e=void 0),void 0!==n&&(n.e=e,t.x=void 0),t===this.t&&(this.t=n)}},Mt.prototype.subscribe=function(t){var e=this;return Rt((function(){var n=e.value,r=kt;kt=void 0;try{t(n)}finally{kt=r}}))},Mt.prototype.valueOf=function(){return this.value},Mt.prototype.toString=function(){return this.value+""},Mt.prototype.toJSON=function(){return this.value},Mt.prototype.peek=function(){var t=kt;kt=void 0;try{return 
this.value}finally{kt=t}},Object.defineProperty(Mt.prototype,"value",{get:function(){var t=$t(this);return void 0!==t&&(t.i=this.i),this.v},set:function(t){if(t!==this.v){if(Pt>100)throw new Error("Cycle detected");this.v=t,this.i++,Ct++,Et++;try{for(var e=this.t;void 0!==e;e=e.x)e.t.N()}finally{wt()}}}}),(Ht.prototype=new Mt).h=function(){if(this.f&=-3,1&this.f)return!1;if(32==(36&this.f))return!0;if(this.f&=-5,this.g===Ct)return!0;if(this.g=Ct,this.f|=1,this.i>0&&!Nt(this))return this.f&=-2,!0;var t=kt;try{Tt(this),kt=this;var e=this.x();(16&this.f||this.v!==e||0===this.i)&&(this.v=e,this.f&=-17,this.i++)}catch(t){this.v=t,this.f|=16,this.i++}return kt=t,jt(this),this.f&=-2,!0},Ht.prototype.S=function(t){if(void 0===this.t){this.f|=36;for(var e=this.s;void 0!==e;e=e.n)e.S.S(e)}Mt.prototype.S.call(this,t)},Ht.prototype.U=function(t){if(void 0!==this.t&&(Mt.prototype.U.call(this,t),void 0===this.t)){this.f&=-33;for(var e=this.s;void 0!==e;e=e.n)e.S.U(e)}},Ht.prototype.N=function(){if(!(2&this.f)){this.f|=6;for(var t=this.t;void 0!==t;t=t.x)t.t.N()}},Object.defineProperty(Ht.prototype,"value",{get:function(){if(1&this.f)throw new Error("Cycle detected");var t=$t(this);if(this.h(),void 0!==t&&(t.i=this.i),16&this.f)throw this.v;return this.v}}),Ft.prototype.c=function(){var t=this.S();try{if(8&this.f)return;if(void 0===this.x)return;var e=this.x();"function"==typeof e&&(this.u=e)}finally{t()}},Ft.prototype.S=function(){if(1&this.f)throw new Error("Cycle detected");this.f|=1,this.f&=-9,Wt(this),Tt(this),Et++;var t=kt;return kt=this,At.bind(this,t)},Ft.prototype.N=function(){2&this.f||(this.f|=2,this.o=St,St=this)},Ft.prototype.d=function(){this.f|=8,1&this.f||Lt(this)},Vt.displayName="_st",Object.defineProperties(Mt.prototype,{constructor:{configurable:!0,value:void 0},type:{configurable:!0,value:Vt},props:{configurable:!0,get:function(){return{data:this}}},__b:{configurable:!0,value:1}}),Dt("__b",(function(t,e){if("string"==typeof e.type){var n,r=e.props;for(var o in 
r)if("children"!==o){var i=r[o];i instanceof Mt&&(n||(e.__np=n={}),n[o]=i,r[o]=i.peek())}}t(e)})),Dt("__r",(function(t,e){It();var n,r=e.__c;r&&(r.__$f&=-2,void 0===(n=r.__$u)&&(r.__$u=n=function(t){var e;return Rt((function(){e=this})),e.c=function(){r.__$f|=1,r.setState({})},e}())),r,It(n),t(e)})),Dt("__e",(function(t,e,n,r){It(),void 0,t(e,n,r)})),Dt("diffed",(function(t,e){var n;if(It(),void 0,"string"==typeof e.type&&(n=e.__e)){var r=e.__np,o=e.props;if(r){var i=n.U;if(i)for(var s in i){var u=i[s];void 0===u||s in r||(u.d(),i[s]=void 0)}else n.U=i={};for(var _ in r){var c=i[_],a=r[_];void 0===c?(c=Bt(n,_,a,o),i[_]=c):c.o(a,o)}}}t(e)})),Dt("unmount",(function(t,e){if("string"==typeof e.type){var n=e.__e;if(n){var r=n.U;if(r)for(var o in n.U=void 0,r){var i=r[o];i&&i.d()}}}else{var s=e.__c;if(s){var u=s.__$u;u&&(s.__$u=void 0,u.d())}}t(e)})),Dt("__h",(function(t,e,n,r){(r<3||9===r)&&(e.__$f|=2),t(e,n,r)})),x.prototype.shouldComponentUpdate=function(t,e){var n=this.__$u;if(!(n&&void 0!==n.s||4&this.__$f))return!0;if(3&this.__$f)return!0;for(var r in e)return!0;for(var o in t)if("__source"!==o&&t[o]!==this.props[o])return!0;for(var i in this.props)if(!(i in t))return!0;return!1};var zt=new WeakMap,Jt=new WeakMap,qt=new WeakMap,Kt=new WeakSet,Gt=new WeakMap,Qt=/^\$/,Xt=Object.getOwnPropertyDescriptor,Yt=!1,Zt=function(t){if(!_e(t))throw new Error("This object can't be observed.");return Jt.has(t)||Jt.set(t,ee(t,oe)),Jt.get(t)},te=function(t,e){Yt=!0;var n=t[e];try{Yt=!1}catch(t){}return n};var ee=function(t,e){var n=new Proxy(t,e);return Kt.add(n),n},ne=function(){throw new Error("Don't mutate the signals directly.")},re=function(t){return function(e,n,r){var o;if(Yt)return Reflect.get(e,n,r);var i=t||"$"===n[0];if(!t&&i&&Array.isArray(e)){if("$"===n)return qt.has(e)||qt.set(e,ee(e,ie)),qt.get(e);i="$length"===n}zt.has(r)||zt.set(r,new Map);var s=zt.get(r),u=i?n.replace(Qt,""):n;if(s.has(u)||"function"!=typeof(null==(o=Xt(e,u))?void 0:o.get)){var 
_=Reflect.get(e,u,r);if(i&&"function"==typeof _)return;if("symbol"==typeof u&&se.has(u))return _;s.has(u)||(_e(_)&&(Jt.has(_)||Jt.set(_,ee(_,oe)),_=Jt.get(_)),s.set(u,Ot(_)))}else s.set(u,Ut((function(){return Reflect.get(e,u,r)})));return i?s.get(u):s.get(u).value}},oe={get:re(!1),set:function(t,e,n,r){var o;if("function"==typeof(null==(o=Xt(t,e))?void 0:o.set))return Reflect.set(t,e,n,r);zt.has(r)||zt.set(r,new Map);var i=zt.get(r);if("$"===e[0]){n instanceof Mt||ne();var s=e.replace(Qt,"");return i.set(s,n),Reflect.set(t,s,n.peek(),r)}var u=n;_e(n)&&(Jt.has(n)||Jt.set(n,ee(n,oe)),u=Jt.get(n));var _=!(e in t),c=Reflect.set(t,e,n,r);return i.has(e)?i.get(e).value=u:i.set(e,Ot(u)),_&&Gt.has(t)&&Gt.get(t).value++,Array.isArray(t)&&i.has("length")&&(i.get("length").value=t.length),c},deleteProperty:function(t,e){"$"===e[0]&&ne();var n=zt.get(Jt.get(t)),r=Reflect.deleteProperty(t,e);return n&&n.has(e)&&(n.get(e).value=void 0),Gt.has(t)&&Gt.get(t).value++,r},ownKeys:function(t){return Gt.has(t)||Gt.set(t,Ot(0)),Gt._=Gt.get(t).value,Reflect.ownKeys(t)}},ie={get:re(!0),set:ne,deleteProperty:ne},se=new Set(Object.getOwnPropertyNames(Symbol).map((function(t){return Symbol[t]})).filter((function(t){return"symbol"==typeof t}))),ue=new Set([Object,Array]),_e=function(t){return"object"==typeof t&&null!==t&&ue.has(t.constructor)&&!Kt.has(t)};const ce=t=>Boolean(t&&"object"==typeof t&&t.constructor===Object),ae=(t,e)=>{if(ce(t)&&ce(e))for(const n in e){const r=Object.getOwnPropertyDescriptor(e,n)?.get;if("function"==typeof r)Object.defineProperty(t,n,{get:r});else if(ce(e[n]))t[n]||(t[n]={}),ae(t[n],e[n]);else try{t[n]=e[n]}catch(t){}}},le=new Map,fe=new Map,pe=new Map,he=new Map,de=new WeakMap,ve=new WeakMap,ye=new WeakMap,ge=(t,e)=>{if(!de.has(t)){const n=new Proxy(t,me);de.set(t,n),ve.set(n,e)}return de.get(t)},me={get:(t,e,n)=>{const r=ve.get(n),o=Object.getOwnPropertyDescriptor(t,e)?.get;if(o){const e=Ue();if(e){const n=ye.get(e)||ye.set(e,new Map).get(e);return 
n.has(o)||n.set(o,Ut((()=>{Fe(r),We(e);try{return o.call(t)}finally{Le(),Re()}}))),n.get(o).value}}const i=Reflect.get(t,e);if(void 0===i&&n===le.get(r)){const n={};return Reflect.set(t,e,n),ge(n,r)}return"GeneratorFunction"===i?.constructor?.name?async(...t)=>{const e=Ue(),n=i(...t);let o,s;for(;;){Fe(r),We(e);try{s=n.next(o)}finally{Le(),Re()}try{o=await s.value}catch(t){Fe(r),We(e),n.throw(t)}finally{Le(),Re()}if(s.done)break}return o}:"function"==typeof i?(...t)=>{Fe(r);try{return i(...t)}finally{Re()}}:ce(i)?ge(i,r):i},set:(t,e,n)=>Reflect.set(t,e,n)},we=t=>he.get(t||Ae())||{},be="I acknowledge that using a private store means my plugin will inevitably break on the next store release.";function ke(t,{state:e={},...n}={},{lock:r=!1}={}){if(le.has(t)){if(r===be||pe.has(t)){const e=pe.get(t);if(!(r===be||!0!==r&&r===e))throw e?Error("Cannot unlock a private store with an invalid lock code"):Error("Cannot lock a public store")}else pe.set(t,r);const o=fe.get(t);ae(o,n),ae(o.state,e)}else{r!==be&&pe.set(t,r);const o={state:Zt(ce(e)?e:{}),...n},i=new Proxy(o,me);fe.set(t,o),le.set(t,i),ve.set(i,t)}return le.get(t)}const xe=(t=document)=>{var e;const n=null!==(e=t.getElementById("wp-script-module-data-@wordpress/interactivity"))&&void 0!==e?e:t.getElementById("wp-interactivity-data");if(n?.textContent)try{return JSON.parse(n.textContent)}catch{}return{}},Se=t=>{ce(t?.state)&&Object.entries(t.state).forEach((([t,e])=>{ke(t,{state:e},{lock:be})})),ce(t?.config)&&Object.entries(t.config).forEach((([t,e])=>{he.set(t,e)}))},Ee=xe();Se(Ee);const Pe=function(t,e){var n={__c:e="__cC"+p++,__:t,Consumer:function(t,e){return t.children(e)},Provider:function(t){var n,r;return this.getChildContext||(n=[],(r={})[e]=this,this.getChildContext=function(){return r},this.shouldComponentUpdate=function(t){this.props.value!==t.value&&n.some((function(t){t.__e=!0,P(t)}))},this.sub=function(t){n.push(t);var 
e=t.componentWillUnmount;t.componentWillUnmount=function(){n.splice(n.indexOf(t),1),e&&e.call(t)}}),t.children}};return n.Provider.__=n.Consumer.contextType=n}({}),Ce=new WeakMap,$e=()=>{throw new Error("Please use `data-wp-bind` to modify the attributes of an element.")},Me={get(t,e,n){const r=Reflect.get(t,e,n);return r&&"object"==typeof r?Oe(r):r},set:$e,deleteProperty:$e},Oe=t=>(Ce.has(t)||Ce.set(t,new Proxy(t,Me)),Ce.get(t)),Ne=[],Te=[],je=t=>Ue()?.context[t||Ae()],He=()=>{if(!Ue())throw Error("Cannot call `getElement()` outside getters and actions used by directives.");const{ref:t,attributes:e}=Ue();return Object.freeze({ref:t.current,attributes:Oe(e)})},Ue=()=>Ne.slice(-1)[0],We=t=>{Ne.push(t)},Le=()=>{Ne.pop()},Ae=()=>Te.slice(-1)[0],Fe=t=>{Te.push(t)},Re=()=>{Te.pop()},De={},Ie={},Ve=(t,e,{priority:n=10}={})=>{De[t]=e,Ie[t]=n},Be=({scope:t})=>(e,...n)=>{let{value:r,namespace:o}=e;if("string"!=typeof r)throw new Error("The `value` prop should be a string path");const i="!"===r[0]&&!!(r=r.slice(1));We(t);const s=((t,e)=>{if(!e)return void rn(`Namespace missing for "${t}". 
The value for that path won't be resolved.`);let n=le.get(e);void 0===n&&(n=ke(e,void 0,{lock:be}));const r={...n,context:Ue().context[e]};try{return t.split(".").reduce(((t,e)=>t[e]),r)}catch(t){}})(r,o),u="function"==typeof s?s(...n):s;return Le(),i?!u:u},ze=({directives:t,priorityLevels:[e,...n],element:r,originalProps:o,previousScope:i})=>{const s=_t({}).current;s.evaluate=at(Be({scope:s}),[]),s.context=lt(Pe),s.ref=i?.ref||_t(null),r=V(r,{ref:s.ref}),s.attributes=r.props;const u=n.length>0?w(ze,{directives:t,priorityLevels:n,element:r,originalProps:o,previousScope:s}):r,_={...o,children:u},c={directives:t,props:_,element:r,context:Pe,evaluate:s.evaluate};We(s);for(const t of e){const e=De[t]?.(c);void 0!==e&&(_.children=e)}return Le(),_.children},Je=r.vnode;r.vnode=t=>{if(t.props.__directives){const e=t.props,n=e.__directives;n.key&&(t.key=n.key.find((({suffix:t})=>"default"===t)).value),delete e.__directives;const r=(t=>{const e=Object.keys(t).reduce(((t,e)=>{if(De[e]){const n=Ie[e];(t[n]=t[n]||[]).push(e)}return t}),{});return Object.entries(e).sort((([t],[e])=>parseInt(t)-parseInt(e))).map((([,t])=>t))})(n);r.length>0&&(t.props={directives:n,priorityLevels:r,originalProps:e,type:t.type,element:w(t.type,e),top:!0},t.type=ze)}Je&&Je(t)};const qe=t=>new Promise((e=>{const n=()=>{clearTimeout(r),window.cancelAnimationFrame(o),setTimeout((()=>{t(),e()}))},r=setTimeout(n,100),o=window.requestAnimationFrame(n)})),Ke=()=>new Promise((t=>{setTimeout(t,0)}));function Ge(t){st((()=>{let e=null,n=!1;return e=function(t,e){let n=()=>{};const r=Rt((function(){return n=this.c.bind(this),this.x=t,this.c=e,t()}));return{flush:n,dispose:r}}(t,(async()=>{e&&!n&&(n=!0,await qe(e.flush),n=!1)})),e.dispose}),[])}function Qe(t){const e=Ue(),n=Ae();return"GeneratorFunction"===t?.constructor?.name?async(...r)=>{const o=t(...r);let i,s;for(;;){Fe(n),We(e);try{s=o.next(i)}finally{Re(),Le()}try{i=await s.value}catch(t){o.throw(t)}if(s.done)break}return 
i}:(...r)=>{Fe(n),We(e);try{return t(...r)}finally{Re(),Le()}}}function Xe(t){Ge(Qe(t))}function Ye(t){st(Qe(t),[])}function Ze(t,e){st(Qe(t),e)}function tn(t,e){ut(Qe(t),e)}function en(t,e){return at(Qe(t),e)}function nn(t,e){return ct(Qe(t),e)}new Set;const rn=t=>{0},on=new WeakMap,sn=new WeakMap,un=new WeakMap,_n=new WeakMap,cn=t=>Boolean(t&&"object"==typeof t&&t.constructor===Object),an=Reflect.getOwnPropertyDescriptor,ln=(t,e={})=>{if(_n.set(t,e),!sn.has(t)){const e=new Proxy(t,{get:(e,n)=>{const r=_n.get(t),o=e[n];return!(n in e)&&n in r?r[n]:n in e&&!on.get(e)?.has(n)&&cn(te(e,n))?ln(o,r[n]):sn.has(o)?sn.get(o):n in e?o:r[n]},set:(e,n,r)=>{const o=_n.get(t),i=n in e||!(n in o)?e:o;if(r&&"object"==typeof r&&(on.has(i)||on.set(i,new Set),on.get(i).add(n)),un.has(r)){const t=un.get(r);i[n]=t}else i[n]=r;return!0},ownKeys:e=>[...new Set([...Object.keys(_n.get(t)),...Object.keys(e)])],getOwnPropertyDescriptor:(e,n)=>an(e,n)||an(_n.get(t),n)});sn.set(t,e),un.set(e,t)}return sn.get(t)},fn=(t,e)=>{for(const n in e)cn(te(t,n))&&cn(te(e,n))?fn(t[`$${n}`].peek(),e[n]):t[n]=e[n]};function pn(t){return cn(t)?Object.fromEntries(Object.entries(t).map((([t,e])=>[t,pn(e)]))):Array.isArray(t)?t.map((t=>pn(t))):t}const hn=/(?:([\u0080-\uFFFF\w-%@]+) *:? *([^{;]+?);|([^;}{]*?) 
*{)|(}\s*)/g,dn=/\/\*[^]*?\*\/| +/g,vn=/\n+/g,yn=t=>({directives:e,evaluate:n})=>{e[`on-${t}`].filter((({suffix:t})=>"default"!==t)).forEach((e=>{const r=e.suffix.split("--",1)[0];Ye((()=>{const o=t=>n(e,t),i="window"===t?window:document;return i.addEventListener(r,o),()=>i.removeEventListener(r,o)}))}))},gn=t=>({directives:e,evaluate:n})=>{e[`on-async-${t}`].filter((({suffix:t})=>"default"!==t)).forEach((e=>{const r=e.suffix.split("--",1)[0];Ye((()=>{const o=async t=>{await Ke(),n(e,t)},i="window"===t?window:document;return i.addEventListener(r,o,{passive:!0}),()=>i.removeEventListener(r,o)}))}))},mn=()=>{Ve("context",(({directives:{context:t},props:{children:e},context:n})=>{const{Provider:r}=n,o=lt(n),i=_t(Zt({})),s=t.find((({suffix:t})=>"default"===t));return w(r,{value:ct((()=>{if(s){const{namespace:t,value:e}=s;cn(e)||rn(`The value of data-wp-context in "${t}" store must be a valid stringified JSON object.`),fn(i.current,{[t]:pn(e)})}return ln(i.current,o)}),[s,o])},e)}),{priority:5}),Ve("watch",(({directives:{watch:t},evaluate:e})=>{t.forEach((t=>{Xe((()=>e(t)))}))})),Ve("init",(({directives:{init:t},evaluate:e})=>{t.forEach((t=>{Ye((()=>e(t)))}))})),Ve("on",(({directives:{on:t},element:e,evaluate:n})=>{const r=new Map;t.filter((({suffix:t})=>"default"!==t)).forEach((t=>{const e=t.suffix.split("--")[0];r.has(e)||r.set(e,new Set),r.get(e).add(t)})),r.forEach(((t,r)=>{const o=e.props[`on${r}`];e.props[`on${r}`]=e=>{t.forEach((t=>{o&&o(e),n(t,e)}))}}))})),Ve("on-async",(({directives:{"on-async":t},element:e,evaluate:n})=>{const r=new Map;t.filter((({suffix:t})=>"default"!==t)).forEach((t=>{const e=t.suffix.split("--")[0];r.has(e)||r.set(e,new Set),r.get(e).add(t)})),r.forEach(((t,r)=>{const o=e.props[`on${r}`];e.props[`on${r}`]=e=>{o&&o(e),t.forEach((async t=>{await 
Ke(),n(t,e)}))}}))})),Ve("on-window",yn("window")),Ve("on-document",yn("document")),Ve("on-async-window",gn("window")),Ve("on-async-document",gn("document")),Ve("class",(({directives:{class:t},element:e,evaluate:n})=>{t.filter((({suffix:t})=>"default"!==t)).forEach((t=>{const r=t.suffix,o=n(t),i=e.props.class||"",s=new RegExp(`(^|\\s)${r}(\\s|$)`,"g");o?s.test(i)||(e.props.class=i?`${i} ${r}`:r):e.props.class=i.replace(s," ").trim(),Ye((()=>{o?e.ref.current.classList.add(r):e.ref.current.classList.remove(r)}))}))})),Ve("style",(({directives:{style:t},element:e,evaluate:n})=>{t.filter((({suffix:t})=>"default"!==t)).forEach((t=>{const r=t.suffix,o=n(t);e.props.style=e.props.style||{},"string"==typeof e.props.style&&(e.props.style=(t=>{const e=[{}];let n,r;for(;n=hn.exec(t.replace(dn,""));)n[4]?e.shift():n[3]?(r=n[3].replace(vn," ").trim(),e.unshift(e[0][r]=e[0][r]||{})):e[0][n[1]]=n[2].replace(vn," ").trim();return e[0]})(e.props.style)),o?e.props.style[r]=o:delete e.props.style[r],Ye((()=>{o?e.ref.current.style[r]=o:e.ref.current.style.removeProperty(r)}))}))})),Ve("bind",(({directives:{bind:t},element:e,evaluate:n})=>{t.filter((({suffix:t})=>"default"!==t)).forEach((t=>{const r=t.suffix,o=n(t);e.props[r]=o,Ye((()=>{const t=e.ref.current;if("style"!==r){if("width"!==r&&"height"!==r&&"href"!==r&&"list"!==r&&"form"!==r&&"tabIndex"!==r&&"download"!==r&&"rowSpan"!==r&&"colSpan"!==r&&"role"!==r&&r in t)try{return void(t[r]=null==o?"":o)}catch(t){}null==o||!1===o&&"-"!==r[4]?t.removeAttribute(r):t.setAttribute(r,o)}else"string"==typeof o&&(t.style.cssText=o)}))}))})),Ve("ignore",(({element:{type:t,props:{innerHTML:e,...n}}})=>w(t,{dangerouslySetInnerHTML:{__html:ct((()=>e),[])},...n}))),Ve("text",(({directives:{text:t},element:e,evaluate:n})=>{const r=t.find((({suffix:t})=>"default"===t));if(r)try{const t=n(r);e.props.children="object"==typeof t?null:t.toString()}catch(t){e.props.children=null}else 
e.props.children=null})),Ve("run",(({directives:{run:t},evaluate:e})=>{t.forEach((t=>e(t)))})),Ve("each",(({directives:{each:t,"each-key":e},context:n,element:r,evaluate:o})=>{if("template"!==r.type)return;const{Provider:i}=n,s=lt(n),[u]=t,{namespace:_,suffix:c}=u;return o(u).map((t=>{const n="default"===c?"item":c.replace(/^-+|-+$/g,"").toLowerCase().replace(/-([a-z])/g,(function(t,e){return e.toUpperCase()}));const o=Zt({[_]:{}}),u=ln(o,s);u[_][n]=t;const a={...Ue(),context:u},l=e?Be({scope:a})(e[0]):t;return w(i,{value:u,key:l},r.props.content)}))}),{priority:20}),Ve("each-child",(()=>null),{priority:1})},wn="wp",bn=`data-${wn}-ignore`,kn=`data-${wn}-interactive`,xn=`data-${wn}-`,Sn=[],En=new RegExp(`^data-${wn}-([a-z0-9]+(?:-[a-z0-9]+)*)(?:--([a-z0-9_-]+))?$`,"i"),Pn=/^([\w_\/-]+)::(.+)$/,Cn=new WeakSet;function $n(t){const e=document.createTreeWalker(t,205);return function t(n){const{nodeType:r}=n;if(3===r)return[n.data];if(4===r){var o;const t=e.nextSibling();return n.replaceWith(new window.Text(null!==(o=n.nodeValue)&&void 0!==o?o:"")),[n.nodeValue,t]}if(8===r||7===r){const t=e.nextSibling();return n.remove(),[null,t]}const i=n,{attributes:s}=i,u=i.localName,_={},c=[],a=[];let l=!1,f=!1;for(let t=0;t{const o=En.exec(e);if(null===o)return rn(`Found malformed directive name: ${e}.`),t;const i=o[1]||"",s=o[2]||"default";var u;return t[i]=t[i]||[],t[i].push({namespace:null!=n?n:null!==(u=Sn[Sn.length-1])&&void 0!==u?u:null,value:r,suffix:s}),t}),{})),"template"===u)_.content=[...i.content.childNodes].map((t=>$n(t)));else{let n=e.firstChild();if(n){for(;n;){const[r,o]=t(n);r&&c.push(r),n=o||e.nextSibling()}e.parentNode()}}return f&&Sn.pop(),[w(u,_,c)]}(e.currentNode)}const Mn=new WeakMap,On=t=>{if(!t.parentElement)throw Error("The passed region should be an element with a parent.");return Mn.has(t)||Mn.set(t,((t,e)=>{const n=(e=[].concat(e))[e.length-1].nextSibling;function r(e,r){t.insertBefore(e,r||n)}return 
t.__k={nodeType:1,parentNode:t,firstChild:e[0],childNodes:e,insertBefore:r,appendChild:r,removeChild(e){t.removeChild(e)}}})(t.parentElement,t)),Mn.get(t)},Nn=new WeakMap,Tn=t=>{if("I acknowledge that using private APIs means my theme or plugin will inevitably break in the next version of WordPress."===t)return{directivePrefix:wn,getRegionRootFragment:On,initialVdom:Nn,toVdom:$n,directive:Ve,getNamespace:Ae,h:w,cloneElement:V,render:D,deepSignal:Zt,parseInitialData:xe,populateInitialData:Se,batch:bt};throw new Error("Forbidden access.")};document.addEventListener("DOMContentLoaded",(async()=>{mn(),await(async()=>{const t=document.querySelectorAll(`[data-${wn}-interactive]`);for(const e of t)if(!Cn.has(e)){await Ke();const t=On(e),n=$n(e);Nn.set(e,n),await Ke(),I(n,t)}})()}));var jn=e.zj,Hn=e.SD,Un=e.V6,Wn=e.jb,Ln=e.yT,An=e.M_,Fn=e.hb,Rn=e.vJ,Dn=e.ip,In=e.Nf,Vn=e.Kr,Bn=e.li,zn=e.J0,Jn=e.FH,qn=e.v4;export{jn as getConfig,Hn as getContext,Un as getElement,Wn as privateApis,Ln as splitTask,An as store,Fn as useCallback,Rn as useEffect,Dn as useInit,In as useLayoutEffect,Vn as useMemo,Bn as useRef,zn as useState,Jn as useWatch,qn as withScope};
\ No newline at end of file
diff --git a/static/wp-content/uploads/wpforms/.htaccess b/static/wp-content/uploads/wpforms/.htaccess
deleted file mode 100644
index 21b9811..0000000
--- a/static/wp-content/uploads/wpforms/.htaccess
+++ /dev/null
@@ -1,25 +0,0 @@
-
-# BEGIN WPForms
-# The directives (lines) between "BEGIN WPForms" and "END WPForms" are
-# dynamically generated, and should only be modified via WordPress filters.
-# Any changes to the directives between these markers will be overwritten.
-# Disable PHP and Python scripts parsing.
-<Files *>
- SetHandler none
- SetHandler default-handler
- RemoveHandler .cgi .php .php3 .php4 .php5 .phtml .pl .py .pyc .pyo
- RemoveType .cgi .php .php3 .php4 .php5 .phtml .pl .py .pyc .pyo
-</Files>
-<IfModule mod_php5.c>
- php_flag engine off
-</IfModule>
-<IfModule mod_php7.c>
- php_flag engine off
-</IfModule>
-<IfModule mod_php8.c>
- php_flag engine off
-</IfModule>
-<IfModule mod_headers.c>
- Header set X-Robots-Tag "noindex"
-</IfModule>
-# END WPForms
\ No newline at end of file
diff --git a/static/wp-content/uploads/wpforms/cache/.htaccess b/static/wp-content/uploads/wpforms/cache/.htaccess
deleted file mode 100644
index 81fa1ef..0000000
--- a/static/wp-content/uploads/wpforms/cache/.htaccess
+++ /dev/null
@@ -1,16 +0,0 @@
-
-# BEGIN WPForms
-# The directives (lines) between "BEGIN WPForms" and "END WPForms" are
-# dynamically generated, and should only be modified via WordPress filters.
-# Any changes to the directives between these markers will be overwritten.
-# Disable access for any file in the cache dir.
-# Apache 2.2
-<IfModule !mod_authz_core.c>
- Deny from all
-</IfModule>
-
-# Apache 2.4+
-<IfModule mod_authz_core.c>
- Require all denied
-</IfModule>
-# END WPForms
\ No newline at end of file
diff --git a/static/wp-content/uploads/wpforms/cache/addons.json b/static/wp-content/uploads/wpforms/cache/addons.json
deleted file mode 100644
index 963d472..0000000
--- a/static/wp-content/uploads/wpforms/cache/addons.json
+++ /dev/null
@@ -1 +0,0 @@
-{"wpforms-activecampaign":{"title":"ActiveCampaign Addon","slug":"wpforms-activecampaign","url":"https:\/\/wpforms.com\/addons\/activecampaign-addon\/","version":"1.5.0","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2020\/03\/addon-icon.png","excerpt":"The WPForms ActiveCampaign addon lets you add contacts to your account, record events, add notes to contacts, and more.","doc":"https:\/\/wpforms.com\/docs\/how-to-install-and-use-the-activecampaign-addon-with-wpforms\/","id":729633,"license":["ultimate","agency","elite"],"category":["providers"],"changelog":["
1.5.0 (2022-09-20)<\/h4>
Fixed: Warning notice on form submissions when no ActiveCampaign account was selected.<\/li><\/ul>","
1.4.0 (2022-06-28)<\/h4>
Added: Possibility to use Smart Tags in Notes.<\/li>
Changed: Minimum WPForms version supported is 1.7.5.<\/li>
Changed: Reorganized locations of 3rd party libraries.<\/li><\/ul>","
1.3.0 (2022-05-26)<\/h4>
IMPORTANT: Support for WordPress 5.1 has been discontinued. If you are running WordPress 5.1, you MUST upgrade WordPress before installing the new WPForms ActiveCampaign. Failure to do that will disable the new WPForms ActiveCampaign functionality.<\/li>
Added: Compatibility with WPForms 1.6.8 and the updated Form Builder.<\/li>
Changed: Minimum WPForms version supported is 1.7.3.<\/li>
Changed: Replaced jQuery.isFunction()<\/code> (deprecated as of jQuery 3.3) usages with a recommended counterpart.<\/li>
Fixed: Properly handle the situation when trying to change the template for the same form multiple times.<\/li>
Fixed: Send to ActiveCampaign form submission data even when the \"Entry storage\" option is disabled in the Form Builder.<\/li>
Fixed: CSS improvements of Add New Event button when a connection with the \"Event Tracking\" action was created.<\/li><\/ul>","
1.2.1 (2020-08-05)<\/h4>
Fixed: API key\/token expiration detection missing.<\/li>
Fixed: Properly initialize a template inside the Form Builder addon configuration area for conditional logic.<\/li>
Fixed: Allow a simple format of the form \"Name\" field to correctly pass data to ActiveCampaign \"Full Name\" field.<\/li><\/ul>","
Changed: Improved various field labels\/descriptions inside the Form Builder.<\/li>
Fixed: Compatibility with WordPress Multisite when WPForms activated Network-wide, and addon - site-wide.<\/li>
Fixed: Do not send Event-related ActiveCampaign API requests when there are no required account ID and event key.<\/li><\/ul>","
1.1.0 (2020-03-16)<\/h4>
Added: Process smart tags inside notes to be recorded for each new\/updated contact.<\/li>
Changed:\u00a0Increase the number of records returned from ActiveCampaign API from 20 to 100.<\/li><\/ul>","
1.0.0 (2020-03-03)<\/h4>
Initial release.<\/li><\/ul>"],"required_versions":{"wp":"5.2","php":"5.6","wpforms":"1.7.5"},"form_builder":{"category":["providers"]},"settings_integrations":{"category":["email-marketing"],"featured":false},"recommended":false,"icon":"addon-icon-activecampaign.png"},"wpforms-authorize-net":{"title":"Authorize.Net Addon","slug":"wpforms-authorize-net","url":"https:\/\/wpforms.com\/addons\/authorize-net-addon\/","version":"1.8.0","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2020\/05\/icon-provider-authorize-net.png","excerpt":"The WPForms Authorize.Net addon allows you to connect your WordPress site with Authorize.Net to easily collect payments, donations, and online orders.","doc":"https:\/\/wpforms.com\/docs\/how-to-install-and-use-the-authorize-net-addon-with-wpforms\/","id":845517,"license":["elite","ultimate","agency"],"category":["payments"],"changelog":["
1.8.0 (2023-09-27)<\/h4>
IMPORTANT: Support for PHP 5.6 has been discontinued. If you are running PHP 5.6, you MUST upgrade PHP before installing WPForms Authorize.Net 1.8.0. Failure to do that will disable WPForms Authorize.Net functionality.<\/li>
IMPORTANT: Support for WordPress 5.4 and below has been discontinued. If you are running any of those outdated versions, you MUST upgrade WordPress before installing WPForms Authorize.Net 1.8.0. Failure to do that will disable WPForms Authorize.Net functionality.<\/li>
Changed: Minimum WPForms version supported is 1.8.4.<\/li><\/ul>","
1.7.0 (2023-08-08)<\/h4>
Changed: Minimum WPForms version supported is 1.8.3.<\/li>
Fixed: Card type for payment method was missing on the single Payment page.<\/li>
Fixed: Payment field was not displayed in the Elementor Builder.<\/li><\/ul>","
1.6.1 (2023-06-09)<\/h4>
Fixed: There were situations when PHP notices were generated on the Single Payment page.<\/li><\/ul>","
1.6.0 (2023-06-08)<\/h4>
Added: Compatibility with WPForms 1.8.2.<\/li>
Changed: Minimum WPForms version supported is 1.8.2.<\/li>
Fixed: Payment error was displayed too close to the Description field.<\/li>
Fixed: JavaScript error occurred when the user was asked to enter verification information for a payment form locked with the Form Locker addon.<\/li><\/ul>","
1.5.1 (2023-03-23)<\/h4>
Fixed: There was a styling conflict with PayPal Commerce field preview in the Form Builder.<\/li>
Fixed: Subfield validation error messages were overlapping each other in certain themes.<\/li><\/ul>","
1.5.0 (2023-03-21)<\/h4>
Added: Compatibility with the upcoming WPForms v1.8.1 release.<\/li>
Fixed: In some cases validation errors were not removed after correcting the values and submitting the form again.<\/li>
Fixed: Local validation error messages overlapped Authorize.Net API error messages.<\/li>
Fixed: Authorize.Net validation error codes are now displayed only in the console.<\/li>
Fixed: On multi-page forms it was possible to continue to the next page even if the field validation failed.<\/li>
Fixed: Expiration and Security Code subfields were too narrow in the Form Builder preview.<\/li><\/ul>","
1.4.0 (2022-10-05)<\/h4>
Changed: Show settings in the Form Builder only if they are enabled.<\/li>
Changed: On form preview, display an alert message with an error when payment configurations are missing.<\/li>
Changed: Minimum WPForms version supported is 1.7.5.5.<\/li>
Fixed: The form couldn't be submitted if several configured payment gateways were executed according to Conditional Logic.<\/li>
Fixed: Completed payment email notifications were sent for non-completed payments.<\/li><\/ul>","
1.3.0 (2022-06-28)<\/h4>
Changed: Minimum WPForms version supported is 1.7.5.<\/li>
Changed: Reorganized locations of 3rd party libraries.<\/li>
Fixed:\u00a0Compatibility with WordPress Multisite installations.<\/li><\/ul>","
1.2.0 (2022-03-16)<\/h4>
Added: Compatibility with WPForms 1.6.8 and the updated Form Builder.<\/li>
Added: Compatibility with WPForms 1.7.3 and Form Revisions.<\/li>
Changed: Updated Authorize.Net PHP SDK to v2.0.2 for PHP 7.4 and PHP 8 support.<\/li>
Changed: Minimum WPForms version supported is 1.6.7.1.<\/li><\/ul>","
1.1.0 (2021-03-31)<\/h4>
Added: Transaction-specific errors logging to make payment issues identification easier.<\/li>
Added: Account credentials validation on WPForms Payments settings page.<\/li>
Added: Optional address field mapping for Authorize.Net accounts requiring customer billing address.<\/li>
Added: Email Notifications option to limit to completed payments only.<\/li><\/ul>","
1.0.2 (2020-08-06)<\/h4>
Fixed: Card field can be mistakenly processed as hidden under some conditional logic configurations.<\/li><\/ul>","
1.0.1 (2020-08-05)<\/h4>
Fixed: Conditionally hidden Authorize.net field should not be processed on form submission.<\/li><\/ul>","
1.0.0 (2020-05-27)<\/h4>
Initial release.<\/li><\/ul>"],"required_versions":{"wp":"5.5","php":"7.0","wpforms":"1.8.4"},"form_builder":{"category":["payments"]},"settings_integrations":{"category":["payment"],"featured":false},"recommended":false,"icon":"addon-icon-authorize-net.png"},"wpforms-aweber":{"title":"AWeber Addon","slug":"wpforms-aweber","url":"https:\/\/wpforms.com\/addons\/aweber-addon\/","version":"2.0.1","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2016\/02\/addon-icon-aweber-1.png","excerpt":"The WPForms AWeber addon allows you to create AWeber newsletter signup forms in WordPress, so you can grow your email list. ","doc":"https:\/\/wpforms.com\/docs\/install-use-aweber-addon-wpforms\/","id":154,"license":["pro","plus","elite","ultimate","agency"],"category":["providers"],"changelog":["
2.0.1 (2023-08-08)<\/h4>
Changed: Minimum WPForms version supported is 1.8.3.<\/li>
Changed: It's no longer possible to create new AWeber (Legacy) OAuth1 account connections.<\/li>
Fixed: There is no more a fatal error when creating a connection in the Form Builder.<\/li><\/ul>","
2.0.0 (2023-07-06)<\/h4>
Added: Compatibility with AWeber OAuth2 authentication.<\/li>
Changed: Minimum WPForms version supported is 1.8.2.2.<\/li><\/ul>","
1.3.2 (2023-07-03)<\/h4>
Fixed: Compatibility with WPForms 1.8.2.2.<\/li><\/ul>","
1.3.1 (2022-08-31)<\/h4>
Fixed: PHP warnings were generated during attempts to add already connected accounts.<\/li>
Fixed: Addon used to generate errors when the cURL<\/code> PHP extension was not installed on a server.<\/li><\/ul>","
1.3.0 (2022-03-16)<\/h4>
Added: Compatibility with WPForms 1.6.8 and the updated Form Builder.<\/li>
Added: Complete translations for French and Portuguese (Brazilian).<\/li><\/ul>","
1.1.0 (2019-02-06)<\/h4>
Added: Complete translations for Spanish, Italian, Japanese, and German.<\/li>
Fixed: Typos, grammar, and other i18n related issues..<\/li><\/ul>","
1.0.7 (2018-03-15)<\/h4>
Fixed: Error when adding account from Settings > Integrations tab.<\/li><\/ul>","
1.0.6 (2017-03-09)<\/h4>
Changed: Adjust display order so that the providers show in alphabetical order<\/li><\/ul>","
1.0.5 (2017-02-09)<\/h4>
Added: Support for tagging<\/li><\/ul>","
1.0.4 (2016-07-07)<\/h4>
Changed: Improved error logging<\/li><\/ul>","
1.0.3 (2016-06-23)<\/h4>
Changed: Prevent plugin from running if WPForms Pro is not activated<\/li><\/ul>","
1.0.2 (2016-04-12)<\/h4>
Changed: Improved error logging<\/li><\/ul>","
1.0.1 (2016-04-06)<\/h4>
Fixed: Issue with Aweber submission failing without custom fields<\/li><\/ul>","
1.0.0 (2016-03-11)<\/h4>
Initial release.<\/li><\/ul>"],"required_versions":{"wp":"5.2","php":"5.6","wpforms":"1.8.3"},"form_builder":{"category":["providers"]},"settings_integrations":{"category":["email-marketing"],"featured":false},"recommended":false,"icon":"addon-icon-aweber.png"},"wpforms-sendinblue":{"title":"Brevo Addon","slug":"wpforms-sendinblue","url":"https:\/\/wpforms.com\/addons\/sendinblue-addon\/","version":"1.3.0","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2020\/12\/icon-brevo.png","excerpt":"The WPForms Brevo addon helps you organize your leads, automate your marketing, and engage your subscribers.","doc":"https:\/\/wpforms.com\/docs\/how-to-install-and-use-the-sendinblue-addon-with-wpforms\/","id":1126732,"license":["agency","elite","plus","pro","ultimate"],"category":["providers"],"changelog":["
1.3.0 (2023-10-24)<\/h4>
IMPORTANT: Updated logo and name to reflect the company's rebranding from Sendinblue to Brevo.<\/li>
IMPORTANT: Support for PHP 5.6 has been discontinued. If you are running PHP 5.6, you MUST upgrade PHP before installing WPForms Brevo 1.3.0. Failure to do that will disable WPForms Brevo functionality.<\/li>
IMPORTANT: Support for WordPress 5.4 and below has been discontinued. If you are running any of those outdated versions, you MUST upgrade WordPress before installing WPForms Brevo 1.3.0. Failure to do that will disable WPForms Brevo functionality.<\/li>
Changed: Minimum WPForms version supported is 1.8.4.<\/li>
Fixed: Account reconnection was required in WPForms when the first workflow was being setup.<\/li>
Fixed: Double opt-in was not working correctly when multiple connections were configured.<\/li><\/ul>","
1.2.0 (2022-10-03)<\/h4>
Added: The Name field now has 4 options: full, first, middle, and last.<\/li>
Fixed: An error occurred in the Form Builder when a user saved Sendinblue configurations with empty fields.<\/li><\/ul>","
1.1.0 (2022-08-31)<\/h4>
IMPORTANT: Support for WordPress 5.1 has been discontinued. If you are running WordPress 5.1, you MUST upgrade WordPress before installing the new WPForms Sendinblue. Failure to do that will disable the new WPForms Sendinblue functionality.<\/li>
Added: New feature: Double Opt-In.<\/li>
Added: Compatibility with WPForms 1.6.8 and the updated Form Builder.<\/li>
Added: Form data can now be filtered for the track event action using a new wpforms_sendinblue_provider_actions_trackeventaction_get_form_data<\/code> filter.<\/li>
Changed: Minimum WPForms version supported is 1.7.6.<\/li>
Changed: Errors logging was improved.<\/li>
Fixed: The addon was not properly handling the situation when a form template was changed multiple times for the same form.<\/li>
Fixed: Form submission was not sent to Sendinblue when the \"Entry storage\" option was disabled in the form settings.<\/li>
Fixed: List field did not refresh properly.<\/li>
Fixed: Form submission to Sendinblue was failing when non-required fields were empty.<\/li>
Fixed: Mapping a dropdown field to a Sendinblue contact attribute did not work.<\/li>
Fixed: A incorrect account was not properly processed.<\/li><\/ul>","
1.0.0 (2020-12-17)<\/h4>
Initial release.<\/li><\/ul>"],"required_versions":{"wp":"5.5","php":"7.0","wpforms":"1.8.4"},"form_builder":{"category":["providers"]},"settings_integrations":{"category":["email-marketing"],"featured":false},"recommended":false,"icon":"addon-icon-brevo.png"},"wpforms-calculations":{"title":"Calculations Addon","slug":"wpforms-calculations","url":"https:\/\/wpforms.com\/addons\/calculations-addon\/","version":"1.1.0","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2023\/10\/icon-calculations.png","excerpt":"WPForms Calculations addon lets you create custom calculations for shipping, quotes, lead magnets, and more. Get more sales and leads.","doc":"https:\/\/wpforms.com\/docs\/calculations-addon\/","id":2671501,"license":["elite","pro","ultimate","agency"],"category":["settings"],"changelog":["
1.1.0 (2024-01-24)<\/h4>
Added: Improved support and processing for the Single Item field.<\/li>
Added: Compatibility with the upcoming WPForms v1.8.7 release.<\/li>
Changed: Updated PHP-Parser library to 4.18.0.<\/li>
Changed: Updated Luxon library to 3.4.4.<\/li>
Fixed: It was impossible to edit a formula after duplicating the Layout Field with the Calculation Field inside.<\/li>
Fixed: In some cases, the calculated field values were inconsistent between displayed value on the front-end and saved value in the database.<\/li>
Fixed: Calculations were not using correct values when the option \"Show values\" for selectable fields was set.<\/li>
Fixed: Line breaks and other special characters were not preserved in the formula code and in the calculation result.<\/li>
Fixed: The formula validation returned the false positive result in some cases when the form was not saved before validation.<\/li>
Fixed: In some cases, incorrect calculation results were shown in the confirmation message.<\/li>
Fixed: The Validate Formula button AJAX calls failed on the servers that do not support $_SERVER[HTTP_REFERER]<\/code>.<\/li>
Fixed: \"Illegal numeric literal\" error appeared in the error.log when the field value was numeric and started with 0.<\/li>
Fixed: The Error Handler threw the invalid callback error in some rare cases.<\/li>
Fixed: Math functions were throwing a TypeError in some rare cases.<\/li><\/ul>","
1.0.0 (2023-10-24)<\/h4>
Added:\u00a0Initial release.<\/li><\/ul>"],"required_versions":{"wp":"5.5","php":"7.0","wpforms":"1.8.4"},"form_builder":{"category":["settings"]},"settings_integrations":{"category":[],"featured":false},"recommended":false,"icon":"addon-icon-calculations.png"},"wpforms-campaign-monitor":{"title":"Campaign Monitor Addon","slug":"wpforms-campaign-monitor","url":"https:\/\/wpforms.com\/addons\/campaign-monitor-addon\/","version":"1.2.3","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2016\/06\/addon-icon-campaign-monitor.png","excerpt":"The WPForms Campaign Monitor addon allows you to create Campaign Monitor newsletter signup forms in WordPress, so you can grow your email list. ","doc":"https:\/\/wpforms.com\/docs\/how-to-install-and-use-campaign-monitor-addon-with-wpforms\/","id":4918,"license":["ultimate","plus","agency","pro","elite"],"category":["providers"],"changelog":["
1.2.3 (2023-07-03)<\/h4>
Fixed: Compatibility with WPForms 1.8.2.2.<\/li><\/ul>","
1.2.2 (2022-08-30)<\/h4>
Fixed: PHP Notice was generated when the Full Name field wasn't mapped for a connection.<\/li>
Fixed: PHP Error was generated when the user tried to connect with empty or invalid API credentials.<\/li><\/ul>","
1.2.1 (2019-08-05)<\/h4>
Fixed: Forms field mapping with Checkbox field.<\/li><\/ul>","
1.2.0 (2019-07-23)<\/h4>
Added: Complete translations for French and Portuguese (Brazilian).<\/li><\/ul>","
1.1.0 (2019-02-06)<\/h4>
Added: Complete translations for Spanish, Italian, Japanese, and German.<\/li>
Fixed: Typos, grammar, and other i18n related issues..<\/li><\/ul>","
1.0.4 (2018-03-15)<\/h4>
Changed: Improved display order of account fields.<\/li>
Fixed: Error when adding account from Settings > Integrations tab.<\/li><\/ul>","
1.0.3 (2017-03-09)<\/h4>
Changed: Adjust display order so that the providers show in alphabetical order<\/li><\/ul>","
1.0.2<\/h4>
Fixed: Improved checking for other Campaign Monitor plugins to avoid conflicts<\/li><\/ul>","
1.0.1 (2016-07-07)<\/h4>
Changed: Improved error logging<\/li><\/ul>","
1.0.0 (2016-06-16)<\/h4>
Initial release.<\/li><\/ul>"],"required_versions":{"wp":"5.2","php":"5.6","wpforms":"1.7.3"},"form_builder":{"category":["providers"]},"settings_integrations":{"category":["email-marketing"],"featured":false},"recommended":false,"icon":"addon-icon-campaign-monitor.png"},"wpforms-conversational-forms":{"title":"Conversational Forms Addon","slug":"wpforms-conversational-forms","url":"https:\/\/wpforms.com\/addons\/conversational-forms-addon\/","version":"1.14.0","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2019\/02\/addon-conversational-forms.png","excerpt":"Want to improve your form completion rate? Conversational Forms addon by WPForms helps make your web forms feel more human, so you can improve your conversions. Interactive web forms made easy.","doc":"https:\/\/wpforms.com\/docs\/how-to-install-and-use-the-conversational-forms-addon\/","id":391235,"license":["ultimate","agency","pro","elite"],"category":["settings"],"changelog":["
1.14.0 (2024-02-20)<\/h4>
Added: Compatibility with the upcoming WPForms 1.8.7.<\/li>
Changed: The minimum WPForms version supported is 1.8.7.<\/li>
Fixed: The Form Builder settings screen had visual issues when an RTL language was used.<\/li><\/ul>","
1.13.0 (2024-01-09)<\/h4>
IMPORTANT: Support for PHP 5.6 has been discontinued. If you are running PHP 5.6, you MUST upgrade PHP before installing WPForms Conversational Forms 1.13.0. Failure to do that will disable WPForms Conversational Forms functionality.<\/li>
IMPORTANT: Support for WordPress 5.4 and below has been discontinued. If you are running any of those outdated versions, you MUST upgrade WordPress before installing WPForms Conversational Forms 1.13.0. Failure to do that will disable WPForms Conversational Forms functionality.<\/li>
Changed: Minimum WPForms version supported is 1.8.6.<\/li>
Fixed: Compatibility with the Popup Maker plugin.<\/li>
Fixed: Compatibility with Link by Stripe.- Improved Rich Text, File Upload and Number fields compatibility with color themes and dark mode.<\/li>
Fixed: In rare cases Turnstile Captcha was not displayed correctly when it expired and was refreshed.<\/li>
Fixed: Incorrect error text was displayed when uploading a file of an illegal format in the Form Builder.<\/li><\/ul>","
1.12.0 (2023-08-22)<\/h4>
Changed: Minimum WPForms version supported is 1.8.3.<\/li>
Fixed: Scrolling to the form error message was not working in some cases.<\/li>
Fixed: Some deprecation notices were generated with PHP 8.2.<\/li>
Fixed: Dropdowns on tablets and mobiles had 2 down-arrows.<\/li><\/ul>","
1.11.0 (2023-06-28)<\/h4>
Added: Compatibility with the WPForms Coupons addon.<\/li><\/ul>","
1.10.0 (2023-03-27)<\/h4>
Added: Compatibility with the upcoming WPForms v1.8.1 release.<\/li>
Changed: The Content field did not have instructions to proceed to the next step.<\/li>
Changed: Disable automatic scroll to the first field when the form description is larger than the viewport.<\/li>
Fixed: The Header Logo preview was not displayed in the Form Builder if the form contains any field with Image Choices turned on.<\/li>
Fixed: Fields with Icon or Image choices configured did not apply line breaks to manually forma<\/li>
Fixed: Appearance of Likert Scale and Net Promoter Score fields was improved in dark themes.<\/li><\/ul>","
1.9.0 (2023-01-03)<\/h4>
Added: Compatibility with Icon Choices feature for Checkboxes, Multiple Choice, Checkbox Items, and Multiple Items payment fields.<\/li>
Fixed: The AJAX spinner was not centered relative to the Submit button.<\/li>
Fixed: Improved compatibility with the Content field.<\/li>
Fixed: Next field wasn't focused after selecting the Likert Scale field option.<\/li>
Fixed: Checkboxes with image labels were displayed enormously big.<\/li>
Fixed: Incorrect positioning if form header contains long-form content.<\/li>
Fixed: Improved handling of line breaks to avoid text overflowing on small screens.<\/li><\/ul>","
1.8.0 (2022-10-06)<\/h4>
Changed: Do not allow enabling the Conversational Form mode when a Layout field was added to the list of form fields.<\/li><\/ul>","
1.7.1 (2022-08-31)<\/h4>
Fixed: Color Scheme setting value was missing from the color picker's input after the Form Builder page refresh.<\/li>
Fixed: Color picker caused a broken conversational form page.<\/li>
Fixed: Field after radio button or dropdown was not correctly selected.<\/li>
Fixed: Font size for Single Item and Total fields now matches other fields.<\/li>
Fixed: Certain buttons sometimes overlapped the Conversational Forms footer.<\/li>
Fixed: Color schemes compatibility with the Rich Text field was improved.<\/li>
Fixed: The {page_title}<\/code> smart tag was getting the incorrect title.<\/li>
Fixed: Incorrect information was displayed in conversational form social previews.<\/li><\/ul>","
1.7.0 (2022-06-28)<\/h4>
IMPORTANT: Support for PHP 5.5 has been discontinued. If you are running PHP 5.5, you MUST upgrade PHP before installing the new WPForms Conversational Forms. Failure to do that will disable the WPForms Conversational Forms plugin.<\/li>
Added: New filter wpforms_conversational_forms_frontend_handle_request_form_data<\/code> that can be used to improve multi-language support.<\/li>
Changed: Minimum WPForms version supported is 1.7.5.<\/li>
Changed: Reorganized locations of 3rd party libraries.<\/li>
Changed: Date field can be filled in when using the Date Picker with custom date formats.<\/li>
Fixed: Incorrect canonical and og:url<\/code> page meta values produced by the Yoast SEO plugin.<\/li>
Fixed: Users with editor permissions were unable to save Conversational Forms slugs.<\/li>
Fixed: Improved an error message color for the modern file upload field.<\/li>
Fixed: Missing styles for links added to the Conversational Message.<\/li>
Fixed: Conditional logic was processed incorrectly for the Multiple Dropdown field.<\/li>
Fixed: Correctly display a placeholder for the Modern Dropdown field in the Firefox browser.<\/li>
Fixed: Single Dropdown field didn't work on mobile devices.<\/li>
Fixed: Date\/Time field didn't support flatpickr's range<\/code> and multiple<\/code> modes.<\/li>
Fixed: Date\/Time field with 24h format for the timepicker wasn't working properly on mobile devices.<\/li>
Fixed: Form couldn't be submitted when a dropdown date option is selected for the required Date\/Time field and Conditional Logic applied to the field.<\/li>
Fixed: Opening a mobile device's keyboard for text fields removed focus from the field, which closed the keyboard.<\/li>
Fixed: Improved compatibility with Entry Preview and Rich Text fields.<\/li><\/ul>","
1.6.0 (2021-03-31)<\/h4>
Changed: Visual difference between radio and checkbox elements of Likert Scale field.<\/li>
Changed: \"Next\/Previous\" footer buttons are bigger for small screens.<\/li>
Changed: Radio inputs and select elements look more like traditional HTML elements on mobile.<\/li>
Changed: Improved styling for Authorize.Net and legacy Stripe CC fields on desktop and mobile.<\/li>
Changed: Disable the autogenerated og:description<\/code> meta tag in the Rank Math plugin.<\/li>
Changed: The Likert Scale field with a single response per row scrolls to the next row\/field on change.<\/li>
Changed: Radio\/Checkbox field items scroll into view while selecting with arrow keys.<\/li>
Changed: Form Locker UI enhancement when used in conversational mode.<\/li>
Fixed: Compatibility issue with Google v2 reCAPTCHA on certain mobile devices.<\/li>
Fixed: The nav_menu_item<\/code> post type is included in the pool when checking the Conversational Form page slug for uniqueness.<\/li>
Fixed: Textarea and page footer appearance in IE11.<\/li>
Fixed: blockquote<\/code>, ul<\/code>, ol<\/code> elements styling in a form description and a confirmation message.<\/li>
Fixed: Page footer logo appearance in portrait and landscape mobile layouts.<\/li>
Fixed: For the fields without a label, the number indicator is not shown.<\/li>
Fixed: The \"Hide label\" option is not processed for the fields.<\/li>
Fixed: Horizontal line before the Submit button.<\/li>
Fixed: If the checkbox has a label and no options, a long horizontal box appears.<\/li>
Fixed: The Number Slider field appearance.<\/li>
Fixed: Smart Phone field does not display a list of countries when clicking the flag.<\/li>
Fixed: TwentySeventeen and TwentyTwenty themes introduce style conflicts with Conversational Form pages.<\/li>
Fixed: Image Choices field scrolling position is set incorrectly in a rare combination of image\/screen size and field order.<\/li>
Fixed: Date dropdown field processing issue.<\/li>
Fixed: Focusing and positioning for Stripe CC field.<\/li>
Fixed: Dropdown focusing issue on iPhone X and iPhone SE.<\/li>
Fixed: A conditionally hidden field doesn't get focus if triggered to show by a Dropdown field.<\/li>
Fixed: Field sub-Labels do not hide when enabling the \"Hide Sub Label\" option in Advanced Field Settings.<\/li><\/ul>","
1.5.0 (2020-08-05)<\/h4>
Added: Show a notice if permalinks are not configured.<\/li>
Added: Easy access to Conversational Forms classes (e.g. wpforms_conversational_forms()->frontend<\/code>).<\/li>
Added: New wpforms_conversational_form_detected<\/code> hook right before Conversational Form hooks are registered.<\/li>
Changed: Page Title tag and meta tags always use Conversational Page Title if set.<\/li>
Changed: oEmbed links are now removed from Conversational Page HTML.<\/li>
Fixed: Occasionally the form scrolls past or does not activate the conditionally appearing field.<\/li><\/ul>","
1.4.0 (2020-01-09)<\/h4>
Added: Meta tag 'robots' with filterable noindex,nofollow<\/code> value for all Conversational Forms pages.<\/li>
Fixed: Mobile: Virtual keyboard appearing inconsistently while interacting with the fields which have the sub-fields.<\/li>
Fixed: Popup Maker popup content displays in Conversational Forms.<\/li><\/ul>","
1.3.1 (2019-11-07)<\/h4>
Added: Basic compatibility with WPForms modern file uploader.<\/li>
Fixed: \"error404\" body class may appear if the custom permalink structure is used.<\/li>
Fixed: Form preview buttons open two tabs in the Edge browser.<\/li>
Fixed: \"Cannot read property 'addEventListener' of null\" JS error on form creation.<\/li><\/ul>","
1.3.0 (2019-10-14)<\/h4>
Added: Hexcode option to color picker.<\/li><\/ul>","
1.2.0 (2019-07-24)<\/h4>
Added: \"Enter or down arrow to go to the next field\" message for HTML, Section Divider, Payment Single and Payment Total blocks.<\/li>
Changed: Dropdown appearance altered to better mimic a traditional select<\/code> element.<\/li>
Changed: Sublabel placed closer to the input area that it relates to for better visual perception.<\/li>
Changed: Dropdown chevron icon click and \"down arrow\" key open a list with all options visible.<\/li>
Changed: Form scrolls to selected subfields inside multi-input fields for both mobile and desktop.<\/li>
Changed: Active field is considered completed in footer progress bar calculation.<\/li>
Fixed: Image choices in a Conversational Form shows two-column layout no matter what the Choice Layout selection is.<\/li>
Fixed: Conditional logic not working properly on dropdown fields.<\/li>
Fixed: Conversational Forms didn't accept a correct credit card expiration date.<\/li>
Fixed: Inconsistent percentage progress bar behavior (negative values) with conditionally hidden form fields.<\/li>
Fixed: Conversational forms won't submit if the field is required and is hidden by conditional logic.<\/li>
Fixed: Multiline form description was displayed as a single line on the frontend.<\/li>
Fixed: Some themes override Conversational Form's templates.<\/li><\/ul>","
1.1.0 (2019-02-28)<\/h4>
Added: Left\/Right arrow navigation support for Checkboxes, Radios, Rating, NetPromoter fields.<\/li>
Added: Esc for unhighlighting an option previously highlighted by arrow keys in Checkboxes, Radios, Rating, NetPromoter fields.<\/li>
Added: Space for selecting (same as Enter) options in Checkboxes, Radios, Rating, NetPromoter fields.<\/li>
Added: Shift+Tab to go to a previous option\/subfield (same as Up Arrow).<\/li>
Changed: Shift+Enter to go to the next field for Checkboxes (Enter just selects\/unselects checkboxes now).<\/li>
Changed: Dropdowns (desktop version) no longer auto-open on focus.<\/li>
Changed: More consistent arrow logic for Checkbox and Radio based fields.<\/li>
Changed: Mobile-native dropdowns are used for mobile devices now.<\/li>
Changed: Layout is optimized to use screen space more effectively on smaller screens.<\/li>
Changed: Tweaked virtual keyboard interaction on mobile devices for better mobile UX.<\/li>
Changed: Mobile Textarea no longer has \"new line\" capability due to mobile UI restrictions.<\/li>
Changed: Tooltip messages in the admin area were made more explanatory.<\/li>
Changed: How mobile\/desktop browsers are detected (mobile-detect.js).<\/li>
Changed: Footer \"Up\/Down\" buttons iterate through subfields on multi-field inputs now instead of instantly skipping to the next field.<\/li>
Fixed: Form's last field (conditionally hidden) was getting focus when trying to go up from \"Submit\" block.<\/li>
Fixed: Rating had no multi-digit keys support (e.g. impossible to select 10).<\/li>
Fixed: \"Active\" key navigation star was the same color as the selected one in Rating field.<\/li>
Fixed: Header was overlapping form content on Firefox and Edge browsers.<\/li>
Fixed: Mobile field focusing issues.<\/li><\/ul>"],"required_versions":{"wp":"5.5","php":"7.0","wpforms":"1.8.7"},"form_builder":{"category":["settings"]},"settings_integrations":{"category":[],"featured":false},"recommended":false,"icon":"addon-icon-conversational-forms.png"},"wpforms-convertkit":{"title":"ConvertKit Addon","slug":"wpforms-convertkit","url":"https:\/\/wpforms.com\/addons\/convertkit-addon\/","version":"1.0.0","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2023\/12\/icon-provider-convertkit.png","excerpt":"The WPForms ConvertKit addon lets you collect subscribers to grow your mailing list, automate email marketing, and connect with your audience.","doc":"https:\/\/wpforms.com\/docs\/convertkit-addon\/","id":2744716,"license":["elite","pro","ultimate","plus","agency"],"category":["providers"],"changelog":["
1.0.0 (2023-12-13)<\/h4>
Added: Initial release.<\/li><\/ul>"],"required_versions":{"wp":"5.5","php":"7.4","wpforms":"1.8.5.3"},"form_builder":{"category":["providers"]},"settings_integrations":{"category":["email-marketing"],"featured":false},"recommended":false,"icon":"addon-icon-convertkit.png"},"wpforms-coupons":{"title":"Coupons Addon","slug":"wpforms-coupons","url":"https:\/\/wpforms.com\/addons\/coupons-addon\/","version":"1.2.0","image":"https:\/\/wpforms.com\/wp-content\/uploads\/2023\/06\/icon-coupons.png","excerpt":"The WPForms Coupons addon makes it easy to drive more sales by offering coupon discounts.","doc":"https:\/\/wpforms.com\/docs\/coupons-addon\/","id":2520337,"license":["elite","pro","ultimate","agency"],"category":["payments"],"changelog":["
1.2.0 (2024-02-20)<\/h4>
Added: Compatibility with WPForms 1.8.7.<\/li>
Changed: The minimum WPForms version supported is 1.8.7.<\/li>
Changed: Added the Coupon field to the allowed fields for use in the wpforms_get_form_fields()<\/code> function.<\/li>
Changed: Improve Coupons page display on mobile devices.<\/li>
Fixed: Space between Currency and Amount in Coupon field was removed.<\/li>
Fixed: Various issues in the user interface when an RTL language was used.<\/li><\/ul>","
1.1.0 (2023-09-26)<\/h4>
IMPORTANT: Support for PHP 5.6 has been discontinued. If you are running PHP 5.6, you MUST upgrade PHP before installing WPForms Coupons 1.1.0. Failure to do that will disable WPForms Coupons functionality.<\/li>
IMPORTANT: Support for WordPress 5.4 and below has been discontinued. If you are running any of those outdated versions, you MUST upgrade WordPress before installing WPForms Coupons 1.1.0. Failure to do that will disable WPForms Coupons functionality.<\/li>
Changed: Minimum WPForms version supported is 1.8.4.<\/li>
Changed: The Coupon field has an improved preview in the Form Builder.<\/li>
Changed: Front-end validation and the process of applying a coupon has a better UX.<\/li>
Fixed: Smart Logic was not working when the Coupon field value was used to show\/hide other fields.<\/li>
Fixed: Show an Allowed Forms error notice if there is no enabled form, only if the coupon is already created.<\/li>
Fixed: The coupon was not applied when the maximum limit was reached and then increased again.<\/li>
Fixed: The wpforms_coupons_admin_coupons_edit_date_format<\/code> filter was not changing the date format on the Edit Coupon page.<\/li>
Fixed: The Coupon field became unavailable in the Form Builder if drag-n-drop action started and stopped.<\/li><\/ul>","