Initial commit - exported from wordpress
content/post/2023-05-21-hilger-grading-portal.md
---
author: mikeconrad
categories:
- Software Engineering
date: "2023-05-21T16:07:50Z"
image: /wp-content/uploads/2024/03/hilger-portal-home.webp
tags:
- Portfolio
title: Hilger Grading Portal
---
Back around 2014 I took on my first freelance development project for a homeschool co-op here in Chattanooga called [Hilger Higher Learning](https://www.hhlearning.com/). The problem they were trying to solve involved managing grades and report cards for their students. A developer had previously built them a rudimentary web application for entering student grades, but it lacked any concurrency control, meaning that if two teachers were making changes to the same student at the same time, teacher B’s changes would overwrite teacher A’s. This was obviously a huge headache.

I built the first version of the app using PHP, HTML, CSS, and DataTables, with lots of jQuery sprinkled in. I built in custom functionality that allowed them to easily compile and print the report cards for all students with the simple click of a button. It was a game changer for them and streamlined the process significantly.

That system was in production for five years or so with minimal updates and maintenance. I recently rebuilt it using React and ChakraUI on the frontend and [KeystoneJS](https://keystonejs.com/) on the backend. I also modernized the deployment by building Docker images for the frontend and backend. I ended up keeping parts of it in PHP because I couldn’t find a JavaScript library that would solve the challenges I had. Here are some screenshots of it in action:

This is the page listing all teachers in the system and whether or not they have admin privileges. Any admin user can grant this privilege to other users. There is also a button to send the teacher a password reset email (via a Postmark API integration) and an option that allows admin users to impersonate other users for troubleshooting and diagnostic purposes.
<figure class="wp-block-image size-large"></figure>

The data is all coming from the KeystoneJS backend GraphQL API. I am using [urql](https://www.npmjs.com/package/urql) for fetching the data and handling mutations. This is the page that displays students. It is filterable and searchable. Teachers also have the ability to mark a student as active or inactive for the semester, as well as delete them from the system.

<figure class="wp-block-image size-large"></figure>

Clicking on a student takes the teacher/admin to an edit course screen where they can add and remove courses for that student. A teacher can add as many courses as they need. If multiple teachers have added courses for a student, each user will only see the courses they have entered.

<figure class="wp-block-image size-large"></figure>

There is another page that allows admin users to view and manage all of the parents in the system. It allows them to easily send a password reset email to the parents as well as to view the parent portal.

<figure class="wp-block-image size-large"></figure>

---

## Technologies used

- Digital Ocean Droplet (server) – Ubuntu Server
- Docker (frontend, backend, PHP, PostgreSQL database)
- Git
- NodeJS
- PHP
- ChakraUI
- KeystoneJS
- Postmark
- GraphQL
- TypeScript
- React
- urql
content/post/2023-07-12-hoots-wings.md
---
author: mikeconrad
categories:
- Software Engineering
date: "2023-07-12T11:32:44Z"
image: /wp-content/uploads/2024/03/hoots-locations-min.webp
tags:
- Portfolio
title: Hoots Wings
---
While working for [Morrison](https://morrison.agency/) I had the pleasure of building a website for [Hoots Wings](https://hootswings.com). The CMS was [Perch](https://grabaperch.com/) and the frontend was mostly HTML, CSS, PHP, and JavaScript; however, I built out a custom store locator using NodeJS and VueJS.

<figure class="wp-block-image size-full"></figure>

I was the sole frontend developer responsible for taking the designs from SketchUp and translating them into the site you see now. Most of the blocks and templates are built using a mix of PHP and HTML/SCSS. There was also some JavaScript for things like getting the user’s location and rendering popups/modals.

The store locator was a separate piece built in Vue 2.0 with a NodeJS backend. For the backend I used [KeystoneJS](https://keystonejs.com/) to hold all of the store information. There was also some custom development done in order to sync the stores added via the CMS with [Yext](https://www.yext.com/) and vice versa.

<figure class="wp-block-image size-large"></figure>

For that piece I ended up having to write a custom integration in Perch that would connect to the NodeJS backend, pull the stores, and also make sure those stayed in sync with Yext. This required diving into the Yext API some and examining a similar integration we had for another client site.

Unfortunately I don’t have any screen grabs of the admin side of things since that is proprietary, but the system I built allowed a site admin to go in and add/edit store locations that would show up on the site and also in Yext with the appropriate information.

#### Screenshots

Here are some full screenshots of the site.

Homepage

<figure class="wp-block-image size-large"></figure>

Menu Page

<figure class="wp-block-image size-large"></figure>

Locations Page

<figure class="wp-block-image size-large"></figure>
---
author: mikeconrad
categories:
- Software Engineering
date: "2024-01-03T19:59:49Z"
tags:
- Blog Post
- GraphQL
- KeystoneJS
- NodeJS
- Prisma
- TypeScript
title: Roll your own authenticator app with KeystoneJS and React
---
In this series of articles we are going to build an authenticator app using KeystoneJS for the backend and React for the frontend. The concept is pretty simple and, yes, there are a bunch of these apps out there already, but I recently had a need to learn some of the ins and outs of TOTP tokens and thought this project would be a fun idea. Let’s get started.

##### Step 1: Init Keystone app

Open up a terminal and create a blank Keystone project. We are going to call our app authenticator to keep things simple.
```
$ yarn create keystone-app
yarn create v1.22.21
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...

success Installed "create-keystone-app@9.0.1" with binaries:
      - create-keystone-app
[###################################################################################################################################################################################] 273/273
✨ You're about to generate a project using Keystone 6 packages.

✔ What directory should create-keystone-app generate your app into? · authenticator

⠸ Installing dependencies with yarn. This may take a few minutes.
⚠ Failed to install with yarn.
✔ Installed dependencies with npm.

🎉 Keystone created a starter project in: authenticator

To launch your app, run:

- cd authenticator
- npm run dev

Next steps:

- Read authenticator/README.md for additional getting started details.
- Edit authenticator/keystone.ts to customize your app.
- Open the Admin UI
- Open the Graphql API
- Read the docs
- Star Keystone on GitHub

Done in 84.06s.
```

After a few minutes you should be ready to go. Ignore the warning about yarn failing to install dependencies; it’s an issue with my setup. Next, go ahead and open up the project folder with your editor of choice. I use VSCodium:

```
codium authenticator
```
Let’s go ahead and remove all the comments from the `schema.ts` file and clean it up some:

```
sed -i '/\/\//d' schema.ts
```
Also, go ahead and delete the Post and Tag lists as we won’t be using them. Our cleaned up `schema.ts` should look like this:

```
// schema.ts
import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';

import {
  text,
  relationship,
  password,
  timestamp,
  select,
} from '@keystone-6/core/fields';

import type { Lists } from '.keystone/types';

export const lists: Lists = {
  User: list({
    access: allowAll,

    fields: {
      name: text({ validation: { isRequired: true } }),
      email: text({
        validation: { isRequired: true },
        isIndexed: 'unique',
      }),
      password: password({ validation: { isRequired: true } }),
      createdAt: timestamp({
        defaultValue: { kind: 'now' },
      }),
    },
  }),
};
```
Next we will define the schema for our tokens. We will need three basic fields to start with:

- Issuer
- Secret Key
- Account

The only field that actually matters for generating a TOTP is the secret key; the other two are mostly for identifying and differentiating tokens. Go ahead and add the following to our `schema.ts` underneath the User list:

```
  Token: list({
    access: allowAll,
    fields: {
      secretKey: text({ validation: { isRequired: true } }),
      issuer: text({ validation: { isRequired: true } }),
      account: text({ validation: { isRequired: true } })
    }
  }),
```
Now that we have defined our Token, we should probably link it to a user. KeystoneJS makes this really easy. We simply need to add a relationship field to our User list:

```
      tokens: relationship({ ref: 'Token', many: true })
```

We are defining a tokens field on the User list and tying it to our Token list. We are also passing `many: true`, saying that a user can have one or more tokens. Now that we have the basics set up, let’s go ahead and spin up our app and see what we have:
```
$ yarn dev
yarn run v1.22.21
$ keystone dev
✨ Starting Keystone
⭐️ Server listening on :3000 (http://localhost:3000/)
⭐️ GraphQL API available at /api/graphql
✨ Generating GraphQL and Prisma schemas
✨ The database is already in sync with the Prisma schema
✨ Connecting to the database
✨ Creating server
✅ GraphQL API ready
✨ Generating Admin UI code
✨ Preparing Admin UI app
✅ Admin UI ready
```
Our server should be running on localhost:3000, so let’s check it out! The first time we open it up we will be greeted with the initialization screen. Go ahead and create an account to log in:

<figure class="wp-block-image size-full"></figure>

Once you log in you should see a dashboard similar to this:

<figure class="wp-block-image size-large"></figure>

You can see we have Users and Tokens that we can manage. The beauty of KeystoneJS is that you get full CRUD functionality out of the box just by defining our schema! Go ahead and click on Tokens to add a token:

<figure class="wp-block-image size-large"></figure>

For this example I just entered some random text. This is enough to start testing out our TOTP functionality. Click ‘Create Token’ and you should see a list displaying existing tokens:

<figure class="wp-block-image size-large"></figure>

We are now ready to jump into the frontend. Stay tuned for part 2 of this series.
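As a quick preview of how the frontend will talk to this API, here is a hedged sketch (not from the original post) of querying our Token list through the GraphQL API that Keystone generates from the schema above. The field names follow the Token list we just defined; the endpoint is Keystone’s default `/api/graphql` from the `yarn dev` output, and `fetchTokens` is a hypothetical helper name:

```typescript
// Hypothetical helper: query the Token list from Keystone's generated
// GraphQL API. Field names come from the Token schema defined above;
// the endpoint assumes the dev server from `yarn dev` on port 3000.
const TOKENS_QUERY = `
  query {
    tokens {
      id
      issuer
      account
      secretKey
    }
  }
`;

async function fetchTokens(endpoint = 'http://localhost:3000/api/graphql') {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: TOKENS_QUERY }),
  });
  const { data } = await res.json();
  return data.tokens;
}
```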
---
author: mikeconrad
categories:
- Software Engineering
date: "2024-01-10T20:41:00Z"
guid: https://hackanooga.com/?p=539
id: 539
tags:
- Blog Post
- GraphQL
- KeystoneJS
- NodeJS
- Prisma
- TypeScript
title: Roll your own authenticator app with KeystoneJS and React - pt 2
url: /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-2/
---
In part 1 of this series we built out a basic backend using KeystoneJS. In this part we will start a new React frontend that will interact with our backend. We will be using Vite. Let’s get started. Make sure you are in the `authenticator` folder and run the following:

```
$ yarn create vite@latest
yarn create v1.22.21
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...

success Installed "create-vite@5.2.2" with binaries:
      - create-vite
      - cva
✔ Project name: … frontend
✔ Select a framework: › React
✔ Select a variant: › TypeScript

Scaffolding project in /home/mikeconrad/projects/authenticator/frontend...

Done. Now run:

  cd frontend
  yarn
  yarn dev

Done in 10.20s.
```
Let’s go ahead and go into our frontend directory and get started:

```
$ cd frontend
$ yarn
yarn install v1.22.21
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
Done in 10.21s.

$ yarn dev
yarn run v1.22.21
$ vite
Port 5173 is in use, trying another one...

  VITE v5.1.6  ready in 218 ms

  ➜  Local:   http://localhost:5174/
  ➜  Network: use --host to expose
  ➜  press h + enter to show help
```
Next go ahead and open the project up in your IDE of choice. I prefer [VSCodium](https://vscodium.com/):

```
codium frontend
```

Go ahead and open up `src/App.tsx` and remove all the boilerplate so it looks like this:

```
import './App.css'

function App() {
  return (
    <>
    </>
  )
}

export default App
```
Let’s start by building a card component that will display an individual token. Our goal is something that looks like this:

<figure class="wp-block-image size-full"></figure>

We will start by creating a Components folder with a Card component:

```
$ mkdir src/Components
$ touch src/Components/Card.tsx
```
Let’s go ahead and make a couple of updates: we will create this simple card component, add some dummy tokens, and add some basic styling.

```
// src/App.tsx

import './App.css'
import Card from './Components/Card';
export interface IToken {
  account: string;
  issuer: string;
  token: string;
}
function App() {
  const tokens: IToken[] = [
    {
      account: 'enxoco@github.com',
      issuer: 'Github',
      token: 'AJFDLDAJKFK'
    },
    {
      account: 'mikeconrad@example.com',
      issuer: 'Example.com',
      token: 'KAJLFDJLKAFD'
    }
  ]
  return (
    <>
      <div className='cardWrapper'>
        {tokens.map(token => <Card key={token.token} token={token} />)}
      </div>
    </>
  )
}

export default App
```
```
// src/Components/Card.tsx
import { IToken } from "../App"

function Card({ token }: { token: IToken }) {
  return (
    <>
      <div className='card'>
        <span>{token.issuer}</span>
        <span>{token.account}</span>
        <span>{token.token}</span>
      </div>
    </>
  )
}
export default Card
```
```
/* src/index.css */
:root {
  font-family: Inter, system-ui, Avenir, Helvetica, Arial, sans-serif;
  line-height: 1.5;
  font-weight: 400;

  color-scheme: light dark;
  color: rgba(255, 255, 255, 0.87);

  font-synthesis: none;
  text-rendering: optimizeLegibility;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

a {
  font-weight: 500;
  color: #646cff;
  text-decoration: inherit;
}
a:hover {
  color: #535bf2;
}

body {
  margin: 0;
  display: flex;
  place-items: center;
  min-width: 320px;
  min-height: 100vh;
  background-color: #2c2c2c;
}

.cardWrapper {
  display: flex;
}

.card {
  padding: 2em;
  min-width: 250px;
  border: 1px solid;
  margin: 10px;
  background-color: #333333;
  display: flex;
  flex-direction: column;
  align-items: baseline;
}
```
Now you should have something that looks like this:

<figure class="wp-block-image size-full"></figure>

Alright, we have some of the boring stuff out of the way; now let’s start making some magic. If you aren’t familiar with how TOTP tokens work, there is an algorithm that generates them, and I would encourage you to read the [RFC](https://datatracker.ietf.org/doc/html/rfc6238) for a detailed explanation. In short, the algorithm generates a one time password using the current time as a source of uniqueness along with the secret key.
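To make the RFC concrete before we reach for a library, here is a minimal sketch of the TOTP computation using Node’s built-in `crypto` module. This is illustrative only and not the code our app will use; it also takes a raw-byte secret, whereas real authenticator apps usually exchange base32-encoded secrets:

```typescript
import { createHmac } from 'crypto';

// Minimal RFC 6238 TOTP sketch (HMAC-SHA-1, 30 second period).
// `secret` is raw bytes here; real apps usually decode a base32 secret first.
function totp(secret: Buffer, epochSeconds: number, digits = 6, period = 30): string {
  // 8-byte big-endian counter = floor(time / period)
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(epochSeconds / period)));
  const hmac = createHmac('sha1', secret).update(counter).digest();
  // Dynamic truncation (RFC 4226 section 5.3)
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = hmac.readUInt32BE(offset) & 0x7fffffff;
  return String(code % 10 ** digits).padStart(digits, '0');
}

// RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890" at T=59
console.log(totp(Buffer.from('12345678901234567890'), 59, 8)); // → 94287082
```

The same two inputs (secret and time) always produce the same code, which is why your phone and the server can agree on an OTP without ever talking to each other.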
If we really wanted to we could implement this algorithm ourselves, but thankfully there are some really simple libraries that do it for us. For our project we will be using one called [totp-generator](https://github.com/bellstrand/totp-generator). Let’s go ahead and install it and check it out:
```
$ yarn add totp-generator
```
Now let’s add it to our card component and see what happens. Using it is really simple: we just need to import it and call `TOTP.generate` with our secret key:

```
// src/Components/Card.tsx
import { TOTP } from 'totp-generator';
// ...
function Card({ token }: { token: IToken }) {
  const { otp, expires } = TOTP.generate(token.token)
  return (
    <>
      <div className='card'>
        <span>{token.issuer}</span>
        <span>{token.account}</span>
        <span>{otp} - {expires}</span>
      </div>
    </>
  )
}
```
Now save, go back to your browser, and you should see that our secret keys are now being displayed as tokens:

<figure class="wp-block-image size-full"></figure>

That is pretty cool; the only problem is that you need to refresh the page to refresh the token. We will take care of that in part 3 of this series, as well as handle fetching tokens from our backend.
---
author: mikeconrad
categories:
- Software Engineering
date: "2024-01-17T12:11:00Z"
enclosure:
- |
  https://hackanooga.com/wp-content/uploads/2024/03/otp-countdown.mp4
  29730
  video/mp4
guid: https://hackanooga.com/?p=546
id: 546
tags:
- Authentication
- Blog Post
- React
- TypeScript
title: Roll your own authenticator app with KeystoneJS and React - pt 3
url: /roll-your-own-authenticator-app-with-keystonejs-and-react-pt-3/
---
In our previous post we got to the point of displaying an OTP in our card component. Now it is time to refactor a bit and implement a countdown so we can see when the token will expire. For now we will add this logic to our Card component. In order to build this countdown timer, we first need to understand how the TOTP counter is calculated.

In other words, we know that a TOTP token is derived from a secret key and the current time. If we dig into the spec some, we find that "time" refers to Unix epoch time: the number of seconds that have elapsed since January 1st, 1970. For a little more clarification check out this Stack Exchange [article](https://crypto.stackexchange.com/questions/72558/what-time-is-used-in-a-totp-counter).

We also need to know that most TOTP tokens have a validity period of either 30 or 60 seconds. 30 seconds is the most common standard, so we will use that for our implementation. If we put all that together, then all we need are two values:

1. The number of seconds since the epoch
2. How many seconds until this token expires

The first one is easy:
```
let secondsSinceEpoch;
secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;

// This gives us a time like: 1710338609
```
For the second one we need to do a little math, but it’s pretty straightforward: we take `secondsSinceEpoch` modulo 30 and subtract the remainder from 30. Here is what that looks like:

```
let secondsSinceEpoch;
let secondsRemaining;
const period = 30;

secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
secondsRemaining = period - (secondsSinceEpoch % period);
```
Now let’s put all of that together into a function that we can test out to make sure we are getting the results we expect:

```
const timer = setInterval(() => {
  countdown()
}, 1000)

function countdown() {
  let secondsSinceEpoch;
  let secondsRemaining;

  const period = 30;
  secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;

  secondsRemaining = period - (secondsSinceEpoch % period);
  console.log(secondsSinceEpoch, secondsRemaining)
  if (secondsRemaining == 1) {
    console.log("timer done")
    clearInterval(timer)
  }
}
```
Running this function should give you output similar to the following. In this example we stop the timer once it hits 1 second just to show that everything is working as we expect; in our application we will want this timer to keep going forever:

```
1710339348, 12
1710339349, 11
1710339350, 10
1710339351, 9
1710339352, 8
1710339353, 7
1710339354, 6
1710339355, 5
1710339356, 4
1710339357, 3
1710339358, 2
1710339359, 1
"timer done"
```

Here is a JSFiddle that shows it in action: https://jsfiddle.net/561vg3k7/
We can go ahead and add this function to our Card component and get it wired up. I am going to skip ahead a bit and add a progress bar to our card that is synced with our countdown timer and changes color as it drops below 10 seconds. For now we will use a `setInterval` call to accomplish this.

Here is what my updated `src/Components/Card.tsx` looks like:
```
import { useState } from "react";
import { IToken } from "../App"
import { TOTP } from 'totp-generator';

function Card({ token }: { token: IToken }) {
  const { otp } = TOTP.generate(token.token);
  const [timerStyle, setTimerStyle] = useState("");
  const [timerWidth, setTimerWidth] = useState("");

  function countdown() {
    let secondsSinceEpoch: number;
    let secondsRemaining: number = 30;
    const period = 30;
    secondsSinceEpoch = Math.ceil(Date.now() / 1000) - 1;
    secondsRemaining = period - (secondsSinceEpoch % period);
    setTimerWidth(`${100 - (100 / 30 * (30 - secondsRemaining))}%`)
    setTimerStyle(secondsRemaining < 10 ? "salmon" : "lightgreen")
  }
  setInterval(() => {
    countdown();
  }, 250);
  return (
    <>
      <div className='card'>
        <div className='progressBar' style={{ width: timerWidth, backgroundColor: timerStyle }}></div>
        <span>{token.issuer}</span>
        <span>{token.account}</span>
        <span>{otp}</span>
      </div>
    </>
  )
}
export default Card
```
Pretty straightforward. I also updated my `src/index.css` and added a style for our progress bar:

```
.progressBar {
  height: 10px;
  position: absolute;
  top: 0;
  left: 0;
  right: inherit;
}
/* Also be sure to add position: relative to .card, which is the parent of this. */
```
Here is what it all looks like in action:

<figure class="wp-block-video"><video controls="" src="https://hackanooga.com/wp-content/uploads/2024/03/otp-countdown.mp4"></video></figure>

If you look closely you will notice a few interesting things. First, the color of the progress bar changes from green to red. This is handled by our `timerStyle` variable. That part is pretty simple: if the timer is under 10 seconds we set the background color to salmon, otherwise we use light green. The width of the progress bar is controlled by `${100 - (100 / 30 * (30 - secondsRemaining))}%`.

The other interesting thing to note is that when the timer runs out, it automatically restarts at 30 seconds with a new OTP. This is because the component re-renders every quarter second, and every render runs the entire function body, including `const { otp } = TOTP.generate(token.token);`.

This is to be expected since we are using React with a bare `setInterval`. It may be a little unexpected if you aren’t as familiar with React render cycles, but for our purposes this will work just fine for now. Stay tuned for part 4 of this series, where we wire up the backend API.
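As an aside (my suggestion, not part of the original implementation), the timer math can be pulled into a pure helper, which makes it easy to test and leaves the React wiring as a single `useEffect` interval with cleanup. The `timerState` name and the hook sketch in the comment are assumptions:

```typescript
// Hypothetical pure helper: given seconds since epoch, return the progress
// bar width and color used above (30s period, salmon under 10s remaining).
// The width formula is algebraically the same as the one in Card.tsx:
// 100 - (100 / 30 * (30 - secondsRemaining)) === (secondsRemaining / 30) * 100
function timerState(secondsSinceEpoch: number, period = 30) {
  const secondsRemaining = period - (secondsSinceEpoch % period);
  return {
    width: `${(secondsRemaining / period) * 100}%`,
    color: secondsRemaining < 10 ? 'salmon' : 'lightgreen',
  };
}

// Sketch of driving it from useEffect so each mounted Card owns exactly one
// interval and clears it on unmount (setter names assumed from the post):
//
// useEffect(() => {
//   const id = setInterval(() => {
//     const { width, color } = timerState(Math.ceil(Date.now() / 1000) - 1);
//     setTimerWidth(width);
//     setTimerStyle(color);
//   }, 250);
//   return () => clearInterval(id);
// }, []);
```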
---
author: mikeconrad
categories:
- Cloudflare
- Docker
- Self Hosted
- Traefik
dark_fusion_page_sidebar:
- sidebar-1
dark_fusion_site_layout:
- ""
date: "2024-02-01T14:35:00Z"
guid: https://wordpress.hackanooga.com/?p=422
id: 422
tags:
- Blog Post
title: Traefik with Let’s Encrypt and Cloudflare (pt 1)
url: /traefik-with-lets-encrypt-and-cloudflare-pt-1/
---
Recently I decided to rebuild one of my homelab servers. Previously I was using Nginx as my reverse proxy, but I decided to switch to Traefik since I have been using it professionally for some time now. One of the reasons I like Traefik is that it is stupid simple to set up certificates, and when I am using it with Docker I don’t have to worry about a bunch of configuration files. If you aren’t familiar with how Traefik works with Docker, here is a brief example of a `docker-compose.yaml`:

```
version: '3'

services:
  reverse-proxy:
    # The official v2 Traefik docker image
    image: traefik:v2.11
    # Enables the web UI and tells Traefik to listen to docker
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Set up LetsEncrypt
      - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
      - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
      - --certificatesresolvers.letsencrypt.acme.email=user@example.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      # Redirect all http requests to https
      - --entryPoints.web.http.redirections.entryPoint.to=websecure
      - --entryPoints.web.http.redirections.entryPoint.scheme=https
      - --entryPoints.web.http.redirections.entrypoint.permanent=true
      - --log=true
      - --log.level=INFO
    # Needed to request certs via lets encrypt
    environment:
      - CF_DNS_API_TOKEN=[redacted]
    ports:
      # The HTTP port
      - "80:80"
      - "443:443"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Used for storing letsencrypt certificates
      - ./letsencrypt:/letsencrypt
      - ./volumes/traefik/logs:/logs
    networks:
      - traefik
  ots:
    image: luzifer/ots
    container_name: ots
    restart: always
    environment:
      REDIS_URL: redis://redis:6379/0
      SECRET_EXPIRY: "604800"
      STORAGE_TYPE: redis
    depends_on:
      - redis
    labels:
      - traefik.enable=true
      - traefik.http.routers.ots.rule=Host(`ots.example.com`)
      - traefik.http.routers.ots.entrypoints=websecure
      - traefik.http.routers.ots.tls=true
      - traefik.http.routers.ots.tls.certresolver=letsencrypt
      - traefik.http.services.ots.loadbalancer.server.port=3000
    networks:
      - traefik
  redis:
    image: redis:alpine
    restart: always
    volumes:
      - ./redis-data:/data
    networks:
      - traefik
networks:
  traefik:
    external: true
```
In part one of this series I will be going over some of the basics of Traefik and how dynamic routing works. If you want to skip to the good stuff and get everything configured with Cloudflare, you can skip to [part 2](https://wordpress.hackanooga.com/traefik-with-lets-encrypt-and-cloudflare-pt-2/).

This example sets up the primary Traefik container, which acts as the ingress controller, as well as a handy One Time Secret sharing [service](https://github.com/Luzifer/ots) I use. Traefik handles routing in Docker via labels. For this to work properly, the services that Traefik is trying to route to all need to be on the same Docker network. For this example we created a network called traefik by running the following:

```
docker network create traefik
```

Let’s take a look at the labels we applied to the `ots` container a little closer:
```
labels:
  - traefik.enable=true
  - traefik.http.routers.ots.rule=Host(`ots.example.com`)
  - traefik.http.routers.ots.entrypoints=websecure
  - traefik.http.routers.ots.tls=true
  - traefik.http.routers.ots.tls.certresolver=letsencrypt
  - traefik.http.services.ots.loadbalancer.server.port=3000
```
|
||||
|
||||
`traefik.enable=true` – This should be pretty self explanatory but it tells Traefik that we want it to know about this service.
|
||||
|
||||
`traefik.http.routers.ots.rule=Host('ots.example.com') - This is where some of the magic comes in.` Here we are defining a router called `ots`. The name is arbitrary in that it doesn’t have to match the name of the service but for our example it does. There are many rules that you can specify but the easiest for this example is host. Basically we are saying that any request coming in for ots.example.com should be picked up by this router. You can find more options for routers in the Traefik [docs](https://doc.traefik.io/traefik/routing/routers/).
- `traefik.http.routers.ots.entrypoints=websecure`
- `traefik.http.routers.ots.tls=true`
- `traefik.http.routers.ots.tls.certresolver=letsencrypt`

We are using these three labels to tell our router that we want it to use the `websecure` entrypoint and that it should use the `letsencrypt` certresolver to grab its certificates. `websecure` is an arbitrary name that we assigned to our :443 interface. There are multiple ways to configure this; I chose to use the CLI format in my Traefik config:

```
command:
  - --api.insecure=true
  - --providers.docker=true
  # Our entrypoint names are arbitrary but these are convention.
  # The important part is the port binding that we associate.
  - --entrypoints.web.address=:80
  - --entrypoints.websecure.address=:443
```

This last label is optional depending on your setup, but it is important to understand, as the documentation is a little fuzzy.
- `traefik.http.services.ots.loadbalancer.server.port=3000`

Here’s how it works: suppose you have a container that exposes multiple ports. Maybe one of those is a web UI and another is something that you don’t want exposed. By default Traefik will try to guess which port to route requests to. My understanding is that it will use the first exposed port. However, you can override this behavior with the label above, which tells Traefik specifically which port inside the container you want to route to.
The service name is derived automatically from the definition in the docker compose file:

```
ots: # This will become the service name
  image: luzifer/ots
  container_name: ots
```
---
author: mikeconrad
categories:
- Automation
- Cloudflare
- Docker
- Traefik
dark_fusion_page_sidebar:
- sidebar-1
dark_fusion_site_layout:
- ""
date: "2024-02-15T15:19:12Z"
guid: https://wordpress.hackanooga.com/?p=425
id: 425
tags:
- Blog Post
title: Traefik with Let’s Encrypt and Cloudflare (pt 2)
url: /traefik-with-lets-encrypt-and-cloudflare-pt-2/
---

In this article we are going to get into setting up Traefik to request dynamic certs from Let’s Encrypt. I had a few issues getting this up and running, and the documentation is a little fuzzy. In my case I decided to go the DNS challenge route. Really the only reason I went with this option is that I was having issues with the TLS and HTTP challenges. As it turns out, my issues didn’t have as much to do with my configuration as they did with my router.
Sometime in the past I had set up some special rules on my router to force all clients on my network to send DNS requests through a self hosted DNS server. I did this to keep some of my “smart” devices from misbehaving by blocking their access to the outside world. As it turns out, some devices will ignore the DNS servers that you hand out via DHCP and will use their own instead. That is, of course, unless you force DNS redirection, but that is another post for another day.

Let’s revisit our current configuration:
```
version: '3'

services:
  reverse-proxy:
    # The official v2 Traefik docker image
    image: traefik:v2.11
    # Enables the web UI and tells Traefik to listen to docker
    command:
      - --api.insecure=true
      - --providers.docker=true
      - --providers.file.filename=/config.yml
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Set up LetsEncrypt
      - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
      - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
      - --certificatesresolvers.letsencrypt.acme.email=mikeconrad@onmail.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --entryPoints.web.http.redirections.entryPoint.to=websecure
      - --entryPoints.web.http.redirections.entryPoint.scheme=https
      - --entryPoints.web.http.redirections.entrypoint.permanent=true
      - --log=true
      - --log.level=INFO
      # - '--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory'
    environment:
      - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
    ports:
      # The HTTP port
      - "80:80"
      - "443:443"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
      - ./volumes/traefik/logs:/logs
      - ./traefik/config.yml:/config.yml:ro
    networks:
      - traefik
  ots:
    image: luzifer/ots
    container_name: ots
    restart: always
    environment:
      # Optional, see "Customization" in README
      #CUSTOMIZE: '/etc/ots/customize.yaml'
      # See README for details
      REDIS_URL: redis://redis:6379/0
      # 168h = 1w
      SECRET_EXPIRY: "604800"
      # "mem" or "redis" (See README)
      STORAGE_TYPE: redis
    depends_on:
      - redis
    labels:
      - traefik.enable=true
      - traefik.http.routers.ots.rule=Host(`ots.hackanooga.com`)
      - traefik.http.routers.ots.entrypoints=websecure
      - traefik.http.routers.ots.tls=true
      - traefik.http.routers.ots.tls.certresolver=letsencrypt
    networks:
      - traefik
  redis:
    image: redis:alpine
    restart: always
    volumes:
      - ./redis-data:/data
    networks:
      - traefik

networks:
  traefik:
    external: true
```

Now that we have all of this in place there are a couple more things we need to do on the Cloudflare side:
### Step 1: Set up wildcard DNS entry

This is pretty straightforward. Follow the Cloudflare [documentation](https://developers.cloudflare.com/dns/manage-dns-records/reference/wildcard-dns-records/) if you aren’t familiar with setting this up.
### Step 2: Create API Token

This is where the Traefik documentation is a little lacking. I had some issues getting this set up initially but ultimately found this [documentation](https://go-acme.github.io/lego/dns/cloudflare/) which pointed me in the right direction. In your Cloudflare account you will need to create an API token. Navigate to the dashboard, go to your profile -> API Tokens and create a new token. It should have the following permissions:
```
Zone.Zone.Read
Zone.DNS.Edit
```

Also be sure to give it permission to access all zones in your account. Now simply provide that token when starting up the stack and you should be good to go:
```
CF_DNS_API_TOKEN=[redacted] docker compose up -d
```
---
author: mikeconrad
categories:
- Automation
- Docker
- OCI
- Self Hosted
dark_fusion_page_sidebar:
- sidebar-1
dark_fusion_site_layout:
- ""
date: "2024-03-07T10:07:07Z"
guid: https://wordpress.hackanooga.com/?p=413
id: 413
tags:
- Blog Post
title: Self hosted package registries with Gitea
url: /self-hosted-package-registries-with-gitea/
---

I am a big proponent of open source technologies. I have been using [Gitea](https://about.gitea.com/) for a couple of years now in my homelab. A few years ago I moved most of my code off of GitHub and onto my self hosted instance. I recently came across a really handy feature that I didn’t know Gitea had and was pleasantly surprised by: the [Package Registry](https://docs.gitea.com/usage/packages/overview?_highlight=packag). You are no doubt familiar with what a package registry is in the broad sense. Here are some examples of package registries you probably use on a regular basis:
- npm
- cargo
- docker
- composer
- nuget
- helm

There are a number of reasons why you would want to self host a registry. For example, in my home lab I have some `Docker` images that are specific to my use cases and I don’t necessarily want them on a public registry. I’m also not concerned about losing the artifacts, as I can easily recreate them from code. Gitea makes this really easy to set up; in fact, it comes baked in with the installation. For the sake of this post I will assume that you already have Gitea installed and set up.
Since the package registry is baked in and enabled by default, I will demonstrate how easy it is to push a Docker image. We will pull the default `alpine` image, re-tag it and push it to our internal registry:

```
# Pull the official Alpine image
docker pull alpine:latest

# Re-tag the image with our local registry information
docker tag alpine:latest git.hackanooga.com/mikeconrad/alpine:latest

# Log in using your Gitea user account
docker login git.hackanooga.com

# Push the image to our registry
docker push git.hackanooga.com/mikeconrad/alpine:latest
```

Now log into your Gitea instance, navigate to your user account and look for `Packages`. You should see the newly uploaded alpine image. You can see that the package type is container, and clicking on it will give you more information.
---
author: mikeconrad
categories:
- Ansible
- Automation
- CI/CD
- TeamCity
dark_fusion_page_sidebar:
- sidebar-1
dark_fusion_site_layout:
- ""
date: "2024-03-11T09:37:47Z"
guid: https://wordpress.hackanooga.com/?p=393
id: 393
tags:
- Blog Post
title: Automating CI/CD with TeamCity and Ansible
url: /automating-ci-cd-with-teamcity-ansible/
---

In part one of this series we are going to explore a CI/CD option you may not be familiar with but that should definitely be on your radar. I used JetBrains TeamCity for several months at my last company and really enjoyed my time with it. A couple of the things I like most about it are:
- The ability to declare global variables and have them be passed down to all projects
- The ability to declare variables that are made up of other variables

I like to use private or self hosted Docker registries for a lot of my projects, and one of the pain points I have had with some other solutions (well, mostly Bitbucket) is that they don’t integrate well with these private registries; whenever I am pushing an image to or pulling an image from a private registry it gets a little messy. TeamCity is nice in that I can add a connection to my private registry in my root project and then simply add that as a build feature to any projects that may need it. Essentially, I now only have one place where I have to keep those credentials and manage that connection.

Another reason I love it is the fact that you can create really powerful build templates that you can reuse. This became very powerful when we were trying to standardize our build processes. For example, most of the apps we build are `.NET` backends and `React` frontends. We built Docker images for every project and pushed them to our private registry. TeamCity gave us the ability to standardize the naming conventions and really streamline the build process. Enough about that though; the rest of this series will assume that you are using TeamCity. This post will focus on getting up and running using Ansible.
---

## Installation and Setup

For this I will assume that you already have Ansible on your machine and that you will be installing TeamCity locally. You can simply follow along with the installation guide [here](https://www.jetbrains.com/help/teamcity/install-teamcity-server-on-linux-or-macos.html#Example%3A+Installation+using+Ubuntu+Linux). We will be creating an Ansible playbook based on the following steps. If you just want the finished code, you can find it on my Gitea instance [here](https://git.hackanooga.com/mikeconrad/teamcity-ansible-scripts.git).

#### Step 1: Create project and initial playbook

To get started, go ahead and create a new directory to hold our configuration:
```
mkdir ~/projects/teamcity-configuration-ansible
cd ~/projects/teamcity-configuration-ansible
touch install-teamcity-server.yml
```

Now open up `install-teamcity-server.yml` and add a task to install Java 17, as it is a prerequisite. You will need sudo for this task. **Note:** as of this writing TeamCity does not support Java 18 or 19; if you try to install one of those you will get an error when trying to start TeamCity.
```
---
- name: Install Teamcity
  hosts: localhost
  become: true
  become_method: sudo

  # Add some variables to make our lives easier
  vars:
    java_version: "17"
    teamcity:
      installation_path: /opt/TeamCity
      version: "2023.11.4"

  tasks:
    - name: Install Java
      ansible.builtin.apt:
        name: openjdk-{{ java_version }}-jre-headless
        update_cache: yes
        state: latest
        install_recommends: no
```
The next step is to create a dedicated user account. Add the following task to `install-teamcity-server.yml`:

```
    - name: Add Teamcity User
      ansible.builtin.user:
        name: teamcity
```

Next we will need to download the latest version of TeamCity (2023.11.4 as of this writing). Add the following task to your `install-teamcity-server.yml`:

```
    - name: Download TeamCity Server
      ansible.builtin.get_url:
        url: https://download.jetbrains.com/teamcity/TeamCity-{{ teamcity.version }}.tar.gz
        dest: /opt/TeamCity-{{ teamcity.version }}.tar.gz
        mode: '0770'
```

Now, to install TeamCity Server, add the following:
```
    - name: Install TeamCity Server
      ansible.builtin.shell: |
        tar xfz /opt/TeamCity-{{ teamcity.version }}.tar.gz
        rm -rf /opt/TeamCity-{{ teamcity.version }}.tar.gz
      args:
        chdir: /opt
```

Now that we have everything set up and installed, we want to make sure that our new `teamcity` user has access to everything they need to get up and running. We will add the following task:
```
    - name: Update permissions
      ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity
```

This gives us a pretty nice setup. We have TeamCity Server installed with a dedicated user account. The last thing we will do is create a `systemd` service so that we can easily start/stop the server. For this we will need to add a few things:
1. A service file that tells our system how to manage TeamCity
2. A `.j2` template file that is used to create this service file
3. A handler that tells the system to run `systemctl daemon-reload` once the service has been installed

Go ahead and create a new `templates` folder with the following `teamcity.service.j2` file:
```
[Unit]
Description=JetBrains TeamCity
Requires=network.target
After=syslog.target network.target

[Service]
Type=forking
ExecStart={{ teamcity.installation_path }}/bin/runAll.sh start
ExecStop={{ teamcity.installation_path }}/bin/runAll.sh stop
User=teamcity
PIDFile={{ teamcity.installation_path }}/teamcity.pid
Environment="TEAMCITY_PID_FILE_PATH={{ teamcity.installation_path }}/teamcity.pid"

[Install]
WantedBy=multi-user.target
```

Your project should now look like the following:
```
$: ~/projects/teamcity-ansible-terraform
.
├── install-teamcity-server.yml
└── templates
    └── teamcity.service.j2

1 directory, 2 files
```

That’s it! Now you have a fully automated install of TeamCity Server, ready to be deployed wherever you need it. Here is the final playbook file; you can also find the most up to date version in my [repo](https://git.hackanooga.com/mikeconrad/teamcity-ansible-scripts.git):
```
---
- name: Install Teamcity
  hosts: localhost
  become: true
  become_method: sudo

  vars:
    java_version: "17"
    teamcity:
      installation_path: /opt/TeamCity
      version: "2023.11.4"

  tasks:
    - name: Install Java
      ansible.builtin.apt:
        # This is important because TeamCity will fail to start if we try to use 18 or 19
        name: openjdk-{{ java_version }}-jdk
        update_cache: yes
        state: latest
        install_recommends: no

    - name: Add TeamCity User
      ansible.builtin.user:
        name: teamcity

    - name: Download TeamCity Server
      ansible.builtin.get_url:
        url: https://download.jetbrains.com/teamcity/TeamCity-{{ teamcity.version }}.tar.gz
        dest: /opt/TeamCity-{{ teamcity.version }}.tar.gz
        mode: '0770'

    - name: Install TeamCity Server
      ansible.builtin.shell: |
        tar xfz /opt/TeamCity-{{ teamcity.version }}.tar.gz
        rm -rf /opt/TeamCity-{{ teamcity.version }}.tar.gz
      args:
        chdir: /opt

    - name: Update permissions
      ansible.builtin.shell: chown -R teamcity:teamcity /opt/TeamCity

    - name: TeamCity | Create service file
      ansible.builtin.template:
        src: teamcity.service.j2
        dest: /etc/systemd/system/teamcityserver.service
      notify:
        - reload systemctl

    - name: TeamCity | Start teamcity
      ansible.builtin.service:
        name: teamcityserver.service
        state: started
        enabled: yes

  # Trigger a reload of systemctl after the service file has been created.
  handlers:
    - name: reload systemctl
      ansible.builtin.command: systemctl daemon-reload
```
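With the playbook saved, running it is a one-liner (assuming Ansible is installed locally; `-K` prompts for the sudo password that `become` needs):

```
ansible-playbook install-teamcity-server.yml -K
```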
---
author: mikeconrad
categories:
- Automation
- Docker
- Software Engineering
date: "2024-04-03T09:12:41Z"
guid: https://hackanooga.com/?p=557
id: 557
image: /wp-content/uploads/2024/04/docker-logo-blue-min.png
tags:
- Blog Post
title: Stop all running containers with Docker
url: /stop-all-running-containers-with-docker/
---

These are some handy snippets I use on a regular basis when managing containers. I have one server in particular that can sometimes end up with 50 to 100 orphaned containers for various reasons. The easiest/quickest way to stop all of them is something like this:
```
docker container stop $(docker container ps -q)
```

Let me break this down in case you are not familiar with the syntax. Basically we are passing the output of `docker container ps -q` into `docker container stop`. This works because the stop command can take a list of container IDs, which is what we get by passing the `-q` flag to `docker container ps`.
---
author: mikeconrad
categories:
- Ansible
- Automation
- Docker
- Software Engineering
- Traefik
date: "2024-05-11T09:44:01Z"
guid: https://hackanooga.com/?p=564
id: 564
tags:
- Blog Post
title: Traefik 3.0 service discovery in Docker Swarm mode
url: /traefik-3-0-service-discovery-in-docker-swarm-mode/
---

I recently decided to set up a Docker Swarm cluster for a project I was working on. If you aren’t familiar with Swarm mode, it is similar in some ways to k8s but with much less complexity, and it is built into Docker. If you are looking for a fairly straightforward way to deploy containers across a number of nodes without all the overhead of k8s, it can be a good choice; however, it isn’t a very popular or widespread solution these days.
Anyway, I set up a VM scale set in Azure with 10 Ubuntu 22.04 VMs and wrote some Ansible scripts to automate the process of installing Docker on each machine, as well as setting 3 up as swarm managers and the other 7 as worker nodes. I SSH’d into the primary manager node and created a Docker Compose file for launching an observability stack.

Here is what that `docker-compose.yml` looks like:

```
---
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.88.0
    volumes:
      - /home/user/repo/common/devops/observability/otel-config.yaml:/etc/otel/config.yaml
      - /home/user/repo/log:/log/otel
    command: --config /etc/otel/config.yaml
    environment:
      JAEGER_ENDPOINT: 'tempo:4317'
      LOKI_ENDPOINT: 'http://loki:3100/loki/api/v1/push'
    ports:
      - '8889:8889' # Prometheus metrics exporter (scrape endpoint)
      - '13133:13133' # health_check extension
      - '55679:55679' # ZPages extension
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
    networks:
      - traefik
  prometheus:
    container_name: prometheus
    image: prom/prometheus:v2.42.0
    volumes:
      - /home/user/repo/common/devops/observability/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - '9090:9090'
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
    networks:
      - traefik
  loki:
    container_name: loki
    image: grafana/loki:2.7.4
    ports:
      - '3100:3100'
    networks:
      - traefik
  grafana:
    container_name: grafana
    image: grafana/grafana:9.4.3
    volumes:
      - /home/user/repo/common/devops/observability/grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml
    environment:
      GF_AUTH_ANONYMOUS_ENABLED: 'false'
      GF_AUTH_ANONYMOUS_ORG_ROLE: 'Admin'
    expose:
      - '3000'
    labels:
      - traefik.constraint-label=traefik
      - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
      - traefik.http.routers.grafana-http.rule=Host(`swarm-grafana.mydomain.com`)
      - traefik.http.routers.grafana-http.entrypoints=http
      - traefik.http.routers.grafana-http.middlewares=https-redirect
      # grafana-https is the actual router using HTTPS
      - traefik.http.routers.grafana-https.rule=Host(`swarm-grafana.mydomain.com`)
      - traefik.http.routers.grafana-https.entrypoints=https
      - traefik.http.routers.grafana-https.tls=true
      - traefik.http.routers.grafana-https.service=grafana
      # Use the "le" (Let's Encrypt) resolver
      - traefik.http.routers.grafana-https.tls.certresolver=le
      - traefik.http.services.grafana.loadbalancer.server.port=3000
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
    networks:
      - traefik
  # Tempo runs as user 10001, and docker compose creates the volume as root.
  # As such, we need to chown the volume in order for Tempo to start correctly.
  init:
    image: &tempoImage grafana/tempo:latest
    user: root
    entrypoint:
      - 'chown'
      - '10001:10001'
      - '/var/tempo'
    volumes:
      - /home/user/repo/tempo-data:/var/tempo
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4

  tempo:
    image: *tempoImage
    container_name: tempo
    command: ['-config.file=/etc/tempo.yaml']
    volumes:
      - /home/user/repo/common/devops/observability/tempo.yaml:/etc/tempo.yaml
      - /home/user/repo/tempo-data:/var/tempo
    deploy:
      placement:
        constraints:
          - node.hostname==dockerswa2V8BY4
    ports:
      - '14268' # jaeger ingest
      - '3200' # tempo
      - '4317' # otlp grpc
      - '4318' # otlp http
      - '9411' # zipkin
    depends_on:
      - init
    networks:
      - traefik
networks:
  traefik:
    external: true
```
Pretty straightforward, so I proceeded to deploy it into the swarm:

```
docker stack deploy -c docker-compose.yml observability
```

Everything deploys properly, but when I view the Traefik logs there is an issue with all of the services except for grafana. I get errors like this:

```
traefik_traefik.1.tm5iqb9x59on@dockerswa2V8BY4 | 2024-05-11T13:14:16Z ERR error="service \"observability-prometheus\" error: port is missing" container=observability-prometheus-37i852h4o36c23lzwuu9pvee9 providerName=swarm
```

It drove me crazy for about half a day. I couldn’t find any reason why the grafana service worked as expected but none of the others did. Part of my love/hate relationship with Traefik stems from the fact that configuration issues like this can be hard to track down and debug. Ultimately, after lots of searching and banging my head against a wall, I found the answer in the Traefik docs and thought I would share it here for anyone else who might run into this issue. Again, this solution is specific to Docker Swarm mode.

<https://doc.traefik.io/traefik/providers/swarm/#configuration-examples>

Expand the first section on that page and you will see the solution: it turns out I just needed to update my `docker-compose.yml` to nest the labels under a `deploy` section. After redeploying, everything was working as expected.
---
author: mikeconrad
categories:
- Cloudflare
- Networking
- Open Source
- Security
- SSH
date: "2024-06-24T09:37:43Z"
guid: https://hackanooga.com/?p=576
id: 576
tags:
- Blog Post
title: Fun with bots - SSH tarpitting
url: /fun-with-bots-ssh-tarpitting/
---

For those of you who aren’t familiar with the concept of a network tarpit, it is fairly simple. Wikipedia defines it like this:
> A **tarpit** is a service on a [computer system](https://en.wikipedia.org/wiki/Computer_system) (usually a [server](https://en.wikipedia.org/wiki/Server_(computing))) that purposely delays incoming connections. The technique was developed as a defense against a [computer worm](https://en.wikipedia.org/wiki/Computer_worm), and the idea is that [network](https://en.wikipedia.org/wiki/Computer_network) abuses such as [spamming](https://en.wikipedia.org/wiki/Spamming) or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous with a [tar pit](https://en.wikipedia.org/wiki/Tar_pit), in which animals can get bogged down and slowly sink under the surface, like in a [swamp](https://en.wikipedia.org/wiki/Swamp).
>
> <cite>[https://en.wikipedia.org/wiki/Tarpit\_(networking)](https://en.wikipedia.org/wiki/Tarpit_(networking))</cite>

If you run any sort of service on the internet then you know that as soon as your server has a public IP address and open ports, there are scanners and bots constantly trying to get in. If you take decent steps towards security then it is little more than an annoyance, but annoying all the same. One day when I had some extra time on my hands I started researching ways to mess with the bots trying to scan/attack my site.
It turns out that this problem has been solved multiple times in multiple ways. One of the most popular tools for tarpitting SSH connections is [endlessh](https://github.com/skeeto/endlessh). The way it works is actually pretty simple. The [SSH RFC](https://datatracker.ietf.org/doc/html/rfc4253#section-4.2) states that when an SSH connection is established, both sides MUST send an identification string. Further down the spec is the passage that allows this behavior:

> ```
> The server MAY send other lines of data before sending the version
> string.  Each line SHOULD be terminated by a Carriage Return and Line
> Feed.  Such lines MUST NOT begin with "SSH-", and SHOULD be encoded
> in ISO-10646 UTF-8 [RFC3629] (language is not specified).  Clients
> MUST be able to process such lines.  Such lines MAY be silently
> ignored, or MAY be displayed to the client user.  If they are
> displayed, control character filtering, as discussed in [SSH-ARCH],
> SHOULD be used.  The primary use of this feature is to allow TCP-
> wrappers to display an error message before disconnecting.
> ```
>
> <cite>SSH RFC</cite>

Essentially this means that there is no limit to the amount of data that a server can send back to the client, and the client must be able to wait for and process all of it. Now let’s see it in action.
```
git clone https://github.com/skeeto/endlessh.git
cd endlessh
make
./endlessh &
```
By default this fake server listens on port 2222. I have a port forward set up that forwards all SSH traffic from port 22 to 2222. Now try to connect via ssh:

```
ssh -vvv localhost -p 2222
```

If you wait a few seconds you will see the server send back the version string and then start sending a random banner:

```
$:/tmp/endlessh$ 2024-06-24T13:05:59.488Z Port 2222
2024-06-24T13:05:59.488Z Delay 10000
2024-06-24T13:05:59.488Z MaxLineLength 32
2024-06-24T13:05:59.488Z MaxClients 4096
2024-06-24T13:05:59.488Z BindFamily IPv4 Mapped IPv6
2024-06-24T13:05:59.488Z socket() = 3
2024-06-24T13:05:59.488Z setsockopt(3, SO_REUSEADDR, true) = 0
2024-06-24T13:05:59.488Z setsockopt(3, IPV6_V6ONLY, true) = 0
2024-06-24T13:05:59.488Z bind(3, port=2222) = 0
2024-06-24T13:05:59.488Z listen(3) = 0
2024-06-24T13:05:59.488Z poll(1, -1)
ssh -vvv localhost -p 2222
OpenSSH_8.9p1 Ubuntu-3ubuntu0.7, OpenSSL 3.0.2 15 Mar 2022
debug1: Reading configuration data /home/mikeconrad/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/mikeconrad/.ssh/known_hosts'
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/mikeconrad/.ssh/known_hosts2'
debug2: resolving "localhost" port 2222
debug3: resolve_host: lookup localhost:2222
debug3: ssh_connect_direct: entering
debug1: Connecting to localhost [::1] port 2222.
debug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x10
debug1: Connection established.
2024-06-24T13:06:08.635Z = 1
2024-06-24T13:06:08.635Z accept() = 4
2024-06-24T13:06:08.635Z setsockopt(4, SO_RCVBUF, 1) = 0
2024-06-24T13:06:08.635Z ACCEPT host=::1 port=43696 fd=4 n=1/4096
2024-06-24T13:06:08.635Z poll(1, 10000)
debug1: identity file /home/mikeconrad/.ssh/id_rsa type 0
debug1: identity file /home/mikeconrad/.ssh/id_rsa-cert type 4
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk type -1
debug1: identity file /home/mikeconrad/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519 type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk type -1
debug1: identity file /home/mikeconrad/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_xmss type -1
debug1: identity file /home/mikeconrad/.ssh/id_xmss-cert type -1
debug1: identity file /home/mikeconrad/.ssh/id_dsa type -1
debug1: identity file /home/mikeconrad/.ssh/id_dsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.7
2024-06-24T13:06:18.684Z = 0
2024-06-24T13:06:18.684Z write(4) = 3
2024-06-24T13:06:18.684Z poll(1, 10000)
debug1: kex_exchange_identification: banner line 0: V
2024-06-24T13:06:28.734Z = 0
2024-06-24T13:06:28.734Z write(4) = 25
2024-06-24T13:06:28.734Z poll(1, 10000)
debug1: kex_exchange_identification: banner line 1: 2I=ED}PZ,z T_Y|Yc]$b{R]
|
||||
|
||||
|
||||
```
|
||||
|
||||
This is a great way to give back to those bots and script kiddies. In my research into other methods I also stumbled across this brilliant program [fakessh](https://github.com/iBug/fakessh). While fakessh isn’t technically a tarpit (it’s more of a honeypot), it is very interesting nonetheless. It creates a fake SSH server and logs the IP address, connection string and any commands executed by the attacker. It accepts any username/password combination and presents the attacker with a fake shell prompt. There is no actual access to any file system, and all of their commands basically return gibberish.
|
||||
|
||||
Here are some logs from an actual server of mine running fakessh:
|
||||
|
||||
```
|
||||
2024/06/24 06:51:20 [conn] ip=183.81.169.238:40430
|
||||
2024/06/24 06:51:22 [auth] ip=183.81.169.238:40430 version="SSH-2.0-Go" user="root" password="0"
|
||||
2024/06/24 06:51:23 [conn] ip=183.81.169.238:40444
|
||||
2024/06/24 06:51:25 [auth] ip=183.81.169.238:40444 version="SSH-2.0-Go" user="root" password="eve"
|
||||
2024/06/24 06:51:26 [conn] ip=183.81.169.238:48408
|
||||
2024/06/24 06:51:27 [auth] ip=183.81.169.238:48408 version="SSH-2.0-Go" user="root" password="root"
|
||||
2024/06/24 06:51:28 [conn] ip=183.81.169.238:48434
|
||||
2024/06/24 06:51:30 [auth] ip=183.81.169.238:48434 version="SSH-2.0-Go" user="root" password="1"
|
||||
2024/06/24 06:51:30 [conn] ip=183.81.169.238:48448
|
||||
2024/06/24 06:51:32 [auth] ip=183.81.169.238:48448 version="SSH-2.0-Go" user="root" password="123"
|
||||
2024/06/24 06:51:32 [conn] ip=183.81.169.238:48476
|
||||
2024/06/24 06:51:35 [auth] ip=183.81.169.238:48476 version="SSH-2.0-Go" user="root" password="admin"
|
||||
2024/06/24 06:51:35 [conn] ip=183.81.169.238:39250
|
||||
2024/06/24 06:51:37 [auth] ip=183.81.169.238:39250 version="SSH-2.0-Go" user="root" password="123456"
|
||||
2024/06/24 06:51:38 [conn] ip=183.81.169.238:39276
|
||||
2024/06/24 06:51:40 [auth] ip=183.81.169.238:39276 version="SSH-2.0-Go" user="root" password="123123"
|
||||
2024/06/24 06:51:40 [conn] ip=183.81.169.238:39294
|
||||
2024/06/24 06:51:42 [auth] ip=183.81.169.238:39294 version="SSH-2.0-Go" user="root" password="test"
|
||||
2024/06/24 06:51:43 [conn] ip=183.81.169.238:39316
|
||||
2024/06/24 06:51:45 [auth] ip=183.81.169.238:39316 version="SSH-2.0-Go" user="root" password="123456789"
|
||||
2024/06/24 06:51:45 [conn] ip=183.81.169.238:35108
|
||||
2024/06/24 06:51:47 [auth] ip=183.81.169.238:35108 version="SSH-2.0-Go" user="root" password="12345"
|
||||
2024/06/24 06:51:48 [conn] ip=183.81.169.238:35114
|
||||
2024/06/24 06:51:50 [auth] ip=183.81.169.238:35114 version="SSH-2.0-Go" user="root" password="password"
|
||||
2024/06/24 06:51:50 [conn] ip=183.81.169.238:35130
|
||||
2024/06/24 06:51:52 [auth] ip=183.81.169.238:35130 version="SSH-2.0-Go" user="root" password="12345678"
|
||||
2024/06/24 06:51:52 [conn] ip=183.81.169.238:35146
|
||||
2024/06/24 06:51:54 [auth] ip=183.81.169.238:35146 version="SSH-2.0-Go" user="root" password="111111"
|
||||
2024/06/24 06:51:55 [conn] ip=183.81.169.238:58490
|
||||
2024/06/24 06:51:57 [auth] ip=183.81.169.238:58490 version="SSH-2.0-Go" user="root" password="1234567890"
|
||||
2024/06/24 06:51:57 [conn] ip=183.81.169.238:58528
|
||||
2024/06/24 06:51:59 [auth] ip=183.81.169.238:58528 version="SSH-2.0-Go" user="root" password="1234"
|
||||
2024/06/24 06:52:00 [conn] ip=183.81.169.238:58572
|
||||
2024/06/24 06:52:02 [auth] ip=183.81.169.238:58572 version="SSH-2.0-Go" user="root" password="password123"
|
||||
2024/06/24 06:52:02 [conn] ip=183.81.169.238:58588
|
||||
2024/06/24 06:52:04 [auth] ip=183.81.169.238:58588 version="SSH-2.0-Go" user="root" password="ubuntu"
|
||||
2024/06/24 06:52:05 [conn] ip=183.81.169.238:37198
|
||||
2024/06/24 06:52:07 [auth] ip=183.81.169.238:37198 version="SSH-2.0-Go" user="Antminer" password="root"
|
||||
2024/06/24 06:52:07 [conn] ip=183.81.169.238:37214
|
||||
2024/06/24 06:52:09 [auth] ip=183.81.169.238:37214 version="SSH-2.0-Go" user="Antminer" password="admin"
|
||||
2024/06/24 06:52:10 [conn] ip=183.81.169.238:37238
|
||||
2024/06/24 06:52:11 [auth] ip=183.81.169.238:37238 version="SSH-2.0-Go" user="root" password="innot1t2"
|
||||
2024/06/24 06:52:12 [conn] ip=183.81.169.238:37258
|
||||
2024/06/24 06:52:14 [auth] ip=183.81.169.238:37258 version="SSH-2.0-Go" user="root" password="t1t2t3a5"
|
||||
2024/06/24 06:52:14 [conn] ip=183.81.169.238:55658
|
||||
2024/06/24 06:52:16 [auth] ip=183.81.169.238:55658 version="SSH-2.0-Go" user="root" password="blacksheepwall"
|
||||
2024/06/24 06:52:17 [conn] ip=183.81.169.238:55670
|
||||
2024/06/24 06:52:19 [auth] ip=183.81.169.238:55670 version="SSH-2.0-Go" user="root" password="envision"
|
||||
2024/06/24 06:52:19 [conn] ip=183.81.169.238:55708
|
||||
2024/06/24 06:52:21 [auth] ip=183.81.169.238:55708 version="SSH-2.0-Go" user="root" password="bwcon"
|
||||
2024/06/24 06:52:22 [conn] ip=183.81.169.238:55776
|
||||
2024/06/24 06:52:23 [auth] ip=183.81.169.238:55776 version="SSH-2.0-Go" user="admin" password="root"
|
||||
2024/06/24 06:52:24 [conn] ip=183.81.169.238:46646
|
||||
2024/06/24 06:52:26 [auth] ip=183.81.169.238:46646 version="SSH-2.0-Go" user="baikal" password="baikal"
|
||||
2024/06/24 06:52:26 [conn] ip=180.101.88.197:44620
|
||||
2024/06/24 06:52:27 [conn] ip=180.101.88.197:44620 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 06:53:35 [conn] ip=218.92.0.76:50610
|
||||
2024/06/24 06:53:36 [conn] ip=218.92.0.76:50610 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 07:02:28 [conn] ip=218.92.0.27:64676
|
||||
2024/06/24 07:02:30 [conn] ip=218.92.0.27:64676 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 07:10:05 [conn] ip=218.92.0.76:57601
|
||||
2024/06/24 07:10:07 [conn] ip=218.92.0.76:57601 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 07:14:05 [conn] ip=193.201.9.156:63056
|
||||
2024/06/24 07:14:05 [auth] ip=193.201.9.156:63056 version="SSH-2.0-Go" user="ubnt" password="ubnt"
|
||||
2024/06/24 07:14:05 [conn] ip=193.201.9.156:63056 err="read tcp 10.10.10.107:2222->193.201.9.156:63056: read: connection reset by peer"
|
||||
2024/06/24 07:24:53 [conn] ip=218.92.0.31:25485
|
||||
2024/06/24 07:24:54 [conn] ip=218.92.0.31:25485 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 07:24:54 [conn] ip=218.92.0.112:39270
|
||||
2024/06/24 07:24:56 [conn] ip=218.92.0.112:39270 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 07:26:42 [conn] ip=218.92.0.34:59993
|
||||
2024/06/24 07:35:46 [conn] ip=218.92.0.34:59993 err="read tcp 10.10.10.107:2222->218.92.0.34:59993: read: connection reset by peer"
|
||||
2024/06/24 07:41:28 [conn] ip=218.92.0.107:62285
|
||||
2024/06/24 07:41:31 [conn] ip=218.92.0.107:62285 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 07:43:27 [conn] ip=218.92.0.29:34556
|
||||
2024/06/24 07:43:28 [conn] ip=218.92.0.29:34556 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 07:44:15 [conn] ip=218.92.0.118:37047
|
||||
2024/06/24 07:44:22 [conn] ip=218.92.0.118:37047 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 07:56:10 [conn] ip=157.245.98.245:6116
|
||||
2024/06/24 07:56:11 [conn] ip=157.245.98.245:6116 err="ssh: unexpected message type 20 (expected 21)"
|
||||
2024/06/24 07:57:57 [conn] ip=218.92.0.112:28326
|
||||
2024/06/24 07:57:58 [conn] ip=218.92.0.112:28326 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:00:01 [conn] ip=218.92.0.24:24948
|
||||
2024/06/24 08:00:02 [conn] ip=218.92.0.24:24948 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:06:19 [conn] ip=193.201.9.156:46865
|
||||
2024/06/24 08:06:20 [auth] ip=193.201.9.156:46865 version="SSH-2.0-Go" user="root" password="xc3511"
|
||||
2024/06/24 08:06:20 [conn] ip=193.201.9.156:46865 err="read tcp 10.10.10.107:2222->193.201.9.156:46865: read: connection reset by peer"
|
||||
2024/06/24 08:14:26 [conn] ip=180.101.88.197:48347
|
||||
2024/06/24 08:14:28 [conn] ip=180.101.88.197:48347 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:16:28 [conn] ip=218.92.0.56:18064
|
||||
2024/06/24 08:16:32 [conn] ip=218.92.0.56:18064 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:30:55 [conn] ip=180.101.88.196:40495
|
||||
2024/06/24 08:30:57 [conn] ip=180.101.88.196:40495 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:32:20 [conn] ip=85.209.11.227:15493
|
||||
2024/06/24 08:32:21 [auth] ip=85.209.11.227:15493 version="SSH-2.0-Go" user="telecomadmin" password="admintelecom"
|
||||
2024/06/24 08:32:21 [conn] ip=85.209.11.227:15493 err="read tcp 10.10.10.107:2222->85.209.11.227:15493: read: connection reset by peer"
|
||||
2024/06/24 08:33:19 [conn] ip=218.92.0.34:59804
|
||||
2024/06/24 08:33:21 [conn] ip=218.92.0.34:59804 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:41:00 [conn] ip=218.92.0.27:45567
|
||||
2024/06/24 08:41:02 [conn] ip=218.92.0.27:45567 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:47:15 [conn] ip=180.101.88.196:17032
|
||||
2024/06/24 08:47:16 [conn] ip=180.101.88.196:17032 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:49:51 [conn] ip=218.92.0.29:26360
|
||||
2024/06/24 08:49:57 [conn] ip=218.92.0.29:26360 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 08:58:27 [conn] ip=193.201.9.156:49525
|
||||
2024/06/24 08:58:28 [auth] ip=193.201.9.156:49525 version="SSH-2.0-Go" user="admin" password="1234"
|
||||
2024/06/24 08:58:28 [conn] ip=193.201.9.156:49525 err="read tcp 10.10.10.107:2222->193.201.9.156:49525: read: connection reset by peer"
|
||||
2024/06/24 08:58:44 [conn] ip=218.92.0.31:11835
|
||||
2024/06/24 08:58:46 [conn] ip=218.92.0.31:11835 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 09:03:38 [conn] ip=218.92.0.107:57758
|
||||
2024/06/24 09:03:40 [conn] ip=218.92.0.107:57758 err="ssh: disconnect, reason 11: "
|
||||
2024/06/24 09:07:36 [conn] ip=218.92.0.56:21354
|
||||
2024/06/24 09:07:39 [conn] ip=218.92.0.56:21354 err="ssh: disconnect, reason 11: "
|
||||
|
||||
```
|
||||
|
||||
Those are mostly connections and disconnections; the bots probably connected, realized it was fake and moved on. A couple of them did try to execute some commands though:
|
||||
|
||||
```
|
||||
:~$ sudo grep head /var/log/fakessh/fakessh.log
|
||||
2024/06/23 15:48:02 [shell] ip=184.160.233.163:45735 duration=0s bytes=15 head="ls 2>/dev/null\n"
|
||||
2024/06/24 03:55:11 [shell] ip=14.46.116.243:43656 duration=20s bytes=0 head=""
|
||||
|
||||
```
|
||||
|
||||
Fun fact: Cloudflare’s Bot Fight Mode uses a form of tarpitting:
|
||||
|
||||
> Once enabled, when we detect a bad bot, we will do three things: (1) we’re going to disincentivize the bot maker economically by tarpitting them, including requiring them to solve a computationally intensive challenge that will require more of their bot’s CPU; (2) for [Bandwidth Alliance partners](https://blog.cloudflare.com/bandwidth-alliance/), we’re going to hand the IP of the bot to the partner and get the bot kicked offline; and (3) we’re going to plant trees to make up for the bot’s carbon cost.
|
||||
>
|
||||
> <cite><https://blog.cloudflare.com/cleaning-up-bad-bots></cite>
|
211
content/post/2024-07-16-debugging-running-nginx-config.md
Normal file
@ -0,0 +1,211 @@
|
||||
---
|
||||
author: mikeconrad
|
||||
categories:
|
||||
- Docker
|
||||
- Networking
|
||||
- Self Hosted
|
||||
date: "2024-07-16T21:42:43Z"
|
||||
guid: https://hackanooga.com/?p=596
|
||||
id: 596
|
||||
image: /wp-content/uploads/2024/07/nginx.png
|
||||
tags:
|
||||
- Blog Post
|
||||
title: Debugging running Nginx config
|
||||
url: /debugging-running-nginx-config/
|
||||
---
|
||||
|
||||
I was recently working on a project where a client had cPanel/WHM with Nginx and Apache. They had a large number of sites managed by Nginx, each with a large number of includes. I created a custom config to override a location block and needed to be certain that my changes were actually being picked up. Any time I make changes to an Nginx config, I try to be vigilant about running:
|
||||
|
||||
```
|
||||
nginx -t
|
||||
```
|
||||
|
||||
to test my configuration and ensure I don’t have any syntax errors. I was looking for an easy way to view the actual compiled config and found the `-T` flag which will test the configuration and dump it to standard out. This is pretty handy if you have a large number of includes in various locations. Here is an example from a fresh Nginx Docker container:
|
||||
|
||||
```
|
||||
root@2771f302dc98:/# nginx -T
|
||||
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
|
||||
nginx: configuration file /etc/nginx/nginx.conf test is successful
|
||||
# configuration file /etc/nginx/nginx.conf:
|
||||
|
||||
user nginx;
|
||||
worker_processes auto;
|
||||
|
||||
error_log /var/log/nginx/error.log notice;
|
||||
pid /var/run/nginx.pid;
|
||||
|
||||
|
||||
events {
|
||||
worker_connections 1024;
|
||||
}
|
||||
|
||||
|
||||
http {
|
||||
include /etc/nginx/mime.types;
|
||||
default_type application/octet-stream;
|
||||
|
||||
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
|
||||
'$status $body_bytes_sent "$http_referer" '
|
||||
'"$http_user_agent" "$http_x_forwarded_for"';
|
||||
|
||||
access_log /var/log/nginx/access.log main;
|
||||
|
||||
sendfile on;
|
||||
#tcp_nopush on;
|
||||
|
||||
keepalive_timeout 65;
|
||||
|
||||
#gzip on;
|
||||
|
||||
include /etc/nginx/conf.d/*.conf;
|
||||
}
|
||||
|
||||
# configuration file /etc/nginx/mime.types:
|
||||
|
||||
types {
|
||||
text/html html htm shtml;
|
||||
text/css css;
|
||||
text/xml xml;
|
||||
image/gif gif;
|
||||
image/jpeg jpeg jpg;
|
||||
application/javascript js;
|
||||
application/atom+xml atom;
|
||||
application/rss+xml rss;
|
||||
|
||||
text/mathml mml;
|
||||
text/plain txt;
|
||||
text/vnd.sun.j2me.app-descriptor jad;
|
||||
text/vnd.wap.wml wml;
|
||||
text/x-component htc;
|
||||
|
||||
image/avif avif;
|
||||
image/png png;
|
||||
image/svg+xml svg svgz;
|
||||
image/tiff tif tiff;
|
||||
image/vnd.wap.wbmp wbmp;
|
||||
image/webp webp;
|
||||
image/x-icon ico;
|
||||
image/x-jng jng;
|
||||
image/x-ms-bmp bmp;
|
||||
|
||||
font/woff woff;
|
||||
font/woff2 woff2;
|
||||
|
||||
application/java-archive jar war ear;
|
||||
application/json json;
|
||||
application/mac-binhex40 hqx;
|
||||
application/msword doc;
|
||||
application/pdf pdf;
|
||||
application/postscript ps eps ai;
|
||||
application/rtf rtf;
|
||||
application/vnd.apple.mpegurl m3u8;
|
||||
application/vnd.google-earth.kml+xml kml;
|
||||
application/vnd.google-earth.kmz kmz;
|
||||
application/vnd.ms-excel xls;
|
||||
application/vnd.ms-fontobject eot;
|
||||
application/vnd.ms-powerpoint ppt;
|
||||
application/vnd.oasis.opendocument.graphics odg;
|
||||
application/vnd.oasis.opendocument.presentation odp;
|
||||
application/vnd.oasis.opendocument.spreadsheet ods;
|
||||
application/vnd.oasis.opendocument.text odt;
|
||||
application/vnd.openxmlformats-officedocument.presentationml.presentation
|
||||
pptx;
|
||||
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
|
||||
xlsx;
|
||||
application/vnd.openxmlformats-officedocument.wordprocessingml.document
|
||||
docx;
|
||||
application/vnd.wap.wmlc wmlc;
|
||||
application/wasm wasm;
|
||||
application/x-7z-compressed 7z;
|
||||
application/x-cocoa cco;
|
||||
application/x-java-archive-diff jardiff;
|
||||
application/x-java-jnlp-file jnlp;
|
||||
application/x-makeself run;
|
||||
application/x-perl pl pm;
|
||||
application/x-pilot prc pdb;
|
||||
application/x-rar-compressed rar;
|
||||
application/x-redhat-package-manager rpm;
|
||||
application/x-sea sea;
|
||||
application/x-shockwave-flash swf;
|
||||
application/x-stuffit sit;
|
||||
application/x-tcl tcl tk;
|
||||
application/x-x509-ca-cert der pem crt;
|
||||
application/x-xpinstall xpi;
|
||||
application/xhtml+xml xhtml;
|
||||
application/xspf+xml xspf;
|
||||
application/zip zip;
|
||||
|
||||
application/octet-stream bin exe dll;
|
||||
application/octet-stream deb;
|
||||
application/octet-stream dmg;
|
||||
application/octet-stream iso img;
|
||||
application/octet-stream msi msp msm;
|
||||
|
||||
audio/midi mid midi kar;
|
||||
audio/mpeg mp3;
|
||||
audio/ogg ogg;
|
||||
audio/x-m4a m4a;
|
||||
audio/x-realaudio ra;
|
||||
|
||||
video/3gpp 3gpp 3gp;
|
||||
video/mp2t ts;
|
||||
video/mp4 mp4;
|
||||
video/mpeg mpeg mpg;
|
||||
video/quicktime mov;
|
||||
video/webm webm;
|
||||
video/x-flv flv;
|
||||
video/x-m4v m4v;
|
||||
video/x-mng mng;
|
||||
video/x-ms-asf asx asf;
|
||||
video/x-ms-wmv wmv;
|
||||
video/x-msvideo avi;
|
||||
}
|
||||
|
||||
# configuration file /etc/nginx/conf.d/default.conf:
|
||||
server {
|
||||
listen 80;
|
||||
server_name localhost;
|
||||
|
||||
#access_log /var/log/nginx/host.access.log main;
|
||||
|
||||
location / {
|
||||
root /usr/share/nginx/html;
|
||||
index index.html index.htm;
|
||||
}
|
||||
|
||||
#error_page 404 /404.html;
|
||||
|
||||
# redirect server error pages to the static page /50x.html
|
||||
#
|
||||
error_page 500 502 503 504 /50x.html;
|
||||
location = /50x.html {
|
||||
root /usr/share/nginx/html;
|
||||
}
|
||||
|
||||
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
|
||||
#
|
||||
#location ~ \.php$ {
|
||||
# proxy_pass http://127.0.0.1;
|
||||
#}
|
||||
|
||||
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
|
||||
#
|
||||
#location ~ \.php$ {
|
||||
# root html;
|
||||
# fastcgi_pass 127.0.0.1:9000;
|
||||
# fastcgi_index index.php;
|
||||
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
|
||||
# include fastcgi_params;
|
||||
#}
|
||||
|
||||
# deny access to .htaccess files, if Apache's document root
|
||||
# concurs with nginx's one
|
||||
#
|
||||
#location ~ /\.ht {
|
||||
# deny all;
|
||||
#}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
As you can see from the output above, all of the Nginx config files in use get printed to the console, perfect for `grepping` or searching/filtering with other tools.
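For example, to trace a directive back to the file that defines it, you can keep just the `# configuration file ...` markers plus the directive of interest (`server_name` here is illustrative; the `|| true` keeps the pipeline from failing on hosts without Nginx):

```shell
# Dump the compiled config and keep only the file markers and the
# directive we care about, so each match can be traced to its file:
nginx -T 2>/dev/null | grep -E '^# configuration file|server_name' || true
```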
|
@ -0,0 +1,40 @@
|
||||
---
|
||||
author: mikeconrad
|
||||
categories:
|
||||
- Cloudflare
|
||||
- Docker
|
||||
- Open Source
|
||||
- Security
|
||||
- Self Hosted
|
||||
date: "2024-07-16T22:15:23Z"
|
||||
guid: https://hackanooga.com/?p=599
|
||||
id: 599
|
||||
tags:
|
||||
- Upwork Project
|
||||
title: SFTP Server Setup for Daily Inventory File Transfers
|
||||
url: /sftp-server-setup-for-daily-inventory-file-transfers/
|
||||
---
|
||||
|
||||
### Job Description
|
||||
|
||||
> We are looking for an experienced professional to help us set up an SFTP server that will allow our vendors to send us inventory files on a daily basis. The server should ensure secure and reliable file transfers, allowing our vendors to easily upload their inventory updates. The successful candidate will possess expertise in SFTP server setup and configuration, as well as knowledge of network security protocols. The required skills for this job include:
|
||||
>
|
||||
> – SFTP server setup and configuration
|
||||
> – Network security protocols
|
||||
> – Troubleshooting and problem-solving skills
|
||||
>
|
||||
> If you have demonstrated experience in setting up SFTP servers and ensuring smooth daily file transfers, we would love to hear from you.
|
||||
|
||||
---
|
||||
|
||||
### My Role
|
||||
|
||||
I walked the client through the process of setting up a Digital Ocean account. I created an `Ubuntu 22.04` VM and installed [SFTPGo](https://github.com/drakkan/sftpgo). I set the client up with an administrator user so that they could easily log in and manage users and shares. I implemented some basic security practices as well and set the client up with a custom domain and a free TLS/SSL certificate from LetsEncrypt. With the documentation and screenshots I provided, the client was able to get everything up and running and add users and connect other systems easily and securely.
|
||||
|
||||
---
|
||||
|
||||
## Client Feedback
|
||||
|
||||
> Rating is 5 out of 5.
|
||||
>
|
||||
> Michael was EXTREMELY helpful and great to work with. We really benefited from his support and help with everything.
|
@ -0,0 +1,44 @@
|
||||
---
|
||||
author: mikeconrad
|
||||
categories:
|
||||
- Cloudflare
|
||||
- Networking
|
||||
- Security
|
||||
- Software Engineering
|
||||
date: "2024-08-01T17:02:29Z"
|
||||
excerpt: Harden your origin server by only allowing access from Cloudflare IP addresses.
|
||||
guid: https://hackanooga.com/?p=607
|
||||
id: 607
|
||||
tags:
|
||||
- Blog Post
|
||||
title: Hardening your web server by only allowing traffic from Cloudflare
|
||||
url: /hardening-your-web-server-by-only-allowing-traffic-from-cloudflare/
|
||||
---
|
||||
|
||||
#### TL;DR:
|
||||
|
||||
If you just want the code you can find a convenient script on my Gitea server [here](https://git.hackanooga.com/mikeconrad/random_scripts/src/branch/master/allow_only_cloudflare_traffic.sh). This version has been slightly modified so that it will work on more systems.
|
||||
|
||||
I have been using Cloudflare for several years for both personal and professional projects. The free plan has some very generous limits and it’s a great way to clear out some low hanging fruit and improve the security of your application. If you’re not familiar with how it works: Cloudflare has two modes for DNS records, `DNS Only` and `Proxied`. The only way to get the advantages of Cloudflare is to use `Proxied` mode. Cloudflare has some great documentation on how all of their services work, but basically you point your domain to Cloudflare and Cloudflare provisions their network of proxy servers to handle requests for your domain.
|
||||
|
||||
These proxy servers allow you to secure your domain by implementing things like WAF and Rate limiting. You can also enforce HTTPS only mode and modify/add custom request/response headers. You will notice that once you turn this mode on, your webserver will log requests as coming from Cloudflare IP addresses. They have great [documentation](https://developers.cloudflare.com/support/troubleshooting/restoring-visitor-ips/restoring-original-visitor-ips/) on how to configure your webserver to restore these IP addresses in your log files.
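As a sketch of what that configuration looks like in Nginx (assumption: the `ngx_http_realip_module` is compiled in, and the address range shown is just one of the published Cloudflare ranges; check their docs for the current full list):

```
# One set_real_ip_from line per published Cloudflare range:
set_real_ip_from 173.245.48.0/20;
# Trust the client IP Cloudflare passes along in this header:
real_ip_header CF-Connecting-IP;
```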
|
||||
|
||||
This is a very easy step to start securing your origin server, but it still allows attackers to access your servers directly if they know the IP address. We can take our security one step further by only allowing requests from IP addresses originating within Cloudflare, meaning that we will only accept a request if it comes through a Cloudflare proxy server. The setup is fairly straightforward. In this example I will be using a Linux server.
|
||||
|
||||
We can achieve this pretty easily because Cloudflare provides a sort of API where they regularly publish their network blocks. Here is the basic script we will use:
|
||||
|
||||
```
|
||||
for ip in $(curl https://www.cloudflare.com/ips-v4/); do iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
|
||||
|
||||
for ip in $(curl https://www.cloudflare.com/ips-v6/); do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT; done
|
||||
|
||||
iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
|
||||
ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP
|
||||
|
||||
```
|
||||
|
||||
This will pull down the latest network addresses from Cloudflare and create `iptables` rules for us. These IP addresses do change from time to time so you may want to put this in a script and run it via a `cronjob` to have it update on a regular basis.
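A cron-friendly version might look like the sketch below (the script and chain names are my own choices, not anything Cloudflare-specific). Keeping the rules in a dedicated chain means each run replaces the old set instead of stacking duplicates in `INPUT`:

```shell
#!/usr/bin/env sh
# refresh-cloudflare-ips.sh (sketch) -- run daily from cron
set -e

# Create the chain on first run, flush it on subsequent runs:
iptables -N CLOUDFLARE 2>/dev/null || iptables -F CLOUDFLARE

for ip in $(curl -s https://www.cloudflare.com/ips-v4/); do
  iptables -A CLOUDFLARE -s "$ip" -j ACCEPT
done
iptables -A CLOUDFLARE -j DROP

# Send web traffic through the chain; -C makes this idempotent:
iptables -C INPUT -p tcp -m multiport --dports http,https -j CLOUDFLARE 2>/dev/null \
  || iptables -A INPUT -p tcp -m multiport --dports http,https -j CLOUDFLARE
```

A crontab entry such as `0 4 * * * /usr/local/bin/refresh-cloudflare-ips.sh` would then keep the list current. (The same approach applies to `ip6tables` with the `ips-v6` endpoint.)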
|
||||
|
||||
Now with this in place, here are the results:
|
||||
|
||||
This should cut down on some of the noise from attackers and script kiddies trying to find holes in your security.
|
398
content/post/2024-09-25-standing-up-a-wireguard-vpn.md
Normal file
@ -0,0 +1,398 @@
|
||||
---
|
||||
author: mikeconrad
|
||||
categories:
|
||||
- Automation
|
||||
- IaC
|
||||
- Open Source
|
||||
- Security
|
||||
- Self Hosted
|
||||
- Software Engineering
|
||||
- SSH
|
||||
date: "2024-09-25T09:56:04Z"
|
||||
guid: https://hackanooga.com/?p=619
|
||||
id: 619
|
||||
tags:
|
||||
- Blog Post
|
||||
title: Standing up a Wireguard VPN
|
||||
url: /standing-up-a-wireguard-vpn/
|
||||
---
|
||||
|
||||
VPNs have traditionally been slow, complex and hard to set up and configure. That all changed several years ago when Wireguard was officially merged into the mainline Linux kernel ([src](https://arstechnica.com/gadgets/2020/03/wireguard-vpn-makes-it-to-1-0-0-and-into-the-next-linux-kernel/)). I won’t go over all the reasons why you should want to use Wireguard in this article; instead I will focus on just how easy it is to set up and configure.
|
||||
|
||||
For this tutorial we will be using Terraform to stand up a Digital Ocean droplet and then install Wireguard onto that. The Digital Ocean droplet will be acting as our “server” in this example and we will be using our own computer as the “client”. Of course, you don’t have to use Terraform, you just need a Linux box to install Wireguard on. You can find the code for this tutorial on my personal Git server [here](https://git.hackanooga.com/mikeconrad/wireguard-terraform-digitalocean).
|
||||
|
||||
### Create Droplet with Terraform
|
||||
|
||||
I have written some basic Terraform to get us started. It just creates a droplet with a predefined SSH key and a setup script passed as user data. When the droplet gets created, the script will be copied to the instance and automatically executed. After a few minutes everything should be ready to go. If you want to clone the repo above, feel free to, or if you would rather do everything by hand that’s great too. I will assume that you are doing everything by hand; the process of deploying from the repo should be pretty self-explanatory. My reasoning for doing it this way is that I wanted to better understand the process.
|
||||
|
||||
First create our main.tf with the following contents:
|
||||
|
||||
```
|
||||
# main.tf
|
||||
# Attach an SSH key to our droplet
|
||||
resource "digitalocean_ssh_key" "default" {
|
||||
name = "Terraform Example"
|
||||
public_key = file("./tf-digitalocean.pub")
|
||||
}
|
||||
|
||||
# Create a new Web Droplet in the nyc1 region
|
||||
resource "digitalocean_droplet" "web" {
|
||||
image = "ubuntu-22-04-x64"
|
||||
name = "wireguard"
|
||||
region = "nyc1"
|
||||
size = "s-2vcpu-4gb"
|
||||
ssh_keys = [digitalocean_ssh_key.default.fingerprint]
|
||||
user_data = file("setup.sh")
|
||||
}
|
||||
|
||||
output "droplet_output" {
|
||||
value = digitalocean_droplet.web.ipv4_address
|
||||
}
|
||||
```
|
||||
|
||||
Next create a terraform.tf file in the same directory with the following contents:
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
digitalocean = {
|
||||
source = "digitalocean/digitalocean"
|
||||
version = "2.41.0"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "digitalocean" {
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Now we will need to create the SSH key that we defined in our Terraform code.
|
||||
|
||||
```
|
||||
$ ssh-keygen -t rsa -C "WireguardVPN" -f ./tf-digitalocean -q -N ""
|
||||
```
|
||||
|
||||
Next we need to set an environment variable for our DigitalOcean access token.
|
||||
|
||||
```
|
||||
$ export DIGITALOCEAN_ACCESS_TOKEN=dop_v1_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
|
||||
```
|
||||
|
||||
Now we are ready to initialize our Terraform and apply it:
|
||||
|
||||
```
|
||||
$ terraform init
|
||||
$ terraform apply
|
||||
|
||||
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
|
||||
+ create
|
||||
|
||||
Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "ubuntu-22-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "wireguard"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "nyc1"
      + resize_disk          = true
      + size                 = "s-2vcpu-4gb"
      + ssh_keys             = (known after apply)
      + status               = (known after apply)
      + urn                  = (known after apply)
      + user_data            = "69d130f386b262b136863be5fcffc32bff055ac0"
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_ssh_key.default will be created
  + resource "digitalocean_ssh_key" "default" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "Terraform Example"
      + public_key  = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXOBlFdNqV48oxWobrn2rPt4y1FTqrqscA5bSu2f3CogwbDKDyNglXu8RL4opjfdBHQES+pEqvt21niqes8z2QsBTF3TRQ39SaHM8wnOTeC8d0uSgyrp9b7higHd0SDJVJZT0Bz5AlpYfCO/gpEW51XrKKeud7vImj8nGPDHnENN0Ie0UVYZ5+V1zlr0BBI7LX01MtzUOgSldDX0lif7IZWW4XEv40ojWyYJNQwO/gwyDrdAq+kl+xZu7LmBhngcqd02+X6w4SbdgYg2flu25Td0MME0DEsXKiZYf7kniTrKgCs4kJAmidCDYlYRt43dlM69pB5jVD/u4r3O+erTapH/O1EDhsdA9y0aYpKOv26ssYU+ZXK/nax+Heu0giflm7ENTCblKTPCtpG1DBthhX6Ml0AYjZF1cUaaAvpN8UjElxQ9r+PSwXloSnf25/r9UOBs1uco8VDwbx5cM0SpdYm6ERtLqGRYrG2SDJ8yLgiCE9EK9n3uQExyrTMKWzVAc= WireguardVPN
        EOT
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + droplet_output = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

digitalocean_ssh_key.default: Creating...
digitalocean_ssh_key.default: Creation complete after 1s [id=43499750]
digitalocean_droplet.web: Creating...
digitalocean_droplet.web: Still creating... [10s elapsed]
digitalocean_droplet.web: Still creating... [20s elapsed]
digitalocean_droplet.web: Still creating... [30s elapsed]
digitalocean_droplet.web: Creation complete after 31s [id=447469336]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

droplet_output = "159.223.113.207"
```

All pretty standard stuff. Nice! It only took about 30 seconds on my machine to spin up a droplet and start provisioning it. It is worth noting, though, that the setup script will take a few minutes to run. Before we log into our new droplet, let's take a quick look at the setup script that we are running.

```
#!/usr/bin/env sh
set -e
set -u

# Set the listen port used by Wireguard. This is the default, so feel free to change it.
LISTENPORT=51820
CONFIG_DIR=/root/wireguard-conf

umask 077
mkdir -p $CONFIG_DIR/client

# Install wireguard
apt update && apt install -y wireguard

# Generate a public/private key pair for the "server".
wg genkey > $CONFIG_DIR/privatekey
wg pubkey < $CONFIG_DIR/privatekey > $CONFIG_DIR/publickey

# Generate a public/private key pair for the "client".
wg genkey > $CONFIG_DIR/client/privatekey
wg pubkey < $CONFIG_DIR/client/privatekey > $CONFIG_DIR/client/publickey

# Generate the server config.
echo "[Interface]
Address = 10.66.66.1/24,fd42:42:42::1/64
ListenPort = $LISTENPORT
PrivateKey = $(cat $CONFIG_DIR/privatekey)

### Client config
[Peer]
PublicKey = $(cat $CONFIG_DIR/client/publickey)
AllowedIPs = 10.66.66.2/32,fd42:42:42::2/128
" > /etc/wireguard/do.conf

# Generate the client config. This will need to be copied to your machine.
# Note: the server public key is read from $CONFIG_DIR (not the current
# working directory), and the config is written into $CONFIG_DIR so the
# ssh copy command later in this post can find it.
echo "[Interface]
PrivateKey = $(cat $CONFIG_DIR/client/privatekey)
Address = 10.66.66.2/32,fd42:42:42::2/128
DNS = 1.1.1.1,1.0.0.1

[Peer]
PublicKey = $(cat $CONFIG_DIR/publickey)
Endpoint = $(curl -s icanhazip.com):$LISTENPORT
AllowedIPs = 0.0.0.0/0,::/0
" > $CONFIG_DIR/client-config.conf

wg-quick up do

# Add iptables rules to forward internet traffic through this box.
# We are assuming our Wireguard interface is called do and our
# primary public-facing interface is called eth0.
iptables -I INPUT -p udp --dport 51820 -j ACCEPT
iptables -I FORWARD -i eth0 -o do -j ACCEPT
iptables -I FORWARD -i do -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
ip6tables -I FORWARD -i do -j ACCEPT
ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Enable routing on the server.
echo "net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1" > /etc/sysctl.d/wg.conf
sysctl --system
```

As you can see, it is pretty straightforward. All you really need to do is:

On the “server” side:

1. Generate a private key and derive a public key from it for both the “server” and the “client”.
2. Create a “server” config that tells the droplet what address to bind to for the Wireguard interface, which private key to use to secure that interface, and what port to listen on.
3. The “server” config also needs to know which peers, or “clients”, to accept connections from, specified in the `AllowedIPs` block (in this case we are just specifying one), along with the public key of the “client” that will be connecting.

On the “client” side:

1. Create a “client” config that tells our machine what address to assign to the Wireguard interface (it obviously needs to be on the same subnet as the interface on the server side).
2. The client needs to know which private key to use to secure the interface.
3. It also needs the public key of the “server”, the public IP address or hostname of the “server” it is connecting to, and the port it is listening on.
4. Finally, it needs to know what traffic to route over the Wireguard interface. In this example we are simply routing all traffic, but you could restrict this as you see fit.

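As a quick sketch of that last point, the client's `[Peer]` section could restrict `AllowedIPs` to just the VPN subnet instead of routing everything (a "split tunnel"). The placeholder values below are illustrative, not from the original script, but the subnet matches the one used above:

```
[Peer]
PublicKey = <server public key>
Endpoint = <droplet IP>:51820
# Only traffic destined for the VPN subnet goes through the tunnel,
# instead of all traffic (0.0.0.0/0,::/0):
AllowedIPs = 10.66.66.0/24,fd42:42:42::/64
```

With this config, regular internet traffic leaves via your normal connection, while anything addressed to `10.66.66.x` goes over Wireguard.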
Now that we have our configs in place, we need to copy the client config to our local machine. The following command should work as long as you replace the IP address with the IP address of your newly created droplet:

```
## Make sure you have Wireguard installed on your local machine as well.
## https://wireguard.com/install

## Copy the client config to our local machine and move it to our wireguard directory.
$ ssh -i tf-digitalocean root@157.230.177.54 -- cat /root/wireguard-conf/client-config.conf | sudo tee /etc/wireguard/do.conf
```

Before we try to connect, let’s log into the server and make sure everything is set up correctly:

```
$ ssh -i tf-digitalocean root@159.223.113.207
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

  System information as of Wed Sep 25 13:19:02 UTC 2024

  System load:  0.03              Processes:             113
  Usage of /:   2.1% of 77.35GB   Users logged in:       0
  Memory usage: 6%                IPv4 address for eth0: 157.230.221.196
  Swap usage:   0%                IPv4 address for eth0: 10.10.0.5

Expanded Security Maintenance for Applications is not enabled.

70 updates can be applied immediately.
40 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

New release '24.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Wed Sep 25 13:16:25 2024 from 74.221.191.214
root@wireguard:~#
```

Awesome! We are connected. Now let’s check the Wireguard interface using the `wg` command. If our config is correct, we should see an interface line and one peer line like below. If the peer line is missing, something is wrong with the configuration, most likely a mismatch between the public and private keys:

```
root@wireguard:~# wg
interface: do
  public key: fTvqo/cZVofJ9IZgWHwU6XKcIwM/EcxUsMw4voeS/Hg=
  private key: (hidden)
  listening port: 51820

peer: 5RxMenh1L+rNJobROkUrub4DBUj+nEUPKiNe4DFR8iY=
  allowed ips: 10.66.66.2/32, fd42:42:42::2/128
root@wireguard:~#
```

So now we should be ready to go! On your local machine, go ahead and try it out:

```
## Start the interface with wg-quick up [interface_name]
$ sudo wg-quick up do
[sudo] password for mikeconrad:
[#] ip link add do type wireguard
[#] wg setconf do /dev/fd/63
[#] ip -4 address add 10.66.66.2/32 dev do
[#] ip -6 address add fd42:42:42::2/128 dev do
[#] ip link set mtu 1420 up dev do
[#] resolvconf -a do -m 0 -x
[#] wg set do fwmark 51820
[#] ip -6 route add ::/0 dev do table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] ip6tables-restore -n
[#] ip -4 route add 0.0.0.0/0 dev do table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n

## Check our config
$ sudo wg
interface: do
  public key: fJ8mptCR/utCR4K2LmJTKTjn3xc4RDmZ3NNEQGwI7iI=
  private key: (hidden)
  listening port: 34596
  fwmark: 0xca6c

peer: duTHwMhzSZxnRJ2GFCUCHE4HgY5tSeRn9EzQt9XVDx4=
  endpoint: 157.230.177.54:51820
  allowed ips: 0.0.0.0/0, ::/0
  latest handshake: 1 second ago
  transfer: 1.82 KiB received, 2.89 KiB sent

## Make sure we can ping the outside world
mikeconrad@pop-os:~/projects/wireguard-terraform-digitalocean$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=28.0 ms
^C
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 27.991/27.991/27.991/0.000 ms

## Verify our traffic is actually going over the tunnel.
$ curl icanhazip.com
157.230.177.54
```

We should also be able to ssh into our instance over the VPN using the `10.66.66.1` address:

```
$ ssh -i tf-digitalocean root@10.66.66.1
The authenticity of host '10.66.66.1 (10.66.66.1)' can't be established.
ED25519 key fingerprint is SHA256:E7BKSO3qP+iVVXfb/tLaUfKIc4RvtZ0k248epdE04m8.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:130: [hashed name]
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.66.66.1' (ED25519) to the list of known hosts.
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-113-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

  System information as of Wed Sep 25 13:32:12 UTC 2024

  System load:  0.02              Processes:             109
  Usage of /:   2.1% of 77.35GB   Users logged in:       0
  Memory usage: 6%                IPv4 address for eth0: 157.230.177.54
  Swap usage:   0%                IPv4 address for eth0: 10.10.0.5

Expanded Security Maintenance for Applications is not enabled.

73 updates can be applied immediately.
40 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

New release '24.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

root@wireguard:~#
```

Looks like everything is working! If you run the script from the repo, you will have a fully functioning Wireguard VPN in less than 5 minutes! Pretty cool stuff! This article was not meant to be exhaustive, but rather a simple primer to get your feet wet. The setup script I used is heavily inspired by [angristan/wireguard-install](https://github.com/angristan/wireguard-install). Another great resource is the [unofficial Wireguard docs repo](https://github.com/pirate/wireguard-docs).
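One housekeeping note: the droplet bills for as long as it exists, so it is worth knowing how to tear everything down when you are done experimenting. A minimal sketch, assuming you run it from the same directory as the Terraform config used above:

```
## Destroy the droplet and SSH key created by this config.
## Terraform prints a destruction plan; review it before confirming with 'yes'.
$ terraform destroy
```

Terraform tracks both resources in its state file, so this single command removes everything the earlier `apply` created.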
---
author: mikeconrad
categories:
- Software Engineering
date: "2024-11-20T16:07:50Z"
guid: https://hackanooga.com/?p=509
id: 509
image: /wp-content/uploads/2024/03/hilger-portal-home.webp
tags:
- Portfolio
title: The case for recording user sessions
draft: true
---

About six months or so ago, I started making the case for why we should start recording user sessions. We decided on [Sentry Session Replays](https://sentry.io/product/session-replay/) since we were already using Sentry for error reporting. While I can definitely appreciate the privacy concerns around this technology, for Paragon it made sense. Plus, Sentry has some PII controls built in that will automatically scrub things like text inputs. I believe you can take it a step further by setting up a relay server, but I haven't gone down that road yet.

I won't get into all the specifics in this post, but I did want to highlight a few reasons why you should consider capturing user sessions for your website/app.

**Disclaimer: You will likely only capture a small sample size of sessions, because some people use ad blockers, Brave browser, etc. That being said, take the data gathered from these sessions with a grain of salt, remembering that they don't reflect the patterns of EVERY user of your site.**

## 1) Increased visibility into bugs

This is one of the most important reasons to adopt this technology. One of the great things about Sentry (and other similar tools) is that they record every DOM interaction and play them back in an iframe. This means you can inspect the replays