<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Spencer Smolen]]></title><description><![CDATA[Linux and automation blog including posts on topics ranging from DevOps to shell scripting. Tutorials and code abound!]]></description><link>https://spencersmolen.com/</link><image><url>https://spencersmolen.com/favicon.png</url><title>Spencer Smolen</title><link>https://spencersmolen.com/</link></image><generator>Ghost 5.44</generator><lastBuildDate>Mon, 09 Mar 2026 04:33:11 GMT</lastBuildDate><atom:link href="https://spencersmolen.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Deploying a 3-Tier Web App with Docker Compose]]></title><description><![CDATA[<p></p><p>This weekend I was working on setting up a 3-tier deployment of an application I&apos;ve been pretty fond of recently: <a href="https://ghost.org/?ref=spencersmolen.com">Ghost</a>. It&apos;s a publishing and blogging platform that has some really awesome features. It&apos;s fun to mess with in its own right, but if</p>]]></description><link>https://spencersmolen.com/deploying-a-3-tier-architecture-web-app-with-docker-compose/</link><guid isPermaLink="false">6451ae103dd32a18e63da3a9</guid><category><![CDATA[Docker]]></category><category><![CDATA[docker-compose]]></category><category><![CDATA[web-application]]></category><category><![CDATA[system-design]]></category><dc:creator><![CDATA[Spencer Smolen]]></dc:creator><pubDate>Wed, 03 May 2023 01:23:25 GMT</pubDate><content:encoded><![CDATA[<p></p><p>This weekend I was working on setting up a 3-tier deployment of an application I&apos;ve been pretty fond of recently: <a href="https://ghost.org/?ref=spencersmolen.com">Ghost</a>. It&apos;s a publishing and blogging platform that has some really awesome features. 
It&apos;s fun to mess with in its own right, but if you wanted to deploy Ghost in a production setting, you&apos;d want to deploy more than just Ghost in order to do it effectively. </p><p>I drafted up a <code>docker-compose.yml</code> file and NGINX configuration to explore this idea and dive into what a production-ready Ghost deployment might look like. Here&apos;s a link to the finished product for those who want to dive right in:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://gitlab.com/spencersmolen/ghost-three-tier?ref=spencersmolen.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Spencer Smolen / 3-Tier Ghost Site &#xB7; GitLab</div><div class="kg-bookmark-description">GitLab.com</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://gitlab.com/assets/favicon-72a2cad5025aa931d6ea56c3201d1f18e68a8cd39788c7c80d5b2b82aa5143ef.png" alt><span class="kg-bookmark-author">GitLab</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://gitlab.com/uploads/-/system/project/avatar/45611699/3tier.png" alt></div></a></figure><h2 id="3-tiered-application-architecture-explained">3-Tiered Application Architecture Explained</h2><p>This is a working example of a 3-tier Ghost deployment. There are a ton of great articles and resources on 3-tier architecture, so I won&#x2019;t go into too much detail here on the ins and outs of the framework. 
Below is a diagram showing how this application, and others that follow the framework, are organized.</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/05/3-Tier-Architecture.png" class="kg-image" alt loading="lazy" width="677" height="1065" srcset="https://spencersmolen.com/content/images/size/w600/2023/05/3-Tier-Architecture.png 600w, https://spencersmolen.com/content/images/2023/05/3-Tier-Architecture.png 677w"></figure><p>The general idea is to separate the application into 3 parts: a frontend, a backend, and a middle tier that can speak to both the front end and the backend and prevents them from having to talk directly to each other.</p><p>Generally, there are a few reasons you might want to do this, but all of the advantages draw from one main fact: by doing this you achieve greater isolation of the functional components that comprise your application. Among the benefits are:</p><ul><li>Increased security</li><li>Easier maintenance</li><li>Protection against single points of failure</li></ul><p>In a proper 3-tier application, only <strong>one</strong> of the three tiers is accessible to the end user. This is the part of the application you interface with as you use it, and it&#x2019;s typically accessible via the Internet. This is the <em>presentation layer</em>.</p><p>Behind that is the real heart of the application, called the <em>application layer</em>. This is the workhorse of the application, and it is usually an API of sorts whose job is just to compute and hand off the resulting work to the presentation layer.</p><p>Finally, there&#x2019;s the <em>data</em> layer. This is the store of data that any application typically needs to run over time. 
This is where long-term results are stored, and it is referenced by the application layer while it does its heavy lifting.</p><div class="kg-card kg-callout-card kg-callout-card-yellow"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text"><em>In a typical 3-tier deployment, communications between the application tier and the backend are <strong>database queries</strong>, and communications between the front end and the application tier are <strong>API calls</strong>.</em></div></div><h2 id="an-implementation-using-ghost">An Implementation using Ghost</h2><p>This repository comes in the form of a Docker Compose project. It is composed of 3 Docker containers, each corresponding to one of the tiers in the framework. Ghost, for those unfamiliar, is a blogging platform of sorts for everyone from at-home bloggers to massive publications and enterprise customers. The 3 containers in this project are:</p><h3 id="nginx">NGINX</h3><p>A reverse proxy whose job is just to interface with the outside world and be a middleman between Ghost and whoever is trying to access its content. Think of it like a courier who delivers pages on demand and goes back to the application to fetch pages as people request them.</p><h3 id="ghost">Ghost</h3><p>This is the heart of the application. This is the part that generates web content, allows people to make blog posts, organizes all the content in a meaningful way so it can be searched, prevents things from crashing, etc.</p><h3 id="mysql">MySQL</h3><p>This is the database where Ghost stores all of its content that&#x2019;s needed for long-term use. This is a database much like any other, nothing too fancy.</p><h2 id="using-the-repository-to-deploy-ghost">Using the Repository to Deploy Ghost</h2><p>Below is the <code>docker-compose.yml</code> file that does the heavy lifting of setting up these 3 servers for us:</p><pre><code class="language-yaml">---
version: &apos;3&apos;
services:
  db:
    image: mysql:8
    expose: [3306, 33060]
    environment:
      MYSQL_DATABASE: ghost
      MYSQL_USER: ghost
      MYSQL_PASSWORD: &quot;$SQL_USER_PASSWORD&quot;
      MYSQL_ROOT_PASSWORD: &quot;$SQL_ROOT_PASSWORD&quot;
    networks: [backend_net]
    volumes: [mysql_data:/var/lib/mysql]
    healthcheck:
      test: [&apos;CMD-SHELL&apos;, &apos;mysqladmin ping -h 127.0.0.1 --password=&quot;$$(cat /run/secrets/db_password)&quot; --silent&apos;]
      interval: 3s
      retries: 5
      start_period: 30s
    secrets: [ db_password ]
  app:
    image: ghost:5
    depends_on:
      db:
        condition: service_healthy
    environment:
      database__client: mysql
      database__connection__host: db
      database__connection__user: ghost
      database__connection__database: ghost
      database__connection__password: &quot;$SQL_USER_PASSWORD&quot;
      url: https://$NGINX_HOST:$NGINX_PORT
    networks: [backend_net, frontend_net]
  web:
    image: nginx:stable
    ports: [$NGINX_PORT:443]
    volumes: [./nginx/:/etc/nginx/templates]
    networks: [frontend_net]
    depends_on: [app]
    environment:
      NGINX_HOST: $NGINX_HOST
      NGINX_PORT: $NGINX_PORT
    secrets: [ssl_cert, ssl_key]
networks:
  frontend_net:
    driver: bridge
  backend_net:
    driver: bridge
volumes:
  mysql_data:
secrets:
  ssl_cert:
    file: ./secrets/nginx.crt
  ssl_key:
    file: ./secrets/nginx.key
  db_password:
    file: ./secrets/db_password</code></pre><p>Before bringing the stack up, you&#x2019;ll have to create both a private key and a certificate (you&#x2019;ll need both files) and install them on your machine so you don&#x2019;t get that annoying self-signed certificate warning.</p><p>Once you&#x2019;ve done that, create a folder called <code>./secrets</code> wherever you downloaded this repository and place them in there after naming the cert and the key <code>nginx.crt</code> and <code>nginx.key</code>, respectively.</p><p>After that, you&#x2019;ll be ready to use the <code>Makefile</code> to prep the installation. In the folder with the <code>Makefile</code>, run:</p><pre><code>make compose-file
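# Prerequisite: if you have not generated the self-signed pair named above yet,
# one way to do it (an illustrative sketch -- swap in your own domain for the CN):
mkdir -p secrets
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj &quot;/CN=your.domain&quot; -keyout secrets/nginx.key -out secrets/nginx.crt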
make passwords</code></pre><p>After that, you can go ahead and run:</p><pre><code>docker-compose up -d</code></pre><p>in the folder where the <code>docker-compose.yml</code> file is generated, and it should start booting up the containers! At this point, you should be able to navigate in a browser to the domain name you made the certificate out to and see the fresh installation of Ghost. In my example, I was messing around with the domain <code>ziinc.dev</code>.</p><p>Although this project is composed of containers, it&apos;s designed in a way that allows you to have persistent data beyond the life of any one of the containers. This is achieved by using a Docker volume to hold the database&apos;s <code>/var/lib/mysql</code> directory, which is where MySQL saves its data. The lines below achieve this:</p><pre><code>volumes:
  mysql_data:</code></pre><p>By structuring the application this way, we can swap any container out with a new one should it go down or get destroyed, so long as the Docker volume is managed correctly.</p><p>Finally, if you&apos;re feeling adventurous, you can easily take this Docker Compose file and use <code>kompose</code> to convert it into a set of Kubernetes manifests for a more advanced deployment. Just download it from <a href="https://kompose.io/installation/?ref=spencersmolen.com">here</a> and run the following:</p><pre><code>kompose convert -f docker-compose.yml
kubectl apply -f .</code></pre><p>Enjoy!</p>]]></content:encoded></item><item><title><![CDATA[Creating an Automated RPM Build Pipeline using GitHub Actions]]></title><description><![CDATA[<p>There&apos;s something awesome about Linux packages. Being able to freely access repositories of pre-built software has always been a core part of the Linux universe.</p><p>Recently I&apos;ve been building a lot of packages for <a href="https://gitlab.com/spencersmolen/elx?ref=spencersmolen.com">eLX</a> and I&apos;ve realized packages can be relatively simple to</p>]]></description><link>https://spencersmolen.com/creating-an-automated-rpm-build-pipeline-using-github-actions/</link><guid isPermaLink="false">644f747b3dd32a18e63da10e</guid><category><![CDATA[CI/CD]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[Github Actions]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Spencer Smolen]]></dc:creator><pubDate>Tue, 02 May 2023 13:19:21 GMT</pubDate><content:encoded><![CDATA[<p>There&apos;s something awesome about Linux packages. Being able to freely access repositories of pre-built software has always been a core part of the Linux universe.</p><p>Recently I&apos;ve been building a lot of packages for <a href="https://gitlab.com/spencersmolen/elx?ref=spencersmolen.com">eLX</a> and I&apos;ve realized packages can be relatively simple to put together and the ability to archive and distribute your code is extremely powerful. Contrary to what you might think, you don&apos;t have to be a coder to be able to reap the benefits of building &amp; packaging software. What was fascinating to me was that like other tools hijacked from the Linux development community, e.g. GNU make, there&apos;s massive value in these tools for system administrators as well.</p><p>First off &#x2013; &#xA0;what is a package, anyways? We&apos;re talking about <code>.rpm</code> and <code>.deb</code> files. 
These are the files that tools like <code>dnf</code> and <code>apt</code> fetch when you install software. They are basically archives similar to <code>.rar</code> or <code>.zip</code> files: just compressed collections of other files, i.e. nothing fancy. </p><p>In the world of distributions downstream from Fedora, we use <code>.rpm</code> files to package software, while projects downstream from Debian use <code>.deb</code> files. All <code>.rpm</code> files start as a <code>.spec</code> file, which defines how to turn some raw source code into a built binary application and package it into an <code>.rpm</code>. Just to get an idea of what I mean, here&apos;s an actual spec to build <code>lnav</code>, a command-line log navigator written in C++:</p><pre><code>Name:          lnav
Version:       0.11.1
Release:       1%{?dist}
Summary:       Curses-based tool for viewing and analyzing log files
License:       BSD
 
URL:           http://lnav.org
Source0:       https://github.com/tstack/lnav/releases/download/v%{version}/%{name}-%{version}.tar.bz2
 
BuildRequires: bzip2-devel
BuildRequires: gcc-c++
BuildRequires: libarchive-devel
BuildRequires: libcurl-devel
BuildRequires: make
BuildRequires: ncurses-devel
BuildRequires: openssh
BuildRequires: openssl-devel
BuildRequires: pcre2-devel
BuildRequires: readline-devel
BuildRequires: sqlite-devel
BuildRequires: zlib-devel
 
%description
%{name} is an enhanced log file viewer.
 
 
%prep
%setup -q
 
%build
%configure --disable-static --disable-silent-rules
%make_build
%install
%make_install
 
%files
%doc AUTHORS NEWS.md README.md
%license LICENSE
%{_bindir}/%{name}
%{_mandir}/man1/%{name}.1*
</code></pre><p>Believe it or not, this file has everything needed to be fed into a program called <code>rpmbuild</code> to produce a complete <code>.rpm</code>. Getting into the nitty-gritty of the spec file is beyond the scope of this article, but let&apos;s suppose you already have one. Wouldn&apos;t it be nice to automate the compiling of your code and the building of your packages with this spec file? The technology of compiling and packaging software is nowhere near new, but how can we marry the old world of packaging <code>.rpm</code> files with the new world of CI/CD?</p><h2 id="the-world-of-cicd-pipelines">The World of CI/CD Pipelines</h2><p>In the past few years, CI/CD has become huge. <em>Continuous Integration</em> &amp; <em>Continuous Delivery</em> refers to automating many of the tasks around building, testing, and releasing code that used to be done by hand. CI/CD revolves around the idea of setting in motion a cascading sequence of events once a coder finishes and commits some piece of code to the code base.</p><p>One of the younger CI/CD pipeline tools is GitHub Actions. Recently I built a pipeline that automatically packages up freshly committed code into both an <code>.rpm</code> and a <code>.deb</code> and publishes them as release assets in GitHub when tagged with a new version number. This is huge! Once the code is committed, the entire process of building and publishing it in a consumable package form is automated.</p><p>The best news is that after front-loading the work by building the pipeline, it requires almost zero maintenance. So if you&apos;re interested in sharing code, scripts, or configurations, or keeping snapshots of them to archive yourself, e.g. your dotfiles, you&apos;re about to have the tools.</p><h2 id="the-build-pipeline">The Build Pipeline</h2><p>The pipeline walks through four &quot;jobs&quot;, as they&apos;re called in most CI/CD frameworks. 
Each job completes a task that you define, and each runs in its own Docker container that is destroyed when the job finishes. Here is a brief description of each job:</p><ol><li>First, it&apos;s going to download a copy of the most recent commit as raw source code. The pipeline only runs when there is a new version tagged on one of the commits, e.g. v1.4. This typically occurs when new features have been added or some sizable amount of work has been done and it&apos;s time for a new release. We&apos;re gonna download those plain text files and archive them in a file named something like <code>softwarename-1.4.tar.gz</code>. </li><li>Next, we&apos;re going to take our <code>.spec</code> file which, remember, is like the recipe for how to build the software from source. We have the source code we downloaded in the last step, and now we&apos;ve got this <code>.spec</code> file that has our instructions on how to build it. We&apos;re gonna run the recipe on the source code we downloaded in <code>softwarename-1.4.tar.gz</code> using a tool called <code>rpmbuild</code> and it&apos;s going to make our package.</li><li>Since half the Linux world speaks <code>.rpm</code> and the other half speaks <code>.deb</code>, we&apos;re going to use an awesome tool called <code>alien</code> to convert our <code>.rpm</code> to a <code>.deb</code> with minimal effort.</li><li>Finally, the pipeline is going to take both of these packages, which it has <em>cached</em> in the process of stepping through the pipeline, and (1) create a new release in GitHub and (2) publish both our Linux packages of our freshly baked software onto GitHub so people can freely download them.</li></ol><p>Alright!</p><h2 id="walking-through-the-jobs">Walking Through the Jobs</h2><p>GitHub Actions, like many other modern automation tools, is a YAML-based tool. However, rather than having a rigid API like some others, the entire system is itself somewhat &quot;package&quot; based. 
People on GitHub <a href="https://github.com/marketplace?type=actions&amp;ref=spencersmolen.com">post modules</a>, referred to as &quot;Actions&quot;, that you can reuse. Using one of these published Actions usually just amounts to a few lines of YAML to carry out some task. To use a published Action, you reference it with the keyword <code>uses</code> in your workflow. For example:</p><pre><code> jobs:
   job1:
     steps:
      - name: Checkout repository
        uses: actions/checkout@v2</code></pre><p>You&apos;ll see them throughout the pipeline.</p><h3 id="job-1">Job #1:</h3><pre><code>  build_tarball:
    name: Build source archive
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Replace version in RPM spec
        run: sed -Ei &apos;s/(^Version:[[:space:]]*).*/\1${{github.ref_name}}/&apos; ${{ vars.PKG_NAME }}.spec

      - name: Create source archive
        run: tar -czvf ${{ vars.PKG_NAME }}-${{ github.ref_name }}.tar.gz *

      - name: Upload source archive as artifact
        uses: actions/upload-artifact@v3
        with:
          name: ${{ vars.PKG_NAME }}-${{ github.ref_name }}.tar.gz
          path: ${{ vars.PKG_NAME }}-${{ github.ref_name }}.tar.gz</code></pre><p>First, we use the <code>actions/checkout@v2</code> &quot;action&quot; to pull down our code into the current execution environment, which, for what it&apos;s worth, is a container executing the steps of your pipeline.</p><p>Next, we go ahead and perform a quick search and replace using <code>sed</code> to update the version number in our spec file to match the tag that triggered the pipeline in the first place. Then, we&apos;re going to archive the code by creating our <code>tar</code> file. </p><p>Finally, we&apos;re going to go ahead and upload that code as what&apos;s called an artifact. This isn&apos;t the final upload to our GitHub Release page. Rather, it&apos;s a way of putting it aside while the rest of the pipeline runs. Because each job runs in a new container, we need to make use of this artifact cache often in order to pass files from one job to the next.</p><h3 id="job-2">Job #2:</h3><p>Here&apos;s where the action is:</p><pre><code>  build_rpm:
    name: Build .rpm package
    needs: build_tarball
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Replace version in RPM spec so correct source is downloaded when building RPM
        run: sed -Ei &apos;s/(^Version:[[:space:]]*).*/\1${{github.ref_name}}/&apos; ${{ vars.PKG_NAME }}.spec

      - name: Run rpmbuild on RPM spec to produce package
        id: rpm
        uses: naveenrajm7/rpmbuild@master
        with:
          spec_file: ${{ vars.PKG_NAME }}.spec

      - name: Upload .rpm package as artifact
        uses: actions/upload-artifact@v3
        with:
          name: ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.rpm
          path: rpmbuild/RPMS/${{ env.ARCH }}/*.rpm
</code></pre><p>Because this new job starts in a fresh container, we&apos;re going to start it off just like we did the first: by checking out our project with the <code>actions/checkout@v2</code> action and updating the version in the <code>.spec</code> file.</p><p>Then we run the <code>naveenrajm7/rpmbuild@master</code> action, which basically runs two <code>rpmbuild</code> commands behind the scenes to produce the rpm. Finally, we upload the <code>.rpm</code> as an artifact just like we did with the raw source code in the first job so we can have access to it in future jobs.</p><h3 id="job-3">Job #3:</h3><pre><code>  build_deb:
    name: Build .deb package
    needs: build_rpm
    runs-on: ubuntu-latest
    steps:
      - name: Download .rpm artifact
        uses: actions/download-artifact@v3
        id: download
        with:
          name: ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.rpm

      - name: Convert .rpm to .deb
        run: |
          sudo apt install -y alien
          sudo alien -k --verbose --to-deb *.rpm

      - name: Upload .deb package as artifact
        uses: actions/upload-artifact@v3
        with:
          name: ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.deb
          path: ${{ vars.PKG_NAME }}*.deb</code></pre><p>You&apos;re probably starting to see the pattern here. This time, instead of checking out our code, we&apos;re going to download one of the artifacts we uploaded in the earlier steps. We&apos;re going to download the <code>.rpm</code> from the last step and convert it to a <code>.deb</code> for Debian-based systems. To do that, we&apos;ll run the <code>alien</code> command and upload the resultant <code>.deb</code> as an artifact with the others.</p><h3 id="job-4">Job #4:</h3><p>Finally, in the 4th job, we&apos;re going to create our release! This is the GitHub event that these files are going to be uploaded with:</p><pre><code>  release:
    name: Create release with all assets
    needs: [build_tarball, build_rpm, build_deb]
    runs-on: ubuntu-latest
    steps:
      - name: Download cached rpm, deb, and tar.gz artifacts
        uses: actions/download-artifact@v3

      - name: Release
        uses: softprops/action-gh-release@v1
        with:
          files: |
            ${{ vars.PKG_NAME }}-${{ github.ref_name }}.tar.gz/*.tar.gz
            ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.rpm/**/*.rpm
            ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.deb/**/*.deb</code></pre><p>In this job, we&apos;re going to fetch all those &quot;artifacts&quot; we kept throwing up into the cache at the end of each job. Without any other arguments, the following line will download all items saved in the cache:</p><pre><code>      - name: Download cached rpm, deb, and tar.gz artifacts
        uses: actions/download-artifact@v3</code></pre><p>After that, what we&apos;ve actually pulled down is:</p><ol><li>an archive of the raw code in the format <code>softwarename-1.4.tar.gz</code></li><li>an <code>.rpm</code> of the build, e.g. <code>softwarename-1.4.rpm</code></li><li>a <code>.deb</code> of the build, e.g. <code>softwarename-1.4.deb</code></li></ol><p>Last, we have quite a useful action here called <code>softprops/action-gh-release@v1</code> that allows us to create a release and attach all our assets to it in the same step. In this step, we upload our artifacts, and voil&#xE0;! Our code has been shared. It will now be visible on the &quot;Releases&quot; page of our repo:</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/05/image.png" class="kg-image" alt loading="lazy" width="2000" height="1483" srcset="https://spencersmolen.com/content/images/size/w600/2023/05/image.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/05/image.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/05/image.png 1600w, https://spencersmolen.com/content/images/2023/05/image.png 2082w" sizes="(min-width: 720px) 720px"></figure><p>Keep in mind this process is totally automated. Once the code has been committed, the pipeline starts running. You can keep tabs on it by navigating to the &quot;Actions&quot; tab at the top of your repository. There&apos;s a pretty comprehensive log for you to go through. 
It will look something like this:</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/05/image-1.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/05/image-1.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/05/image-1.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/05/image-1.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/05/image-1.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Alright, here&apos;s the finished copy of the pipeline below!</p><pre><code>name: Build Linux Packages
on:
  push:
    tags:
      - &quot;*.*.*&quot;
env:
  DIST: el7
  ARCH: noarch

jobs:
  build_tarball:
    name: Build source archive
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Replace version in RPM spec so correct source is downloaded when building RPM
        run: sed -Ei &apos;s/(^Version:[[:space:]]*).*/\1${{github.ref_name}}/&apos; ${{ vars.PKG_NAME }}.spec

      - name: Create source archive
        run: tar -czvf ${{ vars.PKG_NAME }}-${{ github.ref_name }}.tar.gz *

      - name: Upload source archive as artifact
        uses: actions/upload-artifact@v3
        with:
          name: ${{ vars.PKG_NAME }}-${{ github.ref_name }}.tar.gz
          path: ${{ vars.PKG_NAME }}-${{ github.ref_name }}.tar.gz

  build_rpm:
    name: Build .rpm package
    needs: build_tarball
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Replace version in RPM spec so correct source is downloaded when building RPM
        run: sed -Ei &apos;s/(^Version:[[:space:]]*).*/\1${{github.ref_name}}/&apos; ${{ vars.PKG_NAME }}.spec

      - name: Run rpmbuild on RPM spec to produce package
        id: rpm
        uses: naveenrajm7/rpmbuild@master
        with:
          spec_file: ${{ vars.PKG_NAME }}.spec

      - name: Upload .rpm package as artifact
        uses: actions/upload-artifact@v3
        with:
          name: ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.rpm
          path: rpmbuild/RPMS/${{ env.ARCH }}/*.rpm

  build_deb:
    name: Build .deb package
    needs: build_rpm
    runs-on: ubuntu-latest
    steps:
      - name: Download .rpm artifact
        uses: actions/download-artifact@v3
        id: download
        with:
          name: ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.rpm

      - name: Convert .rpm to .deb
        run: |
          sudo apt install -y alien
          sudo alien -k --verbose --to-deb *.rpm

      - name: Upload .deb package as artifact
        uses: actions/upload-artifact@v3
        with:
          name: ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.deb
          path: ${{ vars.PKG_NAME }}*.deb

  release:
    name: Create release with all assets
    needs: [build_tarball, build_rpm, build_deb]
    runs-on: ubuntu-latest
    steps:
      - name: Download cached rpm, deb, and tar.gz artifacts
        uses: actions/download-artifact@v3

      - name: Release
        uses: softprops/action-gh-release@v1
        with:
          files: |
            ${{ vars.PKG_NAME }}-${{ github.ref_name }}.tar.gz/*.tar.gz
            ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.rpm/**/*.rpm
            ${{ vars.PKG_NAME }}-${{ github.ref_name }}-1.${{ env.DIST }}.${{ env.ARCH }}.deb/**/*.deb</code></pre><p>To add it to your project, copy and paste the contents into a file in the <code>.github/workflows</code> folder of your repository. You can name the file anything you want; it will be executed regardless. For reference, you can see how it&apos;s used in my project <a href="https://github.com/kriipke/provii?ref=spencersmolen.com">provii</a> here:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/kriipke/provii?ref=spencersmolen.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - kriipke/provii: provisioning tool to install pre-compiled binaries of your favorite command-line tools</div><div class="kg-bookmark-description">provisioning tool to install pre-compiled binaries of your favorite command-line tools - GitHub - kriipke/provii: provisioning tool to install pre-compiled binaries of your favorite command-line tools</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">kriipke</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/c7a34d0500089a8333a0e09f046818d9e3a4a4a362c075260ecbbf32842e2d57/kriipke/provii" alt></div></a></figure><p>Enjoy!</p>]]></content:encoded></item><item><title><![CDATA[Debugging Bash Scripts with $PS4]]></title><description><![CDATA[<p>Debugging Bash scripts can be a pain, let&apos;s be honest. There aren&apos;t a lot of built-in debugging features nor are there many tools around to aid in the task. 
There is, however, <code>$PS4</code>.</p><p>For those unfamiliar, <code>$PS4</code> is a variable that can be set so that</p>]]></description><link>https://spencersmolen.com/debugging-bash/</link><guid isPermaLink="false">644617833dd32a18e63da016</guid><category><![CDATA[bash]]></category><category><![CDATA[debugging]]></category><category><![CDATA[shell scripting]]></category><dc:creator><![CDATA[Spencer Smolen]]></dc:creator><pubDate>Mon, 24 Apr 2023 06:52:07 GMT</pubDate><content:encoded><![CDATA[<p>Debugging Bash scripts can be a pain, let&apos;s be honest. There aren&apos;t a lot of built-in debugging features, nor are there many tools around to aid in the task. There is, however, <code>$PS4</code>.</p><p>For those unfamiliar, <code>$PS4</code> holds the text printed at the beginning of each line when you run your scripts under <code>set -x</code>, a shell option that prints each line of your script as it&apos;s run. By setting it, you can add custom context to every traced line. The <code>set -x</code> option works like this:</p><p>If I have the following shell script named <code>test.sh</code>:</p><pre><code>#!/bin/bash 

set -x 
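# -x makes the shell print each command (prefixed by the expansion of $PS4) before running it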

TEST_VERBIAGE=&quot;You should see this printed to your terminal&quot;
printf %s &quot;$TEST_VERBIAGE&quot;</code></pre><p>When I run it in the terminal it will look like this:</p><pre><code>&#x276F; ./test.sh  
+ TEST_VERBIAGE=&apos;You should see this printed to your terminal&apos;
+ printf %s &apos;You should see this printed to your terminal&apos;
You should see this printed to your terminal%</code></pre><p>Back to <code>$PS4</code>. The power of <code>$PS4</code> should not be underestimated, but there isn&apos;t much in the bash <code>man</code> page that would lead you to believe there&apos;s much going on with it. Below is what you&apos;ll find in the <code>man</code> page about it:</p><pre><code>The value of this parameter is expanded as with PS1 and the value is printed before each command bash displays during an execution trace.  The first character of the expanded value of PS4 is replicated multiple times, as necessary, to indicate multiple levels of indirection.  The default is ``+ &apos;&apos;.</code></pre><p>That isn&apos;t very helpful, but it does explain that the default is <code>+</code>. If you look above at the example when I ran <code>test.sh</code> in the terminal you&apos;ll notice a <code>+</code> at the beginning of each line that was an executed command (the other line is the output that you would normally see in your terminal, so it&apos;s not included). This is where the output of <code>$PS4</code> is placed.</p><h2 id="crafting-a-ps4-variable">Crafting a PS4 Variable</h2><p>You can include anything you want in your <code>$PS4</code> but it generally makes sense to include things that explain the context of what&apos;s being executed so that when a line fails, you can have some extra information to help you figure out why. You could include the value of certain variables in your script, the time, or anything you want. </p><p>A simple example would be to use <code>$PS4</code> to print out the time in nanoseconds before each line so you can tell how long each command takes to execute when looking back over the trace. 
This would be done by putting the following line at the top of your script:</p><pre><code>PS4=&apos;$(date +%N)&apos;</code></pre><p>One thing that should be noted when crafting your <code>$PS4</code> is that you need to put the contents in single quotes: <code>PS4=&apos;$(date +%N)&apos;</code>. That&apos;s because the value of <code>$PS4</code> is executed at the beginning of each line, over and over, ad nauseam. If you put the contents in double quotes, like <code>PS4=&quot;$(date +%N)&quot;</code>, then anything that&apos;s going to be executed will only be executed <em>once</em> at the beginning of your script when you define the variable and each line will be prefixed by the same content, which isn&apos;t helpful.</p><p>For example, if we used the <code>PS4=&quot;$(date +%N)&quot;</code> example above we would get this:</p><pre><code>&#x276F; ./test.sh  
++ date +%N
+ PS4=&apos;152563673 &apos;
152563673 TEST_VERBIAGE=&apos;You should see this printed to your terminal&apos;
152563673 printf %s &apos;You should see this printed to your terminal&apos;
You should see this printed to your terminal%  </code></pre><p>When what we really wanted was this, which shows the difference in time in nanoseconds:</p><pre><code>&#x276F; ./test.sh  
+ PS4=&apos;$(date +%N) &apos;
885045572 TEST_VERBIAGE=&apos;You should see this printed to your terminal&apos;
890522778 printf %s &apos;You should see this printed to your terminal&apos;
You should see this printed to your terminal%  </code></pre><p>So to get that output our script is going to look like this:</p><pre><code>#!/bin/bash 

set -x 

PS4=&apos;$(date +%N) &apos;
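# single quotes defer the $(date +%N) expansion so it re-runs for every traced line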

TEST_VERBIAGE=&quot;You should see this printed to your terminal&quot;
printf %s &quot;$TEST_VERBIAGE&quot;</code></pre><h2 id="a-solid-default-ps4">A Solid Default <code>$PS4</code></h2><p>Now you may be using <code>$PS4</code> to examine different things in your shell script but in general, I&apos;m going to walk you through the default <code>$PS4</code> that I use for my scripts. Without further ado here it is:</p><pre><code>PS4=&apos;$(tput setaf 4)$(printf &quot;%-12s\\t%.3fs\\t@line\\t%-10s&quot; $(date +%T) $(echo $(date &quot;+%s.%3N&quot;)-&apos;$(date &quot;+%s.%3N&quot;)&apos; | bc ) $LINENO)$(tput sgr 0)&apos;
</code></pre><p>The output looks like this:</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Image-4-24-23-at-2.25-AM.jpg" class="kg-image" alt loading="lazy" width="1504" height="230" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Image-4-24-23-at-2.25-AM.jpg 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Image-4-24-23-at-2.25-AM.jpg 1000w, https://spencersmolen.com/content/images/2023/04/Image-4-24-23-at-2.25-AM.jpg 1504w" sizes="(min-width: 720px) 720px"></figure><p>The first column is the time in the format <code>hours:minutes:seconds</code>. The second column is the number of seconds since the script started running, i.e. how far into the script you are, timewise. The third and fourth columns indicate the line in the script you&apos;re on when executing the command on that line in the trace. </p><p>This last one is massively useful. When your script hits a snag and you get an error, usually the first thing you want to know is which line of the script executed the last command.</p><p>You&apos;re more than welcome to copy and paste this and use it as is. With what you know about <code>$PS4</code> and how it&apos;s used you should be good to go. It may look funny when you copy it into your editor because there is actually a part of this <code>$PS4</code> definition that is not in single quotes as I advised at the beginning of this article. That&apos;s actually on purpose. What it does is capture the time in nanoseconds at the time the variable is defined to use as a reference when calculating the seconds since the script started! A neat little trick.</p><p>Note that the <code>tput</code> commands at the beginning and the end are what create the colored aspect of the output. This is super useful when you&apos;ve got thousands of lines of this mixed in with output from the commands, as it helps you orient yourself while navigating the trace. 
However, to use <code>tput</code> you&apos;ll need <code>ncurses</code> installed. </p><p>This is a non-issue in 99% of situations but if you&apos;re in a Docker container using a barebones image you may not have it, so just take those parts out or install <code>ncurses</code> if you hit a snag. </p><p>Enjoy!</p>]]></content:encoded></item><item><title><![CDATA[Creating a Linux Deployment Server]]></title><description><![CDATA[<p>It is extremely useful to have a workflow set up that lets you install a Linux distro exactly how you need it at the drop of a hat. There are generally two ways to do this:</p><ol><li>Have a deployment server set up to automate the installation</li></ol>]]></description><link>https://spencersmolen.com/creating-a-linux-deployment-server/</link><guid isPermaLink="false">64433a903dd32a18e63d9b92</guid><category><![CDATA[linux]]></category><category><![CDATA[automation]]></category><category><![CDATA[TrueNAS]]></category><dc:creator><![CDATA[Spencer Smolen]]></dc:creator><pubDate>Sat, 22 Apr 2023 11:00:44 GMT</pubDate><content:encoded><![CDATA[<p>It is extremely useful to have a workflow set up that lets you install a Linux distro exactly how you need it at the drop of a hat. There are generally two ways to do this:</p><ol><li>Have a deployment server set up to automate the installation of a new system</li><li>Create a golden image that is either (a) written to physical media, such as a hard drive, or (b) stored as a virtual machine to be cloned or imported by your hypervisor</li></ol><p>In this post, we&apos;re going to cover the first option.</p><p>There are a few ways to do this. One is using an out-of-the-box solution like <a href="https://www.redhat.com/en/technologies/management/satellite?ref=spencersmolen.com">Red Hat Satellite</a>. There are some advantages to doing this. The main advantage is that solutions like these market themselves as turnkey solutions, meaning they have everything you need ready to go. 
</p><p>However, if your goal is to learn, you will definitely be robbing yourself of a lot if you choose to go this route. Additionally, you will be locked into using the software the way it was built. If there&apos;s something you want to do using a turnkey solution, but can&apos;t, you&apos;re stuck. On the other hand, if you build a solution on your own you&apos;ll be able to tweak the system as your needs change.</p><h2 id="design-overview">Design Overview</h2><p>When designing any system, the first step is determining your requirements. For this project, the design constraints are as follows:</p><ul><li>To be able to pick from multiple distributions of Linux.</li><li>To be able to pick from multiple releases of any given Linux distribution.</li><li>To be able to deploy any of the above using either (a) a fully automated install or (b) a manual walk-through of the installation.</li><li>When using an automated install, be able to pick from any number of pre-configured installation sets, e.g. pick from either a &quot;base&quot; install or one pre-configured as an IDM server, etc.</li><li>To be able to deploy any of the above to either a bare-metal server or a virtual machine.</li></ul><p>For the sake of simplicity and ease of use, I&apos;m going to show you how to set all of this up with TrueNAS. TrueNAS Scale is a NAS-oriented variant of Debian that implements all the components we&apos;re going to need to do this and gives you an optional GUI to manage each component. We&apos;re going to be writing all the configuration files from scratch so there will be no sacrifice of functionality or future expansion. </p><p>It&apos;s worth noting that <strong>you can totally use the architecture outlined in this article without using TrueNAS at all</strong>; you&apos;ll just need to deploy each of the components by hand. 
With that in mind, let&apos;s see how we&apos;re going to architect this solution.</p><h3 id="components-of-the-deployment-server">Components of the Deployment Server</h3><p>I&apos;m going to quickly outline the components we&apos;re going to need to make this deployment server possible. After that, we&apos;re going to walk through the boot process and explain how each component is used in the installation process.</p><ol><li><strong>TFTP server</strong> &#x2013; TFTP (Trivial File Transfer Protocol) is a simple file transfer protocol that we&apos;re going to use to make PXE booting possible. PXE (Preboot eXecution Environment) booting is basically a way to bootstrap the boot process using parameters fed over the network at boot. When the machine boots up it will, if configured to, attempt a PXE boot. If it does, and things are configured correctly, it will load the files it needs into memory and run from a small in-memory filesystem. You can do a lot of things with PXE booting but in this scenario, we&apos;ll be using it to load up the installation disk only.</li><li><strong>DHCP server</strong> &#x2013; This is, unlike the rest of these protocols, not a file transfer protocol. Instead, this is something you should already have configured on your network that allows you to connect newly booted machines to your network. It probably runs on your router, but you may have one configured yourself. Either way, we&apos;re not going to be setting up a DHCP server in this tutorial, just modifying the configuration of your existing one for the sake of simplicity.</li><li><strong>FTP server </strong>&#x2013; FTP (File Transfer Protocol) is a protocol that&apos;s been around for a long time and is a nice choice for the miscellaneous files we&apos;re going to need to use during the automated part of the installation. 
This is where we&apos;ll host our kickstart files (I&apos;ll come back to this in a bit) as well as any other files we may need, e.g. configuration files, SSH keys we want to install on the servers, anything really.</li><li><strong>NFS server</strong> &#x2013; NFS (Network File System) is the protocol we&apos;re going to use to host all the OS packages we need to install during the boot process. This is more or less a clone of the contents of the installation DVD; however, we can tailor it to our liking by adding some custom OS packages. </li></ol><p>Alright! Those are the components we need. I know that sounds like a lot but to be honest, TrueNAS is the perfect tool for this job if you&apos;re looking to get this going in a single afternoon. I&apos;m kind of a hardcore do-it-yourselfer and I assure you that you really can sacrifice nothing by using TrueNAS.</p><p>Now let&apos;s get into the way all these components fit in. There are two main installation scenarios we need to think about. I&apos;ll go over them briefly.</p><h3 id="automated-install-overview">Automated Install Overview</h3><p>Below are the steps the installation process will go through during an automated install, highlighting the way each component described in the last section fits into the picture:</p><ol><li>Your machine initializes a PXE boot, which prompts it to receive its network parameters via DHCP. Upon doing so, the machine will be fed two things: (1) the IP address of the <strong>TFTP server </strong>hosting the PXE boot files and (2) the path on the TFTP server of the file to boot from. If (a) your machine is configured to boot in PXE mode and (b) your DHCP server is configured correctly, this should lead you to step 2. 
If not, check your settings as described below.</li><li>Upon receiving the IP address of the TFTP server and the path of the boot file, the machine will then be presented with some options outlined in our <em>PXE configuration file, </em>hosted at <code>/pxelinux.cfg/default</code> on our <strong>TFTP server</strong>. At this point, the available deployment options described in <code>/pxelinux.cfg/default</code> will be displayed on your screen for you to select. Below is a basic example:</li></ol><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/pxeboot2.png" class="kg-image" alt loading="lazy" width="1442" height="788" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/pxeboot2.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/pxeboot2.png 1000w, https://spencersmolen.com/content/images/2023/04/pxeboot2.png 1442w" sizes="(min-width: 720px) 720px"></figure><p>3. You will then manually select which option you want and hit enter. Depending on which option you select, the correct kickstart file will be chosen and pulled down from our <strong>FTP server</strong>. This is made possible by a line like <code>append ... inst.ks=ftp://10.0.0.2/ks/el9/ks.cfg</code> in our <code>/pxelinux.cfg/default</code>. Note that we can configure this to automatically select an option for us so we don&apos;t have to have any user interaction. However, keep in mind the implications of this. It&apos;s nice to have this little bit of human interaction so that you don&apos;t end up wiping a machine that gets connected to your network simply by plugging in the ethernet jack and turning it on!</p><p>4. The kickstart file, e.g. <code>kickstart.cfg</code>, will have a line in it that points the installer to the location of the <strong>NFS server</strong> hosting all the OS repositories with all of the packages we need for the installation as defined in the kickstart file. 
This is made possible by a line like <code>nfs --server=10.2.0.6 --dir=/mnt/store/deployment/mirror/el9/rhel-9.1</code> in our <code>kickstart.cfg</code> file.</p><p>Once that is complete you should have a fully automated install going on as pictured below and you can check back in in a few minutes for your fresh OS ready to go!</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/fedora_install_kickstart-1.png" class="kg-image" alt loading="lazy" width="1068" height="809" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/fedora_install_kickstart-1.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/fedora_install_kickstart-1.png 1000w, https://spencersmolen.com/content/images/2023/04/fedora_install_kickstart-1.png 1068w" sizes="(min-width: 720px) 720px"></figure><p>The non-automated version of this is exactly the same except that in step 3, you will select an option which will, instead of initiating an automated install and showing the image above, present you with the familiar screen shown below allowing you to click through the installer as you normally would.</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Initial-Default-Installation-Summary-RHEL9.png.webp" class="kg-image" alt loading="lazy" width="800" height="600" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Initial-Default-Installation-Summary-RHEL9.png.webp 600w, https://spencersmolen.com/content/images/2023/04/Initial-Default-Installation-Summary-RHEL9.png.webp 800w" sizes="(min-width: 720px) 720px"></figure><h2 id="configuring-the-server">Configuring the Server</h2><p>There are basically 3 steps to getting this up and running:</p><ol><li>Get your services up and running</li><li>Get your configuration files in place</li><li>Get your files that need to be hosted in place</li></ol><p>Alright, here we go!</p><h3 id="configuring-the-services">Configuring the 
Services</h3><p>I&apos;m going to assume you already have a fresh install of TrueNAS configured as there are plenty of articles on how to do this. To be honest it&apos;s pretty simple: just <a href="https://www.truenas.com/download-truenas-scale/?ref=spencersmolen.com">flash the ISO</a> to a flash drive and install it to a disk you can boot from. There are barely any options during the installation; it&apos;s pretty to the point.</p><p>Once you&apos;ve got a TrueNAS installation and storage pool configured you should see something like this when you go to Datasets:</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.37.05-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.37.05-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.37.05-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.37.05-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.37.05-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Go ahead and create a new Dataset within your storage pool by selecting &quot;Add Dataset&quot; off to the right and name it &quot;Deployment&quot;.</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.37.20-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.37.20-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.37.20-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.37.20-AM.png 1600w, 
https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.37.20-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Within that Dataset create 3 more:</p><ol><li> <code>ftp</code></li><li><code>mirror</code></li><li><code>pxelinux</code></li></ol><p>Until you&apos;ve got something that looks like this:</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.39.28-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.39.28-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.39.28-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.39.28-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.39.28-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Next, go over to &quot;Shares&quot; on the left-hand navigation and click &quot;Add&quot; off to the right of &quot;UNIX (NFS) Shares&quot;.</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.52.58-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.52.58-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.52.58-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.52.58-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.52.58-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Go ahead and find the <code>mirror</code> dataset we created under the <code>deployment</code> dataset and click &quot;Save&quot;.</p><figure 
class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.43.45-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.43.45-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.43.45-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.43.45-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.43.45-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>When you do it will ask you if you want to enable the service. Press <em>Enable Service</em>.</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.44.06-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.44.06-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.44.06-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.44.06-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.44.06-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Next, go down to <em>System Settings &gt; Services </em>on the left-hand navigation.</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.39.44-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.39.44-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.39.44-AM.png 1000w, 
https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.39.44-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.39.44-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>You should see something like the following:</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.40.28-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.40.28-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.40.28-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.40.28-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.40.28-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Click the little pencil icon to the right of &quot;TFTP&quot; to configure the TFTP service.</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.58.42-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.58.42-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.58.42-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.58.42-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.58.42-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Go ahead and find the <code>pxelinux</code> dataset we created under the <code>deployment</code> and select the correct IP under <em>Host</em> and click <em>Save</em>.</p><figure class="kg-card kg-image-card"><img 
src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.41.47-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.41.47-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.41.47-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.41.47-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.41.47-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Select the on/off switch under the <em>Running</em> column and also check the checkbox under <em>Start Automatically</em> on the row for TFTP.</p><p>Finally, go up to <em>FTP</em> and click the pencil to edit. At the bottom expand the <em>Advanced Options</em> and (1) check <em>Allow Anonymous Login</em> and (2) select the <code>ftp</code> dataset we created under <code>deployment</code>. 
</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-6.53.42-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1623" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-6.53.42-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-6.53.42-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-6.53.42-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-6.53.42-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Afterward, navigate back to <em>System Settings &gt; Services</em> and check the on/off switch under the <em>Running</em> column and the checkbox on <em>Start Automatically</em> so you have something that looks like this:</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-4.45.05-AM.png" class="kg-image" alt loading="lazy" width="2000" height="1627" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-4.45.05-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-4.45.05-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-4.45.05-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-4.45.05-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Perfect! You now have all the services you need running; you&apos;re just not hosting any of the files yet. 
We&apos;ll cover this in the next section.</p><p>If you chose to set everything up by hand instead of using TrueNAS, now is the time to tune back in for configuring the server.</p><h2 id="deploying-your-pxelinux-files">Deploying Your PXELinux Files</h2><h3 id="the-pxe-configuration-file-pxelinuxcfgdefault">The PXE Configuration File: <code>/pxelinux.cfg/default</code></h3><p>This is where you&apos;re going to give the general deployment options that you are going to select when booting up a new install. You&apos;re going to want to outline all possible deployment scenarios here because this is the only human interaction involved in the whole process if you choose an automated install. If the configuration you want isn&apos;t listed here you&apos;re going to have to select a guided installation and walk through it by hand. This isn&apos;t the end of the world, but I&apos;ve found a fully automated install to be a much nicer experience. After all, you&apos;ve set up a deployment server; you should be enjoying the fruits of your labor, right? :)</p><pre><code>timeout 600
display boot.msg
default 4
prompt  1
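# timeout is in tenths of a second, so 600 = 60s; combined with
# &quot;default 4&quot; an unattended machine falls through to booting its
# local drive after the timeout instead of being reinstalled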

label 1
  menu label ^Install RHEL 9.1 via Kickstart: Bare Metal
  kernel images/RHEL-9.1/vmlinuz
  append initrd=images/RHEL-9.1/initrd.img ip=dhcp inst.ks=ftp://10.2.0.8/ks/el9/ks-baremetal.cfg

label 2
  menu label ^Install RHEL 9.1 via Kickstart: Virtual Machine
  kernel images/RHEL-9.1/vmlinuz
  append initrd=images/RHEL-9.1/initrd.img ip=dhcp inst.ks=ftp://10.2.0.8/ks/el9/ks-vm.cfg

label 3
  menu label ^Install RHEL 9.0 via local mirror
  kernel images/RHEL-9.0/vmlinuz
  append initrd=images/RHEL-9.0/initrd.img ip=dhcp inst.repo=nfs:10.2.0.6:/mnt/store/deployment/mirror/el9/rhel-9.0/
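  # note: no inst.ks on this label; passing only inst.repo starts the
  # normal guided installer, with packages served from the local mirror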

label 4
  menu label Boot from ^local drive
  localboot 0x80


menu end</code></pre><p>A few things to note here. All the paths listed here, including <code>boot.msg</code> and <code>images/..</code>, are relative to the location of the PXE boot file given in your DHCP parameters, which is probably <code>pxelinux.0</code> (we&apos;ll get around to all this in the configuration section below). Getting all these files in the right place can be tricky, so just as a time saver, I&apos;ve created a GitHub repository with a <code>Makefile</code> to automate this for you. </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/kriipke/linux-deployment-server?ref=spencersmolen.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - kriipke/linux-deployment-server</div><div class="kg-bookmark-description">Contribute to kriipke/linux-deployment-server development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">kriipke</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/3fbfde08fb11937e95f4bb4e4bcfc3de0c23f79f776fbc8857ecbcf01d05202b/kriipke/linux-deployment-server" alt></div></a></figure><p>If you open TrueNAS and open the &quot;System Settings&quot; menu on the left-hand navigation you will see an option in the menu for &quot;Shell&quot;. 
Click on that and navigate to <code>/tmp</code> and clone the repository with <code>git clone <a href="https://github.com/kriipke/linux-deployment-server?ref=spencersmolen.com">https://github.com/kriipke/linux-deployment-server</a></code>.</p><figure class="kg-card kg-image-card"><img src="https://spencersmolen.com/content/images/2023/04/Screenshot-2023-04-22-at-5.27.15-AM.png" class="kg-image" alt loading="lazy" width="2000" height="813" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/Screenshot-2023-04-22-at-5.27.15-AM.png 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/Screenshot-2023-04-22-at-5.27.15-AM.png 1000w, https://spencersmolen.com/content/images/size/w1600/2023/04/Screenshot-2023-04-22-at-5.27.15-AM.png 1600w, https://spencersmolen.com/content/images/size/w2400/2023/04/Screenshot-2023-04-22-at-5.27.15-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p> I would highly recommend reading over the <code>README.adoc</code> to get a good understanding of what&apos;s going on here. I would also recommend opening up the <code>Makefile</code> and reading over the shell commands executed at each step to make sure you&apos;re not robbed of learning anything.</p><p>If you decided to read it over it should be pretty apparent how to use it as it&apos;s fairly well documented. If you don&apos;t want to bother yourself with that, after you clone the repository just copy the contents of the folder <code>/tmp/linux-deployment-server/pxelinux</code> into the root of your TFTP server. This can be done with a command like:</p><pre><code>cp -r /tmp/linux-deployment-server/pxelinux/* \
	/mnt/$YOUR_POOL_NAME/deployment/pxelinux/</code></pre><p>On my system, since my pool&apos;s name is <code>store</code>, I&apos;d have something like the following</p><pre><code>/mnt/store/deployment/pxelinux
/mnt/store/deployment/pxelinux/ldlinux.c32
/mnt/store/deployment/pxelinux/images
/mnt/store/deployment/pxelinux/images/RHEL-9.1
/mnt/store/deployment/pxelinux/images/RHEL-9.1/.gitignore
/mnt/store/deployment/pxelinux/images/RHEL-9.0
/mnt/store/deployment/pxelinux/images/RHEL-9.0/.gitignore
/mnt/store/deployment/pxelinux/pxelinux.cfg
/mnt/store/deployment/pxelinux/pxelinux.cfg/default
/mnt/store/deployment/pxelinux/libutil.c32
/mnt/store/deployment/pxelinux/pxelinux.0
/mnt/store/deployment/pxelinux/menu.c32
/mnt/store/deployment/pxelinux/libcom32.c32</code></pre><p>At this point, the only files you&apos;re missing are the <code>vmlinuz</code> and the <code>initrd.img</code> for each release. You can find these files on the installation DVD of each release at <code>/images/pxeboot/vmlinuz</code> and <code>/images/pxeboot/initrd.img</code>.</p><p>Copying these files over from each release is quite laborious and will get annoying really quickly so, if you choose, you can use the <code>Makefile</code> to make this a lot faster by:</p><ol><li>Downloading the DVD installation disk for each release and placing it in the <code>/tmp/linux-deployment-server/iso</code> directory created when you cloned the repository from GitHub.</li><li>Navigating to <code>/tmp/linux-deployment-server</code> and editing the first few lines of the <code>Makefile</code> with the right variables for the ISO &amp; release you need the boot files from.</li><li>Typing <code>make bootimgs</code> in the same folder as the <code>Makefile</code>, and it will place them in the correct directory. Make sure you change the <code>TFTP_ROOT := ./pxelinux</code> line to the correct path in your TrueNAS store (it should be something like <code>/mnt/$YOUR_STORE_NAME/deployment/pxelinux</code>).</li></ol><p>Once you do this you should have the full file listing; check it against the one below:</p><pre><code>/mnt/store/deployment/pxelinux
/mnt/store/deployment/pxelinux/ldlinux.c32
/mnt/store/deployment/pxelinux/images
/mnt/store/deployment/pxelinux/images/RHEL-9.1
/mnt/store/deployment/pxelinux/images/RHEL-9.1/vmlinuz
/mnt/store/deployment/pxelinux/images/RHEL-9.1/initrd.img
/mnt/store/deployment/pxelinux/images/RHEL-9.1/.gitignore
/mnt/store/deployment/pxelinux/images/RHEL-9.0
/mnt/store/deployment/pxelinux/images/RHEL-9.0/.gitignore
/mnt/store/deployment/pxelinux/images/RHEL-9.0/initrd.img
/mnt/store/deployment/pxelinux/images/RHEL-9.0/vmlinuz
/mnt/store/deployment/pxelinux/pxelinux.cfg
/mnt/store/deployment/pxelinux/pxelinux.cfg/default
/mnt/store/deployment/pxelinux/libutil.c32
/mnt/store/deployment/pxelinux/pxelinux.0
/mnt/store/deployment/pxelinux/menu.c32
/mnt/store/deployment/pxelinux/libcom32.c32</code></pre><h3 id="deploying-your-kickstart-files">Deploying Your Kickstart Files</h3><p>The kickstart files are extremely powerful. You can find a nice reference for the most recent version of Enterprise Linux (of all variations including Rocky, CentOS, etc.) <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/performing_an_advanced_rhel_9_installation/index?ref=spencersmolen.com#kickstart-script-file-format-reference_installing-rhel-as-an-experienced-user">here</a>. For Fedora, the most up-to-date reference is <a href="https://docs.fedoraproject.org/en-US/fedora/latest/?ref=spencersmolen.com">here</a>. To deploy these files copy all the files from <code>/tmp/linux-deployment-server/ftp/</code> to <code>/mnt/$YOUR_STORE_NAME/ftp/</code>.</p><pre><code>lang en_US
keyboard --xlayouts=&apos;us&apos;
timezone America/New_York --utc
rootpw --plaintext changeme
user --groups=wheel --name=ansible --password=changeme --uid=1000 --gecos=&quot;For making unattended changes with Ansible.&quot; --gid=1000
reboot
text

nfs --server=10.2.0.6 --dir=/mnt/store/deployment/mirror/el9/rhel-9.1

bootloader --append=&quot;rhgb quiet crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M&quot;
zerombr
clearpart --all --initlabel
autopart
network --bootproto=dhcp
skipx
firstboot --disable
selinux --enforcing
firewall --enabled --ssh

%packages
@^minimal-environment
qemu-guest-agent
openssh-server
-kexec-tools
-dracut-config-rescue
-plymouth*
-iwl*firmware
%end</code></pre><p>The above example is the kickstart file I have included for EL9 variants for virtual machines. Really the only thing that makes it special for virtual machines is that it removes a lot of firmware that won&apos;t be used and installs the <code>qemu-guest-agent</code> package so that it&apos;ll play nicely with KVM hypervisors when booted as a guest. </p><p>Make sure to change the IP in the <code>nfs</code> line to your local NFS server IP. This will be the IP of your TrueNAS server. In all fairness, you could just as easily replace this line with one that starts with <code>url</code>, which allows you to specify an internet-based repository instead of one hosted locally via NFS. The option is yours. See <a href="https://docs.centos.org/en-US/8-docs/advanced-install/assembly_kickstart-commands-and-options-reference/?ref=spencersmolen.com#url_kickstart-commands-for-installation-program-configuration-and-flow-control">here</a> for more information.</p><p>I could give you a handful of kickstarts but in all honesty, the whole point of the kickstart files is customization. My kickstart files (or anyone else&apos;s) wouldn&apos;t do you any good, because your needs are unique and your kickstart files should reflect that.</p><p>It&apos;s important to understand that the heart of this deployment workflow is the kickstart files though. Having multiple, special-purpose kickstart files associated with different boot options in your <code>/pxelinux.cfg/default</code> is where the real magic happens.</p><p>The options are really limitless. 
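</p><p>For example, each entry in your <code>pxelinux.cfg/default</code> can point at a different kickstart file via the <code>inst.ks</code> boot option. Here&apos;s a sketch of what one such menu entry might look like (the kickstart filename and FTP URL here are illustrative; substitute your TrueNAS IP and the name of your own kickstart file):</p><pre><code>label rhel-9.1-vm
  menu label Install RHEL 9.1 (VM, kickstarted)
  kernel images/RHEL-9.1/vmlinuz
  append initrd=images/RHEL-9.1/initrd.img inst.ks=ftp://10.2.0.6/el9-vm.ks</code></pre><p>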
CentOS has a pretty illustrative documentation section on post-install scripts that can be run from the kickstart files:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.centos.org/en-US/8-docs/advanced-install/assembly_kickstart-script-file-format-reference/?ref=spencersmolen.com#post-installation-scripts-in-kickstart_kickstart-script-file-format-reference"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Kickstart script file format reference :: CentOS Docs Site</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.centos.org/favicon.ico" alt></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.centos.org/_images/logo_small.png" alt></div></a></figure><p>I won&apos;t tell you what your kickstarts should look like but I will give you some ideas to get you going. You could configure any of the following in your kickstarts:</p><ul><li>create a domain joined host</li><li>add your favorite repositories (e.g. Docker, EPEL, etc.)</li><li>create an Ansible user for immediate provisioning</li><li>use a kickstart to deploy identical nodes in a Linux computing cluster</li><li>set a security baseline for all newly deployed hosts on your network</li><li>configure logging and remote monitoring</li><li>deploy an agent such <a href="https://www.osquery.io/?ref=spencersmolen.com">osquery</a>, Chef, Puppet, etc.</li></ul><h2 id="hosting-a-local-os-repository-via-nfs">Hosting a Local OS Repository via NFS</h2><p>The idea behind this is simple. You&apos;re basically going to mount the installation DVD and copy the entire contents into a folder in <code>/mnt/$YOUR_STORE_NAME/deployment/mirror</code>. In order to help with this task, I&apos;ve included a target in the <code>Makefile</code> for this. 
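</p><p>If you&apos;d rather do the mount-and-copy by hand instead of using the <code>Makefile</code>, it boils down to something like the following (run as root; the ISO filename and pool name are illustrative):</p><pre><code>mkdir -p /mnt/iso /mnt/store/deployment/mirror/el9/rhel-9.1
mount -o loop,ro /tmp/linux-deployment-server/iso/rhel-9.1-x86_64-dvd.iso /mnt/iso
cp -a /mnt/iso/. /mnt/store/deployment/mirror/el9/rhel-9.1/
umount /mnt/iso</code></pre><p>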
</p><ol><li>Download the installation DVD for the release you want to mirror and place it in the <code>/tmp/linux-deployment-server/iso</code> directory created when you cloned the repository from GitHub.</li><li>Navigate to <code>/tmp/linux-deployment-server</code> and edit the first few lines of the <code>Makefile</code> with the right variables for the ISO &amp; release you need the boot files from.</li><li>Type <code>make osrepo</code> in the same folder as the <code>Makefile</code>, and it will copy the contents of the ISO into the correct directory. Make sure you change the <code>REPO_DIR=./mirror</code> line to the correct path in your TrueNAS store (it should be something like <code>/mnt/$YOUR_STORE_NAME/deployment/mirror</code>).</li></ol>]]></content:encoded></item><item><title><![CDATA[Learn How to Learn Linux: Part 2]]></title><description><![CDATA[<p>This article is part of a series on learning how to learn Linux. In this article we&apos;re going to cover the nuts and bolts of the <code>man</code> page system and accessing and using it to your advantage. Additionally, we&apos;re going to cover other places you might</p>]]></description><link>https://spencersmolen.com/learn-how-to-learn-linux-part-2/</link><guid isPermaLink="false">643d93793dd32a18e63d9a46</guid><category><![CDATA[linux]]></category><dc:creator><![CDATA[Spencer Smolen]]></dc:creator><pubDate>Tue, 18 Apr 2023 02:13:12 GMT</pubDate><content:encoded><![CDATA[<p>This article is part of a series on learning how to learn Linux. In this article we&apos;re going to cover the nuts and bolts of the <code>man</code> page system and accessing and using it to your advantage. 
Additionally, we&apos;re going to cover other places you might find documentation on your Linux system and how to make the best use of it.</p><h2 id="the-man-sections">The <code>man</code> sections</h2><p>Understanding the <code>man</code> pages lies in grasping the purpose and utilization of the various sections of the man page corpus (which is just a fancy word for collection).</p><p>There are 8 standard man page sections that originate from the original Unix developers Dennis Ritchie and Brian Kernighan (see <a href="https://spencersmolen.com/learn-how-to-learn-linux-part-i/">Part I</a> of the series for more).</p><p>These sections are briefly described if you type the following command:</p><pre><code class="language-man">man 7 man-pages</code></pre><p>If you do, you&apos;ll see something like the following:</p><pre><code>1 User commands (Programs)
       Commands that can be executed by the user from within a shell.

2 System calls
       Functions which wrap operations performed by the kernel.

3 Library calls
       All library functions excluding the system call wrappers
       (Most of the libc functions).

4 Special files (devices)
       Files found in /dev which allow to access to devices through
       the kernel.

5 File formats and configuration files
       Describes various human-readable file formats and
       configuration files.

6 Games
       Games and funny little programs available on the system.

7 Overview, conventions, and miscellaneous
       Overviews or descriptions of various topics, conventions and
       protocols, character set standards, the standard filesystem
       layout, and miscellaneous other things.

8 System management commands
       Commands like mount(8), many of which only root can execute.</code></pre><p>Now you may have noticed that the command you typed to view this was a little peculiar. Instead of the typical <code>man</code> command, e.g. <code>man vi</code>, we added a number in between the word <code>man</code> and what we want a <code>man</code> page for. If you were adventurous and tried to run <code>man-pages</code> in your shell you will have noticed that <code>man-pages</code> is not even a command. </p><p>What&apos;s going on here is that we&apos;ve queried section 7 of the man pages with this command, and section 7 is labelled &quot;Overview, conventions, and miscellaneous.&quot; Thus, this command returned a man page giving general information on the man pages themselves. </p><p>Hopefully you&apos;re having an &quot;ah-ha&quot; moment right now and realizing that there is a lot more to the <code>man</code> pages than finding out what arguments a command takes. Indeed, if you look at the sections you&apos;ll find there are all kinds of goodies tucked away in various sections of the man pages. </p><p>One of the core concepts discussed in <a href="https://spencersmolen.com/learn-how-to-learn-linux-part-i/">Part I</a> of this article series is that Unix-based operating systems traditionally attempt to come with all the documentation for every component that ships with them. That being said, to locate all the man pages available for a given section, just browse the folders your man pages are in! If you type:</p><pre><code class="language-shell">$ ls -1d /usr/share/man/man*
/usr/share/man/man1
/usr/share/man/man4
/usr/share/man/man5
/usr/share/man/man6
/usr/share/man/man7
/usr/share/man/man8
/usr/share/man/man9</code></pre><p>If you wish to view the <code>man</code> pages available for section 7 just type </p><pre><code class="language-shell">ls /usr/share/man/man7
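# As a sketch, you can also count how many pages each standard
# section provides on your system (man pages may live elsewhere too):
for d in /usr/share/man/man?; do
    printf '%s: %s pages\n' "${d##*/}" "$(ls "$d" 2>/dev/null | wc -l)"
done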
</code></pre><p>There are other places, however, where man pages may be stored. Check the shell variable <code>$MANPATH</code> for a comprehensive list for your system (and user). For a more thorough explanation of how <code>$MANPATH</code> is generated see:</p><pre><code>man 5 manpath</code></pre><p>You&apos;ll notice that this is in section 5 of the man pages. This is because it is a reference to the configuration file <code>/etc/manpath.config</code>. <em>This is an extremely useful section</em>. Linux relies heavily on configuration files for proper operation and you are sure to spend a lot of time getting to know them as you use Linux or become acquainted with new tools. </p><p>I would highly recommend running the following command to check out what <code>man</code> pages there are for existing configuration files on your system if you haven&apos;t ever done this:</p><p><code>ls /usr/share/man/man5</code></p><p>If you don&apos;t see any <code>man</code> pages listed after following the above instructions, then don&apos;t worry. 
This mystery will be solved in the next section.</p><h2 id="the-man-db-man-pages-packages">The <code>man-db</code> &amp; <code>man-pages</code> Packages</h2><p>Now I&#x2019;ll go out on a limb and assume none of you are running Unix proper at home (or anywhere) but instead one of its direct descendants, like a distribution of the much loved GNU/Linux.</p><p>While the details of how GNU/Linux emerged from the world of Unix are beyond the scope of this article, all you need to know here is that the package <code>man-db</code> makes this mostly possible, being more or less a port of the original Unix manual pager utilities package for GNU/Linux.</p><p>If the <code>man-db</code> package provides the paging utilities that let you scroll up and down and search in a man page, then <code>man-pages</code> is the package that contains the actual man pages that get fed into the pager: the raw, un-rendered man pages.</p><p>To install these packages on your Linux distribution should they be missing, use the following:</p><figure class="kg-card kg-code-card"><pre><code>sudo apt install man-db man-pages</code></pre><figcaption>for Debian-based systems</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>sudo dnf install man-db man-pages </code></pre><figcaption>for Fedora-based systems</figcaption></figure><h2 id="the-man-command">The <code>man</code> command</h2><p>I&apos;m sure you are all fairly familiar with the <code>man</code> command if you&apos;re reading an article about learning Linux, so I won&apos;t spend long here. </p><p>I would like to point out an extremely useful switch available on the command for those of you that are unfamiliar: <code>-k</code>. 
This switch provides the same functionality as the antiquated <code>apropos</code> command, which is basically a &quot;search&quot; feature of the man pages.</p><p>The <code>-k</code> switch is going to be your go-to tool for situations where you know, or expect, there to be <code>man</code> pages on a topic but you&apos;re just not sure what they are.</p><p>For example, if you&apos;re setting up NFS for the first time on a server or on a client and you just want to see what&apos;s available to you as far as <code>man</code> pages on the topic, you would type <code>man -k nfs</code>. &#xA0;You might do this because you don&apos;t know what the configuration files are, or you don&apos;t know what commands you need to run on the client, or on the server, etc.</p><p>When I run <code>man -k nfs</code> on my system I get the following results:</p><pre><code>blkmapd (8)          - pNFS block layout mapping daemon
confstr (3)          - get configuration dependent string variables
filesystems (5)      - Linux filesystem types: ext, ext2, ext3, ext4, hpfs, iso9660, J...
fs (5)               - Linux filesystem types: ext, ext2, ext3, ext4, hpfs, iso9660, J...
idmapd (8)           - NFSv4 ID &lt;-&gt; Name Mapper
idmapd.conf (5)      - configuration file for libnfsidmap
ipa-client-automount (1) - Configure automount and NFS for IPA
mount.nfs (8)        - mount a Network File System
mountstats (8)       - Displays various NFS client per-mount statistics
nfs (5)              - fstab format and options for the nfs file systems
nfs.conf (5)         - general configuration for NFS daemons and tools
nfs.systemd (7)      - managing NFS services through systemd.
nfs4_uid_to_name (3) - ID mapping routines used for NFSv4
nfsconf (8)          - Query various NFS configuration settings
nfsidmap (5)         - The NFS idmapper upcall program
nfsiostat (8)        - Emulate iostat for NFS mount points using /proc/self/mountstats
nfsmount.conf (5)    - Configuration file for NFS mounts
nfsservctl (2)       - syscall interface to kernel nfs daemon
nfsstat (8)          - list NFS statistics
rpc.idmapd (8)       - NFSv4 ID &lt;-&gt; Name Mapper
rpc.sm-notify (8)    - send reboot notifications to NFS peers
rpcdebug (8)         - set and clear NFS and RPC kernel debug flags
showmount (8)        - show mount information for an NFS server
sm-notify (8)        - send reboot notifications to NFS peers
umount.nfs (8)       - unmount a Network File System
</code></pre><p>You&apos;ll notice there is a wealth of information available to you on the topic. You&apos;re made aware of everything from filesystem types to use with the <code>mount</code> command, to configuration files, to the general <code>man 5 filesystems</code> page.</p><p>I cannot overemphasize how much you will learn about Linux by exploring the man pages with <code>man -k</code>.</p><h2 id="the-wild-west-of-documentation-usrshare">The Wild West of Documentation: <code>/usr/share</code></h2><p>There is often overflow documentation available in <code>/usr/share</code> or <code>/usr/share/doc</code> for complicated programs and frameworks you may have installed on your system. I say &quot;or&quot; because, although <code>/usr/share/doc</code>, which sounds like it would be where the documentation goes, <em>is</em> mentioned briefly in section <a href="https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s11.html?ref=spencersmolen.com#specificOptions15">4.11.3</a> of the <a href="https://refspecs.linuxfoundation.org/fhs.shtml?ref=spencersmolen.com">Filesystem Hierarchy Standard</a>, there&apos;s rarely anything too juicy in there. Instead, you&apos;ll often find the real documentation, should it exist, in a folder named after the program in question within <code>/usr/share</code>. </p><p>For example, <code>gnupg</code> usually comes extremely well-documented, including information in <code>/usr/share/gnupg</code>. What you&apos;ll find in this folder usually comes in two varieties:</p><ol><li>Example configuration files</li><li>Manuals and how-to guides in the form of <code>.txt</code>, <code>.html</code>, or sometimes <code>.pdf</code> documents.</li></ol><p>While there is no real convention as to how this folder should be structured or what should be contained in it, it&apos;s anyone&apos;s guess as to what you&apos;ll find there for any given program. 
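</p><p>A quick way to survey what documentation shipped with your system is to poke around these directories directly. A minimal sketch (which directories exist will vary by distribution and by what you have installed):</p><pre><code class="language-shell"># Peek at a few common documentation locations, skipping any that are absent
for d in /usr/share/doc /usr/share/gnupg /usr/local/share; do
    if [ -d "$d" ]; then echo "== $d =="; ls "$d" | head -n 5; fi
done</code></pre><p>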
However conventionless the content may be, the notion of applications that ship with full documentation is something we have the original Unix creators to thank for. With that in mind, usually what you&apos;ll find here is documentation too extensive for man pages, and for that reason you&apos;ll usually find it for programs that are quite complex or multi-faceted.</p><p>Please note that <code>/share/</code> directories exist all over the place. Don&apos;t forget to check <code>/usr/local/share</code>, <code>$HOME/.local/share</code> and <code>/opt/*/share</code>. All of these are potential locations for documentation and sample configuration files.</p><h2 id="to-be-continued">To be continued...</h2><p>In the next article in this series we&apos;ll talk about how our use of the command line and the man pages can reveal (and may already have revealed) much deeper information about the inner workings of Linux.</p>]]></content:encoded></item><item><title><![CDATA[Learn How to Learn Linux: Part I]]></title><description><![CDATA[<p>Many people, whether they like to admit it or not, have struggled at some point or another in the process of learning Linux. Even the people that are reading this that think that they&#x2019;re done <em><em>learning GNU/Linux the OS</em></em>, are still going to have to be <em><em>learning</em></em></p>]]></description><link>https://spencersmolen.com/learn-how-to-learn-linux-part-i/</link><guid isPermaLink="false">643d789b3dd32a18e63d98c2</guid><category><![CDATA[linux]]></category><dc:creator><![CDATA[Spencer Smolen]]></dc:creator><pubDate>Mon, 17 Apr 2023 18:38:11 GMT</pubDate><content:encoded><![CDATA[<p>Many people, whether they like to admit it or not, have struggled at some point or another in the process of learning Linux. Even the people that are reading this that think that they&#x2019;re done <em><em>learning GNU/Linux the OS</em></em>, are still going to have to be <em><em>learning within the GNU/Linux OS</em></em> forever.</p><p>What do I mean by this? 
One may become familiar with things like static vs dynamically compiled binaries, the ELF binary structure, the difference between user-space and kernel-space. This type of knowledge is what I would consider knowledge of Linux itself, and we&apos;ll refer to it as &quot;conceptual knowledge&quot; of Linux.</p><p>On the other hand, there is a completely different kind of fluency in Linux that refers to the ability, for example, to clean and explore tabular data like a CSV or to fluidly &#x201C;slice &amp; dice&#x201D; log files via the command line to summarize runtime errors. This knowledge isn&apos;t about a component of Linux but rather refers to a type of pragmatic &quot;know-how&quot; one possesses. We&apos;ll call this type of knowledge &quot;procedural knowledge&quot; of Linux.</p><p>People may have these two types of knowledge in various capacities based on how they use Linux. For example, a data scientist may have a ton of procedural knowledge of Linux from time spent doing data science at the command line but lack a core understanding of how Linux itself works. On the other hand, someone with a Computer Science degree that took some classes on operating systems may have a deep understanding of how Linux works internally but have minimal command-line fluency or operational competency.</p><p>In order to say that you &quot;know Linux&quot; in the general sense, however, you&apos;re going to want a healthy dose of both. You need a robust ability to get things done in an efficient manner in Linux (preferably at the command line), and you&apos;re going to need a decent understanding of what&apos;s going on underneath the hood as you perform these tasks.</p><p>Now there&apos;s really no one place to acquire this competency. A classroom may teach you the conceptual knowledge you need, but typically procedural knowledge is acquired through blood, sweat, and tears at a terminal. 
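</p><p>To make the &quot;procedural knowledge&quot; idea concrete, here&apos;s the kind of one-liner it describes: summarizing a field per key with <code>awk</code> (the data is inlined purely for illustration):</p><pre><code class="language-shell">printf 'web1 3\nweb2 0\nweb1 2\n' |
    awk '{sum[$1] += $2} END {for (h in sum) print h, sum[h]}' |
    sort
# prints:
# web1 5
# web2 0</code></pre><p>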
Likewise, some people may have put in the hours at the command line but are still feeling like they are missing some core knowledge of Linux internals and how things work under the hood.</p><p>The purpose of this article is to meet you where you are, wherever you are in your journey in learning Linux, and give you the tools to round out your knowledge of Linux. The good news is that the path I&apos;m going to suggest is the same for everyone, and you can build off of whatever knowledge you have in a meaningful way to do this.</p><p>What is this single remedy to learning Linux? It&apos;s not a mysterious answer, but rather <em>the <code>man</code> pages</em>. Before you say &quot;that&apos;s silly, I already know about the man pages!&quot; I implore you to consider two ideas. How the <code>man</code> pages are a part of a much larger and extremely ambitious documentation project in Linux is not exactly obvious. Additionally, and more importantly, <em>why</em> the man pages are the crux of learning Linux is even less obvious. </p><p>Throughout Parts I &amp; II of this article series I&apos;m going to try and illustrate this second point: <em>why</em> the man pages are the crux of learning Linux. In doing so we&apos;re going to cover a lot of the first point. Namely, to outline the general attempt Linux has made at being the first operating system <em>whose entire manual comes included with the operating system itself</em> and use this to your advantage. Starting to get curious? 
Keep reading.</p><h2 id="unix-programmer%E2%80%99s-manual-i-an-attempt-at-documentation">Unix Programmer&#x2019;s Manual I: An attempt at documentation</h2><p>Dennis Ritchie &amp; Ken Thompson&#x2019;s work was published in tandem with the Unix operating system itself, the two sharing a birthday of November 3, 1971.</p><p>The original Unix Programmer&#x2019;s manual included a small number of system call and executable manuals and was far from a comprehensive manual of the Unix I OS. Additionally, the entire thing was distributed as a printed document and had to be ordered and mailed out!</p><p>Also worth noting is that at this time there was no <code>man</code> command, due to the lack of a need for a way to view the manual pages on the computer. This would soon change.</p><h2 id="unix-programmer%E2%80%99s-manual-ii-a-fully-documented-operating-system-is-born">Unix Programmer&#x2019;s Manual II: A fully documented operating system is born</h2><p>By the time the second Unix Programmer&#x2019;s Manual had come out, the entire manual was available on networked repositories.</p><p>With the digital publication of the man pages, and with the rendering capabilities developed in the <code>man</code> command, which more or less rendered <code>roff</code> files that had previously been sent to physical printers to a <code>tty</code> instead for interactive navigation using a pager, Unix v2 was now the first OS that had ever attempted to be fully self-documenting and to provide that documentation, in some fashion, with the OS itself. 
Unix v2 came as a complete package.</p><p>What began as a workplace pain point that Ritchie &amp; Thompson had begrudgingly taken on prior to the release of the original Unix had been imbued with enough passion and discipline that, by the time the second release came out, it had basically matured into the direct ancestor of the man pages we look at today.</p><p>In addition to the new network-hosted repositories, Unix II shipped a new command: <code>man</code>. The <code>man</code> command formatted the pages for the screen and displayed them using what is called a pager, the program that allows you to scroll up and down a man page or search through it with a search phrase while it is displayed on the screen. The default pager on most modern Linux distributions is <code>less</code>, but <code>most</code> is a common, more feature-rich alternative.</p><pre><code>MAN(1)                     Manual pager utils                     MAN(1)
NAME         

       man - an interface to the system reference manuals

SYNOPSIS         

       man [man options] [[section] page ...] ...
       man -k [apropos options] regexp ...
       man -K [man options] [section] term ...
       man -f [whatis options] page ...
       man -l [man options] file ...
       man -w|-W [man options] page ...

DESCRIPTION         

       man is the system&apos;s manual pager.  Each page argument given to
       man is normally the name of a program, utility or function.  The
       manual page associated with each of these arguments is then found
       and displayed.  A section, if provided, will direct man to look
       only in that section of the manual.  The default action is to
       search in all of the available sections following a pre-defined
       order (see DEFAULTS), and to show only the first page found, even
       if page exists in several sections.

       The table below shows the section numbers of the manual followed
       by the types of pages they contain.

       1   Executable programs or shell commands
       2   System calls (functions provided by the kernel)
       3   Library calls (functions within program libraries)
       4   Special files (usually found in /dev)
       5   File formats and conventions, e.g. /etc/passwd
       6   Games
       7   Miscellaneous (including macro packages and conventions),
           e.g. man(7), groff(7)
       8   System administration commands (usually only for root)
       9   Kernel routines [Non standard]

       A manual page consists of several sections.

       Conventional section names include NAME, SYNOPSIS, CONFIGURATION,
       DESCRIPTION, OPTIONS, EXIT STATUS, RETURN VALUE, ERRORS,
       ENVIRONMENT, FILES, VERSIONS, CONFORMING TO, NOTES, BUGS,
       EXAMPLE, AUTHORS, and SEE ALSO.</code></pre><p>This &#x201C;man page&#x201D; is the manual page for the <code>man</code> command itself. There are a few parts of it to which I would like to direct your attention.</p><p>Here we see the 9 man sections enumerated. The page is also quick to name some commonly used section headings such as <em>DESCRIPTION, OPTIONS, SEE ALSO</em>, etc. We also see some of those section headings in action in the <code>man man</code> page itself: <em>NAME</em>, <em>SYNOPSIS</em>, and <em>DESCRIPTION</em>.</p><p>With the release of the Unix Programmer&#x2019;s Manual v2 the following traditions had been established:</p><ol><li><em>It is the responsibility of the operating system distributor to package and distribute the correct documentation with its corresponding software.</em></li><li><em>The manual page corpus will be divided up into <strong>sections</strong> that cover the breadth of working within the operating system, including configuration files, shell commands, and system calls. If the man pages were a book, these would be the chapters.</em></li><li><em>Manual pages themselves are further divided up into familiar section headings, most of which were already established by this point, including NAME, DESCRIPTION, SEE ALSO, etc.</em></li></ol><p>While all of these are important concepts to appreciate as someone learning Linux, the most important of these &quot;traditions&quot;, as one might call them, is the first: <em>that the operating system will come with a full set of documentation.</em> This is going to be an essential fact to keep in the back of your mind always. 
Whenever you&apos;re in doubt about something while operating in Linux, know that the answer lies in the palm of your hands.</p><h2 id="finding-help-documentation-in-linux">Finding help documentation in Linux</h2><p>There are 3 places you will typically see this tradition of the self-documented system evidenced when attempting to find information in Linux or other Unix-based operating systems.</p><ol><li>For a command line program, you will be able to append <code>-h</code> or <code>--help</code> to the command and get an abbreviated help document explaining how to use the command.</li><li>Using the various sections or &quot;chapters&quot; of the <code>man</code> page corpus, which collectively include all man pages on a given system. </li><li>The directories <code>/usr/share</code> and <code>/usr/share/doc</code> for programs that came with your Linux distribution, along with <code>/usr/local/share</code> and <code>/usr/local/share/doc</code> for programs that you&apos;ve installed. </li></ol><p>We&apos;ll be covering (2) and (3) extensively in this article series. The first you should already be familiar with, and if not there&apos;s not much more to be said in the context of help documentation except that it exists, so now you know! One additional tip about (1): for some commands <code>--help</code> will give an extended version of whatever <code>-h</code> spits out.</p><p>Another note: I would search for help in the order listed above. For commands, first try the <code>--help</code> documentation; should that not have the answer, check the <code>man</code> page, and if that does not have the answer, try the appropriate <code>share</code> directory.</p><p>There is one notable exception to this rule and that is regarding documentation for commands that come built in to your shell. These commands are not installed, nor are they provided by your Linux distribution, but rather come compiled into your shell. 
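</p><p>You can check which category a given command falls into with <code>type</code> (a quick sketch; the exact wording of the output varies between shells):</p><pre><code class="language-shell">type cd      # reported as a shell builtin
type grep    # reported with a filesystem path, e.g. /usr/bin/grep</code></pre><p>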
Commands that fall in this category include <code>cd</code>, <code>dirs</code>, <code>pwd</code>, and a decent number of other commands. You can access documentation for these commands by using the <code>help</code> command: type <code>help [command]</code>, just as you would for a <code>man</code> page. For a full list of these commands, type <code>help</code> without any arguments.</p>]]></content:encoded></item><item><title><![CDATA[Creating an Ansible user in Linux]]></title><description><![CDATA[<p>How you manage your Ansible environment is very open-ended. This affords Ansible users an extremely dynamic configuration-management tool. However, this wealth of options can be overwhelming for those new to Ansible or for those that lean heavily on Ansible in small-medium lab environments.</p><p>When you&#x2019;re first starting off</p>]]></description><link>https://spencersmolen.com/creating-an-ansible-user-in-linux/</link><guid isPermaLink="false">643d71fe3dd32a18e63d982b</guid><category><![CDATA[linux]]></category><category><![CDATA[ansible]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Spencer Smolen]]></dc:creator><pubDate>Mon, 17 Apr 2023 16:44:25 GMT</pubDate><content:encoded><![CDATA[<p>How you manage your Ansible environment is very open-ended. This affords Ansible users an extremely dynamic configuration-management tool. However, this wealth of options can be overwhelming for those new to Ansible or for those that lean heavily on Ansible in small-medium lab environments.</p><p>When you&#x2019;re first starting off you may use Ansible to SSH into your machines as root using username and password. This is about as simple an Ansible access situation as you&#x2019;ll run into. 
Why?</p><ol><li>All commonly deployed Linux distributions will provide you with a root user by default.</li><li>In most situations, whether it be forced upon you or you decided it best, the root user will have a password defined.</li></ol><p>This is fine if you&#x2019;re just messing around with Ansible for the first time, but remember the goal of Ansible is to have a control node, a machine that you will use to push configuration changes to all your other machines. So you will be concentrating a massive amount of power (to do good or bad) into a small space. You may just have 1 Ansible user, e.g. named <code>ansible</code>, and 1 control host, e.g. your administrative machine.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://spencersmolen.com/content/images/2023/04/1-O6T6SLj10GwiHOLXc1ioDg.webp" class="kg-image" alt loading="lazy" width="1370" height="825" srcset="https://spencersmolen.com/content/images/size/w600/2023/04/1-O6T6SLj10GwiHOLXc1ioDg.webp 600w, https://spencersmolen.com/content/images/size/w1000/2023/04/1-O6T6SLj10GwiHOLXc1ioDg.webp 1000w, https://spencersmolen.com/content/images/2023/04/1-O6T6SLj10GwiHOLXc1ioDg.webp 1370w" sizes="(min-width: 720px) 720px"><figcaption>An Ansible control node managing 4 hosts. Linux hosts are typically managed via SSH and Windows hosts are typically managed via WinRM.</figcaption></figure><p>Precisely because Ansible is set up to push changes everywhere, anyone who has access to this control node can do a large amount of harm to your network: this user is designed to be able to go into any machine and make any change.</p><p>The larger the umbrella of devices you manage with Ansible grows, the higher the stakes get for protecting the special, often unrestricted access the Ansible control node holds. 
The more complex your Ansible environment gets, the more you need to think about best practices and give at least some attention to how exposed you are and in what ways, even in a lab environment.</p><p>Now, it&#x2019;s beyond the scope of this article to explain the details of why, but I&#x2019;m going to assume you understand the security concerns associated with configuring every machine in your lab to allow root access over SSH using a password. Not only is root access over SSH not advised, but neither is using passwords to authenticate users over SSH. So what&#x2019;s a better alternative?</p><p>Well, we need to dedicate a single, non-root user to make all of our changes via Ansible. This way, we can configure auditing and logging on that user in ways we simply cannot for the root user. This gives us the ability to go back and see what was done in the event that a bad actor were to gain access to your Ansible control node. Additionally, we may tailor the security requirements of that account exactly as needed for the kind of access, and methods of access, we expect from our <code>ansible</code> user.</p><p>Secondly, passwords are among the weakest forms of authentication. 
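</p><p>The stronger alternative we&#x2019;ll set up is an SSH key pair. Generating one takes a single command (the file name and comment below are just illustrative):</p><pre><code class="language-bash"># Generate an Ed25519 key pair: the .pub half gets installed on each managed
# host, while the private half stays on the Ansible control node. An empty
# passphrase (-N "") keeps it usable for unattended runs.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -C "ansible control node" -f ~/.ssh/ansible_ed25519</code></pre><p>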
A full discussion of why, like the case for using a non-root user for administrative tasks, is beyond the scope of this article. Suffice it to say that SSH key pairs employ asymmetric cryptography: the credentials they generate do not have to be sent back and forth while communicating, which reduces the possibility of the credential being intercepted, among other things.</p><p>So we know we need:</p><ol><li>A non-root user to SSH into our machines as.</li><li>A way to authenticate over SSH that relies on asymmetric key pairs and not a password.</li></ol><p>For the rest of this article, we&#x2019;ll be talking about how to configure each host that we want to manage with Ansible, either now or in the future, to have this user:</p><ol><li>Created ahead of time.</li><li>Granted <code>sudo</code> access.</li><li>Only accepting remote SSH authentication via an SSH key pair.</li></ol><p>This is for those of you who have a medium-sized home lab and want to manage it securely with Ansible, but who do not have federated access (say, an Active Directory environment) that would let you create and manage a central Ansible user across the lab.</p><p>If you can&#x2019;t have a single federated Ansible user to log in to all your Windows, Linux, etc. boxes with, the simple alternative is to create a separate ansible user with a more or less identical setup across all your machines.</p><p>So let&apos;s say you have 20 machines in your home lab (virtual, bare metal, or containers). 
The easiest way to manage them with Ansible is to have all the same Ansible SSH parameters:</p><ol><li>hosts accept the same <strong>SSH key pair</strong></li><li>hosts use the same <strong>SSH port</strong></li><li>hosts have the same <strong>SSH username</strong></li></ol><p>Since creating 20 <code>ansible</code> users across 20 machines by hand would be a slight pain, I&#x2019;ve devised a script below that you can tweak and run on any fresh box you spin up that you want to manage with Ansible:</p><pre><code class="language-bash">#!/bin/sh

# Create an ansible user that requires no password and 
# only accepts SSH authentication. Modify the SSH public
# keys and other variables below to create your ansible
# user as needed.

ANSIBLE_USERNAME=&quot;${ANSIBLE_USERNAME:-ansible}&quot;
ANSIBLE_USERID=&quot;${ANSIBLE_USERID:-2000}&quot;
ANSIBLE_HOMEDIR_PARENT=&quot;${ANSIBLE_HOMEDIR_PARENT:-/home}&quot;
ANSIBLE_SSHKEY=$(cat &lt;&lt;EOF
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAkfocK6qGdfGZLECDB/E5WuOWajWpkoP12JnBrloezb
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFdp/F7yYvsewnZQGbJAGmNcFNbm3qOFCOvrprXDlP24
EOF
)

# 1. Make ansible user

useradd \
  --uid &quot;$ANSIBLE_USERID&quot; \
  --base-dir=&quot;$ANSIBLE_HOMEDIR_PARENT&quot; \
  --create-home \
  --user-group \
  --system \
  &quot;$ANSIBLE_USERNAME&quot;

# 2. Configure ansible user to have unrestricted, passwordless sudo access

printf &apos;%s\tALL=(ALL)\tNOPASSWD: ALL\n&apos; &quot;$ANSIBLE_USERNAME&quot; &gt; &quot;/etc/sudoers.d/$ANSIBLE_USERNAME&quot;
chmod 0440 &quot;/etc/sudoers.d/$ANSIBLE_USERNAME&quot;

# 3. Configure ansible user&apos;s SSH keys to allow incoming connections

# Tilde expansion (~$ANSIBLE_USERNAME) does not work with variables in sh,
# so build the home directory path explicitly instead.
ANSIBLE_HOMEDIR=&quot;$ANSIBLE_HOMEDIR_PARENT/$ANSIBLE_USERNAME&quot;

mkdir --mode=0700 &quot;$ANSIBLE_HOMEDIR/.ssh&quot;
printf &apos;%s\n&apos; &quot;$ANSIBLE_SSHKEY&quot; &gt; &quot;$ANSIBLE_HOMEDIR/.ssh/authorized_keys&quot;
chmod 0600 &quot;$ANSIBLE_HOMEDIR/.ssh/authorized_keys&quot;
chown -R &quot;$ANSIBLE_USERNAME:$ANSIBLE_USERNAME&quot; &quot;$ANSIBLE_HOMEDIR/.ssh&quot;</code></pre><h3 id="shifting-left-simplifying-the-process-even-further">Shifting Left: Simplifying the Process Even Further</h3><p>Now that the script is written, with variables at the top that control, for example, the name of the ansible user and the user&apos;s UID, we can use it in a handful of creative ways other than running it by hand on every fresh machine you spin up.</p><p>If we already know we want the ansible user on every machine we have, we can create a base image that already includes this ansible user. That way, every time we need a new machine spun up the ansible user will already be created!</p><p>Now, there are a few ways to do this. Just to spark your imagination I&#x2019;ll describe two ways, one more involved than the other.</p><h3 id="embedding-the-script-in-a-kickstart-file">Embedding the Script in a Kickstart File</h3><p>Below is a very simple <code>ks.cfg</code> file that can be used to fully or partially automate the installation of Enterprise Linux distros such as Red Hat or Rocky Linux by including all the answers you are normally asked for during the walkthrough-style installation of the OS. Instead of being asked for your timezone, firewall preferences, root password, etc., you can just point the fresh machine at (a) this file and (b) the DVD or some other repository of all the packages Linux will download during the initial install. Once the installer has the data and the instructions, it can install Linux exactly how you requested it.</p><p>The advantage of having a kickstart file set up this way, with our little Ansible user created in the post-installation script, is that this method can be used for bare-metal installs and virtual machines in more or less the same way. 
In other words, once you get this little script the way you like it, it&#x2019;s both portable and powerful for generating consistent images.</p><pre><code>lang en_US
keyboard --xlayouts=&apos;us&apos;
timezone America/New_York --isUtc
rootpw $2b$10$G0m7VBOzrmZKF5t7NVE.EuFdSmcXKX/WZyMZ..M.UgUfly3A.NMLy --iscrypted
reboot
text
cdrom
bootloader --append=&quot;rhgb quiet crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M&quot;
zerombr
clearpart --all --initlabel
auth --passalgo=sha512
skipx
firstboot --disable
selinux --enforcing
firewall --enabled
%post

# THE SCRIPT THAT WE WROTE ABOVE CAN BE PLACED HERE IN
# BETWEEN THE &quot;%post&quot; MARKERS TO CREATE THE ANSIBLE USER
# IMMEDIATELY AFTER THE INITIAL OS IS INSTALLED BEFORE FIRST BOOT

%end</code></pre><h3 id="running-the-script-manually-imaging-the-machine">Running the Script Manually &amp; Imaging the Machine</h3><p>Alternatively, if the above method sounds too complicated for getting you up and running in your home lab with Ansible, you can always do the following:</p><ol><li>Install a fresh distribution of Linux in whatever way you&#x2019;re most familiar with. If you&#x2019;re struggling, just hop over to <a href="https://www.osboxes.org/virtualbox-images/?ref=spencersmolen.com" rel="noopener ugc nofollow">https://www.osboxes.org</a> and download your favorite distro for your favorite hypervisor; VirtualBox is free ;)</li><li>Download the script above.</li><li>After tweaking the variables, including your SSH public keys and, if you&#x2019;d like, the ansible username, make the script executable and run it using the following command: <code>chmod a+x script.sh &amp;&amp; ./script.sh</code></li><li>Export the virtual machine to OVF format so that it&#x2019;s just a single file; the exact steps vary by hypervisor.</li></ol><p>Once you&#x2019;ve exported the VM, you should then be able to use it to create your other new VMs, either by copying or importing it. 
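</p><p>Once a machine or two is up, a quick way to confirm the whole setup end to end is an ad-hoc Ansible ping from your control node (the inventory file and key path below are placeholders for your own):</p><pre><code class="language-bash"># inventory.ini lists the freshly imported machines, e.g.:
#   [lab]
#   vm1.lab.local
#   vm2.lab.local
ansible all -i inventory.ini -m ping --user ansible --private-key ~/.ssh/ansible_ed25519</code></pre><p>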
As I said above, these instructions will vary based on your hypervisor, but these links may be illuminating:</p><h3 id="virtualbox">VirtualBox</h3><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/ovf.html?ref=spencersmolen.com#ovf-export-appliance"><div class="kg-bookmark-content"><div class="kg-bookmark-title">1.14.&#xA0;Importing and Exporting Virtual Machines</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/oracle-mvl-favicon.ico" alt></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/images/resized/ovf-import.png" alt></div></a></figure><h3 id="vmware-workstation">VMware Workstation</h3><p><a href="https://docs.vmware.com/en/VMware-Workstation-Pro/15.0/com.vmware.ws.using.doc/GUID-62F39498-1492-4774-A38D-1EDD3DA3C046.html?ref=spencersmolen.com">https://docs.vmware.com/en/VMware-Workstation-Pro/15.0/com.vmware.ws.using.doc/GUID-62F39498-1492-4774-A38D-1EDD3DA3C046.html</a></p><h3 id="hyper-v">Hyper-V</h3><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/export-and-import-virtual-machines?ref=spencersmolen.com#export-a-virtual-machine"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Export and import virtual machines</div><div class="kg-bookmark-description">Shows you how to export and import virtual machines using Hyper-V Manager or Windows PowerShell.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://learn.microsoft.com/favicon.ico" alt><span class="kg-bookmark-author">Microsoft Learn</span><span class="kg-bookmark-publisher">BenjaminArmstrong</span></div></div><div class="kg-bookmark-thumbnail"><img 
src="https://learn.microsoft.com/en-us/media/logos/logo-ms-social.png" alt></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Website (re) Launch]]></title><description><![CDATA[<p>Working on a revamp of my personal website, be back shortly! Lots of good tech-related content on the way though so stay tuned! Until then, you can check out some of recent writing at <a href="https://spencersmolen.medium.com/?ref=spencersmolen.com">Medium</a> as well as some code I&apos;ve posted over on my <a href="https://github.com/kriipke?ref=spencersmolen.com">GitHub</a> page. Subscribe</p>]]></description><link>https://spencersmolen.com/coming-soon/</link><guid isPermaLink="false">643c92343dd32a18e63d9677</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Spencer Smolen]]></dc:creator><pubDate>Mon, 17 Apr 2023 00:26:28 GMT</pubDate><content:encoded><![CDATA[<p>Working on a revamp of my personal website, be back shortly! Lots of good tech-related content on the way though so stay tuned! Until then, you can check out some of recent writing at <a href="https://spencersmolen.medium.com/?ref=spencersmolen.com">Medium</a> as well as some code I&apos;ve posted over on my <a href="https://github.com/kriipke?ref=spencersmolen.com">GitHub</a> page. Subscribe with the button on the right to stay updated about the re-launch!</p>]]></content:encoded></item></channel></rss>