Git training success from Los Alamos National Lab

I sometimes describe Vertical Sysadmin as a “boutique” training firm.

According to Merriam-Webster, “boutique” means “a small company that offers highly specialized services or products”.

Our specialty is highly effective training for teams supporting modern IT infrastructures. We don’t just read slides at our students — our students really learn: they come away with a full understanding of their tools and are able to use them smoothly and effortlessly.

Our flagship course is Git Foundations: From Novice to Guru, and by now we’ve helped a number of organizations adopt Git. Here is what our training service is like, in the client’s own words:

As part of our transition from SVN to Git as the backing repository for our configuration management repository, we wanted to provide Git training to our production HPC system administration team to get new Git users up to speed and provide seasoned users with more in-depth knowledge of the tool. We chose Vertical Sysadmin to provide this training, and they provided a very well-received three-day course to our team.

The Vertical Sysadmin team was engaged from the start, interested in our motivations and our final goals so they could tailor the course to meet our needs. When the COVID pandemic hit a few weeks before the scheduled class and we were no longer able to host on-site visitors, Vertical Sysadmin was able to turn an interactive in-person class into an equally interactive remote class that included self-paced learning, remote group activities, and phone-based instructor checkups. Results from the class were overwhelmingly positive, with both new and advanced users learning a lot from the experience.

We are very happy with our decision to use Vertical Sysadmin as the provider for this course. The instructor was professional, knowledgeable, and accommodating, and the material was exactly what we needed to help our transition from SVN to Git. This course is helping us reach the goal of completing this transition and modernizing our system software stack before our next wave of new HPC platforms rolls in.

Cory Lueninghoener
HPC Platforms Design Team Lead
Los Alamos National Laboratory


Installing CFEngine on images

When you install CFEngine from a package, the packaged post-installation script generates a cryptographic identity for the host, which is used to identify the host in CFEngine. CFEngine relies on this cryptographic identity because IP addresses can change (e.g., when devices transition between mobile cells) and so can hostnames. A thumbprint of the public key serves as the host identifier.

The key pair is stored in /var/cfengine/ppkeys (private and public keys):

# ls /var/cfengine/ppkeys/localhost.p* -1

You can print the host id (thumbprint of public key) using cf-key -p:

# cf-key -p /var/cfengine/ppkeys/

If you install CFEngine onto an image, that burns in the key pair, and then all hosts brought up with that image will have the same identity, which wreaks havoc with CFEngine reporting.

To handle this you have to:

a) Remove the key pair before burning the image (“rm /var/cfengine/ppkeys/localhost.p*”)

b) Generate a key pair at node initialization (by running “/var/cfengine/bin/cf-key”)
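Put concretely, the two steps might look like this (a sketch; the paths are the CFEngine defaults mentioned above, and the first-boot hook is whatever your image pipeline provides):

```shell
# Image-build time: delete the baked-in identity just before capturing the image
rm -f /var/cfengine/ppkeys/localhost.priv /var/cfengine/ppkeys/localhost.pub

# First boot: regenerate a fresh key pair. The guard makes this a no-op
# on machines where CFEngine isn't installed.
if [ -x /var/cfengine/bin/cf-key ]; then
    /var/cfengine/bin/cf-key
fi
```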

Or, rather than installing the package and then burning the image (i.e., “baking in” the keys), you can download, install and bootstrap CFEngine as part of the node initialization process (at “fry” time).


Setting up CI/CD Pipelines course

I’ll be teaching “Setting up CI/CD Pipelines” at Ohio LinuxFest in Columbus on Sep 29th, and at USENIX LISA in San Francisco on Oct 31st.

My colleague Mike Weilgart will be teaching Git Foundations: Unlocking the Mysteries at both conferences.

We are available for training and consulting engagements.


Sysadmin war story: “The network ate my font!”

About 15 years ago, I was the only UNIX sysadmin in a factory,
and I was asked to help with a “network issue” which hit the
Help Desk.

The problem, it was explained to me, was that the network was
eating the MICR font (Magnetic Ink Character Recognition — machine-readable font) on the checks.

Look at the last line of the check image below for an example of the MICR font:

Image credit: Wikipedia; Creative Commons license; unmodified.

The finance office was in Toronto and they were printing checks to
printers in Toronto and in Hollywood, California. The checks were
coming out OK in Toronto, but in Hollywood the bank routing number
and account number were printed in a regular font instead of in the
MICR font. The Toronto printer was printing the numbers in MICR.

I said there is no way the network is “eating” the MICR font.
That’s not how networks work! Show me the check. Sure enough,
the account number and the routing number (and the check number)
were all in regular font.

I asked the Toronto office to fax me the check printed there —
it had the MICR font. Same check, mind you!

I asked the guy responsible for maintaining the finance system
if anything changed recently (as the problem had just started)
and he was very sure there was no change. Real head-scratcher!

I ended up capturing the PostScript of the check using tcpdump
as it was printed from the Toronto UNIX server (that was running
the finance system) to the printer in Hollywood, and pored over
the dump.

What I found was that the PostScript job was referencing a font,
but the font was not included in it. So how did this
ever work?

Digging into this a bit deeper with the finance tech guy, why,
yes, there is a checkbox in the check editor for whether to
encapsulate (bundle) the font in with the image and by default,
every time you go in to edit the check, the box is unchecked!

However, the last time the check image was edited was months ago.
Why did it stop working only now? Any guesses?

Turns out the printer had a cache for fonts and was using the
font cached from the earlier check image, which had included the
font! Moreover, the Toronto and Hollywood offices were on
different printer maintenance schedules — and as part of the
maintenance the printers are rebooted, which clears the font cache!

To confirm this, we rebooted the Toronto printer and it stopped
printing the MICR font on the check.

The resolution was to edit the check image and check the “include
fonts” checkbox.


What I learned at LISA 2016 conference

USENIX Annual Technical Conference and LISA were the first professional conferences I went to as a fledgling sysadmin. USENIX will always hold a special place in my heart. I attribute my professional success to regular training at USENIX conferences.

I really enjoyed LISA ’16 in Boston.

This year I exhibited, promoting our training services. I learned that people are looking for training on Machine Learning (Big Data) and on Go; and I got to show Tom Limoncelli a success story from my “Time Management for System Administrators” training at Ohio Linux Fest that I had received just that morning.

At the Training Sysadmins BoF, I learned Adobe has a kick-ass in-house training program that keeps getting better. Really impressed with Adobe culture!

It was revitalizing to hear the passion for improving society at the Educating Sysadmins BoF. Keep up the great work!

At the LOPSA Annual Community Meeting, I was awarded a LOPSA challenge coin in recognition of running the Los Angeles chapter and signing up new members. I love it! Thank you!

Nick Anderson asked what are the top three things I learned at LISA.

First, I just want to say “kudos!” to the Program Committee – lots of great (and very modern) content!

  • I learned about the Jupyter Notebook programming and data visualization environment in the class on Machine Learning.
  • I loved the talk on unikernels. Technology keeps changing and LISA helps me keep up.
  • The closing session, “SRE in the Small and in the Large”, demonstrated that SRE is not just for Google scale — even smaller organizations can reap large benefits from applying “a stitch in time saves nine”. Read the book!

Nick, the keynote by Jane Adams on emergence in complex systems was mindblowing — look for it online soon! (Here is a shorter one, from three years ago.)

My favorite part of the conference was hanging out with my LISA friends. Hope to see you in October in San Francisco!


Setting up a Postgres Sandbox

I’m a fan of disposable sandboxes using Vagrant and VirtualBox.

I’ve been using Postgres on the job for nearly a year, and a while back I decided it was time to have a dedicated Postgres instance on my personal computer, on a virtual machine, just to play around with.

I’m using a Mac, but the steps below should work unchanged on Windows or Linux. Let me know in the comments if you run into any problems following these steps!

How to set up a reproducible Postgres sandbox inside a disposable VM

  1. If you haven’t done so already, install Vagrant and VirtualBox.

  2. Choose a Vagrant base image. For my own purposes I chose the vanilla base box puppetlabs/centos-6.6-64-nocm (which comes from Puppet Labs but doesn’t have Puppet installed).

    There are official images for CentOS and official images for Ubuntu as well, in addition to the vast range of user-contributed images. (The official CentOS image doesn’t come with VirtualBox guest additions installed, though, so folder syncing won’t work out of the box.)

    You don’t have to do anything with your choice just yet, just choose one.

  3. Open your terminal or command line environment.

  4. Create a dedicated directory for this particular Vagrant environment to live in, and change directories to that directory. (I personally use ~/term/vagrant/centos-6.)

    mkdir -p ~/term/vagrant/centos-6
    cd !$

    (!$ makes use of a Bash feature called “History Expansion.” It expands to the last portion of the last command executed, in this case ~/term/vagrant/centos-6.)

  5. Initialize the Vagrant environment, specifying the base image (box) you chose in step 2.

    vagrant init puppetlabs/centos-6.6-64-nocm

    You can run ls and observe that a Vagrantfile has been created:

    $ ls
    Vagrantfile
  6. Bring up the Vagrant environment, which (this first time) will also download the base image you chose earlier.
    vagrant up

    This may take a little while. Wait for it.

  7. Log in to the Vagrant virtual machine you’ve created.

    vagrant ssh
  8. If you want a particular version of Postgres, choose the version of the Postgres yum repository package from which you can download the Postgres package itself. (I chose Postgres 9.3.) Right-click and choose “copy link.” Then, inside your Vagrant VM, download the package with curl, replacing the URL placeholder with the one you’ve just copied:
    curl -O <URL>
  9. Install the package you’ve just downloaded.
    sudo yum install -y ./pgdg-redhat93-9.3-3.noarch.rpm
  10. Create a “packages” directory inside /vagrant, to store the Postgres packages outside the VM, where they will be saved when the VM is destroyed. Change to this directory.
    mkdir /vagrant/packages
    cd !$
  11. Download the Postgres packages (ignoring the versions present in the default repositories).
    yumdownloader --resolve --disablerepo=\* --enablerepo=pgdg\* postgresql\*-server
  12. Exit, and verify that the “packages” directory shows up on your host with three RPMs inside.
    ls packages
  13. Modify the Vagrantfile to include the shell commands to initialize Postgres.

    Change the commented out lines (near the bottom of the file) that read as follows:

      # config.vm.provision "shell", inline: <<-SHELL
      #   apt-get update
      #   apt-get install -y apache2
      # SHELL

    And replace them with:

      config.vm.provision "shell", inline: <<-SHELL
        yum install -C -y /vagrant/packages/postgresql*.rpm
        service postgresql-9.3 initdb
        service postgresql-9.3 start
        chkconfig postgresql-9.3 on
        sudo -i -u postgres createuser -d -e -E -I -r -s vagrant
        sudo -i -u vagrant createdb -e
      SHELL

    Save the changes.

  14. Destroy your vagrant environment.

    vagrant destroy
  15. Bring it up again and log in.
    vagrant up && vagrant ssh
  16. Run psql. You’re in.
    [vagrant@localhost ~]$ psql
    psql (9.3.15)
    Type "help" for help.
    vagrant=# \q
    [vagrant@localhost ~]$

Now you can play around, do whatever you like inside the Postgres instance, and just repeat the last three steps (destroy VM, bring it up, and start psql) as often as you need to. 🙂
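For quick reference, the host-side commands from the steps above can be condensed into a short shell session (the `command -v` guard is my addition, so the snippet does nothing on a machine without Vagrant installed):

```shell
# Step 4: dedicated directory for this Vagrant environment
mkdir -p ~/term/vagrant/centos-6 && cd ~/term/vagrant/centos-6

# Steps 5-7, guarded so this is a no-op where Vagrant isn't available
if command -v vagrant >/dev/null 2>&1; then
  vagrant init puppetlabs/centos-6.6-64-nocm
  vagrant up && vagrant ssh
fi
```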

Did these instructions help? Did you have any trouble? Any suggestions? Let me know in the comments.


Bootstrapping CFEngine agent to a regional (AWS) hub

Hat tip to my DevOps buddy Joaquin Menchaca for this one-liner to find out what AWS region your VM is in:

AWS_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed -e 's/[a-z]$//')

I am going to use it to bootstrap my Kubernetes VMs to the right CFEngine hub for their AWS region (i.e., to the local hub).
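All the sed does is strip the trailing zone letter, turning an availability zone into its region. For example (the AZ value here is just an illustration):

```shell
# Availability zone -> region: drop the final letter
echo "us-west-2a" | sed -e 's/[a-z]$//'   # prints us-west-2
```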


Looking up Postgres table name by id

When working with TOAST tables, I had the relid (relation or table id) of the parent table and needed to get its name.

Here is how to perform the lookup. For example, if the relid is 19665:

SELECT relid, relname
FROM pg_catalog.pg_statio_user_tables
WHERE relid = '19665';
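A shorter alternative (standard PostgreSQL behavior, not from the original post) is to cast the OID directly to regclass, which works for any relation, not just those visible in pg_statio_user_tables:

```sql
SELECT 19665::regclass;  -- returns the table's name (schema-qualified if not on the search_path)
```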


Safely updating /etc/sudoers non-interactively

I recently added my account to /etc/sudoers on N servers using Ansible in raw mode (running a shell one-liner). We use visudo to edit /etc/sudoers when we are logged into a server, but since I was doing this in “batch” mode I wanted it to be non-interactive while keeping the file locking, syntax checking, and rollback that visudo provides. So I set visudo’s VISUAL environment variable to my external command:

[root@myhost ansible]# more
grep -q ^tsalolia /etc/sudoers || VISUAL="(echo /^root; echo a; echo 'tsalolia ALL=(ALL) ALL'; echo .; echo 'g/^#tsalolia/d'; echo 'x!') | ex - /etc/sudoers" visudo
visudo -c
[root@myhost ansible]#

I edited /etc/sudoers directly with ex to append my entry below root’s because every host had a different /etc/sudoers file (this is what happens when you don’t have configuration management).

Long-term, I’m planning to templatize /etc/sudoers at this client.

I should be able to run

VISUAL="cp /etc/ /etc/sudoers" visudo

to safely install the new sudoers file.

P.S. I developed the above on CentOS 5 and 6, but when I tried it on Ubuntu 16, I got the error:

visudo: specified editor ((echo /^root; echo a; echo 'tsalolia ALL=(ALL) ALL'; echo .; echo 'g/^#tsalolia/d'; echo 'x!') | ex - /etc/sudoers) doesn't exist
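Ubuntu’s visudo apparently requires VISUAL to name an actual executable rather than a shell pipeline. One workaround (my suggestion, not from the original post; the script path is arbitrary) is to wrap the same ex pipeline in a small script and point VISUAL at that:

```shell
# Hypothetical wrapper script; visudo will invoke it with the
# temporary copy of sudoers as its first argument.
cat > /tmp/append-sudoer.sh <<'EOF'
#!/bin/sh
(echo /^root; echo a; echo 'tsalolia ALL=(ALL) ALL'; echo .; echo 'g/^#tsalolia/d'; echo 'x!') | ex -s "$1"
EOF
chmod +x /tmp/append-sudoer.sh
# Then: VISUAL=/tmp/append-sudoer.sh visudo
```

Here ex -s (batch/silent mode) stands in for the older ex - spelling used above.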


Introducing Infrastructure Inventory with CFEngine Enterprise

CFEngine Enterprise makes it absurdly easy to track deployed
servers. All you have to do is spin up a hub, install the lightweight agent on each host, and run cf-agent --bootstrap <hub> to set up a trust relationship between hub and hosts. (If you need any help setting this up, let me know.)

The agent will discover information about each host (hostname, address, OS version, disk utilization, packages installed, etc.) and the hub will collect and aggregate this data, so you can see at a glance things like: my OS inventory is 5% RHEL 4, 10% RHEL 5, 84% RHEL 6, and 1% RHEL 7.

The neat thing is that, because the agent is lightweight, CFEngine is able to refresh this inventory every 5-10 minutes, so when you go into the Reporting UI the data is completely up to date — even if you are provisioning hosts dynamically (scaling to meet demand, for example), the Reporting Portal will stay current.

CFEngine Enterprise has a great Reporting UI which can build charts and graphs on the fly (under Inventory Report in the Reports tab); you can add filters, and there is a custom report builder.

Plus the inventory is extensible — you can run an arbitrary external shell script or binary (e.g. from third-party vendors) to harvest additional data about the environment. This new data gets picked up by the Enterprise Hub along with the built-in inventory.

The Reporting Portal can export PDFs and CSVs as well, so you can make pretty pie graphs and slides for corporate meetings if you’re lucky enough to be involved in such! 🙂

You can see screenshots of the Reporting Portal in an earlier blog post.