What I learned at the LISA 2016 conference

USENIX Annual Technical Conference and LISA were the first professional conferences I attended as a fledgling sysadmin. USENIX will always hold a special place in my heart. I attribute my professional success to regular training at USENIX conferences.

I really enjoyed LISA ’16 in Boston.

This year I exhibited, promoting our training services. I learned that people are looking for training on Machine Learning (Big Data) and on Go; and I got to show Tom Limoncelli a success story from my “Time Management for System Administrators” training at Ohio Linux Fest that I had received just that morning.

At the Training Sysadmins BoF, I learned Adobe has a kick-ass in-house training program that keeps getting better. Really impressed with Adobe culture!

It was revitalizing to hear the passion for improving society at the Educating Sysadmins BoF. Keep up the great work!

At the LOPSA Annual Community Meeting, I was awarded a LOPSA challenge coin in recognition of running the Los Angeles chapter and signing up new members. I love it! Thank you!

Nick Anderson asked me for the top three things I learned at LISA.

First, I just want to say “kudos!” to the Program Committee – lots of great (and very modern) content!

  • I learned about the Jupyter Notebook programming and data visualization environment in the class on Machine Learning.
  • I loved the talk on unikernels. Technology keeps changing and LISA helps me keep up.
  • The closing session, “SRE in the Small and in the Large,” demonstrated that SRE is not just for Google scale — even smaller organizations can reap large benefits from applying “a stitch in time saves nine”. Read the book!

Nick, the keynote by Jane Adams on emergence in complex systems was mindblowing — look for it online soon! (Here is a shorter one, from three years ago.)

My favorite part of the conference was hanging out with my LISA friends. Hope to see you in October in San Francisco!


Setting up a Postgres Sandbox

I’m a fan of disposable sandboxes using Vagrant and VirtualBox.

I’ve been using Postgres on the job for nearly a year, and a while back I decided it was time to have a dedicated Postgres instance on my personal computer, on a virtual machine, just to play around with.

I’m using a Mac, but the steps below should work unchanged on Windows or Linux. Let me know in the comments if you run into any problems following these steps!

How to set up a reproducible Postgres sandbox inside a disposable VM

  1. If you haven’t done so already, install Vagrant and VirtualBox.

  2. Choose a Vagrant base image. For my own purposes I chose the vanilla base box puppetlabs/centos-6.6-64-nocm (which comes from Puppet Labs but doesn’t have Puppet installed.)

    There are official images for CentOS and for Ubuntu as well, in addition to a vast range of user-contributed images. (The official CentOS image doesn’t come with VirtualBox guest additions installed, though, so folder syncing won’t work out of the box.)

    You don’t have to do anything with your choice just yet; just pick one.
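
    If you would rather start from one of the official boxes, the init command in step 5 simply uses that box name instead. The box names below come from the public Vagrant Cloud catalog as of this writing and may have changed since:

    vagrant init centos/6
    vagrant init ubuntu/xenial64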

  3. Open your terminal or command line environment.

  4. Create a dedicated directory for this particular Vagrant environment to live in, and change directories to that directory. (I personally use ~/term/vagrant/centos-6.)

    mkdir -p ~/term/vagrant/centos-6
    cd !$
    

    (!$ makes use of a Bash feature called “History Expansion.” It expands to the last word of the previous command, in this case ~/term/vagrant/centos-6.)

  5. Initialize the Vagrant environment, specifying the base image (box) you chose in step 2.

    vagrant init puppetlabs/centos-6.6-64-nocm
    

    You can run ls and observe that a Vagrantfile has been created:

    $ ls
    Vagrantfile
    
  6. Bring up the Vagrant environment, which (this first time) will also download the base image you chose earlier.
    vagrant up
    

    This may take a little while. Wait for it.

  7. Log in to the Vagrant virtual machine you’ve created.

    vagrant ssh
    
  8. If you want a particular version of Postgres, find the Postgres yum repository (pgdg) package for that version on the download site (I chose Postgres 9.3). Right-click its link and choose “Copy Link.” Then, inside your Vagrant VM, run the following command, replacing the URL with the one you just copied.
    wget https://download.postgresql.org/pub/repos/yum/9.3/redhat/rhel-6-x86_64/pgdg-redhat93-9.3-3.noarch.rpm
    
  9. Install the package you’ve just downloaded.
    sudo yum install -y ./pgdg-redhat93-9.3-3.noarch.rpm
    
  10. Create a “packages” directory inside /vagrant (the folder synced with the host), so the Postgres packages are stored outside the VM and survive when the VM is destroyed. Change to this directory.
    mkdir /vagrant/packages
    cd !$
    
  11. Download the Postgres packages (ignoring the versions present in the default repositories).
    yumdownloader --resolve --disablerepo=\* --enablerepo=pgdg\* postgresql\*-server
    
  12. Exit, and verify that the “packages” directory shows up on your host with three RPMs inside.
    exit
    ls packages
    
  13. Modify the Vagrantfile to include the shell commands to initialize Postgres.

    Change the commented-out lines (near the bottom of the file) that read as follows:

      # config.vm.provision "shell", inline: <<-SHELL
      #   apt-get update
      #   apt-get install -y apache2
      # SHELL
    

    And replace them with:

      config.vm.provision "shell", inline: <<-SHELL
        yum install -C -y /vagrant/packages/postgresql*.rpm
        service postgresql-9.3 initdb
        service postgresql-9.3 start
        chkconfig postgresql-9.3 on
        sudo -i -u postgres createuser -d -e -E -I -r -s vagrant
        sudo -i -u vagrant createdb -e
      SHELL
    

    Save the changes.

  14. Destroy your vagrant environment.

    vagrant destroy
    
  15. Bring it up again and log in.
    vagrant up && vagrant ssh
    
  16. Run psql. You’re in.
    [vagrant@localhost ~]$ psql
    psql (9.3.15)
    Type "help" for help.
    
    vagrant=# \q
    [vagrant@localhost ~]$
    

Now you can play around, do whatever you like inside the Postgres instance, and just repeat the last three steps (destroy VM, bring it up, and start psql) as often as you need to. 🙂
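
If you want that destroy/up/ssh cycle as a single command, something like this should do it (the -f flag just skips the “are you sure?” prompt):

vagrant destroy -f && vagrant up && vagrant ssh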

Did these instructions help? Did you have any trouble? Any suggestions? Let me know in the comments.


Bootstrapping CFEngine agent to a regional (AWS) hub

Hat tip to my DevOps buddy Joaquin Menchaca for this one-liner to find out what AWS region your VM is in:

AWS_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed -e 's/[a-z]$//')

I am going to use it to bootstrap my Kubernetes VMs to the right CFEngine hub for their AWS region (i.e., to the local hub).
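
As a sketch of how that might look (the hub hostname pattern here is made up; substitute however you map regions to hubs in your environment):

# Derive the region from the instance metadata, then bootstrap to that region's hub
AWS_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed -e 's/[a-z]$//')
cf-agent --bootstrap "hub.${AWS_REGION}.cfengine.example.com"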


Looking up Postgres table name by id

When working with TOAST tables, I had the relid (relation or table id) of the parent table, and needed to get its name.

Here is how to perform the lookup. For example, if the relid is 19665:

SELECT relid, relname
FROM pg_catalog.pg_statio_user_tables
WHERE relid = '19665';
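
If all you need is the name, a regclass cast does the same lookup; for example, from the shell (add whatever connection options your setup requires):

# Cast the OID to regclass, which resolves it to the relation name
psql -At -c "SELECT 19665::regclass;"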


Safely updating /etc/sudoers non-interactively

I recently added my account to /etc/sudoers on N servers using Ansible in raw mode (running a shell one-liner). We use visudo to edit /etc/sudoers when we are logged into a server, and since I was doing this in “batch” mode I wanted to do it non-interactively but still get the file locking, syntax checking, and rollback that visudo provides. So I set the VISUAL environment variable, which visudo honors, to my external command:

[root@myhost ansible]# more sudoers.sh
grep -q ^tsalolia /etc/sudoers || VISUAL="(echo /^root; echo a; echo 'tsalolia ALL=(ALL) ALL'; echo .; echo 'g/^#tsalolia/d'; echo 'x!') | ex - /etc/sudoers" visudo
visudo -c
[root@myhost ansible]#

I edited /etc/sudoers directly with ex to append my entry below root’s because every host had a different /etc/sudoers file (this is what happens when you don’t have configuration management).

Long-term, I’m planning to templatize /etc/sudoers at this client.

I should be able to run

VISUAL="cp /etc/sudoers.new" visudo

to safely install a prepared sudoers file: visudo appends the name of its temporary working copy as the last argument, so cp overwrites that copy, and visudo then syntax-checks and installs it.

P.S. I developed the above on CentOS 5 and 6, but when I tried it on Ubuntu 16, I got the error:

visudo: specified editor ((echo /^root; echo a; echo 'tsalolia ALL=(ALL) ALL'; echo .; echo 'g/^#tsalolia/d'; echo 'x!') | ex - /etc/sudoers) doesn't exist
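
The Ubuntu visudo apparently treats the entire VISUAL string as the editor’s path instead of running it through a shell. One workaround I have sketched but not battle-tested is to put the non-interactive edit into a small wrapper script (the path and name below are my own invention) and point VISUAL at that; visudo passes the path of its temporary working copy as the first argument, so the script edits that copy and visudo validates and installs it:

#!/bin/sh
# /usr/local/bin/sudoers-batch-edit: non-interactive "editor" for visudo.
# $1 is the temporary copy of sudoers that visudo wants edited.
(echo '/^root'; echo a; echo 'tsalolia ALL=(ALL) ALL'; echo .; echo 'g/^#tsalolia/d'; echo 'x!') | ex - "$1"

Make it executable, then run VISUAL=/usr/local/bin/sudoers-batch-edit visudo.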


Introducing Infrastructure Inventory with CFEngine Enterprise

CFEngine Enterprise makes it absurdly easy to track deployed servers. All you have to do is spin up a hub, install the lightweight agent on each host, and run cf-agent --bootstrap <hub> to set up a trust relationship between hub and hosts. (If you need any help setting this up, let me know.)

The agent will discover information about each host (hostname, address, OS version, disk utilization, packages installed, etc.), and the hub will collect and aggregate this data, so you can see at a glance things like: my OS inventory is 5% RHEL 4, 10% RHEL 5, 84% RHEL 6, and 1% RHEL 7.

The neat thing is that, because the agent is lightweight, CFEngine can refresh this inventory every 5–10 minutes, so when you go into the Reporting UI the data is completely up to date — even if you are provisioning hosts dynamically (scaling to meet demand, for example), the Reporting Portal keeps up.

CFEngine Enterprise has a great Reporting UI that can build charts and graphs on the fly (under Inventory Report in the Reports tab); you can add filters, and there is a custom report builder as well.

Plus the inventory is extensible — you can run an arbitrary external shell script or binary (e.g. from third-party vendors) to harvest additional data about the environment. This new data gets picked up by the Enterprise Hub along with the built-in inventory.

The Reporting Portal can export PDFs and CSVs as well, so you can make pretty pie graphs and slides for corporate meetings if you’re lucky enough to be involved in such! 🙂

You can see screenshots of the Reporting Portal in my blog post on cfengine.com.


CFEngine Inventory of Windows Server 2012

I am working on setting up a “reporting portal” CFEngine Enterprise hub to aggregate inventory from several hubs in different parts of a company (managed by different organizations). This one “superhub” would allow executives instant insight into infrastructure integrity.

While I was demonstrating my prototype, an executive liked the idea of having data at her fingertips so much that she asked: can we put our Windows servers into CFEngine?

I said sure, but CFEngine inventory on Windows is not as detailed as it is on UNIX and Linux. The natural next question, then, is: how detailed is it?

To answer, I spun up a Windows Server 2012 VM in the Joyent public cloud (the Joyent UI is a delight to use, BTW, and I had my VM up in less than a minute) and bootstrapped it to a CFEngine hub in the same cloud. While I was able to pull policy immediately, the hub couldn’t connect to the Windows server on port 5308 to collect reports until I went into the Windows Firewall with Advanced Security control panel and opened up port 5308. (Rackspace.com has a decent write-up.)
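
For reference, the same rule can be added from an elevated command prompt rather than by clicking through the control panel (the rule name here is arbitrary):

netsh advfirewall firewall add rule name="CFEngine reporting" dir=in action=allow protocol=TCP localport=5308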

Here is what you get out of the box in the way of inventory (see the table below).

Windows roles: WinServer
System version: BOCHS – 1
Host name: ownerco-18v4p42
Hardware addresses: 90:b8:d0:52:7c:09, 90:b8:d0:b5:c7:94
System manufacturer: Joyent
Disk free (%) on main drive (C:): 69
BIOS version: Bochs
Architecture: x86_64
OS type: windows
IPv4 addresses: 10.112.186.4, 165.225.131.21
OS kernel: Windows Server 2012
CFEngine ID: SHA=1f6666e1e88b05a4c7a98604ffa429bc452dc209a22e78072abd2d6eccb5170c
System serial number: 720f2caa
BIOS vendor: Bochs
CFEngine version: 3.7.4
Server class: windows
Uptime minutes: 46
OS: Windows Server 2012

The basics are there – hostname, OS version, disk utilization, network addresses. And, just like the UNIX/Linux inventory, the Windows inventory is extensible.

And just for fun, here is a screenshot showing the CFEngine processes running on Windows (the first three in the “ps” output).

CFE on Windows screenshot


You think our training is expensive?

I charge US $3,000 per training day, plus a US $2,000 admin fee, to come on-site and train up to 12 staff using a training methodology that ensures deep learning occurs. Some people have pushed back on the price as too expensive.

As Red Adair, the firefighter specializing in putting out oil well fires, once said:

“If you think it’s expensive to hire a professional to do the job, wait until you hire an amateur.”

I have heard horror stories of five-day classes where the “Instructor” sat at the front and droned through a PowerPoint presentation. He wouldn’t answer questions because he had hundreds of slides to get through. I’ve heard of instructors dismissing class on Friday morning because they’ve “covered the material” already, yet the students still can’t perform the required tasks because they lack real understanding.

When I train, I look at the faces of the students to see if they understand. You can see it in their eyes if you care to look. I don’t move on to the next module until everybody understands the current one.

The hallmark of our training is balancing theory with practice, so there are lab exercises after every module. It’s one thing to learn about engines in a textbook, but you get a completely different level of understanding after you put one together with your own two hands!

Our materials are carefully laid out to cover all the basics and define all the terms, and only then start on intermediate and advanced topics. Careful attention to fundamentals is how experienced users come out raving about how much they’ve learned.

I have never had anyone complain about price after our training. I have had a couple of people express that ours was the best training they ever had, anywhere.


Graphing within psql

I mentioned this on HN years ago, but it’s nifty, so I’m adding it here.

You can graph SQL output with gnuplot without leaving the psql (Postgres client) command-line.

Because @fusiongyro commented “This is incredible! I only wish it were a little easier to do on the fly,” I inquired on the amazingly helpful pgsql-general list.

There are two approaches: client-side and server-side.

  • Ian Barwick explained how to put all the prep work into a psql script, define your query, and invoke the script (I’ve included a rough sketch of such a script after this list).
    
    barwick@localhost:~$ psql -U postgres testdb
      psql (9.2.3)
      Type "help" for help.
    
      testdb=# \set plot_query 'SELECT * FROM plot'
      testdb=# \i tmp/plot.psql
    
    
                                        My Graph
    
        4 ++---------+-----------+----------+----------+-----------+---------**
          +          +           +          +          +           +     **** +
          |                                                          ****     |
      3.5 ++                                                     ****        ++
          |                                                  ****             |
          |                                              ****                 |
        3 ++                                         ****                    ++
          |                                      ****                         |
      2.5 ++                                *****                            ++
          |                             ****                                  |
          |                         ****                                      |
        2 ++                    ****                                         ++
          |                 ****                                              |
          |             ****                                                  |
      1.5 ++        ****                                                     ++
          |     ****                                                          |
          + ****     +           +          +          +           +          +
        1 **---------+-----------+----------+----------+-----------+---------++
          1         1.5          2         2.5         3          3.5         4
                                         Servers
    
      testdb=#
    
  • Sergey Konoplev explained how to do it with a server-side function – you need gnuplot installed on the db server and then you can use

select just_for_fun_graph('select ... from ...', 'My Graph', 78, 24, ...)
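
I don’t have Ian’s exact plot.psql handy, but a rough sketch of my own along those lines might look like this. It assumes the query in :plot_query returns two numeric columns and that gnuplot is installed on the client machine:

-- plot.psql: pipe the query in :plot_query through gnuplot's "dumb" terminal
\pset tuples_only on
\pset format unaligned
\pset fieldsep ' '
\o | gnuplot -e "set terminal dumb; set title 'My Graph'; plot '/dev/stdin' using 1:2 with lines notitle"
:plot_query ;
\o
\pset format aligned
\pset tuples_only off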


Binding an SSH launcher to a GNU Screen hotkey

I have a confession to make. I use SSH to access servers.

I tell the sysadmins I teach to make changes to their servers using configuration management, but:

(a) most clients I work with are just starting to use configuration management, so we use SSH to access the systems that aren’t under configuration management yet, and

(b) I enjoy troubleshooting issues rather than just shooting my IT infrastructure in the head and instantiating a new one that might have the same issue. But this post isn’t about immutable infrastructures. It’s about SSHing to servers.

From “things that make me happy”, I added two lines near the top of my GNU Screen config file, .screenrc:


# start ssh launcher loop

screen -t launcher /bin/sh -c 'while true; do echo -n "Hostname: "; read host;  screen -t $host ssh $host; clear; done'

# bind ctrl-K to "switch to window 0 which contains the SSH launcher"

bindkey "\013" select 0

Now when I want to open a new session, I press Ctrl-K and enter the hostname, and GNU Screen will start a new window, titled with the name of the host, running an SSH session to that host.

It’s the little things in life.

Update:

Now that I’ve used this for a day, I remembered the problem with this setup — after you launch an ssh session, if you press the screen command key twice to go back to the previous window, you end up in the launcher window instead.

When I worked at EarthLink, I made a little shell script similar to this that I called with screen’s “exec” command. It did some gnarly input/output redirection: the script read my input, and its output was fed back to screen as if it were user input, containing the command to launch the SSH session. I didn’t save it; it looks like I’ll have to reconstruct it.

Also, the latest version of GNU Screen is rather improved: you can renumber windows, split windows vertically, etc.

Yes, I know about tmux. I’m just used to screen. 🙂
