Stupid Puppet Trick: Agreeing to the Sun Java License with Debconf Preseeds and Puppet

I had a user ask for Java to be installed on the cluster systems, so I started by making a simple JRE5 module for puppet, but this first attempt didn’t quite work:

class jre5 {
  package { "sun-java5-jre":
    ensure => latest;
  }
}

It doesn’t work because Sun wants you to agree to its license before installing the JRE. There are a couple of ways around this. First, the old-school method:

ssh host "yes | apt-get -y install sun-java5-jre"

where ‘yes’ is a standard Unix program that just prints “y” (or whatever string you give it) over and over until the program on the other side of the pipe terminates. But “ssh host foo” is not the way of the managed infrastructure.

The second method, much more friendly to centralized management, is to first install debconf-utils on a candidate system, and then install sun-java5-jre on the same system. Once that’s done, you can query the debconf database to see how it stored your answers to the Sun license agreement:

ch226-12:~# debconf-get-selections | grep sun-
sun-java5-bin   shared/accepted-sun-dlj-v1-1    boolean true
sun-java5-jre   shared/accepted-sun-dlj-v1-1    boolean true
sun-java5-jre   sun-java5-jre/jcepolicy note
sun-java5-jre   sun-java5-jre/stopthread        boolean true
sun-java5-bin   shared/error-sun-dlj-v1-1       error
sun-java5-jre   shared/error-sun-dlj-v1-1       error
sun-java5-bin   shared/present-sun-dlj-v1-1     note
sun-java5-jre   shared/present-sun-dlj-v1-1     note

Save those results (debconf seeds) into a file on the gold server.
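For example, capturing the selections and staging them where puppet can serve them as puppet:///jre5/jre5.seeds might look something like this (the hostname is made up, and the destination assumes a [jre5] mount in fileserver.conf pointing at /etc/puppet/files/jre5, so adjust for your own layout):

# on the candidate system that already has sun-java5-jre installed
debconf-get-selections | grep sun- > jre5.seeds

# stage the seeds file on the gold server under the jre5 fileserver mount
scp jre5.seeds gold.example.edu:/etc/puppet/files/jre5/jre5.seeds

Then we can modify our jre5 class as follows: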

class jre5 {
  package { "sun-java5-jre":
    require      => File["/var/cache/debconf/jre5.seeds"],
    responsefile => "/var/cache/debconf/jre5.seeds",
    ensure       => latest;
  }

  file { "/var/cache/debconf/jre5.seeds":
    source => "puppet:///jre5/jre5.seeds",
    ensure => present;
  }
}

Now our class will download the preseeded answers for the Java license, then download and install the JRE, using those preseeded answers to skip past the license agreement. I had never messed with debconf seeding previously, since I had either just imaged my systems or provided config files that would be used when I restarted any daemons or programs that depended on those files. Now debconf-utils is part of my standard system class definition.

Note that this method doesn’t work with the default puppet provided in Debian Etch (version 0.20) — the responsefile parameter for Debian packages was only added in puppet 0.22.
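
If you’re stuck on the Etch puppet for now, one workaround should be to feed the answers to debconf yourself before the package gets installed, something along these lines (an untested sketch, run on the client ahead of the install):

debconf-set-selections /var/cache/debconf/jre5.seeds
apt-get -y install sun-java5-jre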

Obscure Puppet Error #1

(First in a series of some finite positive number, for the greater edification of Googlers everywhere.)

If you get an error of err: Could not retrieve catalog: Could not parse for environment development: Syntax error at 'Debian' at /etc/puppet/master/manifests/os/Debian.pp:1 on a Debian.pp whose only line is class Debian {}, just go ahead and change it to class debian {}, then go on with your life. Yes, I know you tried to be smart and use the output of facter operatingsystem to make include $operatingsystem work right, but just lowercase the class name.

Carry on.

The autostow is Dead, Long Live stowedpackage!

I had posted earlier about distributing stowed packages via rsync and puppet to my managed systems, but that method wasn’t quite what I wanted:

  1. There was one more file to manage outside my regular puppet manifests, and I’d have to remember to keep them both up to date and in sync.
  2. There wasn’t an easy way of ensuring that other versions of a particular package got unstowed before deploying out the desired version.
  3. The entire stow tree would be copied out to every system, regardless of whether OpenMPI was a good fit for the web server.

So, here’s my new method:

  1. Keep my same metastow module loaded on the rsync server. The metastow module contains one top-level directory per puppet architecture (i686, x86_64, etc.), and each of those architecture directories is a stow tree containing every stowed package for that architecture. (There’s a sketch of this layout after the list.)
  2. Add a stowedpackage definition to my puppet manifests as follows:
    define stowedpackage ( $basepackage, $version,
        $rsyncserver='gold.cae.tntech.edu',
        $rsyncmodule='metastow',
        $stowdestdir='/usr/local/stow' ) {
        # Trigger file: changing this on the puppetmaster kicks off a redeploy.
        file { "stow-initiator_${basepackage}-${version}":
            source => "puppet:///files/stow-initiator_${basepackage}-${version}",
            path   => "/etc/puppet/stow-initiator_${basepackage}-${version}",
        }
        # Pull this package's stow directory for the local architecture.
        # Note that the exec titles include ${basepackage}-${version}; bare
        # titles like "download" would collide if the definition were used
        # for more than one package.
        exec { "download_${basepackage}-${version}":
            command     => "/usr/bin/rsync -a --delete ${rsyncserver}::${rsyncmodule}/${hardwaremodel}/${basepackage}-${version} ${stowdestdir}",
            refreshonly => true,
            subscribe   => File["stow-initiator_${basepackage}-${version}"],
        }
        # Unstow any other versions of this package first.
        exec { "unstow-others_${basepackage}-${version}":
            command     => "cd ${stowdestdir} && stow --delete ${basepackage}-*",
            refreshonly => true,
            subscribe   => Exec["download_${basepackage}-${version}"],
        }
        # Stow the requested version into place.
        exec { "stow_${basepackage}-${version}":
            command     => "cd ${stowdestdir} && stow ${basepackage}-${version}",
            refreshonly => true,
            subscribe   => Exec["unstow-others_${basepackage}-${version}"],
        }
    }
    
  3. Use the stowedpackage definition in other parts of my manifests:
    # Create OpenMPI installation and configuration.
    class openmpi {
    
        stowedpackage {
            "openmpi-1.0.1":
                basepackage => "openmpi",
                version     => "1.0.1";
        }
    
    }
    
  4. Add a trigger file to the puppetmaster’s /etc/puppet/files folder:
    /etc/puppet/files# date > stow-initiator_openmpi-1.0.1
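
For reference, the metastow module on the rsync server ends up laid out something like this (the package list is just an example):

metastow/
    i686/
        openmpi-1.0.1/
        ...
    x86_64/
        openmpi-1.0.1/
        ...

Each architecture directory is itself a stow tree, so the rsync in the download exec pulls exactly one package build: ${rsyncmodule}/${hardwaremodel}/${basepackage}-${version}.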
    

Getting an email list for your class from Web For Faculty

For all the improvements Web For Faculty/Advisors may have over the old SIS system, it sure doesn’t make it easy to generate an email list from your class roll. I can see who’s in my class, click on a flagrant abuse of the HTML select element to see their permanent address, phone numbers, off-campus email address, and other such information, but if I actually want to use my class roll as data in some other application, I’m pretty well screwed. I seem to recall a procedure buried somewhere on the old SIS system that would generate a comma-separated file of student names, email addresses, or something similar, and I’d use those to make a spreadsheet for recording grades for the semester. There doesn’t seem to be such a feature in Web For Faculty/Advisors.

But since I’m far too lazy to type them all in myself, and I don’t have a grader or other underling to task with it, here’s what I did:

  1. Find your class roll, and then click on the “Send E-mail to Class” link.
  2. This will bring up a page with a link of the form “E-mail Group: Lastname1, Firstname1 Middlename1 to LastnameN, FirstnameN MiddlenameN”. This is a fairly clever mailto: link that puts all the students on a BCC list. Mind you, this doesn’t look like it will work in Thunderbird, since WFF formats the spaces after each comma as a %20 instead of an actual space, so the second through Nth email addresses are all of the form “%20FMLastname21@tntech.edu”, but since I’m not going to actually hand this list to an email client, that’s ok by me.
  3. Right-click that link in Firefox, and select “Copy Email Address”.
  4. Open up your trusty Unix-style shell prompt (you do run MacOS X, Cygwin, or have an account on a Unix/Linux/BSD system somewhere, right? If not, can’t really help you here), and type the following command:
    cut -d= -f2- | sed 's/, /\n/g' | cut -d@ -f1 | sed 's/[0-9]//g'
    (What? You mean it’s not obvious what that does? Ok. First, it cuts off everything up to and including the first = sign, then globally replaces every ', ' with a newline, cuts off everything from the @ symbol onward, and deletes all digits. There’s a worked example with made-up addresses after the list.)
  5. After you’ve typed that command, hit Enter and then paste in the copied email addresses.
  6. Hit Enter again. Watch the reformatted student names fly by.
  7. Copy/paste the names into whatever file you want.
  8. (Optional) Marvel that your class information is held in a form that’s almost, but not quite, entirely unlike something useful.
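
Just to show what falls out the other end, here’s the same pipeline run over a couple of made-up addresses (note that the \n in the sed replacement wants GNU sed, so on MacOS X you may need gsed or a literal newline instead):

echo 'bcc=JQStudent21@tntech.edu, ARJones42@tntech.edu' | \
    cut -d= -f2- | sed 's/, /\n/g' | cut -d@ -f1 | sed 's/[0-9]//g'

which prints:

JQStudent
ARJones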

Grabbing Stills and Making FLV Movies from Axis IP Cameras

About a week and a half ago, I was reminded of a long-dormant project to archive still images from an Axis IP camera. I started this up a few years ago as a favor to a coworker, but it never really got finished. Previously, it was a pretty simple cron job that would just authenticate to the camera and download the current still. At some point, it would also use ImageMagick to convert the captured JPEGs to an MPEG or similar, but it was decidedly non-optimal.

So now that I was reminded by people who were very interested in seeing the results (i.e., monitoring their remotely-located labs), I took another stab at it. Much better results this time. Now my customers get:

  • Still images captured every 30 seconds
  • FLV movies of the day’s stills made every 5 minutes
  • A playlist that lets them browse through previous days’ activity for as long as we keep the movies around

The programs and pages that make this mini-site follow below:
Continue reading “Grabbing Stills and Making FLV Movies from Axis IP Cameras”
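
The actual programs are in the full post, but the two moving parts boil down to something like the following sketch. The camera hostname and credentials are placeholders, /axis-cgi/jpg/image.cgi is the usual Axis still-image URL but may vary by model and firmware, and the ffmpeg line assumes a reasonably current build with glob pattern support:

# grab the current still from the camera (run every 30 seconds)
curl -s -u viewer:secret -o "stills/$(date +%Y%m%d-%H%M%S).jpg" \
    "http://camera.example.edu/axis-cgi/jpg/image.cgi"

# rebuild today's FLV movie from the stills captured so far (run every 5 minutes)
ffmpeg -y -framerate 4 -pattern_type glob -i "stills/$(date +%Y%m%d)-*.jpg" \
    -c:v flv "movies/$(date +%Y%m%d).flv"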

Client Configuration Management

Back at the infrastructures.org mothership, client configuration management is described as everything that makes a host unique and/or part of a particular group or domain. And for Unix-like systems, everything pretty much comes down to configuration files, services being enabled/disabled, and cron jobs.

Hmm.

Looks like Puppet pretty much handles all of that. As long as I can describe aspects of my systems with puppet classes and modules, I’ve got reusable, consistent configurations on any servers I care to manage.

Client File Access

The infrastructures.org folks list two primary goals of what they call “client file access”: first, consistent access to users’ home directories, and second, consistent access to end-user applications. Some of the things they warn against, such as automounters and the /net directory, we never thought of using to begin with. Their concern about systems with limited disk space needing to mount a software share via NFS is less of an issue for us, too: disks large enough to hold our regular software load are relatively cheap, so there shouldn’t be much of a problem there. And since we’re supposed to be deploying applications consistently via Puppet, there shouldn’t be any inconsistency in where an application is installed. As for consistent access to home directories, we accomplish that with a two-pronged solution in Puppet, one on the file server and one on the clients:
Continue reading “Client File Access”

File Replication Servers

Back when the infrastructures.org folks were writing their pages, the page for file replication servers described a need to keep current copies of configuration files in /etc and all programs and other data from /usr/local on all the managed systems. In puppet terms, every file or other resource is just part of a higher-order class or module. If we’re already keeping our classes and modules up to date on all the managed systems, then old-school file replication comes along for free.