Client Application Management (Part 2, for stow packages)

UPDATE: this page is largely superseded by the stowedpackage puppet definition.

Back in part 1, I outlined how I’m getting a consistent package load on my various hosts with pkgsync and puppet. This works great for things that are already included in Debian, and I’ll make .deb packages of some of our third-party commercial applications, too (Matlab, Abaqus, Ansys, etc.), mostly for the ease of running ‘apt-get install matlab74’ or making one entry in a pkgsync definition. But some things are more of a pain to package up, and it’s easier to let them install into /usr/local. Since I don’t want /usr/local to become an unmanageable mess, I’ve been using GNU Stow to manage it. The basic idea of stow is that a package (for example, Torque) that wants to install into /usr/local instead gets installed into /usr/local/stow/torque-2.1.6; stow then symlinks /usr/local/sbin/pbs_mom to /usr/local/stow/torque-2.1.6/sbin/pbs_mom, and does the same for every other file and directory in your stowed packages.
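To make that concrete, here’s a minimal sketch of the layout under a scratch prefix instead of /usr/local (the symlink is created by hand here, so the example doesn’t require stow itself; running `stow torque-2.1.6` from the stow directory would create the same link):

```shell
# Recreate the stow-style layout under a scratch prefix (throwaway paths).
prefix=$(mktemp -d)
mkdir -p "$prefix/stow/torque-2.1.6/sbin"
touch "$prefix/stow/torque-2.1.6/sbin/pbs_mom"

# This is the link `stow torque-2.1.6` would create; made by hand here.
mkdir -p "$prefix/sbin"
ln -s ../stow/torque-2.1.6/sbin/pbs_mom "$prefix/sbin/pbs_mom"

readlink "$prefix/sbin/pbs_mom"
```

The relative link target is the important part: everything under the prefix points back into the stow tree, so a package can be cleanly unstowed later.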

So I needed a way to ensure that a particular set of packages was deployed consistently to a set of machines, and minor modifications to autoapt seemed to be the easiest and most robust way to go at the time. Details after the jump.

External files needed:

My changes to autoapt aren’t rocket science: basically I just removed the version-checking logic (since my naming convention for stowed packages already includes their version numbers), changed the external commands from apt-get to stow, and added support for prerm and postinst scripts for each stowed package. Also, to trigger a restow via puppet rather than by cron job or manually, I’m adapting the method for triggering apt-get dist-upgrades via puppet.
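The restow loop itself is simple. Here’s a minimal sketch of the idea (the function and variable names are mine, not the actual autoapt code), assuming each package’s prerm/postinst script sits next to its package directory in the stow tree:

```shell
#!/bin/sh
# Sketch of the autostow loop (names are illustrative, not real autoapt
# code): for each package directory in the tree, run its prerm hook if
# present, restow the package, then run its postinst hook if present.
restow_all() {
    tree=$1
    stow_cmd=${2:-"stow -R"}    # -R: restow (unstow, then stow again)
    cd "$tree" || return 1
    for pkg in */; do
        pkg=${pkg%/}
        [ -x "$pkg.prerm" ] && "./$pkg.prerm"
        $stow_cmd "$pkg"
        [ -x "$pkg.postinst" ] && "./$pkg.postinst"
    done
    return 0
}

# Usage: restow_all /usr/local/stow
```

The second argument exists so the loop can be dry-run with a stand-in command instead of stow.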

Puppetmaster configuration: install rsync as a daemon. (If you want to use a different server for rsync, that’s fine, too; my stow tree isn’t too large, and I’d just as soon have it managed from the same system that runs puppetmaster.) My /etc/rsyncd.conf contains:

path = /usr/local/metastow
hosts allow =
hosts deny = *
read only = true
uid = 0

The read only is a safety precaution, and uid = 0 is required so that I can retrieve items that have root-only permissions (Torque’s pbs_mom, for example). The metastow directory itself contains a folder for each managed hardware architecture (currently i686 and x86_64), and each architecture folder holds the stowed directories, plus prerm and postinst scripts where applicable:

gold:/usr/local/metastow/x86_64# ls -ald torque-2.1.6*
drwxr-sr-x 7 root staff 4096 Apr 13 09:14 torque-2.1.6
-rwxr-xr-x 1 root staff   34 Apr 30 11:13 torque-2.1.6.postinst
-rwxr-xr-x 1 root staff   36 Apr 30 11:12 torque-2.1.6.prerm
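Judging by the file sizes, those hooks are one-liners. For example, a Torque postinst might do nothing more than restart the node daemon (hypothetical contents, not the actual script):

```shell
#!/bin/sh
# Hypothetical torque-2.1.6.postinst: restart pbs_mom after restowing.
/etc/init.d/pbs_mom restart
```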

Puppet configuration: I have a class of parallel computing systems that need Torque and Ganglia installed. In their class definition:

class cluster-host inherits cae-host {
  file { "/etc/stow_initiator":
    source => "puppet://",
  }
  file { "/etc/puppet/":
    source => "puppet://",
  }
  file { "/etc/puppet/autostow.cfg":
    source => "puppet://",
  }
  exec { "rsync -avz --delete$hardwaremodel/ /usr/local/stow/ ; /etc/puppet/ --filename=/etc/puppet/autostow.cfg --classes=cluster_host":
    refreshonly => true,
    subscribe => File["/etc/stow_initiator"],
  }
}

The $hardwaremodel variable is automatically defined by Facter if you have the lsb-release Debian package installed. The top-level cae-host class automatically installs it via pkgsync, but just to be safe, I’ll probably add it to my minimal list of preseeded packages for system installation. That way, I know my definitions will all work right out of the box.

Test out your stow packages before deploying them into the metastow tree. In particular, make sure they don’t conflict with anything already in /usr/local — when I built my i686 Torque packages under Sarge, /usr/local/man was a regular directory. Etch symlinks /usr/local/man to /usr/local/share/man, and as a result my Torque package wouldn’t deploy under Etch.
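Stow’s own `-n`/`--simulate` flag is the easy way to check: `stow -n -v torque-2.1.6` reports conflicts without changing anything. As a rough illustration of the plain-file case, here’s a hand-rolled check (a hypothetical helper, not part of stow; note it ignores directory-vs-symlink mismatches like the /usr/local/man case above):

```shell
# List files in a package that would collide with an existing file in
# the target prefix that isn't already a symlink -- i.e. a stow conflict.
check_conflicts() {
    pkgdir=$1
    target=$2
    (cd "$pkgdir" && find . -type f) | while read -r f; do
        dest="$target/${f#./}"
        if [ -e "$dest" ] && [ ! -L "$dest" ]; then
            echo "conflict: $dest"
        fi
    done
}

# Usage: check_conflicts /usr/local/stow/torque-2.1.6 /usr/local
```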
