This is a relatively quick task on my end. The central ITS department handles DNS, I don’t use automount, and I already covered the vast majority of UID/GID mapping in the Authentication Servers post, though it may technically belong here. One other point at the bottom of the infrastructures.org post bears repeating, even if you don’t do the rest of the infrastructure business:
We tend to use hostname aliases in DNS, and in our scripts and configuration files, to denote which hosts currently offer which services. This way, we don’t have to edit scripts when a service moves from one host to another. For example, we might create CNAMEs of ‘sup’ for the SUP server, ‘gold’ for the gold server, and ‘cvs’ for the CVS repository server, even though these might all be the same machine.
This sidesteps the whole “what’s your naming scheme for servers” religious war and makes things much more maintainable long-term. For the longest time, we had a naming disconnect between systems the outside world would see and systems that were only used on campus. As long as only a small number of people had to manage things, that wasn’t much of a problem. But when students start wanting to work on their own laptops on the campus wireless network, or when ad hoc support people are trying to fix a problem, the lines between manager, user, and outside client get blurred.

It just makes good sense to have your DNS names and your documentation correspond to specific services rather than to something seemingly random. Everybody expects to find the web server at www. Eventually, everybody came to expect the mail server at pop, mail, or smtp. I’m starting to push further on this front: the file server is reachable at files, license servers are reachable at ls01, ls02, and so on. Documentation doesn’t need unnecessary rewrites, you can migrate services from one physical or virtual server to another with minimal downtime and plenty of time for testing, and other advantages become apparent over time.
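As a sketch, the aliasing scheme described above might look something like this in a BIND-style zone file. All hostnames and addresses here are hypothetical (the IPs come from the 192.0.2.0/24 documentation range), and the actual records would depend on how ITS runs the zone:

```
; Service aliases point at whichever machines currently host them.
; Moving a service is a one-line change here, not a documentation rewrite.
www     IN  CNAME   web01.example.edu.
mail    IN  CNAME   mx01.example.edu.
pop     IN  CNAME   mx01.example.edu.
smtp    IN  CNAME   mx01.example.edu.
files   IN  CNAME   nas01.example.edu.
ls01    IN  CNAME   lic01.example.edu.
ls02    IN  CNAME   lic02.example.edu.

; The physical (or virtual) hosts behind the aliases.
web01   IN  A       192.0.2.10
mx01    IN  A       192.0.2.20
nas01   IN  A       192.0.2.30
lic01   IN  A       192.0.2.40
lic02   IN  A       192.0.2.41
```

Note that several aliases can point at the same host (mail, pop, and smtp above), which matches the infrastructures.org observation that sup, gold, and cvs might all be the same machine. When a service moves, only its CNAME changes; scripts, documentation, and users keep using the service name.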