{"id":30,"date":"2007-08-02T15:17:46","date_gmt":"2007-08-02T20:17:46","guid":{"rendered":"http:\/\/blogs.cae.tntech.edu\/mwr\/2007\/08\/02\/the-new-file-server-preseeding-and-lvm\/"},"modified":"2024-10-27T14:26:19","modified_gmt":"2024-10-27T14:26:19","slug":"the-new-file-server-preseeding-and-lvm","status":"publish","type":"post","link":"https:\/\/sites.tntech.edu\/renfro\/2007\/08\/02\/the-new-file-server-preseeding-and-lvm\/","title":{"rendered":"The New File Server: Preseeding and LVM"},"content":{"rendered":"<blockquote><p>Remember that no one cares if you can back up &#8212; only if you can restore.<\/p>\n<div align=\"right\">&#8212; <a href=\"http:\/\/www.amanda.org\/docs\/using.html#restoring_with_amanda\">Amanda 2.5.2 Documentation<\/a><\/div><\/blockquote>\n<p>So we&#8217;ve got a new file server in the middle of initial installation and configuration. The file server is one of our most mission-critical systems &#8212; if mail goes down, a half-dozen people care. If the web server goes down, a few more would care, but it&#8217;s not a life-or-death issue. But the file server? That&#8217;s important. Students I&#8217;ve never met, students who wouldn&#8217;t know the difference between a shell prompt and a hole in the ground, students who couldn&#8217;t care less about parallel computing or anything else I put effort into in this area &#8212; they&#8217;ll notice the file server being down.<\/p>\n<p>And all things considered, I like it that way. I know that a student or faculty member is statistically more likely to lose data on their local hard drive, their flash drive, or their removable media of choice than I am to lose it on a RAID-5, hot-spare-ready drive array with redundant power supplies, connected to a RAID-1 server that also has redundant power supplies. We&#8217;ve had one data loss experience since 2001 when I started doing this. 
And we only lost data because of<\/p>\n<ul>\n<li>Human error in moving the external RAID in our server rack<\/li>\n<li>Having Amanda&#8217;s holding disk space on the external RAID in addition to what was on the system&#8217;s internal drives<\/li>\n<\/ul>\n<p>and I&#8217;m not too keen to repeat it. I didn&#8217;t get more than 2 hours of sleep at a time for most of a week while I was constantly having to load a different tape in the changer. Thankfully, I didn&#8217;t have to camp out in the server room the whole time, since I could manipulate the changer via ssh. But it was both embarrassing and a major drag away from anything I&#8217;d have rather been doing at the time.<\/p>\n<p>But the new file server is physically ready, and &gt;90% ready as far as configuration and software are concerned. More details on this after the jump.<!--more--><\/p>\n<p>As far as the server specifications go, we&#8217;ve got<\/p>\n<ul>\n<li>Dell PowerEdge 2950 server with Energy Smart options<\/li>\n<li>1 quad-core Xeon 1.6 GHz CPU<\/li>\n<li>2 GB RAM<\/li>\n<li>eight 146 GB SAS drives (2.5 inch, 10K RPM) in a RAID-5 with hotspare<\/li>\n<li>redundant power supplies<\/li>\n<li>4-year warranty<\/li>\n<\/ul>\n<p>For the external disk array, we&#8217;ve got a Dell PowerVault MD1000 with fifteen 750 GB SATA drives in a RAID-5 with hotspare. The tape changer is a Dell PowerVault 124T LTO-3 with a barcode reader and capacity for eight 400 GB (native capacity) tapes.<\/p>\n<p>My management goal with this system is to never type <code>apt-get install<\/code>, <code>crontab -e<\/code>, or <code>xemacs<\/code> on it. I want preseeding and puppet to handle all package installation, package configuration, and as many other administration duties as possible. 
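<\/p>\n<p>As a rough sketch of how that hand-off might look (the directive values here are placeholders, not our actual configuration), the preseed file itself can pull in extra packages and then invoke the puppet client &#8212; <code>pkgsel\/include<\/code> adds packages to the base install, and <code>preseed\/late_command<\/code> runs a command at the end of the installation:<\/p>\n<pre>\n# Install the puppet client (and ssh) along with the base system\nd-i pkgsel\/include string openssh-server puppet\n# Hypothetical first-run hook: run the puppet client once inside the\n# newly installed system; the puppetmaster hostname is a placeholder\nd-i preseed\/late_command string in-target puppetd --server puppet.example.edu --test\n<\/pre>\n<p>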
That way, in the unlikely event of a physical disaster, I can get services back up and running as quickly as possible, and I can redeploy these services onto a future server if needed.<\/p>\n<p><a href=\"http:\/\/blogs.cae.tntech.edu\/mwr\/2007\/04\/17\/unattended-debian-installations-or-how-i-learned-to-stop-worrying-and-love-the-preseedcfg\/\">My original preseed.cfg<\/a> contains the vast majority of what I needed for this server. The biggest difference is in the partitioning scheme:<\/p>\n<ul>\n<li>I want the file server to use LVM in case I need to change how space is divided up<\/li>\n<li>I want separate partitions for \/tmp and \/var to help prevent users from sucking up all the space in the system areas<\/li>\n<li>I need a separate space for Amanda&#8217;s holding disk, since we don&#8217;t back up a full tape of data each day, and you don&#8217;t want to back up your holding directory into your holding directory each day<\/li>\n<\/ul>\n<p>So my new preseeded partitioning instructions work out as<\/p>\n<pre>\nd-i partman-auto\/disk string \/dev\/discs\/disc0\/disc\nd-i partman-auto\/method string lvm\nd-i partman-auto\/purge_lvm_from_device boolean true\nd-i partman-lvm\/confirm boolean true\nd-i partman-auto\/init_automatically_partition \\\n\tselect Guided - use entire disk and set up LVM\n\nd-i partman-auto\/expert_recipe string                         \\\n      boot-root ::                                            \\\n              40 300 300 ext3                                 \\\n                      $primary{ } $bootable{ }                \\\n                      method{ format } format{ }              \\\n                      use_filesystem{ } filesystem{ ext3 }    \\\n                      mountpoint{ \/boot }                     \\\n              .                                               \\\n              500 10000 1000000000 ext3                       \\\n                      method{ format } format{ } $lvmok{ }    \\\n                      use_filesystem{ } filesystem{ ext3 }    \\\n                      mountpoint{ \/ }                         \\\n              .                                               \\\n              600000 600000 600000 ext3                       \\\n                      method{ format } format{ } $lvmok{ }    \\\n                      use_filesystem{ } filesystem{ ext3 }    \\\n                      mountpoint{ \/opt\/amanda }               \\\n              .                                               \\\n              500 9000 5000 ext3                              \\\n                      method{ format } format{ } $lvmok{ }    \\\n                      use_filesystem{ } filesystem{ ext3 }    \\\n                      mountpoint{ \/var }                      \\\n              .                                               \\\n              500 9000 5000 ext3                              \\\n                      method{ format } format{ } $lvmok{ }    \\\n                      use_filesystem{ } filesystem{ ext3 }    \\\n                      mountpoint{ \/tmp }                      \\\n              .                                               \\\n              64 512 200% linux-swap $lvmok{ }                \\\n                      method{ swap } format{ }                \\\n              .\n\nd-i partman\/confirm_write_new_label boolean true\nd-i partman\/choose_partition \\\n\tselect Finish partitioning and write changes to disk\nd-i partman\/confirm boolean true\n<\/pre>\n<p>This ends up giving me a system disk layout as follows:<\/p>\n<pre>\nFilesystem            Size  Used Avail Use% Mounted on\n\/dev\/mapper\/ch208r-root\n                      230G  707M  218G   1% \/\n\/dev\/sda1             274M   17M  243M   7% \/boot\n\/dev\/mapper\/ch208r-opt+amanda\n                      564G  422M  535G   1% \/opt\/amanda\n\/dev\/mapper\/ch208r-tmp\n                      4.7G  138M  4.4G   4% \/tmp\n\/dev\/mapper\/ch208r-var\n                      4.7G  264M  4.2G   6% \/var\n<\/pre>\n<p>The next thing I discovered is that fdisk, cfdisk, and anything else using DOS-style partition tables have trouble with comically large volumes like our 9.75 TB (pre-formatting) RAID volume. <a href=\"http:\/\/www.coraid.com\/support\/linux\/contrib\/chernow\/gpt.html\">This document from Coraid<\/a> gives enough information about using parted and GPT to let us partition the new array. Next, it turns out that the old reliable <a href=\"http:\/\/lwn.net\/Articles\/187321\/\">ext3 filesystem has an 8 TB size limit<\/a>, so we went with xfs on the external array.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Remember that no one cares if you can back up &#8212; only if you can restore. &#8212; Amanda 2.5.2 Documentation So we&#8217;ve got a new file server in the middle of initial installation and configuration. The file server is one of our most mission-critical systems &#8212; if mail goes down, a half-dozen people care. 
If &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/sites.tntech.edu\/renfro\/2007\/08\/02\/the-new-file-server-preseeding-and-lvm\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;The New File Server: Preseeding and LVM&#8221;<\/span><\/a><\/p>\n","protected":false},"author":87,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4,7,16],"tags":[],"class_list":["post-30","post","type-post","status-publish","format-standard","hentry","category-debian","category-infrastructures","category-puppet","entry"],"_links":{"self":[{"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/posts\/30","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/users\/87"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/comments?post=30"}],"version-history":[{"count":1,"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/posts\/30\/revisions"}],"predecessor-version":[{"id":494,"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/posts\/30\/revisions\/494"}],"wp:attachment":[{"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/media?parent=30"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/categories?post=30"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sites.tntech.edu\/renfro\/wp-json\/wp\/v2\/tags?post=30"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}