Currently, untyped lives on a virtual machine provided by Bytemark. The service they provide is excellent, and we recommend them highly. Now that we're building our own server, we hope to continue hosting with Bytemark, but on our own hardware rather than a VM. We'll see how that goes.
Today, Christian and I spent a few hours working on the most fundamental part of a server: the filesystem. Before I go into any detail about the decisions we made, I’ll give you a sense for what we’re working with:
| Untyped’s new home | |
| --- | --- |
| Chassis | Intel SR1200 |
| Processors | 2x PIII 1.4GHz |
| RAM | 2GB 133MHz ECC |
| Hard disks | 2x 250GB IDE |
This server has seen use before, but we've replaced all the moving parts; we're quite pleased with its condition, and think it will provide us with a number of years of good service. We also hope we won't need to do a low-level install again in the next few years.
The first thing we did was grab a Debian 3.1r1 net install CD image. We had to boot the 2.4 kernel, as the 2.6 kernel fails to load appropriate CD drivers from the install CD; this didn't really matter. Then we came to our filesystem layout. We knew we wanted to partition off different parts of the directory tree (/, /boot, /var, /usr, /tmp, /home); we didn't know exactly how much space to give each part, however. Do we make /home 40GB, or 60GB? What about /usr? The list goes on and on.
We started by setting up a 4GB swap partition at the end of each drive and a 400MB boot partition at the front of each drive, setting aside the remaining 245GB or so for the main parts of the filesystem. We then used the Debian installer to turn the two 400MB partitions into one RAID set, and the two 245GB partitions into another. This way, both our boot partition and the main part of the drive are mirrored, but we're guaranteed that our boot partition is at the front of the disk.
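For the curious, what the installer sets up is roughly equivalent to the following mdadm commands; the device names (/dev/hda and /dev/hdc for the two IDE disks) and the partition numbers are our assumptions, not something the installer showed us:

```
# Mirror (RAID 1) the 400MB boot partitions and the big 245GB partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2   # main space
```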
(What is RAID? I'll defer to Wikipedia on that one. It keeps our two disk drives in perfect sync (this is RAID 1, mirroring); this way, if one of them fails, we might be able to replace it before the second one does, keeping our system running with little or no interruption. This is a Good Thing.)
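Keeping an eye on the mirrors, and recovering from a failed disk, is pleasantly simple; the device names here are again assumptions:

```
cat /proc/mdstat                      # shows both mirrors and their sync status
# If the second disk fails: mark it failed, pull it, fit a new disk
# partitioned identically, and re-add; the mirror rebuilds in the background.
mdadm /dev/md1 --fail /dev/hdc2 --remove /dev/hdc2
mdadm /dev/md1 --add /dev/hdc2
```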
Then, we dove into that big, 245GB space. We used LVM (the Logical Volume Manager) to partition the rest of the disk. LVM is great because it essentially abstracts away the physical layout of your disks, and allows you to dynamically resize partitions without any great gnashing of teeth. So, we laid out 40GB each for /home and /data, 20GB for /usr, and 4GB each for /, /var, and /tmp.
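For the command-line-inclined, the equivalent steps look something like this; the volume group name (vg0) and the volume names are our labels for illustration, not necessarily what the installer uses:

```
pvcreate /dev/md1             # mark the big RAID set as an LVM physical volume
vgcreate vg0 /dev/md1         # pool it into a volume group (name assumed)
lvcreate -L 40G -n home vg0   # 40GB for /home
lvcreate -L 40G -n data vg0   # 40GB for /data
lvcreate -L 20G -n usr vg0    # 20GB for /usr
lvcreate -L 4G -n root vg0    # 4GB for /
lvcreate -L 4G -n var vg0     # 4GB for /var
lvcreate -L 4G -n tmp vg0     # 4GB for /tmp
```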
Note that this adds up to only 112GB; we had roughly 245GB in that big RAID set. Our intention is to grow these partitions as we need to; for now, we chose large but reasonable values for each. In time, we may decide to increase the amount of space allocated to /home, perhaps from 40GB to 80GB. The point is, we have around 116GB of space to “grow into”, and we can allocate it to any of the partitions we currently have, or create new ones. In either case, these operations don't require us to shut down the machine, or even take it entirely offline; we only need to unmount the filesystem on the logical volume we're resizing.
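As a rough sketch, growing /home from 40GB to 80GB would look something like this (assuming ext3, and the hypothetical vg0 volume group from above):

```
umount /home                    # only this filesystem comes offline
lvextend -L 80G /dev/vg0/home   # grow the logical volume into the free space
e2fsck -f /dev/vg0/home         # check the filesystem before resizing
resize2fs /dev/vg0/home         # grow ext3 to fill the enlarged volume
mount /home
```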
Though we could do all of this much more quickly a second time around, it took us between one and two hours; we were careful to check our assumptions, and discussed many of the decisions in light of how we might want to use the server in the future.
Once the filesystem was set up, the rest of the installation went quickly; packages were pulled in automatically over the network, and we rebooted into our new machine. The filesystem looks great, and we expect the decisions we made will serve us well. Next, we'll upgrade the kernel to the 2.6 series (with SMP support, to make use of both processors), and then begin migrating the services that currently live on the VM over to the new server.
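The kernel upgrade should just be a matter of installing the right Debian package; the exact package name below is an assumption, so we'll check the archive first:

```
apt-cache search kernel-image-2.6 | grep smp   # see which SMP kernels are available
apt-get install kernel-image-2.6.8-2-686-smp   # exact version and flavour assumed
```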
Links that came in handy: