I just bought two new 320G hard drives last weekend and they are finally up and running with my same old Gentoo Linux OS. I wanted to put them in a RAID1 configuration and this was my first experience with software RAID.
The first time I did it, I just took one third of the new drive (about 120G) and made it a RAID1, then copied the old hard drive over. I purposefully chose not to use LVM because, although I have used it before and it is extremely handy, I have always been worried about how hard it would be to recover data from a bricked drive when the data is scattered all about. After copying everything over to this one big partition, I realized that it isn't so easy to resize a RAID. I also read about how bad it is to have everything on one partition (/home, /, /var, etc.). So I did a complete 180 and decided to use LVM after all.
Here is my partition layout:
Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          13      104422   fd  Linux raid autodetect   (/boot, in /dev/md1)
/dev/sda2              14         622     4891792+  fd  Linux raid autodetect   (empty, for Xen later, in /dev/md2)
/dev/sda3             623         866     1959930   fd  Linux raid autodetect   (/, in /dev/md3)
/dev/sda4             867       38913   305612527+   5  Extended
/dev/sda5             867        5730    39070079+  fd  Linux raid autodetect   (for LVM, in /dev/md5)
/dev/sda6            5731       10594    39070079+  fd  Linux raid autodetect   (for LVM, in /dev/md6)
/dev/sda7           10595       15458    39070079+  fd  Linux raid autodetect   (for LVM, in /dev/md7)
/dev/sda8           15459       20322    39070079+  fd  Linux raid autodetect   (for LVM, in /dev/md8)
                     ... lots of free space ...
/dev/sda9           38670       38913    51657007   82  Linux swap / Solaris    (in /dev/md9)
/dev/sdb looks exactly the same, of course; you can clone the partition table very easily with:
sfdisk -d /dev/sda | sfdisk /dev/sdb
Basically, /dev/sda5 onward are all 40G physical partitions, which I RAID1 with the corresponding partitions on /dev/sdb. I have also RAID'ed the /boot partition, the root partition, and the sda2/sdb2 pair for future use. So far, I have seven RAIDs:
-(~:$)-> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md8 : active raid1 sdb8 sda8
      39069952 blocks [2/2] [UU]
md7 : active raid1 sdb7 sda7
      39069952 blocks [2/2] [UU]
md1 : active raid1 sdb1 sda1
      104320 blocks [2/2] [UU]
md2 : active raid1 sdb2 sda2
      4891712 blocks [2/2] [UU]
md3 : active raid1 sdb3 sda3
      1959808 blocks [2/2] [UU]
md5 : active raid1 sdb5 sda5
      39069952 blocks [2/2] [UU]
md6 : active raid1 sdb6 sda6
      39069952 blocks [2/2] [UU]
unused devices:
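For the record, arrays like these are built with mdadm. A rough sketch of the commands (the device names follow the partition layout above; the mdadm.conf path varies by distribution):

```shell
# Sketch only: creating the RAID1 pairs shown above.  Run as root;
# mdadm --create is destructive, so double-check the device names first.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # /boot
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # empty, for Xen later
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # /
for n in 5 6 7 8; do  # the four 40G partitions destined for LVM
    mdadm --create /dev/md$n --level=1 --raid-devices=2 /dev/sda$n /dev/sdb$n
done
mdadm --detail --scan >> /etc/mdadm.conf  # record the arrays so they assemble at boot
```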
I then made md5, 6, 7, and 8 into LVM physical volumes and shoved them into an LVM volume group using pvcreate, vgcreate, and vgextend:
# pvs
  PV       VG   Fmt  Attr PSize  PFree
  /dev/md5 vg   lvm2 a-   37.26G      0
  /dev/md6 vg   lvm2 a-   37.26G 528.00M
  /dev/md7 vg   lvm2 a-   37.26G      0
  /dev/md8 vg   lvm2 a-   37.26G  29.52G
# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  vg     4   5   0 wz--n- 149.03G 30.03G
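That step looks roughly like this (a sketch; the volume group name "vg" is taken from the output above):

```shell
# Sketch only; run as root.  Each md device becomes an LVM physical
# volume, then all four are gathered into one volume group named "vg".
pvcreate /dev/md5 /dev/md6 /dev/md7 /dev/md8
vgcreate vg /dev/md5                    # create the VG on the first PV
vgextend vg /dev/md6 /dev/md7 /dev/md8  # grow it with the remaining PVs
```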
Then I created some LVM logical volumes using lvcreate:
# lvs
  LV   VG   Attr   LSize  Origin Snap%  Move Log Copy%
  home vg   -wi-ao 80.00G
  opt  vg   -wi-ao  2.00G
  tmp  vg   -wi-ao  2.00G
  usr  vg   -wi-ao 20.00G
  var  vg   -wi-ao 15.00G
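Those were created along these lines (a sketch; the sizes match the lvs output, and the mkfs step is shown for one volume only):

```shell
# Sketch only; run as root.
lvcreate -L 80G -n home vg
lvcreate -L 2G  -n opt  vg
lvcreate -L 2G  -n tmp  vg
lvcreate -L 20G -n usr  vg
lvcreate -L 15G -n var  vg
mkfs.ext3 /dev/vg/home   # then repeat for each logical volume
```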
I keep /home, /opt, /tmp, /usr and /var separate. That's why my root (/) partition on /dev/md3 only needs to be 2G.
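Put together, the /etc/fstab for this layout would look roughly like the following (a sketch; the mount options are my assumptions, not copied from the real file):

```
/dev/md1      /boot  ext3  noauto,noatime  1 2
/dev/md3      /      ext3  noatime         0 1
/dev/vg/home  /home  ext3  noatime         0 2
/dev/vg/opt   /opt   ext3  noatime         0 2
/dev/vg/tmp   /tmp   ext3  noatime         0 2
/dev/vg/usr   /usr   ext3  noatime         0 2
/dev/vg/var   /var   ext3  noatime         0 2
/dev/md9      none   swap  sw              0 0
```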
The reason I made a bunch of RAIDs and put them into the LVM, rather than just one big RAID, is that it means I haven't tied my entire drive down to anything. I can always move data off of one partition later using a few LVM commands and install another OS if I have to. Few people do this, but I think it is a good idea. On my computer at work I did something similar: I split the hard drive into chunks of 40G and threw them into an LVM. Splitting your hard drive into multiple physical partitions is very useful.
The other thing I did was put the swap partition at the end of the hard drive, where it is easy to make it bigger. So many people make one of their primary partitions at the beginning of the hard drive into a swap partition, which I can never understand. I almost always end up adding more RAM to my computer and thus needing a bigger swap partition. When the swap is at the beginning of the drive, growing it means adding a second one at the end, where the free space is. Putting my swap at the end of the drive in the first place means I won't need two swaps later. Given the complexity of my drives as it is, though, this swap thing is a minor concern.
I also plan on putting Xen on /dev/md2 later. I hope to be able to boot into an OS on /dev/md2, then start up the Gentoo Linux on /dev/md3 under it. That way, if I ever need to restart the Linux on /dev/md3, I can reboot it without turning off the computer. It would also allow me to run Windows XP under Xen rather than under VMware as I do now. From what I hear, Xen has better performance.
The main thing left for me to do is to buy another 320G drive and set up an rsync backup from the 320G RAID to the backup drive (probably in a USB adapter case). As most people probably know, RAID does not replace backups. RAID just protects you against hardware failure. It does not protect you against "rm -rf /". These drives were only $110 CAD each, so getting another backup drive is no big deal.
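When that drive arrives, the backup itself could be as simple as a nightly cron job along these lines (a sketch; /mnt/backup is an assumed mount point for the USB drive):

```shell
#!/bin/sh
# Sketch of a mirror-style backup.  -a preserves permissions and
# timestamps, -x keeps rsync from crossing filesystem boundaries
# (so /proc, /sys, and the backup itself are skipped), and --delete
# makes the copy an exact mirror.  Each mount is synced explicitly.
for fs in /boot /home /opt /usr /var; do
    rsync -ax --delete "$fs" /mnt/backup/
done
rsync -ax --delete / /mnt/backup/rootfs/   # the root filesystem itself
```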
One other thing I did differently than usual is that I made all my filesystems ext3, rather than reiserfs as I normally do. One of the reasons is that I have had far too many cases where my reiserfs partitions got screwed up. In fact, even my old hard drive had a screwed-up directory that was not fixable unless I ran reiserfsck with the --rebuild-tree option, which is not a very safe thing to do. It turned out the directory was not an important one. Anyways, I have never had a problem with ext2 or ext3, so I went with those instead.