6.1.3. What About...
6.1.3.1. ...taking multiple snapshots of a filesystem?
6.1.3.2. ...improving performance?
To enable striping, use the -i (stripe count) and -I (stripe size) arguments to the lvcreate command:
# lvcreate main -i 3 -I 8 --name mysql --size 20G
The stripe count must be equal to or less than the number of PVs in the VG, and the stripe size (which is in kilobytes) must be a power of 2 between 4 and 512.
You can also select striping in the LV Properties area of the Create New Logical Volume dialog (Figure 6-4).
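To confirm that striping took effect, the segment view of lvs reports the stripe count and stripe size. A quick check, assuming the main/mysql names from the example above:
# lvs --segments main/mysql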
6.1.3.3. ...LVM mirroring?
LVM mirroring is offered only as a technology preview at this point. An alternative approach that is stable, proven, and provides a wider range of configuration options is to layer LVM on top of the md RAID system (discussed in Lab 6.2, "Managing RAID").
6.1.3.4. ...using LVM with RAID?
6.1.3.5. ...using a raw, unpartitioned disk as a PV?
6.1.3.6. ...a failing disk drive?
To migrate data off a specific PV, use the pvmove command:
# pvmove /dev/hda3
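After pvmove finishes, the now-empty PV can be dropped from the VG and the drive retired. A sketch of the remaining steps, assuming the VG is named main (as in the striping example) and /dev/hda3 is on the failing drive:
# vgreduce main /dev/hda3
# pvremove /dev/hda3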
6.1.3.7. ...creating a flexible disk layout?
For absolute maximum flexibility, divide your disk into multiple partitions and then add each partition to your volume group as a separate PV.
For example, if you have a 100 GB disk drive, you can divide the disk into five 20 GB partitions and use those as physical volumes in one volume group.
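As a sketch of that layout (the partition names here are only examples; yours will depend on how the disk is partitioned), the five partitions could be initialized and grouped like this:
# pvcreate /dev/sda5 /dev/sda6 /dev/sda7 /dev/sda8 /dev/sda9
# vgcreate main /dev/sda5 /dev/sda6 /dev/sda7 /dev/sda8 /dev/sda9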
The advantage to this approach is that you can free up one or two of those PVs for use with another operating system at a later date. You can also easily switch to a RAID array by adding one (or more) disks, as long as at least 20 percent of your VG is free, with the following steps (a command-level sketch follows the list):
1. Migrate data off one of the PVs.
2. Remove that PV from the VG.
3. Remake that PV as a RAID device.
4. Add the new RAID PV back into the VG.
5. Repeat the process for the remaining PVs.
You can use this same process to change RAID levels (for example, switching from RAID-1 (mirroring) to RAID-5 (rotating ECC) when going from two disks to three or more disks).
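Here is a sketch of one pass through those steps, assuming the VG is named main (as in the striping example), the PV being converted is /dev/hda3, the new disk contributes /dev/sdb1, and the array is created as /dev/md0; adjust the names to your own layout:
# pvmove /dev/hda3
# vgreduce main /dev/hda3
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda3 /dev/sdb1
# pvcreate /dev/md0
# vgextend main /dev/md0
The remaining PVs can then be converted the same way.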
6.1.4. Where Can I Learn More?
The man pages for lvm, vgcreate, vgremove, vgextend, vgreduce, vgdisplay, vgs, vgscan, vgchange, pvcreate, pvremove, pvmove, pvdisplay, pvs, lvcreate, lvremove, lvextend, lvreduce, lvresize, lvdisplay, and lvs
The LVM2 Resource page: http://sourceware.org/lvm2/
A Red Hat article on LVM: http://www.redhat.com/magazine/009jul05/departments/red_hat_speaks/
6.2. Managing RAID
RAID (Redundant Arrays of Inexpensive Disks) combines multiple drives into a single storage device to improve reliability, performance, or both.
6.2.1. How Do I Do That?
Using dmraid can thwart data-recovery efforts if the motherboard fails and another motherboard of the same model (or a model with a compatible BIOS dmraid implementation) is not available.
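To find out whether any of your drives already carry BIOS RAID metadata that dmraid would claim (and therefore whether this warning applies to you), the following checks may help; the output varies by controller, so treat this only as a sketch:
# dmraid -r
# dmraid -s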
There are six "levels" of RAID that are supported by the kernel in Fedora Core, as outlined in Table 6-3.
Table 6-3. RAID levels supported by Fedora Core
| RAID Level | Description | Protection against drive failure | Write performance | Read performance | Minimum number of drives | Capacity |
|---|---|---|---|---|---|---|
| Linear | Linear/Append. Devices are concatenated together to make one large storage area (deprecated; use LVM instead). | No. | Normal. | Normal. | 2 | Sum of all drives |
| 0 | Striped. The first block of data is written to the first block on the first drive, the second block of data is written to the first block on the second drive, and so forth. | No. | Normal to normal multiplied by the number of drives, depending on application. | Multiplied by the number of drives | 2 or more | Sum of all drives |
| 1 | Mirroring. All data is written to two (or more) drives. | Yes. As long as one drive is working, your data is safe. | Normal. | Multiplied by the number of drives | 2 or more | Equal to one drive |
| 4 | Dedicated parity. Data is striped across all drives except that the last drive gets parity data for each block in that "stripe." | Yes. One drive can fail (but any more than that will cause data loss). | Reduced: two reads and one write for each write operation. The parity drive is a bottleneck. | Multiplied by the number of drives minus one | 3 or more | Sum of all drives except one |
| 5 | Distributed parity. Like level 4, except that the drive used for parity is rotated from stripe to stripe, eliminating the bottleneck on the parity drive. | Yes. One drive can fail. | Like level 4, except with no parity bottleneck. | Multiplied by the number of drives minus one | 3 or more | Sum of all drives except one |
| 6 | Distributed error-correcting code. Like level 5, but with redundant information on two drives. | Yes. Two drives can fail. | Same as level 5. | Multiplied by the number of drives minus two | 4 or more | Sum of all drives except two |
For many desktop configurations, RAID level 1 (RAID 1) is appropriate because it can be set up with only two drives. For servers, RAID 5 or 6 is commonly used.
Although Table 6-3 specifies the number of drives required by each RAID level, the Linux RAID system is usually used with disk partitions, so a partition from each of several disks can form one RAID array, and another set of partitions from those same drives can form another RAID array.
RAID arrays should ideally be set up during installation, but it is possible to create them after the fact. The mdadm command is used for all RAID administration operations; no graphical RAID administration tools are included in Fedora.
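As a small preview of the kind of operation mdadm performs (the partition names here are only examples), a two-partition RAID 1 array could be created and its status checked like this:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# cat /proc/mdstat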