6.2.1.1. Displaying Information About the Current RAID Configuration

The kernel reports the current RAID status through /proc/mdstat:

$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdc1[1] hda1[0]
      102144 blocks [2/2] [UU]
md1 : active raid1 hdc2[1] hda3[0]
      1048576 blocks [2/2] [UU]
md2 : active raid1 hdc3[1]
      77023232 blocks [2/1] [_U]

This display indicates that only the raid1 (mirroring) personality is active, managing three device nodes:

md0

This is a two-partition mirror, incorporating /dev/hda1 (device 0) and /dev/hdc1 (device 1). The total size is 102,144 blocks (about 100 MB). Both devices are active.

md1

This is another two-partition mirror, incorporating /dev/hda3 as device 0 and /dev/hdc2 as device 1. It's 1,048,576 blocks long (1 GB), and both devices are active.

md2

This is yet another two-partition mirror, but only one partition (/dev/hdc3) is present. The size is about 75 GB.

The designations md0, md1, and md2 refer to multidevice nodes that can be accessed as /dev/md0, /dev/md1, and /dev/md2.
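The block counts in /proc/mdstat are in 1 KiB units, so the sizes quoted above can be double-checked with a quick shell calculation (the numbers are taken straight from the listing):

$ echo $((102144 * 1024))        # md0: 1 KiB blocks to bytes
104595456
$ echo $((77023232 * 1024))      # md2: 1 KiB blocks to bytes
78871789568

104,595,456 bytes is 99.75 MiB (104.6 MB), and 78,871,789,568 bytes is 73.46 GiB (78.87 GB), matching the values reported by mdadm below.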

You can get more detailed information about RAID devices using the mdadm command with the -D (detail) option. Let's look at md0 and md2:

# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Mon Aug 9 02:16:43 2004
     Raid Level : raid1
     Array Size : 102144 (99.75 MiB 104.60 MB)
    Device Size : 102144 (99.75 MiB 104.60 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Mar 28 04:04:22 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : dd2aabd5:fb2ab384:cba9912c:df0b0f4b
         Events : 0.3275

    Number   Major   Minor   RaidDevice State
       0       3        1        0      active sync   /dev/hda1
       1      22        1        1      active sync   /dev/hdc1

# mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.03
  Creation Time : Mon Aug 9 02:16:19 2004
     Raid Level : raid1
     Array Size : 77023232 (73.46 GiB 78.87 GB)
    Device Size : 77023232 (73.46 GiB 78.87 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Mar 28 15:36:04 2006
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 31c6dbdc:414eee2d:50c4c773:2edc66f6
         Events : 0.19023894

    Number   Major   Minor   RaidDevice State
       0       0        0        -      removed
       1      22        3        1      active sync   /dev/hdc3

Note that md2 is marked as degraded because one of the devices is missing.
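To restore redundancy on md2, a partition at least as large as the existing element would have to be added to the array, after which the kernel rebuilds the mirror onto it. A minimal sketch, using /dev/hda4 purely as a hypothetical replacement partition (it does not appear in the listings above):

# mdadm --add /dev/md2 /dev/hda4      # hypothetical partition of at least 77023232 blocks

The rebuild progress would then show up in /proc/mdstat, just as in the recovery example later in this section.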

6.2.1.2. Creating a RAID array

If you want to experiment with RAID, you can use two USB flash drives; in these next examples, I'm using some 64 MB flash drives that I have lying around. If your USB drives are auto-mounted when you insert them, unmount them before using them for RAID, either by right-clicking on them on the desktop and selecting Unmount Volume or by using the umount command.
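As a sketch of the creation step (the exact device names depend on your system; here I assume the flash drives appear as /dev/sdb and /dev/sdc, each carrying a single partition marked as "Linux raid autodetect"), a two-element RAID 1 array is created like this:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# cat /proc/mdstat        # confirm the new array is active

A freshly created mirror will show an initial resync in progress in /proc/mdstat; the array is usable while this completes.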

To simulate a drive failure, mark one element of the array as faulty:

# mdadm --fail /dev/md0 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0

The "failed" drive is marked with the symbol (F) in /proc/mdstat:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[2](F) sdb1[0]
      63872 blocks [2/1] [U_]

unused devices: <none>

To place the "failed" element back into the array, remove it and add it again:

# mdadm --remove /dev/md0 /dev/sdc1
mdadm: hot removed /dev/sdc1
# mdadm --add /dev/md0 /dev/sdc1
mdadm: re-added /dev/sdc1
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      63872 blocks [2/1] [U_]
      [>....................]  recovery =  0.0% (928/63872) finish=3.1min speed=309K/sec

unused devices: <none>

If the drive had really failed (instead of being subject to a simulated failure), you would replace the drive after removing it from the array and before adding the new one.
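A sketch of that real-replacement sequence, reusing the device names from this example and assuming the replacement disk arrives blank (copying the surviving drive's partition table with sfdisk is just one convenient way to partition it):

# mdadm --remove /dev/md0 /dev/sdc1        # drop the failed element from the array
(shut down or hot-swap as appropriate, then install the new drive)
# sfdisk -d /dev/sdb | sfdisk /dev/sdc     # duplicate sdb's partition layout onto the new disk
# mdadm --add /dev/md0 /dev/sdc1           # add the new partition; the mirror rebuilds onto it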

Do not hot-plug disk drives, i.e., physically remove or add them with the power turned on, unless the drive, disk controller, and connectors are all designed for this operation. If in doubt, shut down the system, switch the drives while the system is turned off, and then turn the power back on.

Checking /proc/mdstat again a little later shows the recovery progressing:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      63872 blocks [2/1] [U_]
      [=============>.......]  recovery = 65.0% (42496/63872) finish=0.8min speed=401K/sec

unused devices: <none>

The mdadm command shows similar information in a more verbose form:

# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Mar 30 01:01:00 2006
     Raid Level : raid1
     Array Size : 63872 (62.39 MiB 65.40 MB)
    Device Size : 63872 (62.39 MiB 65.40 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Mar 30 01:48:39 2006
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 65% complete

           UUID : b7572e60:4389f5dd:ce231ede:458a4f79
         Events : 0.34

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      spare rebuilding   /dev/sdc1
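If you want to keep an eye on a rebuild without retyping the command, the status can be redisplayed automatically; this is just a convenience, not part of mdadm itself:

# watch -n 5 cat /proc/mdstat      # refresh the RAID status every five seconds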

6.2.1.4. Stopping and restarting a RAID array

Before stopping a RAID array, make sure nothing is using it; here the volume group built on top of it (named test) is deactivated first:

# vgchange -an test
  0 logical volume(s) in volume group "test" now active
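With the volume group deactivated, the array itself can be stopped and later reassembled with mdadm. A minimal sketch, assuming the /dev/md0 array and the /dev/sdb1 and /dev/sdc1 members from the earlier flash-drive examples:

# mdadm --stop /dev/md0                            # deactivate the array and release its members
# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1    # restart the array later from the same partitions
# vgchange -ay test                                # then reactivate the volume group on top of it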
