Software RAID was originally introduced in Veritas Volume Manager and was later copied in Linux. It gives the ability to create RAID arrays without a RAID-capable hardware disk controller, providing what can be called a "poor man's RAID". All major RAID levels are supported: 0, 1, and 5. Follow the 20-percent rule when creating a RAID-5 volume: because of the cost of parity calculations, volumes with more than about 20 percent writes should probably not be RAID-5 volumes; if data redundancy is needed, consider mirroring instead. Generally it is a bad idea to use software RAID on production systems; it is much better to use a disk controller which supports RAID. Software RAID is error prone and in this sense might paradoxically provide lower reliability than regular ext3 partitions. It is not compatible with hardware-based solutions. Its only real advantage is that it is free.
|
A special and rather complex driver is needed to implement a software RAID solution. Linux uses a driver called md, which is not integrated into LVM and has a separate administration utility called mdadm. The latter has a different syntax than the LVM commands. This is a typical design blunder (the first workable solution was adopted and became entrenched), but we have to live with it; it just makes the solution more clumsy.
Just like any other application, software-based arrays occupy host system memory and consume CPU cycles. They also make the disks operating-system dependent. The performance of software-based RAID disks depends directly on server CPU performance and load.
Except for the functionality, hardware-based RAID schemes have very little in common with software-based RAID. Since the host CPU can execute user applications while the array adapter's processor simultaneously executes the array functions, the result is true hardware multi-tasking. Hardware arrays also do not occupy any host system memory, nor are they operating system dependent.
mdadm operates on partitions that were created with (or converted to) the special partition id fd (Linux raid autodetect).
The example that follows creates a single partition (/dev/sdb1) on the second SCSI drive (/dev/sdb) and marks it as an automatically detectable RAID partition:
# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1116, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1116, default 1116): 1116

Change the drive type of /dev/sdb1 to Linux Raid Auto (0xFD) so it can be detected automatically at boot time:

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
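The session above stops right after the type change; a small addition not shown in the original example: write the new table to disk with the w command and then verify the type.

Command (m for help): w

# fdisk -l /dev/sdb

The System column for /dev/sdb1 should now read "Linux raid autodetect".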
TIPS:
Using matched drives is strongly recommended.
If you plan to use whole disks as array members, you don't need to partition member disks individually.
Set partitions to type Linux Raid Auto (0xFD) if you want the kernel to automatically start arrays at boot time. Otherwise, leave them as Linux (0x83).
RAID-1 and RAID-4/5 arrays should contain member disks that have partitions of the same size. If these arrays contain partitions of differing sizes, the larger partitions will be truncated to reflect the size of the smallest partition.
RAID-0 and linear mode arrays can contain partitions that have varying sizes without losing any disk space. Remember that when the smaller disks that belong to a RAID-0 become full, only the remaining disks are striped. So you might see variable performance on a RAID-0 with member disks of differing sizes as the array fills up.
mdadm provides for a half-dozen major modes of operation; the man page lists Assemble, Build, Create, Manage, Misc, Follow/Monitor, and Grow.
The most often used mdadm command is create:
mdadm -v --create /dev/md-device --level=<num> --raid-devices=<num> <device list>
The key options are --level, which selects the RAID level, and --raid-devices, which sets the number of member devices.
For example:
mdadm -v --create /dev/raid1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
Here is a more detailed discussion from the InformIT article Managing Storage in Red Hat Enterprise Linux 5: Understanding RAID:
RAID-0 and RAID-1 are often combined to gain the advantages of both. They can be combined as RAID-0+1, which means that two volumes are mirrored while each volume is striped internally. The other combination is RAID-1+0, where each disk is mirrored and striping is done across all mirrors. RAID-0+1 is considered less reliable because if one disk in each striped half fails, the entire volume fails, whereas RAID-1+0 survives any combination of failures that does not take out both disks of the same mirrored pair.
When creating partitions to use for the RAID device, make sure they are of type Linux raid auto. In fdisk, this is partition id fd. After creating the partitions for the RAID device, use the following syntax as the root user to create the RAID device:

mdadm --create /dev/mdX --level=<num> --raid-devices=<num> <device list>

The progress of the device creation can be monitored with the following command as root:

tail -f /proc/mdstat

For example:

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda5 /dev/sda6 /dev/sda7

The command cat /proc/mdstat should show output similar to:

Personalities : [raid0] [raid1]
md0 : active raid1 sda7[2] sda6[1] sda5[0]
      10241280 blocks [3/3] [UUU]
      [>....................]  resync =  0.0% (8192/10241280) finish=62.3min speed=2730K/sec
unused devices: <none>

The RAID device /dev/md0 is created. To add a partition to a RAID device, execute the following as root after creating the partition of type Linux raid auto (fd in fdisk):

mdadm /dev/mdX -a <device list>

To add /dev/sda8 to the /dev/md0 RAID device created in the previous section:

mdadm /dev/md0 -a /dev/sda8

The /dev/sda8 partition is now a spare partition in the RAID array:
Personalities : [raid0] [raid1]
md0 : active raid1 sda8[3](S) sda7[2] sda6[1] sda5[0]
      10241280 blocks [3/3] [UUU]
      [>....................]  resync =  0.6% (66560/10241280) finish=84.0min speed=2016K/sec
unused devices: <none>

If a partition in the array fails, use the following command to mark it as failed so that the array rebuilds itself onto the spare partition already added:

mdadm /dev/mdX -f <failed device>

For example, to fail /dev/sda5 from /dev/md0 and replace it with the spare (assuming the spare has already been added):

mdadm /dev/md0 -f /dev/sda5

To verify that the device has been failed and that the rebuild completes successfully, monitor the /proc/mdstat file (output shown in Listing 7.9):

tail -f /proc/mdstat

Notice that /dev/sda5 is now failed and that /dev/sda8 has changed from a spare to an active partition in the RAID array.
Listing 7.9 Failing a Partition and Replacing with a Spare
Personalities : [raid0] [raid1]
md0 : active raid1 sda8[3] sda7[2] sda6[1] sda5[4](F)
      10241280 blocks [3/2] [_UU]
      [>....................]  recovery =  0.2% (30528/10241280) finish=11.1min speed=15264K/sec
unused devices: <none>
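Once the rebuild onto the spare has finished, the failed partition is still listed as faulty and can be taken out of the array. A minimal follow-up sketch, not part of the InformIT excerpt; the replacement partition name /dev/sda9 is hypothetical:

mdadm /dev/md0 -r /dev/sda5    (remove the faulty member; -r is short for --remove)
mdadm /dev/md0 -a /dev/sda9    (optionally add a fresh partition as the new spare; hypothetical device name)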
Monitoring RAID Devices

The following commands are useful for monitoring RAID devices:
- cat /proc/mdstat: Shows the status of the RAID devices and the status of any actions being performed on them such as adding a new member or rebuilding the array.
- mdadm --query /dev/mdX: Displays basic data about the device, such as its size, RAID level, and number of spares. For example:
/dev/md0: 9.77GiB raid1 3 devices, 1 spare.

Add the --detail option to display more data (mdadm --query --detail /dev/mdX):
/dev/md0:
        Version : 00.90.03
  Creation Time : Mon Dec 18 07:39:05 2006
     Raid Level : raid1
     Array Size : 10241280 (9.77 GiB 10.49 GB)
    Device Size : 10241280 (9.77 GiB 10.49 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Dec 18 07:40:01 2006
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1
 Rebuild Status : 49% complete
           UUID : be623775:3e4ed7d6:c133873d:fbd771aa
         Events : 0.5

    Number   Major   Minor   RaidDevice State
       3       8        8        0      spare rebuilding   /dev/sda8
       1       8        6        1      active sync   /dev/sda6
       2       8        7        2      active sync   /dev/sda7
       4       8        5        -      faulty spare   /dev/sda5

- mdadm --examine <partition>: Displays detailed data about a component of a RAID array such as RAID level, total number of devices, number of working devices, and number of failed devices. For example, the output of mdadm --examine /dev/sda6 shows the following:
/dev/sda6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : be623775:3e4ed7d6:c133873d:fbd771aa
  Creation Time : Mon Dec 18 07:39:05 2006
     Raid Level : raid1
    Device Size : 10241280 (9.77 GiB 10.49 GB)
     Array Size : 10241280 (9.77 GiB 10.49 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Update Time : Mon Dec 18 07:40:01 2006
          State : active
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
       Checksum : ee90b526 - correct
         Events : 0.5

      Number   Major   Minor   RaidDevice State
this     1       8        6        1      active sync   /dev/sda6

   0     0       0        0        0      removed
   1     1       8        6        1      active sync   /dev/sda6
   2     2       8        7        2      active sync   /dev/sda7
   3     3       8        8        3      spare   /dev/sda8
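Besides polling /proc/mdstat by hand, mdadm also has a monitor mode that can mail an alert when a member fails. A minimal sketch, assuming alerts should go to root (this example is an addition, not from the InformIT article):

mdadm --monitor --scan --mail=root --delay=300 --daemonise

Alternatively, a MAILADDR line in /etc/mdadm.conf serves the same purpose when the distribution starts the monitor for you.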
January 31, 2008 | www.bgevolution.com

This concept works just as it does for an internal hard drive, although USB drives do not seem to remain part of the array after a reboot. To use a USB device in a RAID-1 setup you therefore have to leave the drive connected and the computer running. Another tactic is to occasionally sync your USB drive to the array and shut down the USB drive after synchronization. Either tactic is effective.
You can create a quick script to add the USB partitions to the RAID1.
The first thing to do when synchronizing is to add the partition:
sudo mdadm --add /dev/md0 /dev/sdb1
I have 4 partitions, so my script contains 4 add commands (a sketch of such a script is shown at the end of this section).
Then grow the arrays to fit the number of devices:
sudo mdadm --grow /dev/md0 --raid-devices=3
After growing the array your USB drive will magically sync. USB is substantially slower than SATA or PATA, so anything over 100 gigabytes will take some time; my 149-gigabyte /home partition takes about an hour and a half to synchronize. Once it is synced I do not experience any apparent difference in system performance.
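The author does not show the script itself, so here is a minimal sketch of what such an add-and-grow script might look like. The pairing of /dev/md0 through /dev/md3 with /dev/sdb1 through /dev/sdb4, and the final count of 3 raid-devices, are assumptions for illustration only:

#!/bin/sh
# Re-add each USB partition to its mirror, then grow the mirror to include it.
# Adjust the md/partition pairing to match your own layout.
for i in 0 1 2 3; do
    sudo mdadm --add /dev/md$i /dev/sdb$((i + 1))
    sudo mdadm --grow /dev/md$i --raid-devices=3
done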
I recommend you experiment with setting up and managing RAID and LVM systems before using them on an important filesystem. One way I was able to do it was to take an old hard drive and create a bunch of partitions on it (8 or so should be enough) and try combining them into RAID arrays. In my testing I created two RAID-5 arrays, each with 3 partitions. You can then manually fail and hot-remove the partitions from the array and then add them back to see how the recovery process works. You'll get a warning about the partitions sharing a physical disk, but you can ignore that since it's only for experimentation.

Initial setup of a RAID-5 array
In my case I have two systems with RAID arrays, one with two 73G SCSI drives running RAID-1 (mirroring) and my other test system is configured with three 120G IDE drives running RAID-5. In most cases I will refer to my RAID-5 configuration as that will be more typical.
I have an extra IDE controller in my system to allow me to support the use of more than 4 IDE devices which caused a very odd drive assignment. The order doesn't seem to bother the Linux kernel so it doesn't bother me. My basic configuration is as follows:
hda 120G drive
hdb 120G drive
hde 60G boot drive not on RAID array
hdf 120G drive
hdg CD-ROM drive

The first step is to create the physical partitions on each drive that will be part of the RAID array. In my case I want to use each 120G drive in the array in its entirety. All the drives are partitioned identically, so for example, this is how hda is partitioned:

Disk /dev/hda: 120.0 GB, 120034123776 bytes
16 heads, 63 sectors/track, 232581 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1      232581   117220792+  fd  Linux raid autodetect

So now, with all three drives partitioned with id fd (Linux raid autodetect), you can go ahead and combine the partitions into a RAID array:
# /sbin/mdadm --create --verbose /dev/md0 --level=5 \
    --raid-devices=3 /dev/hdb1 /dev/hda1 /dev/hdf1

Wow, that was easy. That created a special device /dev/md0 which can be used instead of a physical partition. You can check on the status of that RAID array with the mdadm command:
# /sbin/mdadm --detail /dev/md0
        Version : 00.90.01
  Creation Time : Wed May 11 20:00:18 2005
     Raid Level : raid5
     Array Size : 234436352 (223.58 GiB 240.06 GB)
    Device Size : 117218176 (111.79 GiB 120.03 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Fri Jun 10 04:13:11 2005
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 36161bdd:a9018a79:60e0757a:e27bb7ca
         Events : 0.10670

    Number   Major   Minor   RaidDevice State
       0       3        1        0      active sync   /dev/hda1
       1       3       65        1      active sync   /dev/hdb1
       2      33       65        2      active sync   /dev/hdf1

The important line to check is the State line, which should say clean; otherwise there might be a problem. At the bottom you should make sure that the State column always says active sync, which means each device is actively in the array. You could also keep a spare device on hand in case any drive fails; if you have a spare you'll see it listed as such here.
One thing you'll notice above, if you're paying attention, is that the size of the array is 240G even though I have three 120G drives in it. That's because one drive's worth of capacity is used to hold the parity data needed to survive the failure of any one of the drives.
Initial setup of LVM on top of RAID
Now that we have the /dev/md0 device, you can create a Logical Volume on top of it. Why would you want to do that? If I were to build an ext3 filesystem directly on top of the RAID device and someday wanted to increase its capacity, I wouldn't be able to do that without backing up the data, building a new RAID array and restoring my data. Using LVM allows me to expand (or contract) the size of the filesystem without disturbing the existing data.

Anyway, here are the steps to add this RAID array to the LVM system. The first command, pvcreate, will "initialize a disk or partition for use by LVM". The second command, vgcreate, will then create the Volume Group; in my case I called it lvm-raid:
# pvcreate /dev/md0
# vgcreate lvm-raid /dev/md0

The default value for the physical extent size can be too low for a large RAID array. In those cases you'll need to specify the -s option with a larger-than-default physical extent size. The default is only 4MB as of the version in Fedora Core 5. The maximum number of physical extents is approximately 65k, so take your maximum volume size and divide it by 65k, then round it to the next nice round number. For example, to successfully create a 550G RAID, figure that's approximately 550,000 megabytes and divide by 65,000, which gives you roughly 8.46. Round it up to the next nice round number and use 16M (for 16 megabytes) as the physical extent size and you'll be fine:
# vgcreate -s 16M <volume group name> <physical volume>

OK, you've created a blank receptacle, but now you have to tell it how many Physical Extents from the physical device (/dev/md0 in this case) will be allocated to this Volume Group. In my case I wanted all the space on /dev/md0 to be allocated to this Volume Group. If later I wanted to add additional space I would create a new RAID array and add that physical device to this Volume Group.
To find out how many PEs are available, use the vgdisplay command; then I can create a Logical Volume using all (or some) of the space in the Volume Group. In my case I call the Logical Volume lvm0.
# vgdisplay lvm-raid
  ...
  Free  PE / Size       57235 / 223.57 GB

# lvcreate -l 57235 lvm-raid -n lvm0

In the end you will have a device you can use very much like a plain ol' partition, called /dev/lvm-raid/lvm0. You can now check on the status of the Logical Volume with the lvdisplay command. The device can then be used to create a filesystem on.
# lvdisplay /dev/lvm-raid/lvm0
  --- Logical volume ---
  LV Name                /dev/lvm-raid/lvm0
  VG Name                lvm-raid
  LV UUID                FFX673-dGlX-tsEL-6UXl-1hLs-6b3Y-rkO9O2
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                223.57 GB
  Current LE             57235
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:2

# mkfs.ext3 /dev/lvm-raid/lvm0
  ...
# mount /dev/lvm-raid/lvm0 /mnt
# df -h /mnt
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/lvm--raid-lvm0  224G   93M  224G   1% /mnt
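The payoff of putting LVM between the RAID device and the filesystem is that the filesystem can be grown later. A rough sketch of such a later expansion, assuming a second array /dev/md1 has been built and that the kernel and e2fsprogs support online ext3 resizing (otherwise unmount first); these commands are an illustration, not part of the original walkthrough:

# pvcreate /dev/md1
# vgextend lvm-raid /dev/md1
# lvextend -l +100%FREE /dev/lvm-raid/lvm0
# resize2fs /dev/lvm-raid/lvm0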
Handling a Drive Failure
As everything eventually does break (some things sooner than others), a drive in the array will fail. It is a very good idea to run smartd on all drives in your array (and probably ALL drives, period) to be notified of a failure, or a pending failure, as soon as possible. You can also manually fail a partition, meaning take it out of the RAID array, with the following command:

# /sbin/mdadm /dev/md0 -f /dev/hdb1
mdadm: set /dev/hdb1 faulty in /dev/md0
Once the system has determined that a drive has failed or is otherwise missing (you can shut down, pull out a drive and reboot to simulate a drive failure, or use the command above to manually fail a drive), mdadm will show something like this:
# /sbin/mdadm --detail /dev/md0
    Update Time : Wed Jun 15 11:30:59 2005
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0
  ...
    Number   Major   Minor   RaidDevice State
       0       3        1        0      active sync   /dev/hda1
       1       0        0        -      removed
       2      33       65        2      active sync   /dev/hdf1

You'll notice in this case that /dev/hdb failed. I replaced it with a new drive of the same capacity and was able to add it back to the array. The first step is to partition the new drive just like when first creating the array. Then you can simply add the partition back to the array and watch the status as the data is rebuilt onto the newly replaced drive.
# /sbin/mdadm /dev/md0 -a /dev/hdb1
# /sbin/mdadm --detail /dev/md0
    Update Time : Wed Jun 15 12:11:23 2005
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
 Rebuild Status : 2% complete

During the rebuild process the system performance may be somewhat impacted, but the data should remain intact.
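One quick way to partition the replacement drive identically to a surviving member (an addition to the article, not part of it) is to copy the partition table with sfdisk before running the -a command above:

# sfdisk -d /dev/hda | sfdisk /dev/hdb    (dump hda's partition table and write it to the new hdb)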
Expanding an Array/Filesystem
I'm told it's now possible to expand the size of a RAID array much as you could on a commercial array such as a NetApp. The link below describes the procedure. I have yet to try it, but it looks promising:

Growing a RAID5 array - http://scotgate.org/?p=107
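For reference, growing the RAID-5 in this example onto a fourth 120G drive would look roughly like the sketch below, assuming a kernel and mdadm recent enough to support RAID-5 reshaping; it is untested here, as the author notes, and the device name /dev/hdh1 is an assumption:

# /sbin/mdadm /dev/md0 -a /dev/hdh1                (add the new partition as a spare)
# /sbin/mdadm --grow /dev/md0 --raid-devices=4     (reshape the array onto it)
# cat /proc/mdstat                                 (watch the reshape progress)
# pvresize /dev/md0                                (afterwards, let LVM see the new space)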
3.7.2 Using the command line to create RAID

In Example 3-12 we have two disks; a small part of the first is used for the / partition and a swap device, and the second disk is empty.
We can create a logical partition on our first disk and mirror it to the partition on the second disk. For better compatibility and performance, we choose to span identical cylinders.
Example 3-12 Starting point software RAID
# fdisk -l

Disk /dev/sda: 255 heads, 63 sectors, 17849 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End      Blocks   Id  System
/dev/sda1   *         1         1        8001   41  PPC PReP Boot
/dev/sda3            15       537    4200997+  83  Linux
/dev/sda4           538     17848  139050607+   5  Extended
/dev/sda5           538       799    2104483+  82  Linux swap

Disk /dev/sdb: 255 heads, 63 sectors, 17849 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot Start End Blocks Id System
First we create a RAID partition on the first disk (we type: fdisk /dev/sda, n, l, Enter, Enter, t, 6, fd).
n - new
l - logical
enter - use default starting cylinder
enter - use default ending cylinder
t - change type
6 - number of the partition whose type we want to change
fd - type Linux raid autodetect

Example 3-13 Creating a RAID partition on the first disk
fdisk /dev/sda

The number of cylinders for this disk is set to 17849.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (800-17848, default 800):
Using default value 800
Last cylinder or +size or +sizeM or +sizeK (800-17848, default 17848):
Using default value 17848

Command (m for help): t
Partition number (1-6): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
In Example 3-14, we create a RAID partition on the second disk and use 800 as the starting cylinder in order to be analogous to the first one. We will lose the additional space anyway if we are going to mirror the partitions.
Example 3-14 Creating a RAID partition on the second disk
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 4
First cylinder (1-17849, default 1): 800
Last cylinder or +size or +sizeM or +sizeK (800-17849, default 17849):
Using default value 17849

Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): fd
Changed system type of partition 4 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
Now we have partitions sda6 and sdb4 ready for RAID. In order to define what kind of RAID we want to create, we edit /etc/raidtab as shown in Example 3-15.
Example 3-15 /etc/raidtab file
raiddev /dev/md0
    raid-level              raid1
    nr-raid-disks           2
    chunk-size              32
    persistent-superblock   1
    device                  /dev/sda6
    raid-disk               0
    device                  /dev/sdb4
    raid-disk               1
Now we run the mkraid command in order to create a RAID device defined in /etc/raidtab, as shown in Example 3-16.
Example 3-16 mkraid
# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sda6, 136946061kB, raid superblock at 136945984kB
disk 1: /dev/sdb3, 136954125kB, raid superblock at 136954048kB
We can watch the status of our RAID device by issuing the cat /proc/mdstat command, as shown in Example 3-17.
Example 3-17 Checking RAID status
# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 sdb3[1] sda6[0]
      136945984 blocks [2/2] [UU]
      [>....................]  resync =  1.8% (2538560/136945984) finish=140.7min speed=15917K/sec
unused devices: <none>
Note: We do not need to wait for reconstruction to finish in order to use the RAID device; the synchronization is done using idle I/O bandwidth. The process is transparent, so we can use the device (place LVM on it, partition and mount) although the disks are not synchronized yet. If one disk fails during the synchronization, we will need our backup tape.
Now we can create a volume group and add /dev/md0 to it, as shown in Example 3-18.
Example 3-18 Creating VG on RAID device
# vgcreate raidvg /dev/md0
vgcreate -- INFO: using default physical extent size 4 MB
vgcreate -- INFO: maximum logical volume size is 255.99 Gigabyte
vgcreate -- doing automatic backup of volume group "raidvg"
vgcreate -- volume group "raidvg" successfully created and activated
And a logical volume in this volume group, as shown in Example 3-19.
Example 3-19 Creating LV in raidvg
# lvcreate -L 20G -n mirrordata1 raidvg
lvcreate -- doing automatic backup of "raidvg"
lvcreate -- logical volume "/dev/raidvg/mirrordata1" successfully created
This newly created volume can be formatted and mounted.
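The raidtab and mkraid tools used in this Redbook excerpt belong to the older raidtools package; on current distributions the same mirror would be created with mdadm instead. A rough equivalent, including the final format-and-mount step (this is an addition to the excerpt, and the mount point is hypothetical):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb4
# mkfs.ext3 /dev/raidvg/mirrordata1
# mount /dev/raidvg/mirrordata1 /mnt/mirrordata1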
Creating an Array
Create (mdadm --create) mode is used to create a new array. In this example I use mdadm to create a RAID-0 at /dev/md0 made up of /dev/sdb1 and /dev/sdc1:

# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: chunk size defaults to 64K
mdadm: array /dev/md0 started.
The --level option specifies which type of RAID to create, in the same way that raidtools uses the raid-level configuration line. Valid choices are 0, 1, 4 and 5 for RAID-0, RAID-1, RAID-4 and RAID-5, respectively. Linear (--level=linear) is also a valid choice for linear mode. The --raid-devices option works the same as the nr-raid-disks option when using /etc/raidtab and raidtools.

In general, mdadm commands take the format:

mdadm [mode] <raiddevice> [options] <component disks>
Each of mdadm's options also has a short form that is less descriptive but shorter to type. For example, the following command uses the short form of each option but is identical to the example I showed above:

# mdadm -Cv /dev/md0 -l0 -n2 -c128 /dev/sdb1 /dev/sdc1

-C selects Create mode, and I have also included the -v option here to turn on verbose output. -l and -n specify the RAID level and number of member disks. Users of raidtools and /etc/raidtab can see how much easier it is to create arrays using mdadm. You can change the default chunk size (64KB) using the --chunk or -c option. In this previous example I changed the chunk size to 128KB.

mdadm also supports shell expansions, so you don't have to type in the device name for every component disk if you are creating a large array. In this example, I'll create a RAID-5 with five member disks and a chunk size of 128KB:

# mdadm -Cv /dev/md0 -l5 -n5 -c128 /dev/sd{a,b,c,d,e}1
mdadm: layout defaults to left-symmetric
mdadm: array /dev/md0 started.
This example creates an array at /dev/md0 using SCSI disk partitions /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1. Notice that I have also set the chunk size to 128 KB using the -c128 option. When creating a RAID-5, mdadm will automatically choose the left-symmetric parity algorithm, which is the best choice.

Use the --stop or -S option to stop a running array:

# mdadm -S /dev/md0
/etc/mdadm.conf
/etc/mdadm.conf is mdadm's primary configuration file. Unlike /etc/raidtab, mdadm does not rely on /etc/mdadm.conf to create or manage arrays. Rather, mdadm.conf is simply an extra way of keeping track of software RAIDs. Using a configuration file with mdadm is useful, but not required. Having one means you can quickly manage arrays without spending extra time figuring out what array properties are and where disks belong. For example, if an array wasn't running and there was no mdadm.conf file describing it, then the system administrator would need to spend time examining individual disks to determine array properties and member disks.

Unlike the configuration file for raidtools, mdadm.conf is concise and simply lists disks and arrays. The configuration file can contain two types of lines, each starting with either the DEVICE or ARRAY keyword. Whitespace separates the keyword from the configuration information. DEVICE lines specify a list of devices that are potential member disks. ARRAY lines specify device entries for arrays as well as identifier information. This information can include lists of one or more UUIDs, md device minor numbers, or a listing of member devices.

A simple mdadm.conf file might look like this:

DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 devices=/dev/sdc1,/dev/sdd1
In general, it's best to create an /etc/mdadm.conf file after you have created an array and update the file when new arrays are created. Without an /etc/mdadm.conf file you'd need to specify more detailed information about an array on the command line in order to activate it. That means you'd have to remember which devices belonged to which arrays, and that could easily become a hassle on systems with a lot of disks.

mdadm even provides an easy way to generate ARRAY lines. The output is a single long line, but I have broken it here to fit the page:

# mdadm --detail --scan
ARRAY /dev/md0 level=raid0 num-devices=2 \
    UUID=410a299e:4cdd535e:169d3df4:48b7144a

If there were multiple arrays running on the system, then mdadm would generate an array line for each one. So after you're done building arrays you could redirect the output of mdadm --detail --scan to /etc/mdadm.conf. Just make sure that you manually create a DEVICE entry as well. Using the example I've provided above, we might have an /etc/mdadm.conf that looks like:

DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 \
    UUID=410a299e:4cdd535e:169d3df4:48b7144a
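Once /etc/mdadm.conf is in place, the arrays it describes can be started without typing the member devices again. A short sketch of the usual assemble step (an addition here, not from the quoted article):

# mdadm --assemble --scan    (assemble every array listed in /etc/mdadm.conf)
# mdadm -As                  (the equivalent short form)
# cat /proc/mdstat           (confirm that the arrays came up)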
This article will present a simple example with two drives. For this article, a CentOS 5.3 distribution was used on the following system:
- There are two drives for testing. They are Seagate ST3500641AS-RK drives with 16 MB of cache each. These are /dev/sdb and /dev/sdc.
Using this configuration, a simple RAID-1 configuration is created between /dev/sdb and /dev/sdc.

Step 1 - Set the ID of the drives

The first step in the creation of a RAID-1 group is to set the ID of the drives that are to be part of the RAID group. The type is "fd" (Linux raid autodetect) and needs to be set for all partitions and/or drives used in the RAID group. You can check the partition types fairly easily:

fdisk -l /dev/sdb
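The excerpt stops at Step 1; based on the commands shown earlier on this page, the remaining steps would look roughly like the following sketch (the partition names /dev/sdb1 and /dev/sdc1 are assumptions):

# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# cat /proc/mdstat                           (the array is usable while the initial resync runs)
# mkfs.ext3 /dev/md0                         (or use /dev/md0 as an LVM physical volume)
# mdadm --detail --scan >> /etc/mdadm.conf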
RAID - Wikipedia, the free encyclopedia
mdadm(8) - manage MD devices aka Linux software RAID - Linux man page
The Software-RAID HOWTO - Jakob Østergaard & Emilio Bueso (v1.1, 2004-06-03)
Runtime Software RAID Reconstructor Tutorial
RAID0 Implementation Under Linux
Linux Software RAID - A Belt and a Pair of Suspenders (Linux Magazine)
InformIT - Managing Storage in Red Hat Enterprise Linux 5: Understanding RAID