Creating, assembling, and rebuilding a small array is fine. But things start to get nasty when you try to rebuild or resync a large array.
You may get frustrated when you see it is going to take 22 hours to rebuild the array.
You can always increase RAID resync performance using the following techniques.
Recently, I built a small NAS server running Linux for one of my clients, with 5 x 2TB disks in a RAID 6 configuration, as an all-in-one backup server for Mac OS X and Windows XP/Vista client computers.
When I ran cat /proc/mdstat, it reported that md0 was created and a resync was in progress. The resync speed was around 4000K/sec and the resync would complete in approximately 22 hours. I wanted to finish this early.
/proc/sys/dev/raid/{speed_limit_max,speed_limit_min}
The /proc/sys/dev/raid/speed_limit_min is a config file that reflects the current "goal" rebuild speed for times when non-rebuild activity is occurring on an array.
The speed is in Kibibytes per second, and is a per-device rate, not a per-array rate.
The default is 1000.
The /proc/sys/dev/raid/speed_limit_max is a config file that reflects the current "goal" rebuild speed for times when no non-rebuild activity is occurring on an array.
The default is 100,000.
To see current limits, enter:
# sysctl dev.raid.speed_limit_min
# sysctl dev.raid.speed_limit_max
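Sample outputs (these show the defaults described above; the values on your system may differ):
dev.raid.speed_limit_min = 1000
dev.raid.speed_limit_max = 100000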
To increase speed, enter:
echo value > /proc/sys/dev/raid/speed_limit_min
OR
sysctl -w dev.raid.speed_limit_min=value
In this example, set it to 50000K/sec, enter:
# echo 50000 > /proc/sys/dev/raid/speed_limit_min
OR
# sysctl -w dev.raid.speed_limit_min=50000
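The rebuild rate is also capped by the max limit, so you may want to raise speed_limit_max as well (200000 matches the value set in /etc/sysctl.conf below):
# sysctl -w dev.raid.speed_limit_max=200000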
If you want to override the defaults, you could add these two lines to /etc/sysctl.conf:
dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 200000
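To apply these settings without a reboot, reload /etc/sysctl.conf:
# sysctl -p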
Bitmap Option
Bitmaps optimize rebuild time after a crash, or after removing and re-adding a device. Turn on an internal bitmap by typing the following command:
# mdadm --grow --bitmap=internal /dev/md0
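You can confirm the bitmap is active by looking for a bitmap line in /proc/mdstat:
# grep bitmap /proc/mdstat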
Once the array rebuild is complete or the array is fully synced, disable bitmaps, as an internal bitmap adds a small write-performance overhead during normal operation:
# mdadm --grow --bitmap=none /dev/md0
Result
My speed went from about 4000K/sec to 51204K/sec:
# cat /proc/mdstat
Sample outputs:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md5 : active raid1 sde2[2](S) sdd2[3](S) sdc2[4](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]
md0 : active raid6 sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
5855836800 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
[============>........] resync = 61.7% (1205475036/1951945600) finish=242.9min speed=51204K/sec
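To monitor resync progress in real time, use the watch command:
# watch -n 1 cat /proc/mdstat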
References:
- md(4) and mdadm(8) man pages