This guide explains how to set up software RAID1 on an already running Linux (Ubuntu 12.10) system. The GRUB2 bootloader will be configured in such a way that the system will still be able to boot if one of the hard drives fails (no matter which one).
Preliminary Note
In this tutorial I am using an Ubuntu 12.10 system with two disks, /dev/sda and /dev/sdb, which are identical in size.
/dev/sdb is currently unused, and /dev/sda has the following partitions:
/dev/sda1: / partition, ext4;
/dev/sda5: swap
After completing this guide I will have the following situation:
/dev/md0: / partition, ext4;
/dev/md1: swap
The current situation:
[codesyntax lang="bash"]
df -h
[/codesyntax]
root@ubuntu:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 19G 969M 17G 6% /
udev 494M 4.0K 494M 1% /dev
tmpfs 201M 272K 201M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 502M 0 502M 0% /run/shm
none 100M 0 100M 0% /run/user
root@ubuntu:~#
[codesyntax lang="bash"]
fdisk -l
[/codesyntax]
root@ubuntu:~# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059a4b
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 39845887 19921920 83 Linux
/dev/sda2 39847934 41940991 1046529 5 Extended
/dev/sda5 39847936 41940991 1046528 82 Linux swap / Solaris
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
root@ubuntu:~#
Installing mdadm
First of all, install the md tools:
[codesyntax lang="bash"]
aptitude install initramfs-tools mdadm
[/codesyntax]
In order to avoid a reboot, let's load a few kernel modules:
[codesyntax lang="bash"]
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
[/codesyntax]
Now:
[codesyntax lang="bash"]
cat /proc/mdstat
[/codesyntax]
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@ubuntu:~#
Preparing the second disk
To create a software RAID1 on a running system, we have to prepare the second disk added to the system (in this case /dev/sdb) for RAID1, then copy the contents from the first disk (/dev/sda) to it, and finally add the first disk to the RAID1 array.
Let's copy the partition table from /dev/sda to /dev/sdb so that both disks have exactly the same layout:
[codesyntax lang="bash"]
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
[/codesyntax]
root@ubuntu:~# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ...
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
OK
Disk /dev/sdb: 2610 cylinders, 255 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 39845887 39843840 83 Linux
/dev/sdb2 39847934 41940991 2093058 5 Extended
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
/dev/sdb5 39847936 41940991 2093056 82 Linux swap / Solaris
Warning: partition 1 does not end at a cylinder boundary
Warning: partition 2 does not start at a cylinder boundary
Warning: partition 2 does not end at a cylinder boundary
Warning: partition 5 does not end at a cylinder boundary
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
root@ubuntu:~#
And the output of the command:
[codesyntax lang="bash"]
fdisk -l
[/codesyntax]
root@ubuntu:~# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059a4b
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 39845887 19921920 83 Linux
/dev/sda2 39847934 41940991 1046529 5 Extended
/dev/sda5 39847936 41940991 1046528 82 Linux swap / Solaris
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 39845887 19921920 83 Linux
/dev/sdb2 39847934 41940991 1046529 5 Extended
/dev/sdb5 39847936 41940991 1046528 82 Linux swap / Solaris
root@ubuntu:~#
Change the type of the partitions on /dev/sdb to Linux raid autodetect:
[codesyntax lang="bash"]
sfdisk --change-id /dev/sdb 1 fd
sfdisk --change-id /dev/sdb 5 fd
[/codesyntax]
root@ubuntu:~# sfdisk --change-id /dev/sdb 1 fd
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Done
root@ubuntu:~# sfdisk --change-id /dev/sdb 5 fd
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Done
root@ubuntu:~#
To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:
[codesyntax lang="bash"]
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb5
[/codesyntax]
If you receive the following error messages, there are simply no remains from previous RAID installations, and there is nothing to worry about:
root@ubuntu:~# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
root@ubuntu:~# mdadm --zero-superblock /dev/sdb5
mdadm: Unrecognised md component device - /dev/sdb5
root@ubuntu:~#
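If you want to double-check that the partitions really carry no md superblock before going on, mdadm --examine can be used. This is an optional check and not part of the original procedure; it should simply report that no md superblock was detected:
[codesyntax lang="bash"]
# optional check, not in the original steps: both commands should report no md superblock
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdb5
[/codesyntax]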
Creating RAID arrays
Now use mdadm to create the RAID arrays. We mark the first drive (/dev/sda) as "missing" so mdadm doesn't wipe out our existing data:
[codesyntax lang="bash"]
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
[/codesyntax]
root@ubuntu:~# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@ubuntu:~# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
root@ubuntu:~#
[codesyntax lang="bash"]
cat /proc/mdstat
[/codesyntax]
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb5[1]
1045952 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sdb1[1]
19905408 blocks super 1.2 [2/1] [_U]
unused devices: <none>
root@ubuntu:~#
The output above means that we have two degraded arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok).
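If you want a more verbose view than /proc/mdstat, mdadm --detail reports the state of each array and of its member devices. This is optional; at this point both arrays are expected to show a degraded state, since their first member is still "missing":
[codesyntax lang="bash"]
# optional: detailed array state; "degraded" is expected here because /dev/sda has not been added yet
mdadm --detail /dev/md0
mdadm --detail /dev/md1
[/codesyntax]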
Create the filesystems on the RAID arrays (ext4 on /dev/md0 and swap on /dev/md1):
[codesyntax lang="bash"]
mkfs.ext4 /dev/md0
mkswap /dev/md1
[/codesyntax]
root@ubuntu:~# mkfs.ext4 /dev/md0
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1245184 inodes, 4976352 blocks
248817 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
152 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
root@ubuntu:~# mkswap /dev/md1
mkswap: /dev/md1: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 1045948 KiB
no label, UUID=728f7cfe-bd95-43e5-906d-c8a70023d081
root@ubuntu:~#
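The new swap area will only be used automatically after the reboot, once /etc/fstab points at /dev/md1. If you want to activate it right away, you can enable it by hand (optional):
[codesyntax lang="bash"]
# optional: activate the new swap area immediately and list the active swap devices
swapon /dev/md1
swapon -s
[/codesyntax]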
Adjust the mdadm configuration file, which doesn't contain any information about RAID arrays yet:
[codesyntax lang="bash"]
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
[/codesyntax]
Display the content of /etc/mdadm/mdadm.conf:
[codesyntax lang="bash"]
cat /etc/mdadm/mdadm.conf
[/codesyntax]
root@ubuntu:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This file was auto-generated on Tue, 23 Oct 2012 04:36:40 -0700
# by mkconf $Id$
root@ubuntu:~#
Adjusting The System To RAID1
Let's mount /dev/md0:
[codesyntax lang="bash"]
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
[/codesyntax]
[codesyntax lang="bash"]
mount
[/codesyntax]
root@ubuntu:~# mount
/dev/sda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
/dev/md0 on /mnt/md0 type ext4 (rw)
root@ubuntu:~#
Replace the UUID values in /etc/fstab with the UUID values returned by blkid:
[codesyntax lang="bash"]
blkid /dev/md0 /dev/md1
[/codesyntax]
root@ubuntu:~# blkid /dev/md0 /dev/md1
/dev/md0: UUID="4a49251b-e357-40a4-b13f-13b041c55a9d" TYPE="ext4"
/dev/md1: UUID="728f7cfe-bd95-43e5-906d-c8a70023d081" TYPE="swap"
root@ubuntu:~#
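You can edit /etc/fstab by hand with your favourite editor, or script the substitution. The following is only an illustration and not part of the original steps; <old-root-uuid> and <old-swap-uuid> are placeholders for the UUIDs currently referenced in /etc/fstab (they belong to /dev/sda1 and /dev/sda5 and can be shown with blkid /dev/sda1 /dev/sda5):
[codesyntax lang="bash"]
# illustration only: swap the placeholder UUIDs for the ones reported for /dev/md0 and /dev/md1
sed -i 's/<old-root-uuid>/4a49251b-e357-40a4-b13f-13b041c55a9d/' /etc/fstab
sed -i 's/<old-swap-uuid>/728f7cfe-bd95-43e5-906d-c8a70023d081/' /etc/fstab
[/codesyntax]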
After changing the UUID values, /etc/fstab should look as follows:
[codesyntax lang="bash"]
cat /etc/fstab
[/codesyntax]
root@ubuntu:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# / was on /dev/sda1 during installation
UUID=4a49251b-e357-40a4-b13f-13b041c55a9d / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=728f7cfe-bd95-43e5-906d-c8a70023d081 none swap sw 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
root@ubuntu:~#
Next replace /dev/sda1 with /dev/md0 in /etc/mtab:
[codesyntax lang="bash"]
sed -e "s/dev\/sda1/dev\/md0/" -i /etc/mtab
[/codesyntax]
[codesyntax lang="bash"]
cat /etc/mtab
[/codesyntax]
root@ubuntu:~# cat /etc/mtab
/dev/md0 / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
udev /dev devtmpfs rw,mode=0755 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
none /run/shm tmpfs rw,nosuid,nodev 0 0
none /run/user tmpfs rw,noexec,nosuid,nodev,size=104857600,mode=0755 0 0
/dev/md0 /mnt/md0 ext4 rw 0 0
root@ubuntu:~#
Setting up the GRUB2 boot loader
Create the file /etc/grub.d/09_swraid1_setup as follows:
[codesyntax lang="bash"]
cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vim /etc/grub.d/09_swraid1_setup
[/codesyntax]
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry 'Ubuntu, with Linux 3.5.0-17-generic' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        insmod mdraid1x
        insmod ext2
        set root='(md/0)'
        linux /boot/vmlinuz-3.5.0-17-generic root=/dev/md0 ro quiet
        initrd /boot/initrd.img-3.5.0-17-generic
}
Make sure you use the correct kernel version in the menuentry (in the linux and initrd lines).
[codesyntax lang="bash"]
uname -r
[/codesyntax]
root@ubuntu:~# uname -r
3.5.0-17-generic
root@ubuntu:~#
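If your running kernel differs from 3.5.0-17-generic, adjust the menuentry accordingly. A small sketch (not part of the original steps) that substitutes the running kernel version into the file:
[codesyntax lang="bash"]
# illustration only: replace the kernel version used in the example menuentry with the running one
KVER=$(uname -r)
sed -i "s/3.5.0-17-generic/${KVER}/g" /etc/grub.d/09_swraid1_setup
[/codesyntax]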
Update the GRUB configuration and adjust the ramdisk to the new situation:
[codesyntax lang="bash"]
update-grub
update-initramfs -u
[/codesyntax]
root@ubuntu:~# update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.5.0-17-generic
Found initrd image: /boot/initrd.img-3.5.0-17-generic
Found memtest86+ image: /boot/memtest86+.bin
done
root@ubuntu:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-3.5.0-17-generic
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
root@ubuntu:~#
Copy files to the new disk
Copy the files from the first disk (/dev/sda) to the second one (/dev/sdb):
[codesyntax lang="bash"]
cp -dpRx / /mnt/md0
[/codesyntax]
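If cp complains that it cannot copy a directory into itself (an issue several readers mention in the comments below), rsync is a workable alternative. A sketch, assuming /dev/md0 is still mounted on /mnt/md0; the -x option keeps rsync on the root filesystem so /proc, /sys, /dev, /run and /mnt/md0 itself are not descended into:
[codesyntax lang="bash"]
# alternative to cp suggested in the comments: archive mode, preserve hard links, stay on one filesystem
rsync -aHx / /mnt/md0/
[/codesyntax]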
Preparing GRUB2 (Part 1)
Install the GRUB2 boot loader on both disks (/dev/sda and /dev/sdb):
[codesyntax lang="bash"]
grub-install /dev/sda
grub-install /dev/sdb
[/codesyntax]
root@ubuntu:~# grub-install /dev/sda
Installation finished. No error reported.
root@ubuntu:~# grub-install /dev/sdb
Installation finished. No error reported.
Now we reboot the system and hope that it boots ok from our RAID arrays:
[codesyntax lang="bash"]
reboot
[/codesyntax]
Preparing /dev/sda
If everything went well, you should now find /dev/md0 in the output of:
[codesyntax lang="bash"]
df -h
[/codesyntax]
root@ubuntu:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 19G 985M 17G 6% /
udev 494M 4.0K 494M 1% /dev
tmpfs 201M 304K 201M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 502M 0 502M 0% /run/shm
none 100M 0 100M 0% /run/user
root@ubuntu:~#
The output of:
[codesyntax lang="bash"]
cat /proc/mdstat
[/codesyntax]
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb5[1]
1045952 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sdb1[1]
19905408 blocks super 1.2 [2/1] [_U]
unused devices: <none>
root@ubuntu:~#
Change the type of the partitions on /dev/sda to Linux raid autodetect:
[codesyntax lang="bash"]
sfdisk --change-id /dev/sda 1 fd
sfdisk --change-id /dev/sda 5 fd
[/codesyntax]
root@ubuntu:~# sfdisk --change-id /dev/sda 1 fd
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Done
root@ubuntu:~# sfdisk --change-id /dev/sda 5 fd
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Done
root@ubuntu:~#
[codesyntax lang="bash"]
fdisk -l
[/codesyntax]
root@ubuntu:~# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059a4b
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 39845887 19921920 fd Linux raid autodetect
/dev/sda2 39847934 41940991 1046529 5 Extended
/dev/sda5 39847936 41940991 1046528 fd Linux raid autodetect
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 39845887 19921920 fd Linux raid autodetect
/dev/sdb2 39847934 41940991 1046529 5 Extended
/dev/sdb5 39847936 41940991 1046528 fd Linux raid autodetect
Disk /dev/md0: 20.4 GB, 20383137792 bytes
2 heads, 4 sectors/track, 4976352 cylinders, total 39810816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1: 1071 MB, 1071054848 bytes
2 heads, 4 sectors/track, 261488 cylinders, total 2091904 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
root@ubuntu:~#
Now we can add /dev/sda1 and /dev/sda5 to the respective RAID arrays:
[codesyntax lang="bash"]
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda5
[/codesyntax]
root@ubuntu:~# mdadm --add /dev/md0 /dev/sda1
mdadm: added /dev/sda1
root@ubuntu:~# mdadm --add /dev/md1 /dev/sda5
mdadm: added /dev/sda5
root@ubuntu:~#
Take a look at:
[codesyntax lang="bash"]
cat /proc/mdstat
[/codesyntax]
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[2] sdb5[1]
1045952 blocks super 1.2 [2/1] [_U]
resync=DELAYED
md0 : active raid1 sda1[2] sdb1[1]
19905408 blocks super 1.2 [2/1] [_U]
[=======>.............] recovery = 36.4% (7247872/19905408) finish=1.0min speed=205882K/sec
unused devices: <none>
root@ubuntu:~#
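The rebuild runs in the background. If you like, you can watch it until both arrays show [UU] (optional; press Ctrl+C to stop watching):
[codesyntax lang="bash"]
# optional: refresh the sync status every 5 seconds
watch -n 5 cat /proc/mdstat
[/codesyntax]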
Then adjust /etc/mdadm/mdadm.conf to the new situation:
[codesyntax lang="bash"]
cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
[/codesyntax]
Display the content of /etc/mdadm/mdadm.conf:
[codesyntax lang="bash"]
cat /etc/mdadm/mdadm.conf
[/codesyntax]
root@ubuntu:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This file was auto-generated on Tue, 23 Oct 2012 04:36:40 -0700
# by mkconf $Id$
ARRAY /dev/md/0 metadata=1.2 UUID=89e5afc0:2d741a2c:7d0f40f0:a1457396 name=ubuntu:0
ARRAY /dev/md/1 metadata=1.2 UUID=ce9163fc:4e168956:5c9050ad:68f15735 name=ubuntu:1
root@ubuntu:~#
Preparing GRUB2 (Part 2)
Now it's safe to delete /etc/grub.d/09_swraid1_setup:
[codesyntax lang="bash"]
rm -f /etc/grub.d/09_swraid1_setup
[/codesyntax]
Update our GRUB2 bootloader configuration and install it again on both disks (/dev/sda and /dev/sdb):
[codesyntax lang="bash"]
update-grub
update-initramfs -u
grub-install /dev/sda
grub-install /dev/sdb
[/codesyntax]
Reboot the machine:
[codesyntax lang="bash"]
reboot
[/codesyntax]
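After this final reboot both disks are active members of the arrays. As a quick optional check, / should be mounted on /dev/md0 and both arrays should show [UU] in /proc/mdstat once the resync has finished:
[codesyntax lang="bash"]
# optional post-reboot check
df -h /
cat /proc/mdstat
[/codesyntax]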
Comments
Very good tutorial, but why make RAID for swap?
And this is not working for Ubuntu 14.04; the OS does not boot from a degraded RAID.
Fair enough! It's not necessary to have RAID for swap...
Well, Ubuntu 14.04 was not yet born when I wrote this page. But even Ubuntu 12.04 could boot from a degraded RAID (https://help.ubuntu.com/community/Installation/SoftwareRAID#Boot_from_Degraded_Disk).
Yes, it is necessary to have RAID for swap.
Swap holds pages of memory which have been pushed out to disk.
No other copy of those pages exists anywhere except in that swap.
If the disk dies, you suddenly lose a chunk of memory: it's as if someone suddenly pulled out a stick of DRAM from your motherboard.
cp -dpRx / /mnt/md0 is not working; it says it cannot copy to itself.
My disks: sda (Linux with /root, /home, NTFS, swap), sdb (new disk to join the RAID1):
sda1 /root (md0)
sda3 /ntfs (md1)
sda5 /home (md2)
sda6 /swap (md3)
So how can I copy the files from the first disk (/dev/sda) to the second one (/dev/sdb)?
Can I use dd?
You could use rsync.
You might want to mention that
grub-install /dev/sda
grub-install /dev/sdb
may fail on some systems and you may have to use
grub-install --recheck /dev/sda
grub-install --recheck /dev/sdb
Wow - the first RAID howto on the web that really works! Thanks so much for your hard work and care, Jonas. Very timely for me, as I discovered that Xenial does not have the old RAID-capable alternate .iso, just the desktop version. Therefore I installed as a normal desktop system to /dev/sda and followed this guide to preserve the installation. Went like clockwork.
Regarding the cp -dpRx comment above: true, it doesn't copy to itself. So I performed "cp -dpRx / /mount path" (the mount point itself directly, not /md0), and it worked fine for me.
The horse fell at the last fence: it dropped into grub rescue at the final reboot. No problem, I let boot-repair do its thing and have a perfect RAID system up and running now.
Thanx again, Jonas!
Excellent guide!
I'm currently following this software RAID tutorial and am at the point where I've prepared to boot off the second disk. So far I've done the following:
- Copied partition table from /dev/sda to /dev/sdb
- Changed the partition type to 'fd' on both /dev/sdb partitions
- Created RAID arrays /dev/md1 and /dev/md2 using /dev/sdb1 and /dev/sdb2
- mkfs.ext4 on /dev/md1 and mkswap on /dev/md2
- Created my mdadm.conf using mdadm --examine --scan
- mounted /dev/md1 to /mnt/md1 and copied /dev/sda1 to it
- Edited /etc/fstab to contain the UUIDs (from blkid) of /dev/md1 and /dev/md2 instead of /dev/sda1 and /dev/sda2
- Edited /etc/mtab to contain /dev/md1 instead of /dev/sda1
- Created a Grub menu entry using the proper linux kernel version name
- updated grub
- updated initramfs
- installed Grub on /dev/sda and /dev/sdb
I've verified that the files are copied to the /dev/md1, that the /etc/fstab and /etc/mtab files reflect the UUIDs of the new array properly, and that the array is up and running.
Problem is, when I reboot I get stuck in an infinite loop after selecting the RAID menu entry from the grub menu. The machine immediately reboots and brings me back to the grub menu -- rinse and repeat. I've verified that my menu entry is correct and contains the proper root and I'm not getting any error messages. I'm just at a loss as to where to look for troubleshooting. Any help would be greatly appreciated.
Very nice write-up.
The 'sfdisk --change-id /dev/sdb 1 fd' syntax is deprecated, at least in sfdisk version 2.28.
I think that the command should now be, as follows:
# sfdisk --part-type /dev/sdb 1 A19D880F-05FC-4D3B-A006-743F0F84911E
I found the last string using fdisk: e.g. launch fdisk and press 'l'. In the list being displayed, no. 29 is:
29 Linux RAID A19D880F-05FC-4D3B-A006-743F0F84911E
I don't know if this is correct, but it may need attention.
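For the MBR/DOS-labelled disks used in this guide, the newer syntax should accept the same hex type code instead of the GPT type GUID quoted above; a sketch, not verified against every sfdisk version:
[codesyntax lang="bash"]
# sketch for newer sfdisk on an MBR/DOS-labelled disk (the GUID above applies to GPT disks)
sfdisk --part-type /dev/sdb 1 fd
sfdisk --part-type /dev/sdb 5 fd
[/codesyntax]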
You're missing one important thing from this guide: you need to set up monitoring of the arrays.
Last time I had a drive go I knew about it because I received an e-mail from the mdadm monitor that an array had become degraded.
If one drive goes, and you don't know about it, and don't take any action, you're not getting the full benefit of RAID; all you have is a somewhat more reliable storage system such that lifetime(storage) = max(lifetime(driveA), lifetime(driveB)).
Actually, I think this is wrong. I've never had to do anything special to get this monitoring to happen.
Either the kernel or some userland distro scripts set up the "/sbin/mdadm --monitor --scan" process.
It's there in Ubuntu and Debian.
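A minimal way to check that monitoring and mail alerts are actually wired up on a given system (a sketch; MAILADDR is already set in the mdadm.conf shown earlier in this guide):
[codesyntax lang="bash"]
# send a test alert for each array to the MAILADDR configured in /etc/mdadm/mdadm.conf
mdadm --monitor --scan --oneshot --test
[/codesyntax]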
On Ubuntu 16.04 I have /dev/sda1 as /boot (md0) and /dev/sda5 on / (md1).
In /etc/grub.d/09_swraid1_setup do I set root as md0?
Typo:
In /etc/grub.d/09_swraid1_setup do I set root as md1?
Thank you for the quality walkthrough. I did this a bit differently, but not by much. After creating md0, I first rsynced "/" to md0, then I did the GRUB changes on the existing root partition (because that will be used merely as a boot partition on reboot), and applied all changes on the new md0 partition. So my original root remained intact if something went wrong. And then, after booting into md0, I did not need to remove the GRUB modifications; I just needed to run update-grub and update-initramfs and I was good to go.
Kirill: "Very quality tutorial, but why making raid for swap?"
Failing drive - and with it, not raid-ed failing swap - can cause system stability problems, and what's worse, data loss. So on a production server, swap reliability is as importan as system memory reliability, that's why many use raid1 for swap.
Kirill: "And this not working for ubuntu 14.04, OS not booting from degrading raid".
It does. All you need to do is to edit "/etc/initramfs-tools/conf.d/mdadm" (if it does not exist, create it with root user, give it 644), and add/modify the following: "BOOT_DEGRADED=true".
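For reference, a small sketch of that change (assuming the file does not exist yet):
[codesyntax lang="bash"]
# allow booting from a degraded array, as described in the comment above
echo "BOOT_DEGRADED=true" > /etc/initramfs-tools/conf.d/mdadm
chmod 644 /etc/initramfs-tools/conf.d/mdadm
update-initramfs -u
[/codesyntax]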
Is it now possible to have /boot/efi as part of a RAID1?
Web searches so far have turned up information saying this is not possible, then indicating only FAT32 is supported, etc., but my Ubuntu 18.04 LTS has a vfat /boot/efi, so I suspect the previous info is just dated. So I thought I would ask explicitly here, since this guide is very well done and I think it is worth having this information covered on this page.
Please answer me as soon as possible.
In the last step
update-initramfs -u
I get:
Couldn't identify type of root file system for fsck hook
Note that update-initramfs must be executed for all ramdisks if they are used in grub.cfg menu entries.
update-initramfs -u updates the first ramdisk only.
So you need to execute the command:
update-initramfs -u -k all