Install Bubba 2 on RAID-1 volumes

Introduction

As you may know, Bubba devices support RAID for home storage. However, the install procedure does not offer the possibility of having your system partitions (root and swap) on RAID. Besides the technical challenge, there are a few good reasons to put the system partitions on RAID as well. First, it protects the configuration you have spent many hours tuning from a hard drive failure. Second, it gives your bubba a much more fault-tolerant design.

The goal of this how-to is to end up with a plain, standard software installation on your bubba, but on RAID volumes. You will, however, have the opportunity to copy the data from your home partition, so with a backup of your settings from the web interface you should get everything back.

Big red warning

This procedure involves commands that change u-boot variables. U-boot is the bootloader of the bubba device, located on a flash chip on the board. Although everything explained here has been carefully tested (at least by myself), you might mess things up and end up unable to boot your bubba without a serial console cable (the USB rescue system won't fix it). Don't worry though: there are plenty of verifications before doing the sensitive steps, and the risk is quite low. Nevertheless, you've been warned!

In addition, these modifications prevent you from using the USB rescue disk, because your system won't boot after the rescue procedure reinstalls the software. The u-boot utilities are not available in the rescue system, so you can't restore the modifications you've made to boot the RAID array. I am currently working on a workaround, but as of today it doesn't work.

Prerequisites

This how-to has been tested only on the Bubba Two. It would probably work on a Bubba 3 with a proper u-boot configuration. If anyone owns a B3 and has validated that part, you're welcome to add it to this page.

This how-to applies to the bubba release 2.4RC1 for the bubba 2; it will probably work with future releases too. You will need:

  • A working Bubba Two with the 2.4RC1 software
  • An install/rescue USB stick with the 2.4RC1 software on it
  • An external eSATA hard drive, ideally the same model as your system disk (or at least the same size)
  • Screwdrivers to open the bubba and the external disk box (we will need to swap the disks).

Principles

The procedure is divided into several parts:

  • Preparing the new disk to receive the RAID volumes, creating the arrays, and copying the software and data onto it.

In this step, from the running bubba system, we will partition the new hard drive with four partitions: /boot (50 MB), / (10 GB), swap (1 GB) and /home with the rest. Then we will create the four RAID-1 arrays on the newly created partitions; these arrays will start in a degraded state because the other disk will only be added later. We won't change the kernel or the initrd, so the arrays need to be configured so that the kernel can detect them directly during its boot sequence.

After creating the LVM volume for the data, we will format the arrays and extract the bubba filesystem from the install USB key onto them. All these steps are closely based on the standard install script from Excito (apart from the RAID part).

Then we will need to adjust /etc/fstab to reflect the new disk configuration.

  • Modifying the boot behavior of the device

The boot modification is quite simple: we just need to change the kernel boot parameter root=/dev/sda1 to root=/dev/md1. It is stored in the u-boot environment as the diskdev parameter.
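
Since the document describes diskdev as the source of the kernel's root= parameter, a quick way to confirm the change took effect after the final reboot (once the disks have been swapped) is to look at the running kernel command line. This verification is not part of the original procedure:

root@b2:~# grep -o 'root=[^ ]*' /proc/cmdline
root=/dev/md1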

  • Powering off, physically swapping the disks and rebooting

I didn't manage to get u-boot to boot from a different disk than the one plugged inside the bubba, so we need to place our newly configured disk inside the device. The old disk goes into the external disk box. After everything is closed and plugged in, we restart the bubba, which should boot on the newly created RAID arrays.

  • Reconfiguring the old system disk and adding it to the RAID arrays.

The final step is to repartition the old disk and add the partitions to the RAID arrays. The system will then automatically sync the drives, which concludes the setup.

Tasks

All the commands below need to be run in an SSH session as root.

Preparing the new disk

Partitions and RAID arrays

  • Plug in and turn on the external disk. We assume from now on that it has been detected as /dev/sdb.
  • Create the partitions with the following command:
root@b2:~# sfdisk -uM /dev/sdb << EOF
,50,fd
,10240,fd
,1024,fd
,,fd
EOF
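
If you want to double-check the layout before building the arrays, you can print the partition table back; all four partitions should show type fd (Linux raid autodetect). This quick verification is not part of the original procedure:

root@b2:~# sfdisk -l /dev/sdb
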
  • Create the four arrays with the previously created partitions:
root@b2:~# mdadm --create --metadata=0.9 --level=1 --raid-devices=2 /dev/md0 /dev/sdb1 missing
mdadm: array /dev/md0 started.
root@b2:~# mdadm --create --metadata=0.9 --level=1 --raid-devices=2 /dev/md1 /dev/sdb2 missing
mdadm: array /dev/md1 started.
root@b2:~# mdadm --create --metadata=0.9 --level=1 --raid-devices=2 /dev/md2 /dev/sdb3 missing
mdadm: array /dev/md2 started.
root@b2:~# mdadm --create --metadata=0.9 --level=1 --raid-devices=2 /dev/md3 /dev/sdb4 missing
mdadm: array /dev/md3 started.
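
At this point the four arrays are running in degraded mode with a single member each. An optional check is to look at their state and confirm they use the 0.90 metadata format, which is what allows the kernel to auto-assemble them at boot:

root@b2:~# cat /proc/mdstat
root@b2:~# mdadm --examine /dev/sdb1 | grep -i version

Each array should report [2/1] [U_] until the old disk is added at the end of this how-to.
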
  • Create the LVM data volume:
root@b2:~# pvcreate /dev/md3
Physical volume "/dev/md3" successfully created
root@b2:~# vgcreate bubba2 /dev/md3
Volume group "bubba2" successfully created
root@b2:~# lvcreate -l 100%FREE --name storage bubba2
Logical volume "storage" created
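
If you want to verify the LVM layer before formatting, the standard reporting commands show the physical volume, the volume group and the logical volume (an optional check, not in the original procedure):

root@b2:~# pvs
root@b2:~# vgs
root@b2:~# lvs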

Filesystem and data

  • Format the swap array:
root@b2:~# mkswap /dev/md2
  • Format and mount the system array:
root@b2:~# mkfs.ext3 -q -L "Bubba root" /dev/md1
root@b2:~# tune2fs -c0 -i0 /dev/md1
root@b2:~# mkdir /mnt/bubba
root@b2:~# mount /dev/md1 /mnt/bubba
  • Format and mount the /boot array:
root@b2:~# mkfs.ext2 -q -L "Bubba boot" /dev/md0
root@b2:~# tune2fs -c0 -i0 /dev/md0
root@b2:~# mkdir /mnt/bubba/boot
root@b2:~# mount /dev/md0 /mnt/bubba/boot
  • Format and mount the data array:
root@b2:~# mkfs.ext3 -q -L "Bubba home" /dev/mapper/bubba2-storage
root@b2:~# tune2fs -c0 -i0 /dev/mapper/bubba2-storage 
root@b2:~# mkdir /mnt/bubba/home
root@b2:~# mount /dev/mapper/bubba2-storage /mnt/bubba/home
  • Plug in the USB install/rescue key. Assuming it is /dev/sdc, mount it and extract the OS:
root@b2:~# mkdir /mnt/usb
root@b2:~# mount /dev/sdc1 /mnt/usb
root@b2:~# tar zxf /mnt/usb/install/payload/*.tar.gz -C /mnt/bubba
  • [Optional]: copy your data from home to the new disk:
root@b2:~# cp -a /home /mnt/bubba
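
cp -a preserves ownership, permissions and timestamps. If your /home is large and you would rather have a resumable copy with progress information, rsync is a possible alternative, assuming it is installed on your bubba (a sketch with the same end result; -a preserves permissions, ownership and timestamps, -H keeps hard links):

root@b2:~# rsync -aH /home /mnt/bubba/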

Adjusting configuration

  • Edit /mnt/bubba/etc/fstab with your favorite editor so that it contains:
/dev/md1        /       ext3    noatime,defaults        0       1
/dev/md0        /boot   ext2    noatime,defaults        0       2
/dev/mapper/bubba2-storage      /home   ext3    defaults                0       2
/dev/md2        none    swap    sw                      0       0
usbfs           /proc/bus/usb   usbfs   defaults        0       0
/proc           /proc   proc    defaults                0       0
  • Create a soft link in /boot:
root@b2:~# ln -s . /mnt/bubba/boot/boot
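
This self-referencing link (boot -> .) presumably keeps the kernel reachable under a boot/ path: the existing setup expects the kernel below /boot/..., and since /boot is now its own small partition, the link makes that path resolve inside the partition itself. You can verify it with:

root@b2:~# ls -ld /mnt/bubba/boot/boot

The listing should show the link pointing to . (the root of the boot partition itself).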

Changing u-boot variables

  • Install the uboot-envtools package:
root@b2:~# aptitude install uboot-envtools
  • Create the configuration file for the uboot env tools:
root@b2:~# cat > /etc/fw_env.config << EOF
# MTD definition for Bubba|2
# MTD device name       Device offset   Env. size       Flash sector size
/dev/mtd0               0x050000        0x002000        0x010000
/dev/mtd0               0x060000        0x002000        0x010000
EOF
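
Before testing, you can check that the flash device referenced by this configuration actually exists by listing the MTD partitions the kernel knows about (an optional sanity check, not in the original procedure):

root@b2:~# cat /proc/mtd

mtd0 should appear in the output.
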
  • Test the u-boot configuration; the command should output something like:
root@b2:~# fw_printenv
baudrate=115200
loads_echo=1
......

WARNING: You must not see the following output:

root@b2:~# fw_printenv 
Warning: Bad CRC, using default environment
bootcmd=bootp; setenv bootargs root=/dev/nfs nfsroot=${serverip}:${rootpath} ip=${ipaddr}:${serverip}:${gatewayip}:${netmask}:${hostname}::off; bootm
bootdelay=5
baudrate=115200

If you do, check /etc/fw_env.config against the content above. If it looks OK, then your system is different for some non-obvious reason, and you cannot continue before fixing this issue.

  • Modify the diskdev variable:
root@b2:~# fw_setenv diskdev /dev/md1

If the above command fails, read the error carefully. Do not turn off or reboot your device before being sure that your bootloader will still work (if /etc/fw_env.config matches the content above, there is no reason fw_setenv would break anything; worst case, the environment will be reset to defaults on the next reboot). If in doubt, ask on the forum!

  • Check that the environment is correct:
root@b2:~# fw_printenv diskdev
diskdev=/dev/md1
  • If everything looks OK, turn off the bubba:
root@b2:~# /usr/lib/web-admin/backend.pl power_off
  • Once the bubba is off, you can turn off the external drive.

Swapping disks and first boot

  • Open the bubba and the external drive box, and swap the disks.
  • Turn on the bubba first.
  • If everything works, you should be able to configure your bubba as usual, just as after a reinstall.

You'll notice that the LED flashes quickly after the boot: that's because of the degraded RAID arrays.
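
You can confirm the reason from a shell on the freshly booted system (an optional check):

root@b2:~# cat /proc/mdstat

Every array should still show [2/1] [U_] at this point; the missing half is added in the next section, and the LED should return to normal once the arrays are complete.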

Reconfiguring old system disk

  • Turn on the external drive; we'll assume it has been detected as /dev/sdb.
  • Partition the hard drive:
    • If the disk models differ:
root@b2:~# sfdisk -uM /dev/sdb << EOF
,50,fd
,10240,fd
,1024,fd
,,fd
EOF
    • Or, if the models are identical:
root@b2:~# sfdisk -d /dev/sda | sfdisk /dev/sdb
  • Add the newly created partitions to the RAID arrays:
root@b2:~# mdadm -a /dev/md0 /dev/sdb1
root@b2:~# mdadm -a /dev/md1 /dev/sdb2
root@b2:~# mdadm -a /dev/md2 /dev/sdb3
root@b2:~# mdadm -a /dev/md3 /dev/sdb4

The arrays will begin to sync right away; you can watch the progress in /proc/mdstat:

root@b2:~# cat /proc/mdstat
Personalities : [raid0] [raid1] 
md1 : active raid1 sdb2[2] sda2[0]
    10490368 blocks [2/1] [U_]
    [====>................]  recovery = 24.0% (2518400/10490368) finish=3.3min speed=40208K/sec
    
md2 : active raid1 sdb3[2] sda3[0]
    1052160 blocks [2/1] [U_]
    	resync=DELAYED
    
md3 : active raid1 sdb4[2] sda4[0]
    476785024 blocks [2/1] [U_]
    	resync=DELAYED
    
md0 : active raid1 sdb1[1] sda1[0]
    56128 blocks [2/2] [UU]
    
unused devices: <none>