synology raid5 rampage

Last night a RAID 5 array in our office storage workhorse, a Synology, kicked the bucket with two failed disks. Luckily, one of them could be resurrected from the dead, and after some S.M.A.R.T. status checks everything on it seemed to be fine. A single failed disk in a RAID 5 should not cause any data loss…

Well, it should not, but a

cat /proc/mdstat


showed the recovered drive as a spare, which it wasn't prior to the failure.
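
For reference: a spare member is marked with an (S) suffix after its device name in /proc/mdstat, so the degraded array shows up along these lines (illustrative only, exact output will differ):

md2 : active raid5 sda3[0] sdc3[2] sdd3[3] sdb3[4](S)
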
So I decided to recreate the whole array, following these steps:

Stop the RAID:

mdadm --stop /dev/md2

Check the metadata version via:

mdadm --examine /dev/sdc3 | grep Version
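
The version matters: 0.9 superblocks live at the end of each member, while the newer default format (1.2) sits near the start and shifts the data offset, so recreating with the wrong version would scramble the existing data. It also does not hurt to save the full --examine output of all surviving members before running anything destructive (the file name is just an example):

mdadm --examine /dev/sda3 /dev/sdc3 /dev/sdd3 > /tmp/md2-examine.txt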

Recreate the array, with the failed drive marked as missing:

mdadm --create /dev/md2 --assume-clean --level=5 --raid-devices=4 --metadata=0.9 /dev/sda3 missing /dev/sdc3 /dev/sdd3
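
Before touching the filesystem, it is worth double-checking that the array really came up degraded but active:

mdadm --detail /dev/md2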

That went smoothly and the RAID was up and running again. Sadly, the ext4 filesystem had suffered and could not be mounted. An

fsck.ext4 /dev/md2

also yielded an error.
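
If you want to see how bad the damage is without changing anything yet, fsck.ext4 can first be run in read-only mode, which answers no to every repair question:

fsck.ext4 -n /dev/md2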

As the Synology does not ship with the handy dumpe2fs tool, I needed another way to find the location of the backup superblocks:

mkfs.ext4 -n /dev/md2

From the manpage:

-n     Causes mke2fs to not actually create a filesystem, but display what it would do if it were to create a filesystem. This can be used to determine the location of the backup superblocks for a particular filesystem, so long as the mke2fs parameters that were passed when the filesystem was originally created are used again. (With the -n option added, of course!)
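
The interesting part of the output is the list of backup superblock locations at the end, which looks roughly like this (the actual block numbers depend on the size and block size of the filesystem):

Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, ...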

With that information I was able to run:

fsck.ext4 -b 214990848 -y -C 0 /dev/md2

This saved almost all of the data on the array... (-b points fsck at one of the backup superblocks, -y answers all repair questions with yes, and -C 0 gives you a nice progress bar ;)
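
To get at the data, the volume just needs to be mounted again. On a Synology the data volume normally lives under /volume1, so something along these lines should do (the mount point is an assumption, adjust it to your box):

mount /dev/md2 /volume1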
