Recovering RAID 5 Array with multiple failed drives

Guest

Recovering RAID 5 Array with multiple failed drives

Post by Guest » Fri Jun 21, 2013 8:03 am

Hi,

I have a RAID 5 array (4 x 1 TB in a ReadyNAS 600). One of the drives was marked bad by the system and I replaced it. During the rebuild, at around 80% completion, another drive went bad and the system marked the volume as dead. My hope is to image the drives and recover the data - I think I need R-Studio to do this.

My question involves the recovery of the data. Under normal RAID operation, the entire drive is marked bad. For data recovery, assuming that no two disks have bad sectors in the same location and we know which sectors are bad in each image, can R-Studio recover the data? For example, if sector 101 is bad on drive 1 and sector 201 is bad on drive 2.

Thanks.

Alt
Site Moderator
Posts: 3129
Joined: Tue Nov 11, 2008 2:13 pm

Re: Recovering RAID 5 Array with multiple failed drives

Post by Alt » Tue Jun 25, 2013 8:17 am

What does "bad" actually mean for the first failed drive? Did it really stop working, or did the RAID controller just decide it should be replaced? Which drives to choose for creating a virtual RAID will depend on this answer.

As this is a RAID 5, the data can be recovered if no two drives have failed sectors in the same place.
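
To see why: in a RAID 5, the parity block of each stripe is the XOR of that stripe's data blocks, so any single missing block can be rebuilt by XOR-ing the surviving blocks together. A minimal Python sketch of the idea (made-up block contents; this is illustrative, not R-Studio's code):

# RAID 5 stores parity = d0 XOR d1 XOR d2 for each stripe, so any
# one missing block is the XOR of all the remaining blocks.
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Hypothetical 4-drive stripe: three data blocks plus one parity block.
d0, d1, d2 = b"\x11" * 16, b"\x22" * 16, b"\x33" * 16
parity = xor_blocks([d0, d1, d2])

# Suppose d1 sits on a bad sector: rebuild it from the survivors.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1  # one bad sector per stripe is always recoverable

This is also why two bad sectors in the same stripe defeat ordinary reconstruction: the XOR then has two unknowns.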

Guest

Re: Recovering RAID 5 Array with multiple failed drives

Post by Guest » Tue Jun 25, 2013 3:27 pm

Hi,

The ReadyNAS sent me an email saying there was an unrecoverable error (vague, I know), then I received a second message telling me there were SMART errors on the drive. I think the drive took too long to recover from its bad sectors, so the ReadyNAS dropped it from the array. During the rebuild, I think the same thing happened to another drive in the array. All the drives spin up, so I believe the problem lies with bad sectors on the drives. I'm thinking that I would start with images of the original 4 drives. How does R-Studio determine that a sector it tried to image was bad?

In the event that there are bad areas in the same place on two drives, is any kind of recovery still possible?

Thanks.

Alt
Site Moderator
Posts: 3129
Joined: Tue Nov 11, 2008 2:13 pm

Re: Recovering RAID 5 Array with multiple failed drives

Post by Alt » Wed Jun 26, 2013 8:35 am

Guest wrote: Hi,

I'm thinking that I would start with images of the original 4 drives. How does R-Studio determine that a sector it tried to image was bad?
I'd do the same. R-Studio will receive a message from Windows that it cannot read the sector and will inform you about that through its log. By the way, bad sectors seriously reduce imaging speed, so have enough patience.
Guest wrote: In the event that there are bad areas in the same place on two drives, is any kind of recovery still possible?
Recovery might be possible, but only by a professional with proper equipment.
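
As an illustration of the imaging behaviour described above, here is a minimal sector-by-sector imaging loop that zero-fills unreadable sectors and logs their offsets. This is a hypothetical Python sketch, not R-Studio's implementation; the device and file paths in the usage comment are assumptions:

SECTOR = 512  # assumed logical sector size

def image_device(src_path, dst_path, log_path):
    """Copy a device sector by sector; zero-fill unreadable
    sectors and record their offsets in a log file."""
    with open(src_path, "rb", buffering=0) as src, \
         open(dst_path, "wb") as dst, \
         open(log_path, "w") as log:
        offset = 0
        while True:
            try:
                src.seek(offset)
                data = src.read(SECTOR)
            except OSError:              # read failed: likely a bad sector
                data = b"\x00" * SECTOR
                log.write("bad sector at offset %d\n" % offset)
            if not data:                 # end of device reached
                break
            dst.write(data)
            offset += SECTOR

# Hypothetical usage (Linux device names assumed):
# image_device("/dev/sdb", "/mnt/usb/drive1.img", "drive1.log")

Real imaging tools read in large chunks and drop to single-sector reads only around errors, which is one reason drives with many bad sectors image so slowly.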

Guest

Re: Recovering RAID 5 Array with multiple failed drives

Post by Guest » Sun Aug 11, 2013 1:46 am

I've determined the reason the drives were dropped from the array: TLER. The drives that I used (Seagate ST31000528AS) don't appear to support TLER (Time-Limited Error Recovery), despite being on the ReadyNAS HCL!

Anyway, I've managed to create drive images on some external USB 3.0 drives using the Linux version of R-Studio. This took between 26 and 34 hours per drive; I'm not sure if that is expected performance. I never had any unrecoverable sectors reported in the log, and I would expect to see them if there were any. It doesn't look like this is any kind of weird RAID configuration, except that the block size is 16 KB. I believe my drive images are good and the array is undamaged.

I am, however, stuck at the data recovery stage. I can create my virtual block RAID from the partitions, and it appears to be consistent. I have since discovered that the array contains an LVM volume - is that something R-Studio can recover?
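
For context on what the virtual block RAID is doing with that 16 KB block size: each virtual data block maps to a member disk and offset according to the RAID 5 layout. Below is a sketch of the mapping, assuming the left-symmetric layout that Linux md uses by default (the actual ReadyNAS layout may differ; illustrative Python, not R-Studio code):

CHUNK = 16 * 1024   # the 16 KB block size mentioned above
NDISKS = 4          # 4-drive array

def map_virtual_chunk(v):
    """Map virtual data chunk v to (disk, stripe) for a
    left-symmetric RAID 5 (assumed layout)."""
    stripe = v // (NDISKS - 1)     # NDISKS - 1 data chunks per stripe
    d = v % (NDISKS - 1)           # position within the stripe
    parity_disk = (NDISKS - 1) - (stripe % NDISKS)
    disk = (parity_disk + 1 + d) % NDISKS
    return disk, stripe

# Example: find where virtual byte offset 1,000,000 lives.
v, within = divmod(1_000_000, CHUNK)
disk, stripe = map_virtual_chunk(v)
print("disk %d, byte offset %d" % (disk, stripe * CHUNK + within))

A consistency check that comes back clean with these parameters is good evidence that the block size and layout R-Studio detected match what the NAS used.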

My other option is to write the images out to new drives, check/modify the superblocks, and see whether the NAS can assemble the array correctly.

Any thoughts on the way forward would be great.

Alt
Site Moderator
Posts: 3129
Joined: Tue Nov 11, 2008 2:13 pm

Re: Recovering RAID 5 Array with multiple failed drives

Post by Alt » Mon Aug 12, 2013 12:20 pm

Unfortunately, R-Studio doesn't support LVM. Sorry.
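
One manual workaround: LVM2 stores an ASCII "LABELONE" label in one of the first four 512-byte sectors of a physical volume, followed by plain-text volume group metadata, so the logical volume's data offset can be located by hand and the file system scanned from there. A rough Python sketch of the first step, with a hypothetical image path:

def find_lvm_label(image_path):
    """Scan the first four 512-byte sectors of an image for the
    LVM2 physical volume label signature "LABELONE"."""
    with open(image_path, "rb") as img:
        head = img.read(4 * 512)
    for sector in range(4):
        if head[sector * 512:sector * 512 + 8] == b"LABELONE":
            return sector * 512   # byte offset of the label
    return None

# Hypothetical usage against an image of the assembled array:
# off = find_lvm_label("/mnt/usb/array.img")
# if off is not None:
#     print("LVM2 label found at byte", off)

The plain-text metadata that follows the label records each logical volume's starting extent, from which the byte offset of the file system can in principle be computed.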

Guest

Re: Recovering RAID 5 Array with multiple failed drives

Post by Guest » Tue Aug 13, 2013 11:26 am

So while I understand that LVM is not supported, I have been trying to verify the consistency of the RAID array. When I performed the consistency check, I received a lot of "OK (Zeroed)" blocks in addition to mostly "OK" and a couple of "BAD". What is an "OK (Zeroed)" block?

The disk layout is such that on each disk there is a 2 GB Linux ext3 partition, a small 512 MB partition, and finally a 931 GB partition. It appears that the ext3 partition is RAID 1 across all four drives and that the 512 MB and 931 GB partitions are both RAID 5. I have been working exclusively with the 931 GB partitions. While looking through some of the log files on the ext3 partition, it appears that the sector size for the 3rd partition (931 GB) is 1024 and not 512. I know my RAID block size is 16384. Will the sector size not being 512 affect the results of the consistency check, and should I change the block size to compensate?

Alt
Site Moderator
Posts: 3129
Joined: Tue Nov 11, 2008 2:13 pm

Re: Recovering RAID 5 Array with multiple failed drives

Post by Alt » Wed Aug 14, 2013 10:51 am

Guest wrote: So while I understand that LVM is not supported, I have been trying to verify the consistency of the RAID array. When I performed the consistency check, I received a lot of "OK (Zeroed)" blocks in addition to mostly "OK" and a couple of "BAD". What is an "OK (Zeroed)" block?
That means that the blocks are zeros on all disks, and therefore it's impossible to say whether these blocks are consistent.
Guest wrote: The disk layout is such that on each disk there is a 2 GB Linux ext3 partition, a small 512 MB partition, and finally a 931 GB partition. It appears that the ext3 partition is RAID 1 across all four drives and that the 512 MB and 931 GB partitions are both RAID 5. I have been working exclusively with the 931 GB partitions. While looking through some of the log files on the ext3 partition, it appears that the sector size for the 3rd partition (931 GB) is 1024 and not 512. I know my RAID block size is 16384. Will the sector size not being 512 affect the results of the consistency check, and should I change the block size to compensate?
1. No.
2. No.
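
To make the three result labels concrete: a stripe whose members XOR to zero is consistent ("OK"), one that doesn't is "BAD", and an all-zero stripe passes only vacuously ("OK (Zeroed)"), exactly as described above. A minimal Python sketch with made-up block values (not R-Studio's internals):

def classify_stripe(blocks):
    """Classify one RAID 5 stripe: the XOR of all data and parity
    blocks must be zero if parity is consistent."""
    acc = bytearray(len(blocks[0]))
    all_zero = True
    for block in blocks:
        if any(block):
            all_zero = False
        for i, b in enumerate(block):
            acc[i] ^= b
    if all_zero:
        return "OK (Zeroed)"   # zeros XOR to zeros no matter what,
                               # so nothing can be concluded
    return "OK" if not any(acc) else "BAD"

print(classify_stripe([b"\x00" * 8] * 4))                 # OK (Zeroed)
print(classify_stripe([b"\x01" * 8, b"\x02" * 8,
                       b"\x04" * 8, b"\x07" * 8]))        # OK: 1^2^4 = 7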
