Areca RAID6 Recovery Help

Discussions on using the professional data recovery program R-STUDIO for RAID reconstruction, NAS recovery, and recovery of various disk and volume managers: Windows Storage Spaces, Apple volumes, and Linux Logical Volume Manager.
hidden72
Posts: 4
Joined: Tue Nov 16, 2010 1:22 am

Areca RAID6 Recovery Help

Post by hidden72 » Tue Nov 16, 2010 1:37 am

Does anyone know which RAID6 variant/layout the Areca 1230 SATA controller uses?

The story so far: my RAID6 array had multiple simultaneous failures over the weekend and went completely offline. I know the order the drives were originally in when they were connected to the RAID controller, and I'm pretty sure I had a 64k or 128k stripe as well. I've selected RAID6 and am currently scanning the "Virtual Block Raid" device.

Do I need to let the scan run all the way to the end before I can tell whether or not I have the right stripe size / drive order, or can I interrupt it after a while and check some files (JPGs, etc.)? Also, once the scan has completed, do I need to re-scan if I change the stripe size, or is the scan a one-time thing after which I can tweak all the settings I need to?

Since there were multiple read failures across different drives, is R-Studio smart enough to rebuild the data from the other drives when it encounters a read error on a drive?

Thanks!

Alt
Site Moderator
Posts: 3129
Joined: Tue Nov 11, 2008 2:13 pm

Re: Areca RAID6 Recovery Help

Post by Alt » Tue Nov 16, 2010 10:56 am

I recommend consulting the manufacturer about the parameters.
You don't have to scan the entire RAID if you aren't scanning for known file types; just 1 GB is enough.
Yes, you have to re-scan the RAID when you change the parameters.
Yes, if the read failures are within what the RAID's redundancy scheme can cover, R-Studio will recover the data.
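
For context on why that last point works: each RAID6 stripe carries two independent parity blocks, P (plain XOR) and Q (Reed-Solomon), so up to two unreadable blocks per stripe can be rebuilt from the remaining members. Here is a minimal sketch of the simplest case only, rebuilding one missing block from the XOR parity; the block size and data values are made up purely for illustration:

#include <stdio.h>
#include <string.h>

#define BLOCK 8   /* toy block size; real stripe units are e.g. 64k or 128k */
#define NDATA 4   /* data blocks per stripe in this toy example */

/* XOR the P block with every surviving data block to rebuild the missing one. */
static void rebuild_from_p(unsigned char data[][BLOCK], int missing,
                           const unsigned char *p, unsigned char *out)
{
    memcpy(out, p, BLOCK);
    for (int i = 0; i < NDATA; i++)
        if (i != missing)
            for (int j = 0; j < BLOCK; j++)
                out[j] ^= data[i][j];
}

int main(void)
{
    unsigned char data[NDATA][BLOCK] = {"AAAAAAA", "BBBBBBB", "CCCCCCC", "DDDDDDD"};
    unsigned char p[BLOCK] = {0}, rebuilt[BLOCK];

    /* P is the XOR of all data blocks in the stripe. */
    for (int i = 0; i < NDATA; i++)
        for (int j = 0; j < BLOCK; j++)
            p[j] ^= data[i][j];

    rebuild_from_p(data, 2, p, rebuilt);                  /* pretend block 2 was unreadable */
    printf("rebuilt block 2: %.7s\n", (char *)rebuilt);   /* prints CCCCCCC */
    return 0;
}

The second (Q) parity uses a more involved Reed-Solomon construction and is what covers a second failed block in the same stripe.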

hidden72
Posts: 4
Joined: Tue Nov 16, 2010 1:22 am

Re: Areca RAID6 Recovery Help

Post by hidden72 » Tue Nov 16, 2010 3:57 pm

Thanks for the quick response. I've opened a ticket with Areca to see if I can get that information. The information about scanning is helpful, and I will be sure to re-scan the images after making parameter changes. As a suggestion, you may want to notify the user, when they change and apply a parameter, that the current scan data is no longer valid for the new configuration and that they should re-scan before attempting recovery.

I'm working on creating images using R-Drive Image right now. Does the .ARC file format take into account where there are read errors on the physical drives and communicate that to R-Studio so that it knows which drive(s) to recover the data from?

I'll post back here if/when I hear from Areca regarding their block layout for RAID6 on the ARC-1230 adapters. I believe their layout is different, though, since one of your competitors has a product that shows the following RAID6 modes:
RAID6 (Standard P+Q)
RAID6 (ARECA Compact)

Unfortunately, they don't provide much information as to the difference between those layouts.

Then again, the competitor's product doesn't seem to be working either, so I can't complain much. Thanks again!

Alt
Site Moderator
Posts: 3129
Joined: Tue Nov 11, 2008 2:13 pm

Re: Areca RAID6 Recovery Help

Post by Alt » Wed Nov 17, 2010 7:06 am

hidden72 wrote: I'm working on creating images using R-Drive Image right now. Does the .ARC file format take into account where there are read errors on the physical drives and communicate that to R-Studio so that it knows which drive(s) to recover the data from?
R-Studio can also create images (see Images).
R-Drive Image fills bad sectors with zeros, as it isn't intended for creating images for data recovery, whereas in R-Studio you may specify which pattern to use for bad sectors (see Bad Sectors).

hidden72
Posts: 4
Joined: Tue Nov 16, 2010 1:22 am

Re: Areca RAID6 Recovery Help

Post by hidden72 » Thu Nov 18, 2010 12:31 am

Areca's response is here:

"i am sorry, we do not have such information available for cutomers."

If you have any way to check with Areca, that'd be nice. Obviously, they're not interested in working with me.

Once I finish imaging all of the drives, I'll see if I can figure out if the Areca controller uses one of the preset block layouts.

Alt
Site Moderator
Posts: 3129
Joined: Tue Nov 11, 2008 2:13 pm

Re: Areca RAID6 Recovery Help

Post by Alt » Thu Nov 18, 2010 9:18 am

I believe their response to us would be the same. Maybe you know what type of RAID6 they use? That would make the guessing a little bit easier.

hidden72
Posts: 4
Joined: Tue Nov 16, 2010 1:22 am

Re: Areca RAID6 Recovery Help

Post by hidden72 » Mon Nov 22, 2010 3:48 pm

I was able to get the array partially up and running by imaging each of the individual physical drives and recreating the array through the Areca controller. Some of my data is corrupted, though, so I'm still working on figuring this out. I know the stripe size now (4k for this array), but I'm still unable to get a good block layout for this controller.

I'll probably go find 5 old drives, throw them in the controller, make a volume, then pull them all back out and use your FAQ to determine the block order and parity information. I'll have to wait until I get this data backed up first, though, and that will take a few days.
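
For example, before pulling the test drives, something along these lines could stamp each stripe unit of the test volume with its own index, so the pieces are easy to locate on the individual member disks afterwards. This is only a rough sketch: the device path /dev/sdx, the 128k stripe unit, and the 10000-unit count are assumptions to adjust for the actual test array.

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Write a readable marker ("STRIPE 0000123") at the start of every stripe
   unit of the test volume. After the disks are pulled, scanning each member
   for these markers shows which logical units landed where, giving the disk
   order, the rotation pattern, and the start offset. */
int main(void)
{
    const long long su = 128 * 1024;     /* assumed stripe-unit size */
    const char *dev = "/dev/sdx";        /* hypothetical test volume device */
    char buf[64];
    int fd = open(dev, O_WRONLY);
    if (fd < 0) { perror(dev); return 1; }

    for (long long unit = 0; unit < 10000; unit++) {   /* ~1.25 GB of markers */
        int n = snprintf(buf, sizeof buf, "STRIPE %07lld\n", unit);
        if (pwrite(fd, buf, n, unit * su) != n)
            break;                       /* stop at end of volume or on error */
    }
    close(fd);
    return 0;
}

Afterwards, something like strings or hexdump piped to grep on each pulled disk will turn up the STRIPE markers and their offsets, which can then be compared against the stripe-size and disk-order guesses.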

If I find something out, I'll be sure to post it here.

kermit

Re: Areca RAID6 Recovery Help

Post by kermit » Sun Jun 03, 2012 8:37 pm

For the Areca 1680 RAID6 you need:

. the logical-to-physical order of the disks (which might be the same, IDK; my disks were shuffled anyway)

. the offset into the logical disks where your stripes start

. the mapping from logical virtual-disk address to member disk and address, which is:

stripe=lba/sw+offset;
disk=((lba%sw)+((sw+2)-((2*stripe)%(sw+2))))%(sw+2);

where your LBA is counted in stripe units (128k, 32k, whatever you made the array with), and sw is the number of data disks, e.g. 10 if you have a 12-disk array.

The first VD, for me, was offset 512k into the physical disks.

I don't know how the parity was calculated; I just ignored that data.
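
To see what that mapping does in practice, here is a tiny standalone check; the sw = 10 (a 12-disk set) and zero start offset are just assumptions for the example:

#include <stdio.h>

/* Print which member disk holds each logical stripe unit for the first three
   stripes of a 12-disk RAID6 (sw = 10 data disks + 2 parity), using the
   mapping above. The two member positions left unused in each stripe are
   presumably where the P/Q blocks live. */
int main(void)
{
    const long long sw = 10;   /* assumed stripe width in data disks */

    for (long long lba = 0; lba < 3 * sw; lba++) {
        long long stripe = lba / sw;
        long long d = ((lba % sw) + ((sw + 2) - ((2 * stripe) % (sw + 2)))) % (sw + 2);
        printf("logical unit %2lld  stripe %lld  -> member disk %2lld\n", lba, stripe, d);
    }
    return 0;
}

The output shows the data shifting by two member positions on every stripe, the same staircase pattern that is visible in the layout diagrams in the follow-up post below.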

and some ugly fun

#define _LARGEFILE64_SOURCE
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

// reassemble the virtual disk: pass all member disks, in logical order, as arguments
int main (int argc, char *argv[]){
    long long lba=0, sw=argc-3, su=128*1024;   // sw = data disks (members minus 2 parity)
    int disk[sw+2], d=0;
    while(d<(sw+2)){
        disk[d]=open(argv[d+1],O_RDONLY);
        //fprintf(stderr,"%d %d\n",d,disk[d]);
        d++;
    }
    while(1){
        char buf[su];
        long long stripe;
        stripe=lba/sw;
        d=((lba%sw)+((sw+2)-((2*stripe)%(sw+2))))%(sw+2); //DID NOT WORK, but lined up first two disks
        // fprintf(stderr,"%12lld %3lld %3d %4s\n",lba++,stripe,d,argv[d+1]); continue;
        lseek64(disk[d],(stripe+4)*128*1024,SEEK_SET);
        //awk '{lba=($1*8388607);stripe=int((lba/22));ld=lba%22;so=24-((2*stripe)%24);disk=(ld+so)%24;printf "Physical-0x877c %4s %09x Logical %09x disk %2d\n",$2,stripe,lba, disk}' // this was to figure out the order of the disks based on the output of some dd loop i used to find the XFS allocation group headers, which are all spaced apart predictably.. i lost that script, sorry.
        if(write(1,buf,read(disk[d],buf,128*1024))!=128*1024)
            break;
        lba++;
    }
}


It still won't be easy, but if you can code, that's enough to go on.

kermit

Re: Areca RAID6 Recovery Help

Post by kermit » Fri Aug 10, 2012 11:53 am

Sorry it's messy, but if you're here, you're desperate enough to make sense of it:

areca_raid6_assembler.c:

#define _LARGEFILE64_SOURCE
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
// hard coded for 24 disks raid 6 areca format 128k stripe units
// arecas format for 22 disks (NOT 24)
//
//000.001.002.003.004.005.006.007.008.009.010.011.012.013.014.015.016.017.018.019.........
//022.023.024.025.026.027.028.029.030.031.032.033.034.035.036.037.038.039.....'...020.021.
//044.045.046.047.048.049.050.051.052.053.054.055.056.057.058.059.........040.041.042.043.
//066.067.068.069.070.071.072.073.074.075.076.077.078.079........P060.061.062.063.064.065.
//088.089.090.091.092.093.094.095.096.097.098.099......H..080.081.082.083.084.085.086.087.
//110.111.112.113.114.115.116.117.118.119.....7,+.100.101.102.103.104.105.106.107.108.109.
//132.133.134.135.136.137.138.139.........120.121.122.123.124.125.126.127.128.129.130.131.
//154.155.156.157.158.159.........140.141.142.143.144.145.146.147.148.149.150.151.152.153.
//176.177.178.179.......9.160.161.162.163.164.165.166.167.168.169.170.171.172.173.174.175.
//198.199.....7.3.180.181.182.183.184.185.186.187.188.189.190.191.192.193.194.195.196.197.
//........200.201.202.203.204.205.206.207.208.209.210.211.212.213.214.215.216.217.218.219.
//220.221.222.223.224.225.226.227.228.229.230.231.232.233.234.235.236.237.238.239.........
//242.243.244.245.246.247.248.249.250.251.252.253.254.255.256.257.258.259......|..240.241.
//264.265.266.267.268.269.270.271.272.273.274.275.276.277.278.279.........260.261.262.263.
//286.287.288.289.290.291.292.293.294.295.296.297.298.299........P280.281.282.283.284.285.
//308.309.310.311.312.313.314.315.316.317.318.319.........300.301.302.303.304.305.306.307.
//330.331.332.333.334.335.336.337.338.339.......+.320.321.322.323.324.325.326.327.328.329.
//352.353.354.355.356.357.358.359.........340.341.342.343.344.345.346.347.348.349.350.351.
//374.375.376.377.378.379.........360.361.362.363.364.365.366.367.368.369.370.371.372.373.
//396.397.398.399......D9.380.381.382.383.384.385.386.387.388.389.390.391.392.393.394.395.
//418.419.....g<3.400.401.402.403.404.405.406.407.408.409.410.411.412.413.414.415.416.417.
//........420.421.422.423.424.425.426.427.428.429.430.431.432.433.434.435.436.437.438.439.
//440.441.442.443.444.445.446.447.448.449.450.451.452.453.454.455.456.457.458.459.........
//462.463.464.465.466.467.468.469.470.471.472.473.474.475.476.477.478.479.....g\..460.461.
//484.485.486.487.488.489.490.491.492.493.494.495.496.497.498.499......P..480.481.482.483.
//506.507.508.509.510.511.512.513.514.515.516.517.518.519........P500.501.502.503.504.505.
int main (int argc, char*argv[]){
    long long lba=0,sw=argc-3,su=128*1024;
    int disk[sw+2],d=0;
    while(d<(sw+2)){
        disk[d]=open(argv[d+1],O_RDONLY);
        //fprintf(stderr,"%d %d\n",d,disk[d]);
        d++;
    }
    while(1){
        char buf[su];

        off64_t o;
        long long stripe;
        //long long vdoff=0x877c; // sdc
        long long vdoff=0x877c;
        stripe=lba/sw;
        d=((lba%sw)+((sw+2)-((2*stripe)%(sw+2))))%(sw+2);
        // fprintf(stderr,"%12lld %3lld %3d %4s\n",lba++,stripe,d,argv[d+1]); continue;
        lseek64(disk[d],(stripe+vdoff)*su,SEEK_SET);
        // lseek64(disk[d],(stripe+19074+4)*128*1024,SEEK_SET);
        //awk '{lba=($1*8388607);stripe=int((lba/22));ld=lba%22;so=24-((2*stripe)%24);disk=(ld+so)%24;printf "Physical-0x877c %4s %09x Logical %09x disk %2d\n",$2,stripe,lba, disk}'
        if(write(1,buf,read(disk[d],buf,su))!=su)
            break;
        lba++;
    }
}
/*
 Sun Jun 03, 2012 8:37 pm
for areca 1680 raid-6:
. the logical to physical order of the disks (which might be the same, IDK, my disks were shuffled anyway).
. the offset into the logical disks where your stripes start
. the logical virtual disk address to logical disk and address
stripe=lba/sw+offset;
disk=((lba%sw)+((sw+2)-((2*stripe)%(sw+2))))%(sw+2);
where your LBA is in stripe units (ala, 128k, 32k..whatever you made the array with).. and sw is, say 10, if you have a 12 disk array.
the first VD, for me, was offset 512k into the physical disks.
I don't know how the parity was calculated, I just ignored that data.
*/
areca_raid6_logical_to_physical.c:

#define _LARGEFILE64_SOURCE
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
// hard coded for 24 disks raid 6 areca format 128k stripe units
// arecas format for 22 disks (NOT 24)
//
//000.001.002.003.004.005.006.007.008.009.010.011.012.013.014.015.016.017.018.019.........
//022.023.024.025.026.027.028.029.030.031.032.033.034.035.036.037.038.039.....'...020.021.
//044.045.046.047.048.049.050.051.052.053.054.055.056.057.058.059.........040.041.042.043.
//066.067.068.069.070.071.072.073.074.075.076.077.078.079........P060.061.062.063.064.065.
//088.089.090.091.092.093.094.095.096.097.098.099......H..080.081.082.083.084.085.086.087.
//110.111.112.113.114.115.116.117.118.119.....7,+.100.101.102.103.104.105.106.107.108.109.
//132.133.134.135.136.137.138.139.........120.121.122.123.124.125.126.127.128.129.130.131.
//154.155.156.157.158.159.........140.141.142.143.144.145.146.147.148.149.150.151.152.153.
//176.177.178.179.......9.160.161.162.163.164.165.166.167.168.169.170.171.172.173.174.175.
//198.199.....7.3.180.181.182.183.184.185.186.187.188.189.190.191.192.193.194.195.196.197.
//........200.201.202.203.204.205.206.207.208.209.210.211.212.213.214.215.216.217.218.219.
//220.221.222.223.224.225.226.227.228.229.230.231.232.233.234.235.236.237.238.239.........
//242.243.244.245.246.247.248.249.250.251.252.253.254.255.256.257.258.259......|..240.241.
//264.265.266.267.268.269.270.271.272.273.274.275.276.277.278.279.........260.261.262.263.
//286.287.288.289.290.291.292.293.294.295.296.297.298.299........P280.281.282.283.284.285.
//308.309.310.311.312.313.314.315.316.317.318.319.........300.301.302.303.304.305.306.307.
//330.331.332.333.334.335.336.337.338.339.......+.320.321.322.323.324.325.326.327.328.329.
//352.353.354.355.356.357.358.359.........340.341.342.343.344.345.346.347.348.349.350.351.
//374.375.376.377.378.379.........360.361.362.363.364.365.366.367.368.369.370.371.372.373.
//396.397.398.399......D9.380.381.382.383.384.385.386.387.388.389.390.391.392.393.394.395.
//418.419.....g<3.400.401.402.403.404.405.406.407.408.409.410.411.412.413.414.415.416.417.
//........420.421.422.423.424.425.426.427.428.429.430.431.432.433.434.435.436.437.438.439.
//440.441.442.443.444.445.446.447.448.449.450.451.452.453.454.455.456.457.458.459.........
//462.463.464.465.466.467.468.469.470.471.472.473.474.475.476.477.478.479.....g\..460.461.
//484.485.486.487.488.489.490.491.492.493.494.495.496.497.498.499......P..480.481.482.483.
//506.507.508.509.510.511.512.513.514.515.516.517.518.519........P500.501.502.503.504.505.
//sw=20 (for 22 disks), 37 groups
//sw=16 (for 18 disks), 30 groups
//for 2TB multiplier=8388607
int main (int argc, char*argv[]){
    long long lba=0,sw=20,su=128*1024;
    int d=sw+2;
    int n=37;
    while(n--){
        char buf[su];

        off64_t o;
        long long stripe;
        //long long vdoff=0x877c; // sdc
        long long vdoff=4;
        stripe=lba/sw;
        d=((lba%sw)+((sw+2)-((2*stripe)%(sw+2))))%(sw+2);
        // fprintf(stderr,"%12lld %3lld %3d %4s\n",lba++,stripe,d,argv[d+1]); continue;
        printf("disk %d offset block %lld\n",d,(stripe+vdoff)*su/4096);
        // lseek64(disk[d],(stripe+19074+4)*128*1024,SEEK_SET);
        //awk '{lba=($1*8388607);stripe=int((lba/22));ld=lba%22;so=24-((2*stripe)%24);disk=(ld+so)%24;printf "Physical-0x877c %4s %09x Logical %09x disk %2d\n",$2,stripe,lba, disk}'
        lba+=8388607;
    }
}
/*
 Sun Jun 03, 2012 8:37 pm
for areca 1680 raid-6:
. the logical to physical order of the disks (which might be the same, IDK, my disks were shuffled anyway).
. the offset into the logical disks where your stripes start
. the logical virtual disk address to logical disk and address
stripe=lba/sw+offset;
disk=((lba%sw)+((sw+2)-((2*stripe)%(sw+2))))%(sw+2);
where your LBA is in stripe units (ala, 128k, 32k..whatever you made the array with).. and sw is, say 10, if you have a 12 disk array.
the first VD, for me, was offset 512k into the physical disks.
I don't know how the parity was calculated, I just ignored that data.
cat /proc/partitions |while read a b c d;do mknod $d b $a $b;done
./areca_raid6_logical_to_physical
cut and paste to tmp/map
for map in map.22;do cat /tmp/$map|while read a disk b c ob;do for dev in *;do (t="`dd if=$dev bs=4096 skip=$ob count=1 2>/dev/null |hexdump -C |head -3`";echo "$t"|grep -q '^00000000.*|XFSB' && echo $dev $a $disk $b $c $ob `echo "$t"|tail -1 ` )&done;wait;done|cut -d\| -f1|while read disk a diskn b c off d uuid;do mkdir -p ../uuid/$map/"$uuid";ln -snf /tmp/dev/$disk ../uuid/$map/"$uuid"/$diskn;done ;done
*/

TomCrane

Re: Areca RAID6 Recovery Help

Post by TomCrane » Mon Jul 06, 2015 11:04 am

Hi,
I am interested in the RAID6 block layout with a view to doing low-level offline diagnostics on an Areca-1882 controller's disks. Does anyone have any information on this? Please could someone also point me to the FAQ mentioned in this thread?

Many thanks
Tom Crane (T dot Crane at rhul dot ac dot uk)
