raid5/LVM recovery

Discussions on using the professional data recovery program R-STUDIO for RAID reconstruction, NAS recovery, and recovery of various disk and volume managers: Windows Storage Spaces, Apple volumes, and the Linux Logical Volume Manager.
bel
Posts: 1
Joined: Wed Mar 02, 2011 7:43 am

raid5/LVM recovery

Post by bel » Wed Mar 02, 2011 8:03 am

Hello!
I've downloaded the demo version of the software, and I'm currently trying to recover my RAID5 system after an unsuccessful Promise NS4300N firmware update.
I've got four 1 TB disks arranged as a RAID5 array.
The main problems, as stated in the FAQ on this site, are to determine the disk order, block size, byte offset, and disk offset. However, the FAQ is about NTFS, and I guess I've got an ext3 filesystem on this RAID.

The disks are (identified by the last 2 digits of the s/n): JS, VE, HC, JT.
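One way to test candidate parameters by hand is the RAID5 parity relation: within each stripe, the XOR of the blocks on all members is zero. A minimal Python sketch of that check (the in-memory data here is purely illustrative; reading the same offset from each disk image would work the same way):

```python
def xor_blocks(blocks):
    """XOR equal-length blocks together, byte by byte."""
    acc = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            acc[i] ^= byte
    return bytes(acc)

def parity_ok(blocks):
    """RAID5 invariant: data blocks XOR parity block must be all zeros."""
    return all(b == 0 for b in xor_blocks(blocks))

# Illustrative stripe: two data blocks plus their computed parity.
d0 = bytes([0x11, 0x22, 0x33, 0x44])
d1 = bytes([0x55, 0x66, 0x77, 0x88])
parity = xor_blocks([d0, d1])
print(parity_ok([d0, d1, parity]))    # True
print(parity_ok([d0, d1, bytes(4)]))  # False: wrong parity block

# With the real member disks, one would read one block at the same offset
# from each of the four images and score candidate block sizes and disk
# orders by how often parity_ok holds across many stripes.
```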

1. While loading R-Studio, I got the following messages in the log:
Error Partition 02.03.2011 15:21:17 ST31000340ASSD15: Partition at 63 extends beyond disk bounds
Error Partition 02.03.2011 15:21:17 ST31000340ASSD15: Partition at 2930095350 extends beyond disk bounds
I guess this is non-critical and related to the actual RAID size, which exceeds the capacity of a single 1 TB disk. Is that right?
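For what it's worth, the arithmetic supports this guess (assuming 512-byte sectors and decimal 1 TB member disks):

```python
SECTOR = 512
disk_sectors = 1_000_000_000_000 // SECTOR  # one 1 TB member: 1,953,125,000 sectors

# Partition start sectors reported in the log:
p1_start, p2_start = 63, 2_930_095_350

# The second entry points past the end of any single 1 TB disk...
print(p2_start > disk_sectors)   # True
# ...but fits inside a ~3 TB RAID5 array (4 x 1 TB, one disk's worth of parity):
array_sectors = 3 * disk_sectors
print(p2_start < array_sectors)  # True
```

So the MBR describes partitions of the assembled array, and the errors only mean R-Studio is reading it against a single member disk.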

2. R-Studio finds the master boot records on disks JS and JT. The records are identical.
3. On the VE disk, sector 0 contains the following:
Sector 0
0: 4C 41 42 45 4C 4F 4E 45 - 01 00 00 00 00 00 00 00 LABELONE........
10: 56 F0 24 6D 20 00 00 00 - 4C 56 4D 32 20 30 30 31 V.$m ...LVM2 001
20: 72 35 50 4F 68 43 79 71 - 49 72 68 4B 65 65 49 41 r5POhCyqIrhKeeIA
30: 4C 36 41 30 70 53 58 61 - 6A 42 44 47 32 4C 42 46 L6A0pSXajBDG2LBF
40: 00 6E 66 4B 5D 01 .nfK].
Sectors 10-... contain the following description:
vg001 {
id = "hNmW56-geyL-TcyF-8rwE-iO7e-7T1t-X2PLsT"
seqno = 1
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 8192
max_lv = 0
max_pv = 0

physical_volumes {

pv0 {
id = "r5POhC-yqIr-hKee-IAL6-A0pS-XajB-DG2LBF"
device = "/dev/sda1"

status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 357677
}

pv1 {
id = "MYZ1hT-WBLj-K3RI-t6d2-pb4i-9NJ0-vzeT4N"
device = "/dev/sda2"

status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 357675
}
}

}
# Generated by LVM2: Thu Jan 4 02:11:49 2001

contents = "Text Format Volume Group"
version = 1

description = ""

creation_host = "ns4300n_572328" # Linux ns4300n_572328 2.6.11SR2_1_0 #6 Thu Jul 19 11:36:35 CST 2007 ppc
creation_time = 978574309 # Thu Jan 4 02:11:49 2001
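Assuming the usual LVM units (extent_size and pe_count are in 512-byte sectors and 4 MiB extents respectively), the metadata above also pins down the total array size:

```python
SECTOR = 512
extent_size = 8192                # sectors per extent (from the metadata) = 4 MiB
pe_counts = [357677, 357675]      # pe_count of pv0 and pv1

total_bytes = sum(pe_counts) * extent_size * SECTOR
print(round(total_bytes / 10**12, 2))  # 3.0 -- matches 4 x 1 TB disks in RAID5
```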

4. The HC disk starts with empty sectors, and sector 134 contains:
10DF6: 53 57 - 41 50 53 50 41 43 45 32 SWAPSPACE2


I wonder how I can examine the RAID structure using this information and how I can determine the required RAID parameters.
Any help will be appreciated.
Thank you

-----
UPD: the problem was solved using a second NS4300N unit.
Actually, two problems occurred simultaneously: 1) a firmware failure; 2) a failure of disk 2.
As I was unaware of the second problem, my restoration attempts were unsuccessful.
The second NS4300N rebuilt the failed drive, so the array is now operational.
