Thanks for your replies. Anything to remedy my ignorance would be much appreciated!
Advice on how to confirm the RAID offset for an ext3 RAID 5 would be very helpful. The entropy-analysis programs did not produce a significant result.
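In case it helps to redo the entropy analysis by hand, here is a minimal sketch (plain Python over one of the cloned member images; the file path, block size, and scan length are placeholders). It computes Shannon entropy per stripe-sized block; one heuristic is that a step change in entropy near the start of a member image can mark where controller metadata ends and file-system data begins, i.e. a candidate data-start offset. It is only a heuristic, not proof.

```python
import math

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy of a raw block, in bits per byte (0.0 .. 8.0)."""
    if not block:
        return 0.0
    counts = [0] * 256
    for b in block:
        counts[b] += 1
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def entropy_profile(path: str, block_size: int = 64 * 1024, max_blocks: int = 1024):
    """Entropy of each stripe-sized block near the start of a member image.
    Zeroed or metadata areas score near 0; file data usually scores higher."""
    profile = []
    with open(path, "rb") as f:
        for _ in range(max_blocks):
            block = f.read(block_size)
            if not block:
                break
            profile.append(shannon_entropy(block))
    return profile
```

Plotting (or just printing) the profile for each disk and looking for where the values jump is the whole trick; the block size should match the 64 KB stripe so block boundaries line up with stripe boundaries.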
To answer the RAID-failure question: it was an older 3Ware 9650SE 8-LPML hardware RAID 5, 7 x 1 TB, no hot spares, under Linux, with an ext3 file system. Disk #6 failed, then disk #3 gave ECC errors during the rebuild. [For reasons not important here] everything except one directory was on the automated backup list. So I extracted the individual disks, numbered them correctly, and cloned each to an image file. All cloned successfully except disk #6, which has a head-stack failure.
I know the RAID stripe size is 64 KB. The manufacturer confirms right synchronous rotation with no parity delay. The disk order is correct; I matched it by serial number against the RAID info as well. But I don't know enough about how to determine the RAID offset.
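One way to pin the offset down without pure guessing: the ext3 primary superblock always sits 1024 bytes past the start of the file system and carries the magic value 0xEF53 at byte 56. Scanning a member image for plausible superblocks therefore yields candidate data-start offsets directly. A rough sketch, assuming raw member images and plain Python (the path and scan limit are placeholders); the extra field checks cut down false positives from the 2-byte magic:

```python
import struct

EXT_MAGIC = 0xEF53  # ext2/3/4 superblock magic, little-endian u16 at byte 56

def looks_like_ext_superblock(sb: bytes) -> bool:
    """Cheap plausibility test on a 64-byte superblock prefix."""
    if len(sb) < 64:
        return False
    inodes, = struct.unpack_from("<I", sb, 0)   # s_inodes_count, must be nonzero
    log_bs, = struct.unpack_from("<I", sb, 24)  # s_log_block_size (0 => 1 KiB blocks)
    magic,  = struct.unpack_from("<H", sb, 56)  # s_magic
    return magic == EXT_MAGIC and inodes > 0 and log_bs <= 6

def find_superblocks(path: str, limit_bytes: int = 512 * 1024 * 1024, step: int = 512):
    """Scan sector-aligned positions of a member image for plausible ext
    superblocks.  A hit at byte x implies the file system (and hence the
    RAID data area) starts at x - 1024 on that member."""
    hits = []
    with open(path, "rb") as f:
        offset = 1024  # a primary superblock cannot sit before byte 1024
        while offset < limit_bytes:
            f.seek(offset)
            sb = f.read(64)
            if len(sb) < 64:
                break
            if looks_like_ext_superblock(sb):
                hits.append(offset - 1024)
            offset += step
    return hits
```

With right synchronous rotation the first data block of the array lives on the first member, so running this over disk #1's image and seeing where the earliest credible hit lands should agree with (or rule out) the 0- and 1-sector offsets you have been trying.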
With a RAID offset of 0 or 1 sectors, and a missing-disk placeholder inserted in slot #6, R-Studio finds multiple 5.5 TB ext3 volumes (green).
a) They differ in the "Parsed Boot Record" count, ranging from 2 to 4 to 6. Can I assume the one with 6 parsed MBRs is the right one?
b) After a scan, R-Studio's "Open Files" on these partitions shows only deleted files. Some file names are correct, but the listing is very incomplete and the recovered files are corrupt.
c) A search by file type (raw file carving) with RAID offset 0 can recover uncorrupted tar files much larger than Ndisk x stripe size. Does this confirm that RAID offset?
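For what it's worth, that is decent evidence: a contiguous tar larger than (Ndisk - 1) x stripe = 6 x 64 KB = 384 KB necessarily crosses full stripe boundaries, so it can only carve out intact if the disk order, rotation, and offset are all consistent. Verifying the carved tars programmatically rather than by eye strengthens it; a small sketch using Python's standard tarfile module (the path is a placeholder). One caveat: tar checksums only its headers, not file data, so this mainly catches headers scrambled at stripe boundaries rather than every possible data error.

```python
import tarfile

def verify_tar(path: str) -> bool:
    """Walk every member of a tar and read its data end to end.
    A geometry error that scrambled a stripe boundary usually lands
    garbage where a member header should be, which raises TarError."""
    try:
        with tarfile.open(path, "r") as tar:  # handles plain and compressed tars
            for member in tar:
                if member.isfile():
                    f = tar.extractfile(member)
                    while f.read(1024 * 1024):
                        pass  # read and discard; we only care that it parses
        return True
    except (tarfile.TarError, EOFError, OSError):
        return False
```

Running this over every carved tar, rather than spot-checking a few, would make the "offset 0 is correct" conclusion much firmer.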
d) Some volumes (green) cannot be viewed with "Open Files"; R-Studio reports a corrupted file system (perhaps from an interrupted RAID repair before the failure). Should I write these out as 5.5 TB disk images and try fsck or testdisk on them?
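Before committing 5.5 TB per candidate to disk, it might be worth triaging each candidate's primary superblock first (1024 bytes into the file system): a valid magic plus the state flags tell you whether fsck would even have something sane to start from. A sketch, assuming you have some way to read raw bytes from the reconstructed volume (an exported image, or a small exported region); `fs_offset` is whatever the tool reports as the volume start:

```python
import struct
from datetime import datetime, timezone

EXT_MAGIC = 0xEF53

def ext_superblock_summary(image_path: str, fs_offset: int = 0) -> dict:
    """Triage the primary ext2/3 superblock of a reconstructed volume."""
    with open(image_path, "rb") as f:
        f.seek(fs_offset + 1024)
        sb = f.read(1024)
    blocks, = struct.unpack_from("<I", sb, 4)   # s_blocks_count
    log_bs, = struct.unpack_from("<I", sb, 24)  # s_log_block_size
    mtime,  = struct.unpack_from("<I", sb, 44)  # s_mtime, last mount time
    magic,  = struct.unpack_from("<H", sb, 56)  # s_magic
    state,  = struct.unpack_from("<H", sb, 58)  # s_state
    return {
        "valid_magic": magic == EXT_MAGic if False else magic == EXT_MAGIC,
        "cleanly_unmounted": bool(state & 1),   # EXT2_VALID_FS flag
        "errors_recorded": bool(state & 2),     # EXT2_ERROR_FS flag
        "fs_size_bytes": blocks * (1024 << log_bs),
        "last_mount_utc": datetime.fromtimestamp(mtime, timezone.utc).isoformat(),
    }
```

A candidate whose reported `fs_size_bytes` matches the expected 5.5 TB is a good sign the geometry is right. If that looks sane, running e2fsck with `-n` (read-only, answer "no" to everything) on the written-out image is a safe first pass before letting fsck or testdisk actually modify anything.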