I guess the real question is whether it's a bug in the Linux version, which sounds like it may be based on an older code base. The Windows version appears to be at 8.14 build 179623. It seems the software isn't a 'one license fits all' sort of thing, so buying the Linux version doesn't translate into having access to the Windows one; otherwise I'd try that too.
No idea about that, let's hope someone in-the-know chimes in.
But indeed this looks to be the relevant MFT record, so the allocation information should be found at offset 40(hex) of the “80” ($Data) field -- see here for an in-depth description -- down there, so:
Code:
41 | 06 | D0 83 EA 0A | 34 | 66 DC BE 20 | 14 AF 00 | 44 | 0A 38 EB 0A | 67 1C BF 20
Which, if I'm not mistaken (and if I'm reading the characters correctly -- it's hard to distinguish a “0” from an “8” or a “B” on this small screenshot), translates as:
– 6 clusters allocated starting from cluster 183141328
– 549379174 clusters allocated starting from cluster 183141328 + 44820 = 183186148
– 183187466 clusters allocated starting from cluster 183186148 + 549395559 = 732581707
Total number of clusters = 732566646 (not 732583031 as I first wrote) = 3000592982016 bytes (not 3000660094976), which looks right.
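The decoding above is mechanical enough to script. Here is a minimal Python sketch (the `RUNLIST` bytes are the ones quoted above; `decode_runs` is my own hypothetical helper, and a 4096-byte cluster size is assumed) that reproduces the three runs and the total:

```python
# Hypothetical sanity check: decode the NTFS runlist quoted above.
# Assumes a 4096-byte cluster size (typical for a 3TB NTFS volume).
RUNLIST = bytes.fromhex("4106D083EA0A3466DCBE2014AF00440A38EB0A671CBF20")

def decode_runs(data):
    """Yield (cluster_count, absolute_start_cluster) for each data run."""
    pos, start = 0, 0
    while pos < len(data) and data[pos] != 0x00:
        header = data[pos]
        len_size, off_size = header & 0x0F, header >> 4  # low nibble / high nibble
        pos += 1
        count = int.from_bytes(data[pos:pos + len_size], "little")
        pos += len_size
        # The offset is a signed delta relative to the previous run's start.
        delta = int.from_bytes(data[pos:pos + off_size], "little", signed=True)
        pos += off_size
        start += delta
        yield count, start

runs = list(decode_runs(RUNLIST))
total = sum(count for count, _ in runs)
print(runs)          # [(6, 183141328), (549379174, 183186148), (183187466, 732581707)]
print(total)         # 732566646
print(total * 4096)  # 3000592982016
```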
I have little experience with dd, but with ddrescue this should do the trick:
Code:
ddrescue /dev/sdb1 /media/sdc1/sdf_copy.img /media/sdc1/sdf_copy.log -i 750146879488 -s 24576 -o 0
ddrescue /dev/sdb1 /media/sdc1/sdf_copy.img /media/sdc1/sdf_copy.log -i 750330462208 -s 2250257096704 -o 24576
ddrescue /dev/sdb1 /media/sdc1/sdf_copy.img /media/sdc1/sdf_copy.log -i 3000654671872 -s 750335860736 -o 2250257121280
-i = input offset
-s = size
-o = output offset
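For what it's worth, the -i/-s/-o numbers fall out of the runs mechanically. A hypothetical Python sketch (assuming 4096-byte clusters, with the options placed before the file names, and the device/file names as placeholders):

```python
# Hypothetical helper: derive the -i (input offset), -s (size) and
# -o (output offset) values from the decoded runs, as (start, count) pairs.
# Assumes 4096-byte clusters; names below are placeholders.
CLUSTER = 4096
runs = [(183141328, 6), (183186148, 549379174), (732581707, 183187466)]

params, out = [], 0
for start, count in runs:
    params.append((start * CLUSTER, count * CLUSTER, out))
    out += count * CLUSTER  # each fragment lands right after the previous one

for i, s, o in params:
    print(f"ddrescue -i {i} -s {s} -o {o} /dev/sdb1 sdf_copy.img sdf_copy.log")
```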
Last sector should be (732581707 + 183187466) x 8 - 1 = 7326153383. That's off by 175785 from the 7325977598 last-sector value you noted, but I've noticed that for some reason the “Sectors” column can stop before the actual end for large files, so perhaps if you scroll down to the very end of the file you'll get the number I calculated. Either that, or it's a 0/8/B misreading -- you'll have to check.
EDIT 1: Well, actually the file size field says 00 60 47 A1 BA 02, which is 3000592982016 (the exact capacity of my own 3TB HDDs), so the two values were inconsistent. But, after checking, it appears that the total number of clusters is indeed 732566646, which gives the correct size -- I must have made a mistake when copy-pasting the previously obtained values, adding 549395559 instead of 549379174. (I corrected the wrong numbers above.) The ddrescue commands should be correct (24576 + 2250257096704 + 750335860736 = 3000592982016).
EDIT 2: “Hmm, Get Info said the MFT number was 64, I didn't see where to put that in”
Since each MFT record is exactly 1KB, entering this value directly in the offset box with the unit set to KB jumps to the corresponding MFT record; in bytes it would be 64 x 1024 = 65536. I don't see an easier way to display a file's MFT record in R-Studio.
DMDE is convenient for this: right-click on a file, then “Open MFT file”, et voilà. The MFT records are presented in a more easily readable form, with the values readily translated. For instance, if I open the MFT record for a 419172179-byte file which is in 4 fragments, then open the 80h $Data field (click on the “+”), I can read:
Code:
allocated: 419233792
size: 419172179
initializ: 419172179
compress: 419233792
0 run: 42h len: 30765 relc: 1B880BFEh :461900798
30765 run: 42h len: 30735 relc: F50609ABh :277747113
61500 run: 42h len: 30209 relc: BF60DB5h :478421854
91709 run: 42h len: 10643 relc: F6173CDEh :312172604
102352 run: 00h
FFFFFFFFh End Mark
Which is consistent with the values in the MFT record:
Code:
48 00 04 00 00 00 00 00 00 00 FD 18 00 00 00 00
53 0F FC 18 00 00 00 00 53 0F FC 18 00 00 00 00
00 00 FD 18 00 00 00 00 42 2D 78 FE 0B 88 1B 42
0F 78 AB 09 06 F5 42 01 76 B5 0D F6 0B 42 93 29
DE 3C 17 F6 00 F8 FF FF FF FF FF FF 82 79 47 11
18 FC 0F 53 = 419172179 (actual size)
18 FD 00 00 = 419233792 (allocated size, multiple of the cluster size)
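Those two size fields are plain 64-bit little-endian integers, so they can be double-checked with a couple of lines of Python (the byte strings are copied from the record above):

```python
# The sizes in the $Data attribute are 64-bit little-endian integers.
real_size  = int.from_bytes(bytes.fromhex("530FFC1800000000"), "little")
alloc_size = int.from_bytes(bytes.fromhex("0000FD1800000000"), "little")
print(real_size)   # 419172179
print(alloc_size)  # 419233792, a multiple of the 4096-byte cluster size
```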
Then the cluster runs:
42 | 2D 78 | FE 0B 88 1B => 30765 clusters at relative cluster +461900798
42 | 0F 78 | AB 09 06 F5 => 30735 clusters at relative cluster -184153685 = 277747113
42 | 01 76 | B5 0D F6 0B => 30209 clusters at relative cluster +200674741 = 478421854
42 | 93 29 | DE 3C 17 F6 => 10643 clusters at relative cluster -166249250 = 312172604
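The four absolute start clusters can be reproduced by accumulating the signed little-endian deltas. A quick Python check (the hex strings are the offset bytes of the four runs above, in on-disk byte order):

```python
# Each run's offset is a signed little-endian delta relative to the
# previous run's start cluster; accumulating them gives absolute clusters.
deltas_hex = ["FE0B881B", "AB0906F5", "B50DF60B", "DE3C17F6"]

start, starts = 0, []
for h in deltas_hex:
    start += int.from_bytes(bytes.fromhex(h), "little", signed=True)
    starts.append(start)
print(starts)  # [461900798, 277747113, 478421854, 312172604]
```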
So apparently, to compute a negative hexadecimal value such as F5 06 09 AB, it goes like this: if the most significant byte (the last one as stored, since it's little-endian) is 80(hex) or higher, the number is negative -- this is two's-complement representation. Convert the lower bytes normally; for the most significant byte, convert it to binary, drop the leading “1”, convert back to decimal, subtract 128, and multiply the (negative) result by the corresponding power of 16 -- which amounts to simply subtracting 256 from the byte's value. Then add the two parts together. I still don't quite get why it's stored this way, but at least I get a correct result:
11 x 16^0 + 10 x 16^1 + 9 x 16^2 + 0 x 16^3 + 6 x 16^4 + 0 x 16^5 = 395691
F5(h) = 11110101(b) => 1110101(b) = 117(d) => 117 - 128 = -11 => -11 x 16^6 = -184549376
-184549376 + 395691 = -184153685
461900798 - 184153685 = 277747113
For the other one:
F6 17 3C DE
173CDE(h) = 1522910
F6(h) = 11110110(b) => 1110110(b) = 118(d) => 118 - 128 = -10 => -10 x 16^6 + 1522910 = -166249250
478421854 - 166249250 = 312172604
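As a sanity check of those two conversions in Python (the rule boils down to two's complement: when the top byte is 80(hex) or higher, subtract 2^32 from the raw 4-byte value):

```python
# Two's-complement shortcut for the two negative relative offsets:
# subtract 2^32 from the raw 32-bit value when its top byte is >= 80(hex).
a = 0xF50609AB - 2**32
b = 0xF6173CDE - 2**32
print(a)  # -184153685
print(b)  # -166249250
# Equivalent to handling the top byte separately: (0xF5 - 256) * 16**6 + 0x0609AB
```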
Phew! At least I learned something today...
EDIT 3: I re-read this, which made me realize two potential caveats with what I suggested above:
1) You might need to place the parameters before the names of the input / output files for the ddrescue commands to work (and add “sudo” in front).
2) Since the issue is that the partition on the recovery drive is no longer accessible by regular means, the partition might not be recognized at all in a Linux environment, so referencing it as (for instance) “/dev/sdb1” might not work. In that case, use the whole device as input and shift the offset values by the partition offset. The typical partition offset for a GPT-partitioned 3TB HDD with a single partition is 129MB, or 135266304 bytes; if R-Studio recognizes the partition it should indicate its offset somewhere -- if it's a different value, correct accordingly.
Code:
sudo ddrescue -i 750282145792 -s 24576 -o 0 /dev/sdb /media/sdc1/sdf_copy.img /media/sdc1/sdf_copy.log
sudo ddrescue -i 750465728512 -s 2250257096704 -o 24576 /dev/sdb /media/sdc1/sdf_copy.img /media/sdc1/sdf_copy.log
sudo ddrescue -i 3000789938176 -s 750335860736 -o 2250257121280 /dev/sdb /media/sdc1/sdf_copy.img /media/sdc1/sdf_copy.log
It would be wise to run a quick test first with, for instance, a 1MB size (-s 1048576), then open the output with a hexadecimal editor to check that it looks like it's supposed to, before running the whole thing, which should take a few hours.
EDIT 4: I had made some mistakes in the commands above, mixing up “sdc” and “sdc1”; fixed.