Dear ddrescue community,

First of all, I wanted to thank ddrescue's creators and maintainers for the 
fantastic tool it is.

Second, I would like to ask for some help understanding how the block / sector 
size of a filesystem can affect ddrescue's performance.
I had a large xfs partition in a RAID that started failing, so before it gave 
out completely I unmounted it and used ddrescue to copy it to an image on 
healthy storage.
I believe the original fs had a sector size of 512 and a block size of 4096.
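(I checked those values with something like the commands below; the mount 
point and device name are just placeholders for what they were on the old 
system:)

# xfs_info /mnt/old_raid                (meta-data sectsz=512, data bsize=4096)
# blockdev --getss --getpbsz /dev/sdX   (logical / physical sector size of the device)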
It took about 12 days at an average rate of 500 MB/s, copying to another NFS 
storage. It only had a few errors; it just took that long because the partition 
was so large.

Now, the filesystem where I copied the image (another RAID) was formatted with 
the default values for xfs, and it looks like its sector size is larger.
Some xfs tools did not seem to like this image, so I decided to wait until I 
had restored it to a real partition.
The block device of the new RAID reports a sector size of 512, so I first 
created an unformatted partition with fdisk, large enough for the image, and 
then invoked ddrescue with the following options:
# ddrescue -v --force --no-scrape /storage/sdb1.img /dev/sda2
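
(For completeness, this is roughly how I had checked the sector size reported 
by the new RAID beforehand; /dev/sda is simply how the device shows up on this 
system:)

# blockdev --getss --getpbsz /dev/sda
# fdisk -l /dev/sda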

It seems to be running fine (I don't expect errors since the hardware is new), 
but the estimated remaining time is twice what the original copy took and the 
transfer rate is half of it... The network connection is the same, and both the 
systems and the operating system are newer, so I don't think the problem is 
there.

Is it because I'm writing to a raw partition? Or could it be related to the 
block size of the native filesystem where the image is stored? Could adjusting 
the block size and cluster size in ddrescue help, or would it make things 
worse?
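
For example, would an invocation along these lines be expected to behave any 
differently? (The values here are just my guesses, not something I have 
tested.)

# ddrescue -v --force --no-scrape -b 4096 -c 1024 -D /storage/sdb1.img /dev/sda2

where -b sets the sector size, -c the cluster size in sectors, and -D requests 
direct disc access for the output device.
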
My biggest worry is that even if I let it run for weeks, the resulting 
partition will be unreadable.
Could someone please help me clarify? My brain is fried by now.

Thanks a lot in advance
Alex
