To be more specific: every second (every screen update), put the current
rate in a rotating 10-value table, so the table always holds the last 10
read rates. Average those 10 rates, and save the average every time it
is the highest seen so far. That saved maximum is the average you would
use against the 60% rule. Then, after 10 periods (seconds) of reads
below the 60% mark, perform a reopen. Make sense?
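
A minimal sketch of that detector in C++ (the language ddrescue is
written in). The class name, the once-per-second calling convention,
and the reopen decision returned as a bool are illustrative
assumptions, not actual ddrescue internals:

#include <algorithm>

// Keep the last 10 per-second read rates in a rotating table, track
// the highest 10-second average seen so far, and signal a reopen after
// 10 consecutive periods (seconds) below 60% of that maximum.
// All names here are hypothetical, not actual ddrescue code.
class Slow_read_detector
  {
  enum { size = 10 };
  long long rates[size];          // rotating table of recent rates (B/s)
  int count;                      // entries filled so far (max 'size')
  int index;                      // next slot to overwrite
  long long best_average;        // highest average seen this operation
  int slow_periods;              // consecutive periods below the 60% mark

public:
  Slow_read_detector()
    : count( 0 ), index( 0 ), best_average( 0 ), slow_periods( 0 ) {}

  // Call once per second with the rate measured over that second.
  // Returns true when a reopen should be performed.
  bool add_rate( const long long rate )
    {
    rates[index] = rate;
    index = ( index + 1 ) % size;
    if( count < size ) { ++count; return false; }  // table not full yet
    long long sum = 0;
    for( int i = 0; i < size; ++i ) sum += rates[i];
    const long long average = sum / size;
    best_average = std::max( best_average, average );
    if( average < ( best_average * 6 ) / 10 ) ++slow_periods;
    else slow_periods = 0;
    if( slow_periods >= size )     // 10 slow periods in a row --> reopen
      { slow_periods = 0; return true; }
    return false;
    }
  };

Integer math keeps it simple; the 60% threshold is just
best_average * 6 / 10, so no floating point is needed.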
On 8/25/2013 8:00 PM, Scott Dwyer wrote:
I have a theory on it, at least as a way to try to automate it. From
what I have seen, when the slowdown happens, the speed drops by half.
I have also seen it drop by half again, down to 25% of the original
read speed. So if you can get the fastest average read speed (averaged
over at least a few seconds) that has occurred so far during the
current operation, then set the reopen threshold at, say, 60% of that,
and after a few consecutive periods below that 60% mark, perform the
reopen.
Scott
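
For what it's worth, a toy driver for the sketch above; the 100 MB/s
figure and the 50-second run are made up to mimic the drop to half
speed that Scott describes:

#include <cstdio>

// Feed 50 seconds of simulated rates: full speed for 30 seconds, then
// the drop to half speed. Once the 10-second average has stayed below
// 60% of the best average for 10 consecutive seconds (here, at second
// 47), the detector asks for a reopen.
int main()
  {
  Slow_read_detector detector;
  for( int second = 0; second < 50; ++second )
    {
    const long long rate = ( second < 30 ) ? 100000000 : 50000000;
    if( detector.add_rate( rate ) )
      std::printf( "second %d: sustained slow reads, reopening\n",
                   second );
    }
  return 0;
  }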
On 8/25/2013 6:54 PM, Antonio Diaz Diaz wrote:
Scott D wrote:
Might have found a slight flaw with the "-O" option idea. The flaw is
that it does not take an actual error (at least not one that
ddrescue sees) to cause the slowdown. If there are difficult spots
on the disk but no reported errors, they can still cause the reads
afterwards to slow down.
If it helps, it is certainly possible to make ddrescue reopen the
file after one or more slow reads.
Of course I would prefer to see the kernel fixed, or at least an
explanation of why it slows down after an error.
Best regards,
Antonio.
_______________________________________________
Bug-ddrescue mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/bug-ddrescue