What if the last 100 seconds or more are in an error or bad area? Then
that average would be low, and if the read speed never recovered, it
would stay low. Making this work without excessive reopening can be
tricky. There are many variables, including the reduction in speed
towards the end of the disk, and error or slow areas. I am only
offering my opinion, which may not be the best approach. But Paul, do
you see the read speed drop below half the max towards the end? And
have you ever seen the read speed decrease by anything less than half
when it did slow down? If not, then 50%-60% could work, I would think.
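Something along these lines is what I am picturing - just a rough C++
sketch, not actual ddrescue code, and the names and the 0.55 factor
are only placeholders:

  // Rough sketch only: compare the short-term read-rate average
  // against a fraction of the longer-term average and report when a
  // reopen might be worth trying.
  bool should_reopen( double short_avg_rate,   // e.g. last 5-10 seconds
                      double long_avg_rate,    // e.g. last 100 seconds
                      double threshold = 0.55 )
    {
    if( long_avg_rate <= 0 ) return false;     // nothing to compare yet
    return short_avg_rate < threshold * long_avg_rate;
    }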
Even then, we can limit how often the reopen can happen. I just ran a
test where it reopened every 3rd attempt (3 seconds) and only saw a
marginal decrease in overall performance. But that was only a test; I
don't think it should be allowed to happen that often. But it needs to
be able to happen often enough to make a difference. Like I said,
tricky...
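For the limiting part, a simple cooldown would probably be enough.
Again just a rough sketch; the names and the 30 second default are
placeholders, not values anyone has settled on:

  #include <ctime>

  // Rough sketch only: allow a reopen at most once per cooldown
  // period, so a persistently slow area cannot force a reopen on
  // every pass.
  class Reopen_limiter
    {
    time_t last_reopen_ = 0;
    int cooldown_s_;                  // minimum seconds between reopens
  public:
    explicit Reopen_limiter( int cooldown_s = 30 )
      : cooldown_s_( cooldown_s ) {}

    bool allow( time_t now )
      {
      if( now - last_reopen_ < cooldown_s_ ) return false;
      last_reopen_ = now;
      return true;
      }
    };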
Scott
On 8/25/2013 9:05 PM, Paul L Daniels wrote:
On Sun, 25 Aug 2013 20:42:16 -0400
Scott Dwyer <[email protected]> wrote:
To be more specific, every second (every screen update) put the
current rate in a rotating 10-value table. So this table would always
To make it a bit easier to program, you can do this without a
table/array - something I did when doing the firmware for a piece of
testing electronics with a very small microcontroller.
Keep two averages, one for the last 100 seconds, and one for the last
5~10. Remember that the data rate drops off towards the end of the
disk, which might falsely trigger a reopen if we keep comparing
against the peak average.
http://en.wikipedia.org/wiki/Moving_average#Simple_moving_average
Paul.
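For reference, the two-average approach Paul describes above could be
done with exponential averages, roughly like this (window sizes and
names are only illustrative; alpha = 1/N behaves approximately like
an N-second window):

  // Rough sketch only: two exponentially weighted running averages,
  // updated once per second with the current read rate.  Choosing
  // alpha = 1/N makes each behave roughly like an N-second moving
  // average without storing the last N samples.
  struct Rate_averages
    {
    double slow = 0;                  // ~100 second window
    double fast = 0;                  // ~5-10 second window
    bool started = false;

    void update( double rate )
      {
      if( !started ) { slow = fast = rate; started = true; return; }
      slow += ( rate - slow ) / 100.0;   // alpha = 1/100
      fast += ( rate - fast ) / 8.0;     // alpha = 1/8
      }
    };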
_______________________________________________
Bug-ddrescue mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/bug-ddrescue