My post of a few hours ago, "Data Rescue II: clone to disk image?", can be 
ignored for now. A sparse image seemed to work, but the problem with cloning 
the 20-GB drive was the same as the problem I'm having now with a clone of a 
250-GB drive, this time to a new 500-GB drive:

Here's a snapshot of one moment in the cloning, as described in the progress 
box:

Copying from /dev/rdisk4s3 to /dev/rdisk3
Copied  28.9 MB of 232.8 GB (block 59136)
Copied good: 0 bytes, skipped bad: 0 bytes [I'm curious why both these numbers 
remained at 0!]
Estimated time remaining: 5020 hours

Here are the changed lines of info a while later:
Copied 135.6 MB of 232.8 GB (block 277760)
Estimated time remaining: 6392 hours

There was a moment when the amount copied increased rapidly, but then it went 
back to copying just a few MB per minute. Here are some lines from 
"ScanEngineLog.txt" at various times:

   223.334 Clone Copying 249925094400 bytes from /dev/rdisk4s3 to /dev/rdisk3
   232.753 Clone Can't read 131072 @ 0
Then there were no more such errors until
   263.014 Clone Can't read 131072 @ 1080000
   272.228 Clone Can't read 131072 @ 10a0000
The errors continued for every 128-KB block up to
  1616.088 Clone Can't read 131072 @ 2280000
  1625.436 Clone Can't read 131072 @ 22a0000

Then, for a short stretch of time, there were no such errors, and the reading 
went very fast.

The errors resumed with
  1643.633 Clone Can't read 131072 @ 7f00000
  1653.152 Clone Can't read 131072 @ 7f20000
and continued to
  2729.264 Clone Can't read 131072 @ 8bc0000
  2738.434 Clone Can't read 131072 @ 8be0000
when I gave up and aborted the clone. The log ended with
  2738.443 Clone Copied 146800640 bytes with 252 read errors, 0 write errors
  2738.445 Clone Clone failed
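
Incidentally, within each run of errors the hex offsets advance by 0x20000 = 
131072 bytes per logged line, i.e. exactly one 128-KB read per error. A quick 
check (offsets copied from the log above):

```python
# "Can't read 131072 @ ..." offsets from ScanEngineLog.txt (hex, as logged)
offsets = [0x1080000, 0x10a0000, 0x7f00000, 0x7f20000, 0x8bc0000, 0x8be0000]

assert 0x20000 == 131072  # the read size Data Rescue reports
for a, b in zip(offsets, offsets[1:]):
    gap = b - a
    # a gap of one 128-KB block means consecutive errors; a bigger gap is
    # one of the stretches where reading went fast
    print(f"{a:#x} -> {b:#x}: {gap // 0x20000} block(s)")
```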

So about 140 MB (146,800,640 bytes) were apparently successfully copied, with 
31.5 MB unreadable, at least when reading in 128-KB (131072-byte) chunks. 
(Does one get better recovery reading in smaller chunks?)
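
The arithmetic behind those figures, using just the two totals from the last 
log line:

```python
copied_bytes = 146_800_640   # "Copied 146800640 bytes" from the log
read_errors  = 252           # "252 read errors"
chunk        = 131_072       # 128-KB read size per failed attempt

print(copied_bytes / 2**20)          # 140.0 MB copied
print(read_errors * chunk / 2**20)   # 31.5 MB unreadable
```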

[I'm guessing that the numbers at the start of each line are the time in 
seconds from the start of the process -- the numbers do begin with 0.000! This 
would be consistent with my own rough estimates of elapsed time.]
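
If that guess is right, the error lines also explain the absurd ETA: 
consecutive failed reads are logged roughly 9.2-9.5 seconds apart, so each bad 
128-KB block costs about nine seconds. A back-of-the-envelope check 
(timestamps copied from the log above; the extrapolation formula is my 
assumption about how Data Rescue estimates time remaining):

```python
# Timestamps (seconds) of two consecutive "Can't read" lines from the log
t1, t2 = 263.014, 272.228
per_block = t2 - t1                 # ~9.2 s per failed 128-KB read
rate = 131_072 / per_block          # ~14 KB/s effective throughput

remaining = 232.8e9                 # bytes still to copy (232.8 GB)
hours = remaining / rate / 3600
print(round(hours))                 # same ballpark as the 5020-hour estimate
```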

I'm wondering if it's possible to make Data Rescue give up reading much more 
quickly (with far fewer retries) when there are errors. If not, are the 
retries forced by the OS? If Data Rescue can't do it, is there any program that 
can, perhaps one running as root?

Suggestions will be much appreciated, the sooner the better!

 - Thanks in advance,
 - Aaron

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to Low End Mac's G3-5 
List, a 
group for those using G3, G4, and G5 desktop Macs - with a particular focus on 
Power Macs.
The list FAQ is at http://lowendmac.com/lists/g-list.shtml and our netiquette 
guide is at http://www.lowendmac.com/lists/netiquette.shtml
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/g3-5-list?hl=en
Low End Mac RSS feed at feed://lowendmac.com/feed.xml
-~----------~----~----~----~------~----~------~--~---