On Mar 6, 2011, at 9:50 AM, Neil Laubenthal wrote:

> Sometimes you can't repair the disk if you're booted from it. Here are a 
> couple things to try.
> 
> 1. Backup the data portion of the drive using Finder Copy

This is never an advisable way of backing up files. Many files are invisible, 
and many others the Finder is simply programmed not to display. Copying an 
entire folder should copy any invisible files contained within it, but when 
working through the Finder there's no assurance you're grabbing everything. 

> Verify the size and number of files in the original and backup drives to make 
> sure you got everything.

See note on this above. If you can't see it or the Finder doesn't count it, you 
have no valid reconciliation with such a method. 
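
If you do want a rough reconciliation, do it from the shell rather than the 
Finder, since find and du see invisible files too. A minimal sketch, with 
placeholder volume names:

    # count every file and directory, invisible ones included
    sudo find /Volumes/Original | wc -l
    sudo find /Volumes/Backup | wc -l

    # compare the on-disk usage of both trees
    sudo du -sh /Volumes/Original /Volumes/Backup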


On Mar 6, 2011, at 12:41 PM, LuKreme wrote:

> On 6-Mar-2011, at 07:28, Ashley Aitken wrote:
>> 
>> However, I have used Carbon Copy Cloner to copy the disk onto another disk 
>> without problem. I assume this means CCC is just copying what it thinks is 
>> correct and the resulting disk may be missing data, files, etc.
> 
> I think CCC is smarter than this. And yes, it can succeed when DiskCopy 
> fails. In all likelihood, the CCC copy is good.

CCC uses a patched version of rsync, which scans directories and builds its 
file list. If a file is currently linked in the filesystem, CCC will copy it. 

Of course, the same is true of any other tool. 

Be aware though that not all utilities copy all file metadata. If in doubt 
about a tool, run Backup Bouncer against it in a test. 
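
If you roll your own rsync copy, you need a metadata-aware build (rsync 3.x 
with ACL and xattr support, not the stock 2.6.9); roughly something like the 
following, though flag support varies by build, so verify the result with 
Backup Bouncer before trusting it:

    # -a archive, -H hard links, -A ACLs, -X extended attributes
    sudo rsync -aHAX /Volumes/Original/ /Volumes/Clone/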

The best, and generally fastest, method to copy a filesystem is to use the asr 
tool. `man asr`. 
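
For example, to clone one mounted volume onto another (volume names are 
placeholders; check `man asr` on your release for the exact options):

    # erase the target and restore the source volume onto it
    sudo asr restore --source /Volumes/Original --target /Volumes/Clone --erase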

> Disk Warrior is a good tool to have. It's not terribly expensive, and it will 
> save you from a lot of problems that other tools don't (of course, there are 
> also a lot of things it can't do, so it's not a total solution).
> 
> Disk Warrior's ability to rebuild the directory structure has saved me 
> several times.

However, DiskWarrior is invasive and can be lossy in operation. It should only 
be used as a last resort. A better approach is to clone the filesystem to 
another volume, again preferably using asr, which will preserve all currently 
available files. 

The integrity of the data within files after any issues such as these may be 
suspect, especially if the filesystem was on a RAID5, which is well known 
for silent data corruption. This is why RAID5 should not be used. 


On Mar 6, 2011, at 8:04 PM, Chris Murphy wrote:

> FWIW, Carbon Copy and Disk Utility copy (not sector copy) duplicates of 
> volumes are not identical to the original.

In the case of CCC this is incorrect. If your copies aren't coming out 
identical, you're either using it improperly or running an older version. 

Disk Utility can copy files or do a block copy. In either case it preserves 
metadata properly, but asr should be used instead. Note that in some cases it 
may decompress compressed files, depending on the target and operation type, 
but this is not normally an issue. If it is, use asr. 

> I have extensively sorted this out using du, diff, md5 and hfsdebug. Creation 
> and modified dates that are not exactly reset from original, ACLs not 
> restored from original.

See above. This can be verified with Backup Bouncer. 
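
You can also do your own spot checks with the stock tools; a rough sketch with 
placeholder paths (no substitute for Backup Bouncer, but useful for a quick 
look):

    # compare ownership, modes, ACLs (-e), extended attributes (-@) and flags (-O)
    ls -le@O /Volumes/Original/somefile /Volumes/Clone/somefile

    # compare content checksums
    md5 /Volumes/Original/somefile /Volumes/Clone/somefile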


> And most frequently what's not restored correct is File Security Information 
> - the consequences of which I don't know. And then also Attributes. A huge 
> percentage of small system help and programs (the small BSD programs) are not 
> stored anymore in the data fork. Apple is compressing them, and putting them 
> in the Attribute File,

Um... no, not actually accurate. It's just using a named resource fork of the 
file for storage. 

> a component of HFS just like the Catalog File.

No, incorrect. No file data goes in any of the filesystem catalogs or B-trees; 
all file data is stored in the file, though "file" here includes [old style] 
resource forks and other named forks. 

> So these files take up zero sectors on disk since they are stored in the 
> Attribute File.

Again, no. 

> When duplicated in any way other than a Time Machine restore or sector copy, 
> these files are extracted from the Attribute File, decompressed and resaved 
> as normal data fork files. They double or quadruple in size. On a recent 
> restore this ended up making out for about 2G worth of difference.

Again, false premise, false conclusion. 

On Mar 7, 2011, at 3:58 AM, Markus Hitter wrote:
> Copying the raw device with "dd" is a good idea, as this reduces reading head 
> movement to a minimum. IIRC, copying a raw device to a file gives you 
> something you can convert with diskutil to a valid DMG. Please don't nail me 
> down to the details, it's quite some time since I've done that last time.

The output of dd can simply be renamed .dmg, as it /is/ a disk image. 
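
For example, with the source device unmounted (device and output paths are 
placeholders):

    # image the whole logical device to a file; the result is already a raw disk image
    sudo dd if=/dev/disk2 of=/Volumes/Scratch/rescued.dmg bs=1m

    # mount it read-only to get at the files
    hdiutil attach -readonly /Volumes/Scratch/rescued.dmg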

> If the disk still overheats, you can use "dd" to copy in smaller chunks. Read 
> 10 MB, sleep a second, repeat. Takes some time, but gives reliable results.

Use one of the smarter utilities for duplicating bad disks, like ddrescue or 
dd_rescue. Google.
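
With GNU ddrescue the call is roughly as follows (device and paths are 
placeholders); the log file lets an interrupted run resume where it left off, 
and the tool retries and maps bad sectors on its own:

    # first pass: copy everything readable, recording progress in a log
    sudo ddrescue /dev/disk2 /Volumes/Scratch/rescued.dmg /Volumes/Scratch/rescue.log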

> In the worst case, you have to instruct "dd" to fill sectors with read 
> failure with zeroes. So you loose only a few single sectors instead of entire 
> files. Beware, "dd" simply drops sectors with read failure by default, making 
> the copy useless. But there are switches to avoid that.

The utilities I mentioned are much smarter than this and handle bad sectors for you. 


> On Mar 7, 2011, at 2:22 PM, Chris Murphy wrote:
>> See thread "Mac OS vs Fedora disk performance"
> 
> Which, btw, is flawed.
> 
>> I am seeing a HUGE difference in dd performance between the block device 
>> /dev/disk0 and the raw device /dev/rdisk0. I know using something like:
>> 
>> dd if=/dev/disk0 of=/dev/disk1s2/diskimages/imageofhotharddrive.iso bs=256k
>> 
>> will produce a valid ISO that you can mount and traverse the files. It will 
>> contain the full partition map, directory, everything. I have not tried 
>> using the raw device:
>> 
>> dd if=/dev/rdisk0 of=/dev/disk1s2/diskimages/imageofhotharddrive.iso bs=256k
>> 
>> This would be 6x faster on my machine, but I don't know the difference with 
>> XNU between block level and raw device other than performance. Obviously I 
>> would prefer to use rdisk because it is way faster. But I don't know what 
>> the result is.

:sigh:

If you don't understand what you're doing, I'm not sure why you're doing it, 
let alone recommending it. 

The disk device is a logical, block-level representation of the device: each 
block, in order, corresponds to the logical block number on the logical disk. 

The raw device is a physical, block-level representation of the device: each 
block, in order, corresponds to a "physical" block of the disk device. This 
"physical" block is not the real physical block on the platters, but the disk's 
presentation of its blocks to the OS; the actual blocks on the disk are 
constantly being remapped as needed. 

You should not use raw devices for making disk images or backups, or just about 
anything other than low level forensic clones. And comparisons of them may vary 
between any two dd's of the raw device. 

The reason a dd of the raw device is faster is that it is, very generally 
speaking (and not quite accurately), performing a spiral read of the device, 
which is generally fastest since it represents the shortest head-travel path 
on the device. 
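
If you want to see the difference on your own hardware, a quick read-only 
throughput sketch (disk numbers are placeholders; dd prints a bytes/sec figure 
at the end of each run):

    # buffered block device
    sudo dd if=/dev/disk0 of=/dev/null bs=1m count=1024

    # unbuffered raw device, same data
    sudo dd if=/dev/rdisk0 of=/dev/null bs=1m count=1024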

On Mar 8, 2011, at 12:03 AM, Chris Murphy wrote:
> OK well, then it should be simple for you to explain why /dev/disk0 results 
> in 17MB/s read, and /dev/rdisk0 results in 107MB/s read, and on Linux 
> /dev/sda result in 107MB/s read which is the block level device and there is 
> no raw device created anymore.

Indeed it is, and the above should clarify that. On other Unices (including 
Linux) there still are raw devices.

On Mar 8, 2011, at 4:27 AM, Markus Hitter wrote:
> Mac OS X is designed to be comfortable, not neccessarily quick. Low level 
> performance is a field where Linux really shines.

More than just comfortable, it's designed to be safe. It takes the "do no evil, 
cause no harm" philosophy: it performs read-after-write checks and verifies 
data. Linux, being just a kernel, doesn't do any of that, because data 
integrity isn't the responsibility of a kernel; it's the responsibility of the 
higher-level systems. Linux is quicker because it's doing the bare minimum. 
It's not a good choice if you care about data integrity over speed. 

Mr Magill is correct in his account of DEC and their work with I/O block 
sizes. Of course they had a vested interest, since the VAX pager used 512-byte 
blocks. 

The FFS had been around for some time in BSD, well before Ultrix. And Ultrix 
wasn't the first Unix for DEC equipment by a long shot; it came about because 
of MIPS, OSF, and the desire to provide a more "open", standardized, modern 
product that could compete on performance, not just because it was a Unix. 

The history of "fixing" BSD code is also an interesting one. Often it was known 
that there were performance or other issues, yet they continued to get ported 
along because they were considered not a bug but a "feature": 'that's the way 
BSD does it and we want to do things the same way so the expectations are the 
same.' This was especially true of the IP stack, where many strange and wrong 
things occur even to this day. Backward compatibility is a bitch. 

As my colleague Simsong (Simson Garfinkel) writes: "Almost every piece of Unix 
software (as well as software for several other widely used operating systems) 
has been developed without comprehensive specifications. As a result you cannot 
easily tell when a program has actually failed. Indeed, what appears to be a 
bug to users of the program might be a feature that was intentionally planted 
by the program authors."

Hence "It's not a bug; it's an undocumented feature!" DEC used this 
consistently since the PDP days when they used the terms "bug" and "feature" in 
their reporting of test results to distinguish between undocumented actions of 
delivered software products that were unacceptable and tolerable, respectively. 
The Jargon File defines "Feature" as "a surprising property of a program. 
Occasionally documented."

As Mr Magill, and others, point out, as DEC was at the forefront of producing 
hardware used for Unix systems, they were also at the forefront of producing 
larger and larger drives. In the beginning many drives were "tuned" into the 
OS and drivers (and in some cases actually hard-wired) for specific effect. 
This worked early on when drive choices were limited, hardware was expensive, 
and good software was less of a commodity. At the time hardware was expensive 
and programmers cheap. [In the late 80's and 90's programmers became expensive 
and hardware cheap, and we're now again seeing this reversal as hardware comes 
with our breakfast cereal and programmers, while not exactly cheap, are 
producing FOSS which is 'free'.] 

Earlier on there wasn't much in the way of disks to choose from and the pace 
of development was slow. Removable disk platters ruled because the drive (not 
the disk) was expensive and the data exceeded the need for online access. As 
the technology to produce larger fixed disks came en vogue, with the RP07 
[MASSBUS] and then the RA series (which produced further savings through 
common controllers), the focus turned again to fixing disk problems, as the 
sizes were now so much larger than what had been seen previously and 
filesystem repair times were impossible to deal with, as was poor I/O 
performance. When you have more data, users tend to ask for it, and that means 
more and larger I/Os which you must deal with. 

> On Mar 9, 2011, at 11:28 AM, David Herren wrote:
>> Ah, good times. I often wonder what happened to the Alpha we used to use at 
>> the college. I rather liked that box.

Intel stole DEC's technology, and as it was sold, and sold again (first to 
Compaq, then to HP), the lawsuits were settled. The Alpha technology is still 
around in Intel's Itanium architectures (which, BTW, still run VMS).


But... back to the topic at hand...

Use asr to reliably clone your filesystems. 

Use Backup Bouncer to check if any particular tool copies all OS X file 
metadata. 

CCC should copy all file metadata if used properly too, but asr, especially in 
block copy mode, is faster. 
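
If both volumes can be unmounted, asr can do a block copy between device nodes; 
roughly as follows (device nodes are placeholders, and the exact options should 
be checked against `man asr` for your release):

    # erase the target and block-copy the source filesystem onto it
    sudo asr restore --source /dev/disk2s2 --target /dev/disk3s2 --erase --noprompt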

A dd of a logical disk device produces an uncompressed disk image. (Pretty much 
true on any platform.)
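
If you then want a compressed image that Disk Utility handles nicely, hdiutil 
can convert it; for example (file names are placeholders):

    # convert the raw dd image to a compressed (UDZO) .dmg
    hdiutil convert /Volumes/Scratch/rescued.dmg -format UDZO -o /Volumes/Scratch/rescued-compressed.dmg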

Only use DiskWarrior and similar tools as a last resort. They will scavenge and 
attempt to rebuild filesystems that diskutil will not, but that's because they 
are attempting to recreate, in a less than safe manner, attributes which are 
corrupt. A clone or copy of the filesystem (again preferably with asr) is safer 
in all cases. Of course this means you need a scratch disk. If you don't have 
one, take a good backup, use fsck -r, and then if that doesn't work try 
DiskWarrior or TechTool Pro or some other third-party tool as a last resort, 
and do careful comparisons for missing or corrupted files. Assume any files 
remaining after such operations have potential data corruption. 
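
On an HFS+ volume the fsck in question is fsck_hfs, whose -r flag rebuilds the 
catalog B-tree. With the volume unmounted, a typical last-ditch run looks 
something like this (the device node is a placeholder):

    # force a check and repair, answering yes, and rebuild the catalog B-tree
    sudo fsck_hfs -fy -r /dev/disk2s2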

-d

------------------------------------------------------------------------
Dan Shoop
[email protected]
GoogleVoice: 1-646-402-5293
aim: iWiring
twitter: @colonelmode



