On Mon, Feb 18, 2013 at 5:55 AM, Macs R We <[email protected]> wrote:
>
> On Feb 18, 2013, at 12:31 AM, Arno Hautala wrote:
>>
>> You could always grep for 'backupd' in /var/log/system.log and post a few
>> complete backup cycles. This would point out if there's a specific stage
>> that is taking a long time.
>
> Since his problem isn't the inspecting time, but the data size, maybe tmutil
> can give him what he needs to solve his problem.

I was thinking that it may not be a size problem at all. A backup of a
folder containing very many files requires creating and destroying
hard links for every file in the folder as backups are created and
destroyed. Checking all those files at the start of the backup,
creating links in folders that also contain changed data, and
destroying many links could take quite a bit of time for a
small amount of changed data.

1 GB also doesn't strike me as obscene for a single backup. It's on
the high side in my experience, but not out of the question,
especially given the comment about working with Xcode. Coding and
compiling could easily generate a GB of data that needs to be checked
for changes.

A local drive should be able to transfer a GB in seconds; heck, even a
network backup should be able to handle that in minutes at most.
That's why I think the issue isn't the amount of data, but the number
of files.

Regardless, you're right. Getting a diff of a long backup against the
previous state should point out what is going on. If 'tmutil
compare' doesn't help, there's always 'fseventer' or 'fs_usage'.
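For reference, a sketch of how those checks might look from Terminal. The backup volume path and snapshot names are illustrative only and will differ on your machine, and flags vary a bit between OS X versions:

```shell
# Pull the backupd lines out of the system log to see the
# timing of complete backup cycles.
grep backupd /var/log/system.log

# Compare the latest backup to the current state of the disk.
# -s restricts the comparison to sizes, which keeps it fast.
tmutil compare -s

# Or diff two specific snapshots (paths here are made up as examples).
tmutil compare \
  "/Volumes/Backups/Backups.backupdb/MyMac/2013-02-17-120000" \
  "/Volumes/Backups/MyMac/Backups.backupdb/MyMac/2013-02-18-120000"

# Watch backupd's filesystem activity live (requires root), which
# would show whether it is grinding through huge numbers of files.
sudo fs_usage -w -f filesys backupd
```

The fs_usage output in particular should make it obvious if the time is going into per-file link churn rather than raw data transfer.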

-- 
arno  s  hautala    /-|   [email protected]

pgp b2c9d448
_______________________________________________
MacOSX-talk mailing list
[email protected]
http://www.omnigroup.com/mailman/listinfo/macosx-talk