> You cannot securely use a sparse file because the whole point of
> truecrypt is to make the encrypted container look like a big blob of
> noise.  If you were able to make a sparse file, anyone would be able
> to tell where the data was 

Yeah, I've read this in the TrueCrypt docs before.  I don't care.

Even when using sparse file containers, the encryption and security are
good enough.  Plus, millions of people use sparsebundles or sparse
TrueCrypt files.  It may not be suitable for some purposes (NSA), but a
lot of people like to use sparseness to their advantage.  I have
something to gain by using sparse volumes: backup time, volume
expandability, and availability of disk space.  So I disagree with the
totally one-sided assessment against using a sparse file.
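
For anyone who hasn't played with them, here's a minimal Python sketch of
what makes a file "sparse" (the file name and size are made up for
illustration):

    import os

    path = "container.img"        # hypothetical container name
    logical_size = 8 * 1024**3    # 8 GiB logical size

    # Seeking past EOF and writing a single byte leaves a "hole".
    # Most filesystems won't allocate disk blocks for the hole
    # until something is actually written into that region.
    with open(path, "wb") as f:
        f.seek(logical_size - 1)
        f.write(b"\0")

    st = os.stat(path)
    print("logical size :", st.st_size)          # 8 GiB
    print("allocated    :", st.st_blocks * 512)  # a few KB, at most

That gap between the logical and allocated sizes is exactly the
disk-space and backup-time win I'm talking about.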


> I also wish you could break files into smaller chunks, but if you are
> performing your backups with something like rsync, it would only
> transfer the parts of the file that changed.  I think there are other
> tools, like rdiff-backup, that let you more easily save just the parts
> that changed, like an incremental backup.  

I've also heard this before, and I'm sad to say, I tried it, and it's
false.  Under normal operation, rsync and rdiff-backup look at
timestamps, file sizes, and other low-cost information to determine
whether a file has changed.  If it has, the whole file is sent.  If you
enable the --inplace switch, it's supposed to send just the chunks that
have changed ... and it does ... but unfortunately, the method for
calculating which blocks have changed is to read and checksum the entire
file at both the source and the destination.  That is slower than simply
sending the whole file again.  When I was testing this, it took 30
minutes to back up my Parallels image for the first time by rsync'ing
over a 10/100 network.  And when I tried to "incrementally" send the same
file after some minuscule internal changes ... I let it run for an hour
before I killed it.
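
To make the cost concrete, here's a hedged Python sketch of that kind of
delta pass (simplified to fixed-offset blocks rather than rsync's actual
rolling checksum; the file names are placeholders).  Notice that both
sides still read every byte before a single "changed" block gets sent:

    import hashlib

    BLOCK = 1 << 20  # 1 MiB blocks, arbitrary for illustration

    def block_sums(path):
        # Reads the ENTIRE file to hash it -- this is the cost that
        # makes the "incremental" pass lose to a straight full copy.
        sums = []
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK):
                sums.append(hashlib.md5(chunk).digest())
        return sums

    def changed_blocks(src, dst):
        # Assumes equal-length files, for brevity.
        a, b = block_sums(src), block_sums(dst)
        return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

    # Only these block indexes would need to be transferred, but
    # finding them already cost two full-file reads:
    # print(changed_blocks("image.hdd", "backup/image.hdd"))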

I've had discussions with the rsync and rdiff-backup developers about
this, and long story short, nobody seems to care.  So I'm planning to
code it up myself one of these days, but for now, the fastest way to back
up either a TrueCrypt volume or a virtual machine disk image ... is
simply to copy the whole file every time.  Or, if you're lucky enough to
store the file on a ZFS volume, ZFS can send incremental snapshots
containing just the changed blocks.
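
In case anyone wants to beat me to it, the idea I have in mind looks
roughly like this Python sketch (not a finished tool; the file names and
the manifest format are made up).  Keep a manifest of per-block hashes
from the previous run, so the source only gets read once and the
destination copy never gets read at all -- you just overwrite the dirty
blocks in place:

    import hashlib, json, os

    BLOCK = 1 << 20  # 1 MiB blocks, arbitrary for illustration

    def backup(src, dst, manifest="blocks.json"):
        old = json.load(open(manifest)) if os.path.exists(manifest) else []
        new = []
        mode = "r+b" if os.path.exists(dst) else "wb"
        with open(src, "rb") as f, open(dst, mode) as out:
            i = 0
            while chunk := f.read(BLOCK):
                h = hashlib.sha1(chunk).hexdigest()
                new.append(h)
                # Block is new or differs from the last run: write it.
                if i >= len(old) or old[i] != h:
                    out.seek(i * BLOCK)
                    out.write(chunk)
                i += 1
        # Remember this run's hashes for next time.  (A real tool
        # would also handle truncation and write atomically.)
        json.dump(new, open(manifest, "w"))

The win over rsync --inplace is that the expensive read happens on one
side only, once per run, instead of on both ends every time.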


> However, doing that [breaking the file into chunks] would
> also provide some kind of weakness, as an attacker could analyze the
> backup and deduce which areas contain data that changed more
> frequently.

Again, don't care.  As long as my data is encrypted with a strong
password and a "strong" algorithm, I feel it's safe enough.  I have faith
that there are no attackers with repeated access to my backup store.  I
am not a spy.  Nobody's life depends on it.  I have things like my bank
passwords and root passwords to encrypt.  My goal is not to be
unhackable, but to take reasonable steps to protect my data from casual
interception.  I am not interested in keeping the CIA or NSA out of my
affairs, and they're not interested in mine either.  Still, it would be
irresponsible to leave the stuff on disk unencrypted.

I lock my car with a key.  I don't have any biometrics to start it.
Somebody could steal my car if they wanted to, but it's not as easy as if
I'd left the keys in it, ready to go.  That's good enough for me.


