Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: Adam Back [EMAIL PROTECTED]

 On Fri, Apr 26, 2002 at 11:48:11AM -0700, Joseph Ashwood wrote:
  From: Bill Stewart [EMAIL PROTECTED]
   I've been thinking about a somewhat different but related problem
   lately, which is encrypted disk drives.  You could encrypt each block
   of the disk with a block cypher using the same key (presumably in CBC
   or some similar mode), but that just feels weak.
 
  Why does it feel weak? CBC is provably as secure as the block cipher
  (when used properly), and a disk drive is really no different from many
  others.  Of course you have to perform various gyrations to synchronise
  everything correctly, but it's doable.

 The weakness is not catastrophic, but depending on your threat model
 the attacker may see the ciphertexts from multiple versions of the
 plaintext across the edit/save cycle.

That could be a problem; you pointed out more information in your other
message, but obviously this would have to be dealt with somehow. I was
going to suggest that maybe it would be better to encrypt at the file
level, but that can very often leak more information and, depending on
how you do it, will leak directory structure. There has to be a better
solution.
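
To see why the edit/save cycle matters even with CBC, here is a minimal
sketch (Python, using the 'cryptography' package; the key, IV, and data
are dummies, not from any real driver). Two versions of a sector
encrypted with the same key and the same per-sector IV agree on every
ciphertext block up to the first edited block, so an attacker comparing
disk images learns where the edit happened:

    # CBC with a fixed per-sector key+IV leaks the position of edits:
    # unchanged prefix blocks encrypt to identical ciphertext.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = bytes(16)   # demo key only
    iv  = bytes(16)   # fixed per-sector IV, reused across writes

    def cbc_encrypt(pt):
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(pt) + enc.finalize()

    v1 = b"A" * 64 + b"old tail of the sector.........."  # 6 AES blocks
    v2 = b"A" * 64 + b"new tail of the sector.........."  # same first 4

    c1, c2 = cbc_encrypt(v1), cbc_encrypt(v2)
    assert c1[:64] == c2[:64]   # identical up to the edit: position leaks
    assert c1[64:] != c2[64:]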

  Well it's not all that complicated. Take that same key, and encrypt
  the disk block number, or address, or anything else.

 Performance is often at a premium in disk driver software --
 everything moving to-and-from the disk goes through these drivers.

 Encrypting could be slow, and a full encrypt just to derive the IV is
 probably overkill.  The IV doesn't have to be unpredictable, just
 different, or relatively random, depending on the mode.

 The performance hit for computing IV depends on the driver type.

 Where the driver is encrypting a disk block at a time, a 512-byte block
 (the standard smallest disk block size) divided into AES-block-sized
 chunks of 16 bytes each is 32 encrypts per IV generation.  So if IV
 generation is done with a block encrypt itself, that'll slow the system
 down by 3.125% right there.

 If the driver is higher level, using file-system APIs etc., it may have
 to encrypt one cipher block at a time, each with a different IV.  Use
 encryption to derive the IVs in this scenario and it'll be a 100%
 slowdown (encryption will take twice as long).

That is a good point. Of course we could just use the old standby
solution: throw hardware at it. The hardware encrypts at disk (or even
disk-cache) speed on the drive, eliminating all issues of this type. Not
a particularly cost-effective solution in many cases, but a reasonable
option for others.
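
For reference, a minimal sketch of the sector-level arrangement being
discussed (Python, using the 'cryptography' package; key and names are
illustrative, not from any real driver): the IV for each 512-byte sector
is derived by encrypting the sector number, so a sector costs 33 block
encrypts instead of 32, which is Adam's 3.125% figure:

    # Per-sector IV = E_K(sector number); then CBC over the 32 AES
    # blocks of the 512-byte sector.  One extra encrypt per sector.
    import struct
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECTOR = 512          # smallest standard disk block, in bytes
    key = bytes(16)       # demo key only

    def sector_iv(sector_no):
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(struct.pack(">QQ", 0, sector_no)) + enc.finalize()

    def encrypt_sector(sector_no, pt):
        assert len(pt) == SECTOR          # 512 / 16 = 32 AES blocks
        mode = modes.CBC(sector_iv(sector_no))
        enc = Cipher(algorithms.AES(key), mode).encryptor()
        return enc.update(pt) + enc.finalize()

    ct = encrypt_sector(7, bytes(SECTOR))  # 33 block encrypts in total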

  This becomes completely redoable (or, if you're willing to sacrifice a
  small portion of each block, you can even explicitly store the IV).

 That's typically not practical, not possible, or anyway very
 undesirable for performance (two disk hits instead of one) and
 reliability (write one without the other and you lose data).

Actually I was referring to changing the data portion of the block from
{data}
to
{IV, data}

placing all the IVs at the head of every read. This of course will sacrifice
k bits of the data space for little reason.
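
A sketch of that {IV, data} layout (Python, 'cryptography' package;
illustrative only): each on-disk block grows by a 16-byte IV, which is
the k bits of space given up, and a fresh IV is picked on every write:

    # {IV, data}: the IV travels with the ciphertext in the same block,
    # so reads and writes stay one disk hit each.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = bytes(16)   # demo key only

    def write_block(pt):                  # pt: one 512-byte sector
        iv = os.urandom(16)               # new IV on every write
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(pt) + enc.finalize()   # 512+16 bytes

    def read_block(stored):
        iv, ct = stored[:16], stored[16:]
        dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        return dec.update(ct) + dec.finalize()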

   I've been thinking that Counter Mode AES sounds good, since it's easy
   to find the key for a specific block.  Would it be good enough just
   to use
       Hash( Hash( Key, block# ) )
   or some similar function instead of a more conventional crypto
   function?
 
  Not really; you'd have to change the key every time you write to
  disk, which is not exactly a good idea: it makes key distribution a
  nightmare.  Stick with CBC for disk encryption.

 CBC isn't ideal as described above.  Output feedback modes like OFB
 and CTR are even worse, as you can't reuse the IV, or an attacker who
 is able to see a previous disk image gets the XOR of two plaintext
 versions.

 You could encrypt twice (CBC in each direction or something), but that
 will again slow you down by a factor of 2.
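
To make that keystream-reuse point concrete, a stdlib-only Python sketch
with a hash-based counter keystream standing in for CTR/OFB (the
construction is illustrative, not a proposed cipher): reusing the IV
hands the attacker the XOR of the two plaintext versions:

    # Same key + same IV => same keystream => C1 XOR C2 == P1 XOR P2.
    import hashlib

    def keystream(key, iv, n):
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + iv + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    key, iv = b"k" * 16, b"same-iv-reused!!"
    p1 = b"first version of this sector...."
    p2 = b"second version of the sector...."
    c1 = xor(p1, keystream(key, iv, len(p1)))
    c2 = xor(p2, keystream(key, iv, len(p2)))
    assert xor(c1, c2) == xor(p1, p2)   # plaintext XOR leaks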

 Note in the file system level scenario an additional problem is file
 system journaling, and on-the-fly disk defragmentation -- this can
 result in the file system intentionally leaving copies of previous or
 the same plaintexts encrypted with the same key and logical position
 within a file.

Yeah, the defragmentation would have to be smart; it can't simply copy
the disk block (with the disk-block-based IV) to a new location. This
problem disappears with the {IV, data} block format, but that has other
problems that are at least as substantial.

 So it's easy if performance is not an issue.

Or if you decide to throw hardware at it.

 Another approach was Paul Crowley's Mercy cipher which has a 4Kbit
 block size (= 512 bytes = sector sized).  But it's a new cipher and I
 think it already had some problems, though performance is much better
 than e.g. AES with double CBC, and it means you can use ECB mode per
 block with a key derived using a key-derivation function salted by the
 block-number (the cipher includes such a concept directly in its
 key-schedule), or CBC mode with an IV derived from the block number
 and only one block, so you don't get the low-tide mark of edits you
 get with CBC.

Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Adam Back

Joseph Ashwood wrote:
 Adam Back Wrote:
   This becomes completely redoable (or, if you're willing to sacrifice
   a small portion of each block, you can even explicitly store the IV).
 
  That's typically not practical, not possible, or anyway very
  undesirable for performance (two disk hits instead of one) and
  reliability (write one without the other and you lose data).
 
 Actually I was referring to changing the data portion of the block
 from {data} to {IV, data}

Yes I gathered, but this is what I was referring to when I said not
possible.  The OSes have 512-byte sectors ingrained into them.  I think
you'd have a hard time changing it.  If you _could_ change that magic
number, that'd be a big win and make the security easy: just pick a new
CPRNG-generated IV every time you encrypt a block.  (A CPRNG based on
SHA1 or RC4 is pretty fast, or something less cryptographic could be
sufficient depending on the threat model.)
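
A sketch of such a CPRNG (Python stdlib, SHA1 over a seed plus a
counter; illustrative only, not a vetted generator): it yields a fresh
16-byte IV per block write at the cost of one hash rather than one
block encrypt:

    # SHA1-based counter CPRNG for IV generation.
    import hashlib

    class Sha1Cprng:
        def __init__(self, seed):
            self.seed, self.ctr = seed, 0
        def next_iv(self):
            self.ctr += 1
            msg = self.seed + self.ctr.to_bytes(8, "big")
            return hashlib.sha1(msg).digest()[:16]

    rng = Sha1Cprng(b"high-entropy seed goes here")
    iv = rng.next_iv()   # new IV for every block written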

 placing all the IVs at the head of every read. This of course will
 sacrifice k bits of the data space for little reason.

A security/space trade-off with no performance hit (other than needing
to write 7% or 14% more data, depending on the size of the IV) is
probably more desirable than having to doubly encrypt the block and take
a 2x CPU overhead hit.  However, as I mentioned, I don't think it's
practical or possible due to OS design.

  Note in the file system level scenario an additional problem is file
  system journaling, and on-the-fly disk defragmentation -- this can
  result in the file system intentionally leaving copies of previous or
  the same plaintexts encrypted with the same key and logical position
  within a file.
 
 Yeah, the defragmentation would have to be smart; it can't simply copy
 the disk block (with the disk-block-based IV) to a new location.

Well with the sector level encryption, the encryption is below the
defragmentation so file chunks get decrypted and re-encrypted as
they're defragmented.

With the file system level stuff the offset is likely logical (file
offset etc.) rather than absolute, so you don't mind if the physical
address changes (e.g. loopback in a file, or file system APIs on
Windows).

  Another approach was Paul Crowley's Mercy cipher which has a 4Kbit
  block size (= 512 bytes = sector sized).  But it's a new cipher and I
  think it already had some problems, though performance is much better
  than e.g. AES with double CBC, and it means you can use ECB mode per
  block with a key derived using a key-derivation function salted by the
  block-number (the cipher includes such a concept directly in its
  key-schedule), or CBC mode with an IV derived from the block number
  and only one block, so you don't get the low-tide mark of edits you
  get with CBC.
 
 It's worse than that: there's actually an attack on the cipher. Paul
 details this fairly well on his page about Mercy.

Yes, that's what I was referring to by "already had some problems".
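
For what the block-number-salted key derivation could look like with
standard primitives in place of Mercy (a Python sketch; HMAC-SHA256 as
the KDF and AES-CBC inside the sector, since AES's 16-byte block, unlike
Mercy's sector-sized one, can't cover a whole sector in ECB; all names
illustrative):

    # Per-sector key and IV derived from the master key, salted by the
    # block number, so identical sectors encrypt differently.
    import hmac, hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    master = bytes(32)   # demo master key

    def derive(label, block_no, n):
        msg = label + block_no.to_bytes(8, "big")
        return hmac.new(master, msg, hashlib.sha256).digest()[:n]

    def encrypt_sector(block_no, pt):     # pt: one 512-byte sector
        key = derive(b"key", block_no, 16)
        iv  = derive(b"iv",  block_no, 16)
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(pt) + enc.finalize()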

Adam
--
http://www.cypherspace.org/adam/



Re: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: Adam Back [EMAIL PROTECTED]

 Joseph Ashwood wrote:
  Actually I was referring to changing the data portion of the block
  from {data} to {IV, data}

 Yes I gathered, but this is what I was referring to when I said not
 possible.  The OSes have 512-byte sectors ingrained into them.  I think
 you'd have a hard time changing it.  If you _could_ change that magic
 number, that'd be a big win and make the security easy: just pick a new
 CPRNG-generated IV every time you encrypt a block.  (A CPRNG based on
 SHA1 or RC4 is pretty fast, or something less cryptographic could be
 sufficient depending on the threat model.)

From what I've seen of a few OSs there really isn't that much binding to
512 bytes in the OS per se, but the file system depends on it
completely. Regardless, the logical place, IMO, to change this is at the
disk level, if the drive manufacturers can be convinced to produce
drives that offer 512+16-byte sectors. Once that initial break happens,
all the OSs will play catch-up to support the drive; that will break the
hardwiring and give us our extra space. Of course, convincing the
hardware vendors to do this without a substantial hardware reason will
be extremely difficult. On our side, though, is that I know hard disks
already store more than just the data: they also store a checksum and
some sector-reassignment information (SCSI drives are especially good at
this; IDE does it under the hood, if at all). I'm sure there's other
information; if this could be expanded by 16 bytes, that'd supply the
necessary room. Again, convincing the vendors to supply this would be a
difficult task, and would require the addition of functionality to the
hard drive to either decrypt on the fly or hand the key over to the
driver.

  Yeah, the defragmentation would have to be smart; it can't simply copy
  the disk block (with the disk-block-based IV) to a new location.

 Well with the sector level encryption, the encryption is below the
 defragmentation so file chunks get decrypted and re-encrypted as
 they're defragmented.

 With the file system level stuff the offset is likely logical (file
 offset etc.) rather than absolute, so you don't mind if the physical
 address changes (e.g. loopback in a file, or file system APIs on
 Windows).

That's true. I was thinking of this more as something that will, for
now, run in software and in the future get pushed down to the hardware,
so we can use a smartcard/USB key/whatever comes out next to feed it the
key. A meta-filesystem would be useful as a short-term measure, but it
still keeps all the keys in system memory where programs can access
them; if we can maintain the option of moving it to hardware later on, I
think that would be a better solution (although also a harder one).

I feel like I'm missing something that'll be obvious once I've found it.
Hmm, maybe there is a halfway decent solution (although not at all along
the same lines). For some reason I was just remembering SANs; it's a
fairly well-known problem to design and build secure file-system
protocols (although they don't get used much). So it might actually be a
simpler concept to build a storage area network using whatever
extra-hardened OSs we need, with only the BIOS being available without a
smartcard. Put the smartcard in, the smartcard itself decrypts/encrypts
sector keys (or maybe some larger grouping), and the SAN host decrypts
the rest. Pull out the smartcard and the host can detect that, flush all
caches, and shut itself off. This has some of the same problems, but at
least we're not going to have to design a hard drive, and since it's a
remote file system I believe most OSs assume very little about sector
sizes. Of course, as far as I'm concerned this should still be just a
stopgap measure until we can move that entire SAN host inside the client
computer.

Now for the biggest question: how do we get Joe Public to actually use
this correctly (take the smartcard with them, or even just not choose
weak passwords)?
Joe




RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread JonathanW

Instead of adding 16 bytes to the size of each sector for sector IVs,
how about having a separate file (which could be stored on a compact
flash card, CD-RW, or other portable media) that contains the IVs for
each disk sector? You could effectively wipe the encrypted disk merely
by wiping the IV file, which would be much faster than securely erasing
the entire disk. If the IV file was not available, decryption would be
impossible even if the main encryption key was rubberhosed or otherwise
leaked. This could be a very desirable feature for the tinfoil-hat-Linux
crowd: as long as you have possession of the compact flash card with the
IV file, an attacker with your laptop isn't going to get far cracking
your encryption, especially if you have the driver constructed to use a
dummy IV file on the laptop somewhere after X number of failed
passphrase entries, to provide plausible deniability for the existence
of the compact flash card.

To keep the IV file size reasonable, you might want to encrypt logical
blocks (1K-8K, depending on disk size, OS, and file system used, vs. 512
bytes) instead of individual sectors, especially if the file system
thinks in terms of blocks instead of sectors. I don't see the value of
encrypting below the granularity of what the OS is ever going to write
to disk.




Re: RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, April 27, 2002 12:11 PM
Subject: CDR: RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)
  
  Instead of adding 16 bytes to the size of each sector for sector IVs,
  how about having a separate file (which could be stored on a compact
  flash card, CD-RW, or other portable media) that contains the IVs for
  each disk sector?
Not a very good solution.

  
  You could effectively wipe the encrypted disk merely by wiping the IV
  file, which would be much faster than securely erasing the entire
  disk.
  
Actually that wouldn't work, at least not in CBC mode (which is
certainly my preference, and seems to be the generally favored mode for
disk encryption). In CBC mode, not having the IV (setting the IV to 0)
only destroys the first block; after that, everything decrypts normally,
so the only wiped portion of each sector is its first block.
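
A quick check of that point (a Python sketch using the 'cryptography'
package; key and data are dummies): decrypting with a zeroed IV garbles
only the first 16-byte block, because CBC decryption chains on the
previous ciphertext block, which the attacker still has:

    # Losing the IV in CBC destroys only the first block of each sector.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(16), os.urandom(16)
    pt = b"A" * 16 + b"B" * 16 + b"C" * 16    # three AES blocks

    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = enc.update(pt) + enc.finalize()

    dec = Cipher(algorithms.AES(key), modes.CBC(bytes(16))).decryptor()
    out = dec.update(ct) + dec.finalize()     # decrypt with IV "wiped"
    assert out[:16] != pt[:16]                # first block destroyed
    assert out[16:] == pt[16:]                # the rest decrypts fine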

  
  If the IV file was not available, decryption would be impossible even
  if the main encryption key was rubberhosed or otherwise leaked. This
  could be a very desirable feature for the tinfoil-hat-Linux crowd: as
  long as you have possession of the compact flash card with the IV
  file, an attacker with your laptop isn't going to get far cracking
  your encryption, especially if you have the driver constructed to use
  a dummy IV file on the laptop somewhere after X number of failed
  passphrase entries, to provide plausible deniability for the existence
  of the compact flash card.
  
And then the attacker would just get all of your data except the first
block of each sector (assuming the decryption key is found).

  
  To keep the IV file size reasonable, you might want to encrypt logical
  blocks (1K-8K, depending on disk size, OS, and file system used, vs.
  512 bytes) instead of individual sectors, especially if the file
  system thinks in terms of blocks instead of sectors. I don't see the
  value of encrypting below the granularity of what the OS is ever going
  to write to disk.

That is a possibility, and actually I'm sure it's occurred to the hard
drive manufacturers that the next time they do a full overhaul of the
wire protocol they should enable larger blocks (if they haven't already;
like I said before, I'm not a hard drive person). This would serve them
very well, as they would have to store less information, increasing the
disk size producible per cost (even if not by much, every penny counts
when you sell a billion devices). Regardless, this could be useful for
the disk encryption, but assuming the worst case won't lose us anything
in the long run, and should enable the best case to be done more easily;
so for the sake of simplicity, and to satisfy the worst case, I'll keep
on calling them sectors until there's a reason not to.
  

Joe