Re: Protection for quasi-offline memory nabbing

2008-03-26 Thread Alex Alten

At 10:38 AM 3/21/2008 -0700, Jon Callas wrote:


Although my hypotheses are only hypotheses, and I have no experimental
data, I think that using a large-block cipher mode like EME to induce a
pseudo-random, maximally fragile bit region is an excellent mitigation
strategy.


Isn't EME patented?  - Alex

--

Alex Alten
[EMAIL PROTECTED]




Re: Protection for quasi-offline memory nabbing

2008-03-21 Thread Steven M. Bellovin
I've been thinking about similar issues.  It seems to me that just
destroying the key schedule is a big help -- enough bits will change in
the key that data recovery using just the damaged key is hard, per
comments in the paper itself.
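
A minimal Python sketch of that idea, assuming the expanded key schedule
is held in a mutable bytearray; it is illustrative only and cannot reach
copies the runtime may have made elsewhere or pages already swapped out:

# Zeroise an expanded key schedule as soon as it is no longer needed, so
# a memory image holds at most the (decayed) raw key rather than the
# redundant expansion the Cold Boot paper exploits for error correction.
def wipe(schedule: bytearray) -> None:
    schedule[:] = bytes(len(schedule))    # overwrite in place with zeros

round_keys = bytearray(b"\x42" * 240)     # e.g. an AES-256 schedule is 240 bytes
# ... expand the real key into round_keys and encrypt with it ...
wipe(round_keys)
assert all(b == 0 for b in round_keys)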



Re: Protection for quasi-offline memory nabbing

2008-03-21 Thread Jack Lloyd
On Tue, Mar 18, 2008 at 09:46:45AM -0700, Jon Callas wrote:

 What operates like a block cipher on a large chunk?
 Tweakable modes like EME.

Or, as a non-patented alternative, one could use the Bear/Lion
constructions [1], which can encrypt blocks of arbitrary size at
reasonably good speeds (depending on the performance characteristics of
the stream cipher and hash function they are instantiated with).

-Jack

[1] http://www.cl.cam.ac.uk/~rja14/Papers/bear-lion.pdf
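
For the curious, a rough Python sketch of the BEAR construction along
the lines of [1], assuming HMAC-SHA256 as the keyed hash and, since the
standard library has no stream cipher, SHA-256 in counter mode as a
stand-in keystream generator; it is meant to show the shape of the
construction and how a single decayed ciphertext bit garbles the whole
decrypted block, not to be a vetted implementation:

import hashlib, hmac, os

H_LEN = 32  # bytes; the "left" part of each block is one hash output wide

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _keystream(key: bytes, n: int) -> bytes:
    # Stand-in stream cipher: SHA-256 in counter mode keyed by `key`.
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def _hk(key: bytes, msg: bytes) -> bytes:
    # Keyed hash round function; HMAC is one reasonable instantiation.
    return hmac.new(key, msg, hashlib.sha256).digest()

def bear_encrypt(k1: bytes, k2: bytes, block: bytes) -> bytes:
    left, right = block[:H_LEN], block[H_LEN:]
    left = _xor(left, _hk(k1, right))
    right = _xor(right, _keystream(left, len(right)))
    left = _xor(left, _hk(k2, right))
    return left + right

def bear_decrypt(k1: bytes, k2: bytes, block: bytes) -> bytes:
    left, right = block[:H_LEN], block[H_LEN:]
    left = _xor(left, _hk(k2, right))
    right = _xor(right, _keystream(left, len(right)))
    left = _xor(left, _hk(k1, right))
    return left + right

if __name__ == "__main__":
    k1, k2 = os.urandom(32), os.urandom(32)
    plaintext = os.urandom(4096)                  # one 4 KB "block"
    damaged = bytearray(bear_encrypt(k1, k2, plaintext))
    damaged[1000] ^= 0x01                         # a single decayed bit
    recovered = bear_decrypt(k1, k2, bytes(damaged))
    intact = sum(a == b for a, b in zip(plaintext, recovered))
    print(intact, "of", len(plaintext), "plaintext bytes survive")

With the one flipped bit, only the roughly 16 bytes that match by chance
survive decryption, which is exactly the maximal fragility being sought.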



Re: Protection for quasi-offline memory nabbing

2008-03-21 Thread Jon Callas


On Mar 19, 2008, at 6:56 PM, Steven M. Bellovin wrote:

I've been thinking about similar issues.  It seems to me that just
destroying the key schedule is a big help -- enough bits will change in
the key that data recovery using just the damaged key is hard, per
comments in the paper itself.


It is. That's something everyone should consider doing. However, I was  
struck by the decay curves shown in the Cold Boot paper. The memory  
decays in an S-curve. Interestingly, both the smoothest S-curve and  
the sharpest were in the most recent equipment.


However, this suggests that a relatively small object (like a 256-bit
key) is apt to see little damage. If you follow the simple strategy of
checking for single-bit errors, then double-bit, then triple-bit, I
hypothesize it would be productive, because of that curve.
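
A minimal Python sketch of that search, assuming a hypothetical try_key
test (say, trial-decrypting a block whose plaintext is recognisable);
for a 256-bit key there are 256 single-bit candidates, about 33,000
double-bit candidates, and about 2.8 million triple-bit candidates, all
easily searchable:

from itertools import combinations
from typing import Callable, Optional

def correct_key(damaged: bytes, try_key: Callable[[bytes], bool],
                max_flips: int = 3) -> Optional[bytes]:
    # Try the damaged key unchanged, then every 1-bit, 2-bit, ... flip.
    nbits = len(damaged) * 8
    for flips in range(max_flips + 1):
        for positions in combinations(range(nbits), flips):
            candidate = bytearray(damaged)
            for pos in positions:
                candidate[pos // 8] ^= 1 << (pos % 8)
            if try_key(bytes(candidate)):
                return bytes(candidate)
    return None

# Hypothetical usage, with a try_key that recognises a correct decryption:
#   fixed = correct_key(dumped_key_image, try_key)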


(I also have a few hypotheses on which bits will go first. I hypothesize
that a high-power bit surrounded by low-power ones will go first, and a
low-power bit amongst high-power ones will go last. I also hypothesize
that a large random area is reasonably likely to get an early single-bit
error. My rationale is that the area as a whole is going to have
relatively high power 'consumption' because it is random, but the random
area is going to have local artifacts that will hasten a local failure.
Assuming that 1 is high-power and 0 is low-power, you expect to see a
bitstring of 00100 or 0001000 relatively often in a blob of 32 kbits
(4 KB) or 64 kbits (8 KB), and those lonely ones will have a lot of
stress on them.)
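
As a back-of-the-envelope check on "relatively often": in a uniformly
random blob, any fixed 5-bit pattern such as 00100 appears at a given
offset with probability 2^-5, so a 32-kbit region should contain on the
order of a thousand such lonely ones. A quick Python sketch (which of
course says nothing about how real DRAM decays):

import random

random.seed(1)
nbits = 32 * 1024                       # 32 kbits = 4 KB
bits = ''.join(random.choice('01') for _ in range(nbits))

expected = (nbits - 4) / 32             # each offset matches with prob 2**-5
observed = sum(bits[i:i+5] == '00100' for i in range(nbits - 4))
print("expected ~%d isolated ones, observed %d" % (expected, observed))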


Although my hypotheses are only hypotheses, and I have no experimental
data, I think that using a large-block cipher mode like EME to induce a
pseudo-random, maximally fragile bit region is an excellent mitigation
strategy.


Now all we need is someone to do the work and write the paper.

Jon
