Re: Protection for quasi-offline memory nabbing

2008-03-26 Thread Alex Alten

At 10:38 AM 3/21/2008 -0700, Jon Callas wrote:


Although my hypotheses are only hypotheses, and I have no experimental
data, I think that using a large-block cipher mode like EME to induce
a pseudo-random, maximally fragile bit region is an excellent
mitigation strategy.


Isn't EME patented?  - Alex

--

Alex Alten
[EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Protection for quasi-offline memory nabbing

2008-03-21 Thread Steven M. Bellovin
I've been thinking about similar issues.  It seems to me that just
destroying the key schedule is a big help -- enough bits will change in
the key that data recovery using just the damaged key is hard, per
comments in the paper itself.
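The point about damaged keys can be sketched in a few lines. This is a
hedged toy model, not real cipher code: SHA-256 in counter mode stands in
for a cipher keyed by the (possibly damaged) key, since any real key
schedule with good avalanche behaves analogously.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. A stand-in for a real
    cipher with strong key avalanche; not for production use."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

key = bytes(range(32))
damaged = bytearray(key)
damaged[0] ^= 0x01            # a single flipped bit, as if partially erased

a = keystream(key, 64)
b = keystream(bytes(damaged), 64)
diff_bits = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
# With even one key bit wrong, roughly half of the 512 output bits differ,
# so data "recovered" with the damaged key is unusable.
print(diff_bits)
```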



Re: Protection for quasi-offline memory nabbing

2008-03-21 Thread Jack Lloyd
On Tue, Mar 18, 2008 at 09:46:45AM -0700, Jon Callas wrote:

 What operates like a block cipher on a large chunk?
 Tweakable modes like EME.

Or, as a non-patented alternative, one could use the Bear/Lion
constructions [1], which can encrypt arbitrary-size blocks at
reasonably good speeds (depending on the performance characteristics
of the stream cipher and hash function they are instantiated with).
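For the curious, here is a minimal sketch of the BEAR construction from
[1], instantiated purely for illustration with HMAC-SHA256 as the keyed
hash and a SHA-256 counter-mode keystream as the stream cipher (the paper
leaves both primitives as parameters):

```python
import hashlib, hmac

BLK = hashlib.sha256().digest_size  # size of the short "left" half

def _H(key: bytes, msg: bytes) -> bytes:
    # Keyed hash for the two mixing rounds.
    return hmac.new(key, msg, hashlib.sha256).digest()

def _S(seed: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode, keyed by `seed`.
    out = b""
    i = 0
    while len(out) < n:
        out += hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
        i += 1
    return out[:n]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def bear_encrypt(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # BEAR: unbalanced three-round mix over the whole message
    # (msg must be longer than BLK bytes).
    L, R = msg[:BLK], msg[BLK:]
    L = _xor(L, _H(k1, R))
    R = _xor(R, _S(L, len(R)))
    L = _xor(L, _H(k2, R))
    return L + R

def bear_decrypt(k1: bytes, k2: bytes, ct: bytes) -> bytes:
    # Run the three rounds in reverse order.
    L, R = ct[:BLK], ct[BLK:]
    L = _xor(L, _H(k2, R))
    R = _xor(R, _S(L, len(R)))
    L = _xor(L, _H(k1, R))
    return L + R

k1, k2 = b"\x01" * 32, b"\x02" * 32
pt = b"secret key material" + bytes(1000)
ct = bear_encrypt(k1, k2, pt)
assert bear_decrypt(k1, k2, ct) == pt
```

Because the short half is rekeyed from the long half on every round, a
single-bit change anywhere in the ciphertext garbles the entire plaintext
on decryption, which is exactly the fragility property wanted here.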

-Jack

[1] http://www.cl.cam.ac.uk/~rja14/Papers/bear-lion.pdf



Re: Protection for quasi-offline memory nabbing

2008-03-21 Thread Jon Callas


On Mar 19, 2008, at 6:56 PM, Steven M. Bellovin wrote:

I've been thinking about similar issues.  It seems to me that just
destroying the key schedule is a big help -- enough bits will change
in the key that data recovery using just the damaged key is hard, per
comments in the paper itself.


It is. That's something everyone should consider doing. However, I was  
struck by the decay curves shown in the Cold Boot paper. The memory  
decays in an S-curve. Interestingly, both the smoothest S-curve and  
the sharpest were in the most recent equipment.


However, this suggests that a relatively small object (like a 256-bit
key) is apt to see little damage. If you followed the strategy of
checking for single-bit errors, then double-bit, then triple-bit, I
hypothesize that this simple strategy would be productive, because of
that curve.
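That search strategy is easy to sketch. In this illustration a stored
SHA-256 of the true key stands in for whatever known-answer test would
really be available (a known plaintext, a header checksum, and so on --
that choice is an assumption of the sketch):

```python
import hashlib
from itertools import combinations

def correct_key(damaged: bytes, check: bytes, max_flips: int = 3):
    """Try 0-, 1-, 2-, then 3-bit corrections of `damaged` until one
    matches `check` (here a SHA-256 of the true key). Returns the
    corrected key, or None if more bits than max_flips have decayed."""
    nbits = len(damaged) * 8
    for k in range(max_flips + 1):
        for positions in combinations(range(nbits), k):
            cand = bytearray(damaged)
            for p in positions:
                cand[p // 8] ^= 1 << (p % 8)
            if hashlib.sha256(bytes(cand)).digest() == check:
                return bytes(cand)
    return None

key = bytes(range(16))
check = hashlib.sha256(key).digest()
dmg = bytearray(key)
dmg[3] ^= 0x10                # two bits of simulated memory decay
dmg[9] ^= 0x02
recovered = correct_key(bytes(dmg), check)
assert recovered == key
```

For a 256-bit key the double-bit pass is only C(256, 2) = 32,640
candidates, so a handful of decayed bits is no protection at all.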


(I also have a few hypotheses on which bits will go first. I
hypothesize that a high-power bit surrounded by low-power ones will go
first, and a low-power bit amongst high-power ones will go last. I
also hypothesize that a large random area is reasonably likely to get
an early single-bit error. My rationale is that the area as a whole is
going to have relatively high power 'consumption' because it is
random, but the random area is going to have local artifacts that will
hasten a local failure. Assuming that 1 is high-power and 0 is
low-power, you expect to see a bitstring of 00100 or 0001000 relatively
often in a blob of 32kbits (4KB) or 64kbits (8KB), and those lonely
ones will have a lot of stress on them.)
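The claim about lonely ones is easy to check numerically. A quick
simulation, assuming independent uniformly random bits (an idealization
of the encrypted region):

```python
import random

random.seed(7)
nbits = 32 * 1024                      # a 4KB random region
bits = "".join(random.choice("01") for _ in range(nbits))

# Count "lonely" high-power bits: a 1 with two 0s on each side.
lonely = bits.count("00100")
# Each bit position matches the 5-bit pattern with probability 2**-5,
# so we expect on the order of 32768/32, i.e. roughly a thousand such
# stress points in every 4KB blob.
print(lonely)
```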


Although my hypotheses are only hypotheses, and I have no experimental
data, I think that using a large-block cipher mode like EME to induce
a pseudo-random, maximally fragile bit region is an excellent
mitigation strategy.


Now all we need is someone to do the work and write the paper.

Jon



Protection for quasi-offline memory nabbing

2008-03-19 Thread Jon Callas

Such as Cold Boot, etc.

There have been a number of conversations among my colleagues on how  
to ameliorate this, particularly with an eye to making suspend mode  
safer.


In the Cold Boot paper, the authors suggested XORing a piece of random  
memory onto the dangerous bits, so as to fuzz them. This is a clever  
idea, but we didn't like it, particularly because XOR doesn't have the  
best diffusion in the world.
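The weakness of plain XOR masking shows up immediately in a sketch: one
decayed bit in the stored material maps to exactly one wrong bit in the
unmasked key, so the attacker's error-correction search stays as cheap as
it was without the mask.

```python
import os

key = os.urandom(32)
pad = os.urandom(32)
stored = bytes(a ^ b for a, b in zip(key, pad))   # XOR-"fuzzed" copy

decayed = bytearray(stored)
decayed[5] ^= 0x04                                # one bit of memory decay

recovered = bytes(a ^ b for a, b in zip(decayed, pad))
diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(recovered, key))
# XOR confines the damage: exactly the one decayed bit is wrong.
print(diff_bits)   # 1
```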


The solution we came up with is to use EME mode (or an equivalent) with
a fixed key. The outline is that you encrypt all dangerous data -- the
volume key, the key expansion, and so on -- under that fixed key into a
chunk that you keep to the side.


This relies upon the property of EME (and other large-block, tweakable  
modes) that a single-bit error in the ciphertext propagates to an  
error in the entire plaintext. Consequently, a very low rate of memory  
decay turns into complete protection of that sensitive data.


Upon suspend, you erase and deallocate the active store, and on wake  
you decrypt the fuzzed copy to get your keys and state variables back.
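A hypothetical sketch of that suspend/wake lifecycle follows. Since EME
itself is patented and not in standard libraries, a BEAR-like hash/stream
mix stands in for the wide-block cipher; the class and function names are
invented for illustration.

```python
import hashlib, hmac

H = hashlib.sha256().digest_size

def _stream(seed: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode keyed by `seed`.
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def wide_encrypt(key: bytes, blob: bytes) -> bytes:
    # Minimal BEAR-like wide-block mix; a stand-in for EME, not EME.
    L, R = blob[:H], blob[H:]
    L = _xor(L, hmac.new(key + b"1", R, hashlib.sha256).digest())
    R = _xor(R, _stream(L, len(R)))
    L = _xor(L, hmac.new(key + b"2", R, hashlib.sha256).digest())
    return L + R

def wide_decrypt(key: bytes, blob: bytes) -> bytes:
    L, R = blob[:H], blob[H:]
    L = _xor(L, hmac.new(key + b"2", R, hashlib.sha256).digest())
    R = _xor(R, _stream(L, len(R)))
    L = _xor(L, hmac.new(key + b"1", R, hashlib.sha256).digest())
    return L + R

FIXED_KEY = b"\x00" * 32   # fixed, public key: the protection comes from
                           # fragility under decay, not from key secrecy

class KeyStore:
    """Hypothetical flow from the post: keep an encrypted side copy,
    erase the active secrets on suspend, decrypt again on wake."""
    def __init__(self, blob: bytes):
        self.active = bytearray(blob)
        self.side = wide_encrypt(FIXED_KEY, bytes(self.active))

    def suspend(self):
        for i in range(len(self.active)):   # zeroize the plaintext copy
            self.active[i] = 0

    def wake(self):
        self.active[:] = wide_decrypt(FIXED_KEY, self.side)

ks = KeyStore(b"volume-key:" + bytes(500))
ks.suspend()
ks.wake()
```

While suspended, only the fragile encrypted copy is in RAM, so any
single decayed bit in it garbles the whole recovered blob.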


If you want to one-plus this, you could have a timeout on the drive so
that if it's idle for N seconds, you do the same.


When we came up with this, we wondered if it was patentable. We've  
decided that it isn't, that this is something that is obvious to  
someone skilled in the art. Our reasoning is something like:


Cold Boot paper suggests XORing random memory
but -- XOR has cruddy diffusion
What has better diffusion?
(discard suggestions like lead, churches, and very small rocks)
Block ciphers have great diffusion
but -- block ciphers operate on only a small chunk
What operates like a block cipher on a large chunk?
Tweakable modes like EME.
QED

The rest is just software engineering.

The cool thing about using EME (or equivalent) is that the larger the  
chunk you create, the better you survive a Cold Boot attack.
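That scaling is just the chance that at least one bit anywhere in the
chunk decays before the attacker images memory. For an illustrative,
assumed per-bit early-decay probability p:

```python
# P(at least one decayed bit in an n-bit chunk) = 1 - (1 - p)**n.
# One decayed bit anywhere is enough to garble the whole wide block.
p = 1e-6   # illustrative early-decay rate; an assumption, not measured data

for n in (256, 32 * 1024, 1024 * 1024, 8 * 1024 * 1024):
    hit = 1 - (1 - p) ** n
    print(f"{n:>8} bits: P(at least one flip) = {hit:.4f}")
```

A bare 256-bit key is almost certain to survive intact, while a
megabit-scale chunk is very likely to pick up the one error that
destroys it.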


Note, however, that an attacker who can grab memory with no errors in  
it, such as someone who is playing DMA games, still gets the keys. To  
protect against that, you have to have an authentication mechanism,  
which is outside the scope of this -- we want something that is  
transparent, but can make people worry less about suspending their  
laptop.


Also note that you don't really need a full cipher. All you need is
reversible diffusion that maximizes damage on a single-bit error.
However, the danger in coming up with another function is that you're
effectively designing special-purpose crypto. Yes, it's really
special-purpose coding, not crypto, but it's a lot safer to use crypto.
We understand it better.


A number of people participated in our discussions and at least two  
people independently thought of the core idea. The people include but  
are not limited to (which means I apologize to everyone I forgot):  
Colin Plumb, Phil Zimmermann, Hal Finney, Andrey Jivsov, Will Price,  
David Finkelstein, and Bill Zhao.


Jon
