Re: work factor calculation for brute-forcing crypto

2009-07-19 Thread David Malone
On Fri, Jul 17, 2009 at 01:37:43PM -0500, 
travis+ml-cryptogra...@subspacefield.org wrote:
 I'm curious if there's a way to express this calculation as a
 mathematical formula, rather than an algorithm, but right now I'm just
 blanking on how I could do it.

This has been dubbed the "guesswork" of a distribution by some
authors, I think originating with:

John O. Pliam. The disparity between work and entropy in
cryptology. http://philby.ucsd.edu/cryptolib/1998/98-24.html,
February 1999

It turns out that its asymptotic behaviour is a bit like that of
Rényi entropy, see:

E. Arikan. An inequality on guessing and its application
to sequential decoding. IEEE Transactions on Information
Theory, 42(1):99-105, January 1996.

I did an explanatory writeup of this with some experiments at:

http://www.maths.tcd.ie/~dwmalone/p/itt05.pdf
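
To make the quantity concrete: for a finite distribution, the
guesswork is the expected number of guesses made by an optimal
guesser, who tries values in decreasing order of probability. A
minimal Python sketch (my illustration, with made-up distributions):

def guesswork(probs):
    # Optimal guesser tries values in decreasing order of
    # probability; guesswork is the expected number of guesses.
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

print(guesswork([0.25, 0.25, 0.25, 0.25]))  # 2.5
print(guesswork([0.7, 0.1, 0.1, 0.1]))      # 1.6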

David.



Re: Firewire threat to FDE

2008-03-21 Thread David Malone
On Wed, Mar 19, 2008 at 02:25:36PM -0400, Leichter, Jerry wrote:

[This has been thrashed out on other lists.]

 Just how would that help?  As I understand it, Firewire and PCMCIA
 provide a way for a device to access memory directly.  The OS doesn't
 have to do anything - in fact, it *can't* do anything.

The OS can program the Firewire controller not to allow DMA.

 The only possible protection here is at the hardware level:  The
 external interface controller must be able to run in a mode which
 blocks externally-initiated memory transactions.  Unfortunately,
 that may not be possible for some controllers.  Sure, the rules for
 (say) SCSI might say that a target is only supposed to begin sending
 after a request from an initiator - but it would take a rather
 sophisticated state machine to make sure to match things up properly,
 especially on a multi-point bus.

Isn't what you're describing here an IOMMU?

David.



Irish blood donor records

2008-02-21 Thread David Malone
It seems that a disk containing records of the Irish Blood
Transfusion Service has been stolen in New York:

http://www.rte.ie/news/2008/0219/blood.html

Thankfully, the data was encrypted. The head of the IBTS said on
the news that there was a remote possibility of access, roughly
equivalent to winning the EuroMillions lottery 10 times in a row.
Based on the jackpot odds from:

http://en.wikipedia.org/wiki/EuroMillions

they are probably thinking of around lg(76275360)*10 = 261.84 bits,
i.e. roughly a 256-bit key.
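
For anyone who wants to check the arithmetic, a quick Python sketch
(the odds figure is just the Wikipedia jackpot number above):

import math

jackpot_odds = 76275360   # EuroMillions jackpot odds, per Wikipedia
print(math.log2(jackpot_odds) * 10)   # about 261.84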

David.



Re: open source disk crypto update

2007-04-26 Thread David Malone
On Wed, Apr 25, 2007 at 03:32:43PM -0500, Travis H. wrote:
 I think a simple evolution would be to make /boot and/or /root on
 removable media (e.g. CD-ROM or USB drive) so that one could take it
 with you.

Marc Schiesser gave a tutorial at EuroBSDcon 2005 on encrypting the
whole hard drive on your laptop using FreeBSD. If I remember right,
he used the trick of booting from a USB drive. The notes from his
tutorial are here:


http://events.ccc.de/congress/2005/fahrplan/attachments/586-paper_Complete_Hard_Disk_Encryption.pdf

David.



Re: statistical inferences and PRNG characterization

2006-05-22 Thread David Malone
On Fri, May 19, 2006 at 06:51:55AM -0500, Travis H. wrote:
 As I understand it, when looking at output, one can take a
 hypothetical source model (e.g. P(0) = 0.3, P(1) = 0.7, all bits
 independent) and come up with a probability that the source may have
 generated that output.  One cannot, however, say what probability such
 a source had generated the output, because there is an infinite number
 of sources (e.g. P(0) = 0.2999..., P(1) = 0.7000...).  Can one say
 that, if the source must be A or B, what probability it actually was A
 (and if so, how)?

You could do this with relatively simple Bayesian classification.
Start with a prior assumption like "As far as I know it is 50/50
that it is source A or B" and then for the output you see you
calculate P(A|output) and P(B|output) using Bayes' rule, your
probabilistic model for the source and P(A) = P(B) = 0.5.

P(X|O) = P(O|X) P(X)/P(O)

A finite number of sources is not required here, as long as you're
willing to provide a prior distribution over all possible sources
that you can do calculations with.
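
As a toy example (my sketch; the source models and prior are made
up): suppose source A emits 1s with probability 0.7 and source B
with probability 0.5, each bit independent. Then in Python:

def likelihood(p_one, bits):
    # Probability that an IID source with P(1) = p_one produced bits.
    l = 1.0
    for b in bits:
        l *= p_one if b == 1 else 1.0 - p_one
    return l

def posterior_A(output, pA=0.7, pB=0.5, prior_A=0.5):
    # Bayes' rule: P(A|output) = P(output|A) P(A) / P(output).
    a = likelihood(pA, output) * prior_A
    b = likelihood(pB, output) * (1.0 - prior_A)
    return a / (a + b)

print(posterior_A([1, 1, 0, 1, 1, 1, 0, 1]))   # about 0.73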

 Also, it strikes me that it may not be possible to prove something
 cannot be distinguished from random, but that proofs must be of the
 opposite form, i.e. that some source is distinguishable from random.

I think you're still going to run into the problem of deciding what
is random, and that problem will be tied up in your choice of prior
distribution on the sources.

 Am I correct?  Are there any other subtleties in the application of
 statistics to crypto that anyone wishes to describe?  I have yet to
 find a good book on statistics in these kinds of situations, or for
 that matter in any.

I guess the usual proviso applies: these sorts of calculations
assumptions to make them possible, and the results should not be
confidently applied outside situations where those assumptions are
valid.

David.



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-27 Thread David Malone
On Sat, Mar 25, 2006 at 07:26:51PM -0500, John Denker wrote:
 Executive summary:  Small samples do not always exhibit average behavior.

That's not the whole problem - you have to be looking at the right
average too.

For the long-run encodability of a set of IID symbols produced with
probabilities p_i, that average is the Shannon entropy.  If
you're interested in the mean number of guesses (per symbol) required
to guess a long word formed from these symbols, then you should be
looking at (\sum_i \sqrt{p_i})^2. Other metrics (min-entropy, work
factor, ...) require other averages.

To see this behaviour, you need both a large sample and the right
type of average to match your problem (and I've assumed IID).
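
To make the contrast concrete, a small Python sketch (mine, assuming
IID symbols) computing both averages for one distribution:

import math

def shannon_bits(probs):
    # Average bits per symbol needed for long-run encoding.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def guessing_average(probs):
    # (\sum_i \sqrt{p_i})^2, which governs the mean guesses per
    # symbol when guessing long words built from these symbols.
    return sum(math.sqrt(p) for p in probs) ** 2

probs = [0.5, 0.25, 0.125, 0.125]
print(shannon_bits(probs))       # 1.75
print(guessing_average(probs))   # about 3.66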

David.



Re: Linux RNG paper

2006-03-23 Thread David Malone
On Thu, Mar 23, 2006 at 01:55:30AM -0600, Travis H. wrote:
 It's annoying that the random number generator code calls the
 unpredictable stuff "entropy".  It's unpredictability that we're
 concerned with, and Shannon entropy is just an upper bound on the
 predictability.  Unpredictability cannot be measured based on outputs
 of sources, it must be based on models of the source and attacker
 themselves.  But we all know that.  Maybe we should invent a term? 
 Untropy?

The problem is that we have to decide what our metric is before we
can give it a name. Shannon entropy is about the asymptotic amount
of data needed to encode an average message. Kolmogorov's entropy
(which got a mention here today) is about the shortest program that
can produce a given string. These things aren't often important for
a PRNG or /dev/random like device.

One metric might be guessability (the mean number of guesses required,
or moments thereof).  As you point out, Arikan and Massey have
shown that Shannon entropy is not a particularly good estimate of
guessability. Generalisations of entropy, like Rényi entropy, do seem
to have meaning. The min-entropy mentioned in RFC 4086 seems
reasonable, though I don't think the rationale given in the RFC is
actually correct.
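
As a quick illustration (my sketch, with a made-up distribution),
two of those metrics side by side in Python:

import math

def min_entropy(probs):
    # Min-entropy in the RFC 4086 sense: -log2 of the most likely value.
    return -math.log2(max(probs))

def mean_guesses(probs):
    # Expected number of guesses for an optimal guesser.
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

probs = [0.5] + [0.5 / 1023] * 1023   # one likely value, 1023 rare ones
print(min_entropy(probs))    # 1.0 bit
print(mean_guesses(probs))   # 257.0 guesses on average

The distribution has only 1 bit of min-entropy, yet costs an optimal
guesser 257 guesses on average, which is why the choice of metric
matters.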

David.



Re: another feature RNGs could provide

2005-12-28 Thread David Malone
On Tue, Dec 27, 2005 at 11:34:15PM +, Ben Laurie wrote:
 If you don't have sufficient plain/ciphertext, then of course you can
 choose incorrect pairs.

Yep - that's my point. The thing to note is that for an arbitrary
permutation, knowing the image of n plaintexts tells you (almost)
nothing else.  Usually for a block cipher with a smaller key space,
knowing a plaintext/ciphertext pair actually has a pretty big impact
on what you know about the key, and this is how people usually think
about block ciphers.

In AES with a 128-bit block and 256-bit key, if the images are
uniformly and independently distributed, then each pair known reduces
the possible amount of key space by about 128 bits, so 2 or 3 pairs
will nail the key down with reasonable probability. For good measure
we could say 20 or 30 would be sufficient, even if the images aren't
well distributed.

For S_(2^128) the original key space has (2^128)! keys so it is
about 128*(2^128) bits. Knowing 30 pairs here will reduce the key
space by about 128*30 bits, leaving us with 128*(2^128) - 128*30 =
128*(2^128-30) bits. We've barely had any impact at all, because
the key space was much bigger to begin with.
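
The 128*(2^128) figure comes from Stirling's approximation,
log2(N!) ~= N*log2(N) - N*log2(e). A quick Python check (mine):

import math

N = 2.0 ** 128
print(N * math.log2(N) - N * math.log2(math.e))   # about 4.31e40
print(128 * N)               # 4.35e40, i.e. 128*(2^128), for comparison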

Of course, this also shows why using an arbitrary permutation in
S_(2^128) isn't very practical - you need to store 128*(2^128) bits
to remember which one you're using!

David.



Re: another feature RNGs could provide

2005-12-27 Thread David Malone
On Tue, Dec 27, 2005 at 03:26:59AM -0600, Travis H. wrote:
 On 12/26/05, Ben Laurie [EMAIL PROTECTED] wrote:
  Surely if you do this, then there's a meet-in-the middle attack: for a
  plaintext/ciphertext pair, P, C, I choose random keys to encrypt P and
  decrypt C. If E_A(P)=D_B(C), then your key was A.B, which reduces the
  strength of your cipher from 2^x to 2^(x/2)?

 Almost true.  The cardinality of the symmetric group S_(2^x) is
 (2^x)!, so it reduces it from (2^x)! to roughly sqrt((2^x)!).  That's
 still a lot.

I'm fairly sure knowing that E(P) = C reduces the key space from
(2^x)!  to (2^x - 1)!, because you've just got to choose images for
the remaining 2^x - 1 possible blocks.

I think a problem with Ben's argument is in assuming that knowing
E_A(P)=D_B(C) tells you that your key was A.B. For example, suppose
my key K is the permutation:

1 -> 2
2 -> 3
3 -> 4
4 -> 1

and my P = 2. Now we know E_K(P) = C = 3. Ben guesses A:

1 -> 1
2 -> 3
3 -> 2
4 -> 4

and B:

1 -> 1
2 -> 2
3 -> 3
4 -> 4

He sees that E_A(P) = E_A(2) = 3 = D_B(3), and so assumes that K =
A.B. But A.B = A != K.

(In this example, imagine x = 2, and we label the blocks 00 = 1,
01 = 2, 10 = 3, 11 = 4.)
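
The counterexample is easy to verify mechanically; a small Python
sketch (mine), writing the permutations above as dicts:

K = {1: 2, 2: 3, 3: 4, 4: 1}   # the real key
A = {1: 1, 2: 3, 3: 2, 4: 4}   # Ben's guess for the first stage
B = {1: 1, 2: 2, 3: 3, 4: 4}   # Ben's guess for the second stage

def invert(perm):
    return {v: k for k, v in perm.items()}

def compose(f, g):
    # Apply g first, then f; so A.B (A then B) is compose(B, A).
    return {x: f[g[x]] for x in g}

P = 2
C = K[P]                       # E_K(P) = 3
print(A[P] == invert(B)[C])    # True: E_A(P) = D_B(C), the test passes
print(compose(B, A) == K)      # False: but A.B != K

So the meet-in-the-middle test passes for this A and B, even though
A.B is not K: one plaintext/ciphertext pair fixes only one image of
the permutation.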

David.
