[Cryptography] Plug for crypto.stackexchange.com

2013-10-12 Thread David Wagner
I've noticed quite a few questions on this list
recently of the form "How do I do X?", "What is
the right cryptographic primitive for goal X?", etc.

I'd like to plug the following site:

http://crypto.stackexchange.com/
Cryptography Stack Exchange

It is an excellent place to post questions like
that and get helpful answers.  I encourage folks
to give it a try, if they have questions like the
ones I listed above.  By posting there, you will
not only get good answers, but those answers
will also be documented in a form that's well-suited
for others with the same problem to find and
benefit from.  I'm not trying to drive people
away from this mailing list, just pointing out
an additional resource that may be helpful.

Or, if you're feeling helpful and community-minded,
you can subscribe and help answer other people's
questions there.

(That site is like Stack Overflow, for those familiar
with Stack Overflow, except that it is focused on
cryptography.  There is also a site on information
security: http://security.stackexchange.com/ )
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: Encryption and authentication modes

2010-07-24 Thread David Wagner

Florian Weimer  wrote:
* David McGrew:
 can I ask what your interest in AEAD is?  Is there a particular
 application that you have in mind?

 I just want to create a generic API which takes a key (most of the
 time, a randomly generated session key) and can encrypt and decrypt
 small blobs.  Application code should not need to worry about details
 (except getting key management right, which is difficult enough).
 More popular modes such as CBC lack this property; it's too easy to
 misuse them.

Thanks for explaining the use case.

In addition to the dedicated AEAD modes (e.g., CCM, EAX, GCM) that
you already know about, you might look at encrypt-then-authenticate.
It might make a nice and simple solution for this particular use case.
In EtA, you encrypt with any secure encryption scheme, then append a
MAC that covers the entire ciphertext (including the IV).  For instance,
AES-CBC + CMAC is a fine combination.  It's pretty simple to implement.

 A mode which does not rely as much on proper randomness for the IV
 would be somewhat nice to have, but it seems that the only choice
 there is SIV, which has received less scrutiny than other modes.

I'm afraid that, for full security, secure encryption does require
either randomness or state.  The technical slogan is "deterministic
encryption schemes are not semantically secure."  A more down-to-earth
way to say it is that if you use a deterministic, stateless encryption
scheme to encrypt the same message twice, you'll get the same ciphertext
both times, which leaks some information to an eavesdropper.  In some
applications that might be OK, but for a general-purpose encryption
scheme, one might arguably prefer something that doesn't have such
an unnecessary weakness.

A weaker goal is graceful degradation: a weak random-number source
should not cause a catastrophic loss of security.  Some modes of
operation are more robust in the face of repeated or non-random IVs;
e.g., CBC mode is more robust than CTR mode.

Is obtaining proper randomness hard on your platform?  On most
desktop/server platforms, it is easy: just read from /dev/urandom.
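On such platforms this amounts to one call (Python shown; `os.urandom`
draws from the same kernel pool as /dev/urandom on Linux):

```python
import os

# draw a 128-bit key from the OS CSPRNG (backed by /dev/urandom on Linux)
key = os.urandom(16)
```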

If it's hard, there are various hardening schemes you can use to reduce
your dependence upon randomness.  I don't know whether they're worth the
effort/complexity/performance loss; that depends upon your usage scenario.
For instance, one cute hardening step you can do is to pick a separate
secret key for a PRF (e.g., CMAC), and then hash a random value together
with the message itself to obtain the IV for the encryption mode.  e.g.,
to encrypt message M:

  Encrypt(M):
  1. IV = CMAC(K0, random || M)
  2. C = AES-CBC-Encrypt(K1, IV, M)
  3. T = CMAC(K2, IV || C)
  4. return IV || C || T

(If you don't like storing 3 separate keys, use standard key separation
techniques: K0 = CMAC(K, 0), K1 = CMAC(K, 1), K2 = CMAC(K, 2).)

Of course, this hardening scheme does have a performance impact.
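A minimal sketch of this hardening scheme in Python's standard library.
Since neither AES nor CMAC is in the stdlib, HMAC-SHA256 stands in for
the CMAC PRF and an HMAC-based counter-mode keystream stands in for
AES-CBC; all function names are illustrative, and the structure (steps
1-4 above) is what matters, not these particular primitives:

```python
import hmac, hashlib, os

def prf(key, data):
    # HMAC-SHA256 as the PRF (standing in for CMAC, which isn't in the stdlib)
    return hmac.new(key, data, hashlib.sha256).digest()

def derive_keys(master):
    # key separation: one PRF call per subkey, mirroring K_i = CMAC(K, i)
    return prf(master, b"\x00"), prf(master, b"\x01"), prf(master, b"\x02")

def keystream_xor(key, iv, data):
    # HMAC in counter mode as a stand-in stream cipher (replaces AES-CBC here)
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += prf(key, iv + counter.to_bytes(4, "big"))
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

def encrypt(master, msg):
    k0, k1, k2 = derive_keys(master)
    iv = prf(k0, os.urandom(16) + msg)[:16]   # 1. IV = PRF(K0, random || M)
    c = keystream_xor(k1, iv, msg)            # 2. C  = Enc(K1, IV, M)
    t = prf(k2, iv + c)                       # 3. T  = MAC(K2, IV || C)
    return iv + c + t                         # 4. IV || C || T

def decrypt(master, blob):
    k0, k1, k2 = derive_keys(master)
    iv, c, t = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(t, prf(k2, iv + c)):
        raise ValueError("authentication failed")
    return keystream_xor(k1, iv, c)
```

Note how the IV stays unpredictable even if os.urandom repeats: the
message itself is mixed into the PRF input, which is the whole point
of the hardening step.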

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A Fault Attack Construction Based On Rijmen's Chosen-Text Relations Attack

2010-07-22 Thread David Wagner
Alfonso De Gregorio wrote:
 The last Thursday, Vincent Rijmen announced a new clever attack on   
 AES (and KASUMI) in a report posted to the Cryptology ePrint   
 Archive: Practical-Titled Attack on AES-128 Using Chosen-Text   
 Relations, http://eprint.iacr.org/2010/337

Jonathan Katz wrote:
 Err...I read that paper by Rijmen as a bit of a joke. I think he was
 poking fun at some of these unrealistic attack models.

Alfonso De Gregorio wrote:
 Now, I expect the unusual nature of the attack model might stir up a  
 lively discussion. My post was soliciting comments in this regard.

For what it's worth, I read Vincent Rijmen's paper in the same way as
Jonathan Katz.  I don't think it's intended to be taken at face value;
if you took it seriously, one of us needs to read it again.  Rather,
I saw it as written with tongue embedded firmly in cheek: I took it as
a serious argument, hidden behind some gentle humor.

Vincent Rijmen could have written a sober, systematic critique of the
direction some of the field has gone in, carefully explaining in great
detail why some recent attack models are unrealistic.  That would have
been the safe, standard, and somewhat boring way to present such an
argument.  But instead Rijmen wrote a one-page lighthearted piece that
implicitly makes its point -- without ever having to come out and say it
-- by taking this research direction to its absurd extreme and showing
us all where it leads.  It follows in a long intellectual tradition
of saying the opposite of what you mean -- of arguing with a straight
face what is self-evidently a ridiculous position -- and trusting in
the intelligence of the reader to draw the obvious conclusions.

Personally, I found it an effective communication style.  I thought the
point came across very clearly.  And, I have to admit I enjoyed seeing
someone having a spot of fun with what can otherwise be a somewhat dry
topic.  I thought it was brilliantly done.

Sorry to be unable to provide any lively discussion.  I think Vincent
Rijmen's paper makes the point well, and I don't have anything to add.



Re: Question w.r.t. AES-CBC IV

2010-07-10 Thread David Wagner
Jerry Leichter  wrote:
 CTR mode is dangerous unless you're also doing message authentication,  

Nitpick:

That's true of CBC mode, too, and almost any other encryption mode.
Encryption without authentication is dangerous; if you need to encrypt,
you almost always need message authentication as well.

(I will agree that CTR mode encryption without message authentication
is often even more dangerous than CBC mode encryption without message
authentication, but usually neither is a good idea.)

Setting that minor nitpick aside, the discussion here seems like good
advice.



Re: Possibly questionable security decisions in DNS root management

2009-10-22 Thread David Wagner
Florian Weimer  wrote:
 And you better randomize some bits covered by RRSIGs on DS RRsets.
 Directly signing data supplied by non-trusted source is quite risky.
 (It turns out that the current signing schemes have not been designed
 for this type of application, but the general crypto community is very
 slow at realizing this discrepancy.)

Could you elaborate?  I'm not sure what you're referring to or why it
would be quite risky to sign unrandomized messages.  Modern, well-designed
signature schemes are designed to resist chosen-message attack.  They do
not require the user of the signature scheme to randomize the messages
to be signed.  I'm not sure what discrepancy you're referring to.

Back to DNSSEC: The original criticism was that DNSSEC has covert
channels.  So what?  If you're connected to the Internet, covert
channels are a fact of life, DNSSEC or no.  The added risk due to any
covert channels that DNSSEC may enable is somewhere between negligible
and none, as far as I can tell.  So I don't understand that criticism.



Re: Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-17 Thread David Wagner
Kevin W. Wall wrote:
 So given these limited choices, what are the best options to the
 questions I posed in my original post yesterday?

Given these choices, I'd suggest that you first encrypt with AES-CBC mode.
Then apply a message authentication code (MAC) to the whole ciphertext
(including the IV).  You then send the ciphertext followed by the MAC digest.

SHA1-HMAC would be a reasonable choice of algorithm for message
authentication.  Sun's JCA appears to support SHA1-HMAC.

http://java.sun.com/javase/6/docs/technotes/guides/security/crypto/CryptoSpec.html#Mac
http://java.sun.com/javase/6/docs/technotes/guides/security/StandardNames.html#Mac

You'll want to use key separation to derive two separate keys.  So
if the key K is the master key, you could use

Kenc  = SHA1-HMAC(K, "encrypt")
Kauth = SHA1-HMAC(K, "authenticate")

or you could use

Kenc  = AES-ECB(K, all-zeros)
Kauth = AES-ECB(K, all-ones)

(Either is fine.)  Then use Kenc as the crypto key for AES-CBC encryption
and Kauth as the crypto key for SHA1-HMAC authentication.

If you are encrypting messages that will be sent over a two-way channel,
you'll probably want to either use a different crypto key for each
direction or include a direction bit in the inputs to the key separation
step.
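A minimal sketch of that key-separation step, using stdlib HMAC-SHA1 as
the PRF as suggested above; the label strings (including the direction
component) are illustrative, not a fixed convention:

```python
import hmac, hashlib

def subkey(master, label):
    # derive an independent subkey from the master key via a PRF
    return hmac.new(master, label, hashlib.sha1).digest()

master = bytes(16)  # placeholder master key

# one subkey per purpose and per direction
k_enc_c2s  = subkey(master, b"encrypt|client->server")
k_enc_s2c  = subkey(master, b"encrypt|server->client")
k_auth_c2s = subkey(master, b"authenticate|client->server")
k_auth_s2c = subkey(master, b"authenticate|server->client")
```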



Re: Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-16 Thread David Wagner
Advice: if you're creating something for general-purpose use, at a minimum
make sure it provides authentication, integrity, *and* confidentiality.
A reasonable choice might be Encrypt-then-Authenticate, where you first
encrypt with AES-CBC, then append an AES-CMAC message authentication
code on the ciphertext (including the IV).  As a bonus, this will detect
decrypting with the wrong key.

Use key separation -- with a pseudorandom function -- to derive the
encryption key and authentication key from the master crypto key
supplied by the user: e.g., Encryptkey = AES(K, all-zeros), Authkey =
AES(K, all-ones).

(You could replace AES-CMAC with SHA1-HMAC, but why would you want to?)

(From a cryptotheory point of view, what you want is a mode of operation
that meets IND-CCA2 /\ INT-CTXT, or at least IND-CCA2 /\ INT-PTXT.)

Advice: Provide one mode, and make it secure.  Try to avoid
configurability, because then someone will choose a poor configuration.

Suggestion: Provide documentation to warn the programmer against using a
password (or something based on a password) as the crypto key.  Provide
support for PBKDF2, with a reasonable default choice of parameters and
appropriate warnings in the documentation, for those who absolutely must
use a password-derived crypto key.

Recommendation: Read the book Practical Cryptography by Ferguson and
Schneier, or at least the chapters on message security.  It's the best
source I know to teach you some of the crypto-engineering practicum.




Kevin W. Wall wrote:
I have considered using an HMAC-SHA1 as a keyless MIC to do this,
using something like:

   MIC = HMAC-SHA1( nonce, IV + secretKey )
or
   MIC = HMAC-SHA1( nonce, IV + plaintext )

and then also include the random nonce and the MIC itself in the CipherText
class so it can be validated later by the decrypter, with the understanding
that the plaintext resulting from the decryption operation should only be
used if the MIC can be properly validated. (Probably an exception would
be thrown if the dsig was not valid so there would be no choice.)

I would not recommend either of these.  It's better to use a MAC as I
suggest at the top of this email.

Whatever you do, don't choose your second MIC option: if the plaintext
comes from a low-entropy space, it leaks the value of the plaintext
(the plaintext can be recovered by brute-force search over all possible
plaintexts).
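That brute-force recovery can be sketched in a few lines of Python; the
4-digit PIN space is an illustrative worst case for a low-entropy
plaintext:

```python
import hmac, hashlib, os

# keyless MIC over a low-entropy plaintext (the second option above)
nonce, iv = os.urandom(16), os.urandom(16)
secret = b"PIN:4821"
mic = hmac.new(nonce, iv + secret, hashlib.sha1).digest()

# an eavesdropper who sees nonce, IV, and MIC just tries every plaintext
recovered = next(
    p for p in (b"PIN:%04d" % i for i in range(10000))
    if hmac.compare_digest(hmac.new(nonce, iv + p, hashlib.sha1).digest(), mic))
```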



Re: Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-16 Thread David Wagner
I don't exactly follow the argument for using CCM mode instead of
AES-CBC encryption followed by AES-CMAC, and I'm not familiar with
the political/perception arguments (who complains about the latter?),
but whatever.  It's hardly worth arguing over.  The cryptographic mode
of operation is unlikely to be the weakest link in your system, and the
security differences between CCM mode vs AES-CBC + AES-CMAC seem minor,
so it doesn't seem worth worrying too much about it: CCM mode seems good
enough.  I'm not sure I'm familiar with the arguments against EAX mode
(full disclosure: I'm a co-author on the EAX paper and hence probably
biased), but again, whatever.  These three choices are all good enough and
the security differences between them seem minor.  In my view, choosing
any of the three would be a reasonable choice.  Just my personal opinion.


ObNitpick:

Joseph Ashwood wrote:
 Since you already have CBC available, my first suggestion would be CBC-MAC 
 (IV = 0x000, pkcs5 padding works fine, MAC = final block of ciphertext), 
 it has good strong security proofs behind it, and is fast. [...]

Are you sure?  For vanilla CBC-MAC, the security proofs don't apply to
variable-length messages, and I recall that there are known attacks on
vanilla CBC-MAC when message lengths can vary (I'm not claiming those
attacks are necessarily realistic in all applications, but they may be).
AES-CMAC is a nice design that addresses this problem.  CMAC is based
upon CBC-MAC, but addresses the imperfections of vanilla CBC-MAC.

Personally, I wouldn't recommend vanilla CBC-MAC as a choice of message
authentication primitive; CMAC seems better in every dimension.  CMAC is
basically a CBC-MAC, but with all the details done right.  CMAC also
has the benefit that it has been standardized by NIST.

http://en.wikipedia.org/wiki/CMAC
http://csrc.nist.gov/publications/nistpubs/800-38B/SP_800-38B.pdf 
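The classic variable-length attack on vanilla CBC-MAC is a splicing
forgery, sketched below. A toy 16-byte keyed function stands in for AES
(which isn't in the Python stdlib); the algebra is what matters: the tag
of the spliced two-block message m1 || (m2 xor t1) equals the known tag
of m2, so the attacker forges a valid tag without the key.

```python
import hmac, hashlib

BS = 16

def E(key, block):
    # toy 16-byte keyed block function standing in for AES
    return hmac.new(key, block, hashlib.sha256).digest()[:BS]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(key, msg):
    # vanilla CBC-MAC with zero IV; msg length must be a multiple of BS
    state = bytes(BS)
    for i in range(0, len(msg), BS):
        state = E(key, xor(state, msg[i:i+BS]))
    return state

key = b"0123456789abcdef"
m1, m2 = b"A" * BS, b"B" * BS
t1, t2 = cbc_mac(key, m1), cbc_mac(key, m2)   # two known (message, tag) pairs

# splice: state after block 1 is t1, so block 2 input is t1 ^ (m2 ^ t1) = m2,
# and the final state is E(m2) = t2 -- a forgery computed without the key
forged = m1 + xor(m2, t1)
```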

Bottom line: If you're going to use a standalone CBC-based MAC together
with a standalone encryption algorithm, I'd recommend using CMAC as your
message authentication code, not vanilla CBC-MAC.



Re: brute force physics Was: cleversafe...

2009-08-12 Thread David Wagner
Alexander Klimov  wrote:
 A problem with this reasoning is that the physical world and the usual
 digital computers have exponential simulation gap (it is known at
 least in one direction: to simulate N entangled particles on a digital
 computer one needs computations exponential in N). This can be
 considered as a reason to suspect that physical world is
 non-polynomially faster than the usual computers (probably even to an
 extent that all NP computations can be simulated).

It is a commonly-held myth that quantum computers could solve
NP-complete problems, but most experts who have studied the issue
believe this is probably not the case.  There is no reason to think
that quantum computation could solve all NP problems in polynomial
time, and in fact, there are good reasons to believe this is
likely not the case.

(Do note that factoring is not NP-complete.)

See, e.g.

http://en.wikipedia.org/wiki/Quantum_computing#Relation_to_computational_complexity_theory

or for a more nuanced and deep treatment of these issues,

http://www.scottaaronson.com/papers/npcomplete.pdf



Re: work factor calculation for brute-forcing crypto

2009-07-19 Thread David Wagner
 Assume for a moment that we have a random number generator which is
 non-uniform, and we are using it to generate a key.

 What I'd like to do is characterize the work factor involved in
 brute-force search of the key space, assuming that the adversary
 has knowledge of the characteristics of the random number generator?

You may want to use the guessing entropy.
I've written about it before here:

http://www.cs.berkeley.edu/~daw/my-posts/entropy-measures 

Christian Cachin's thesis is a wonderful introduction to entropy
measures, including the guessing entropy.
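As a minimal sketch: the guessing entropy is the expected number of
guesses an optimal attacker needs, i.e. the sum of i * p_(i) with the
probabilities sorted in decreasing order. The two distributions below
are illustrative:

```python
def guessing_entropy(probs):
    # expected number of guesses under the optimal (most-likely-first)
    # strategy: sum of i * p_(i), probabilities sorted decreasing
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

uniform = [0.25] * 4            # 2.5 guesses on average
skewed  = [0.7, 0.1, 0.1, 0.1]  # 1.6: easier to guess despite same support
```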

By the way, I'll make the obvious recommendation: Try to avoid
using a non-uniform random number generator to generate a cryptographic
key, if you can.



Re: Activation protocol for tracking devices

2009-03-04 Thread David Wagner
Santiago Aguiar  wrote:
 As I wrote in my last email, in Brazil they are devising a protocol to 
 activate tracking/blocking devices to be installed from factory in 
 *every* vehicle, starting progressively from august 2009. The idea is 
 that a service operator (SO) can activate a device to work with it, by 
 first asking a centralized agency, the DENATRAN (department of transit), 
 that must authorize the activation request. Once activated, the device 
 keeps in that state until the SO deactivates it or until DENATRAN 
 reconfigures the device SIM card remotely to change its IMSI to a special 
 network operated by DENATRAN.

This does sound like it introduces novel risks.  I would suggest that
rather than spending too much energy on the cryptomath, it would make
sense to focus energy on the systems issues and the security requirements.

1) Is the system really intended to allow a single government agency
to deactivate a car, without permission from the owner of that car?
If so, that creates systematic risks that should be examined carefully.
Is there any chance of revising the security requirements, so that consent
of the owner is required?  Good requirements engineering may be able to
make as big a difference as any amount of crypto.

2) Strong audit logs would appear to be important.  In particular, here
are a few ideas.  One might require that anytime a car is deactivated,
a postcard is sent to the owner of that car letting them know of the
deactivation and who authorized it.  One could also require that an audit
log be kept of every deactivation event and who precisely authorized it,
and mandate that the owner of a car has the right to a copy of the audit
log for their own car at any point, without delay.

3) You might consider advocating an opt-out policy, where car owners
can turn off the functionality that allows deactivation of their car
without their permission, and/or turn off the tracking functionality.

4) You might want to ask about what protects the location privacy of
car operators.  Does this system provide a third party with the power
to track the movements of cars around the country?  That sounds like a
serious privacy risk to me.  What controls are there to protect privacy
and guard against surveillance or government abuse of power?

5) I would think that another possible security concern may be social
engineering: if DENATRAN has the power and is authorized to deactivate
cars, one tempting method to maliciously deactivate someone's car might
be to convince DENATRAN to deactivate it.  How will that be prevented?
What are the procedures that DENATRAN will follow before deactivating
a car?  Are these required by law or regulation?

6) Are there penalties for inadvertent, incorrect, or unauthorized
deactivation of a car?  One possibility might be to require that the
agency or the business pay a fee to the owner of the car if the owner's
car is improperly deactivated.  That might then put the onus of securing
the infrastructure on the folks who can do something about it.



Re: Cube cryptanalysis?

2008-08-21 Thread David Wagner
Steve Bellovin writes:
Greg, assorted folks noted, way back when, that Skipjack looked a lot
like a stream cipher.  Might it be vulnerable?

I'm still absorbing Adi's new ideas, and I haven't looked at this in any
detail, so anything I say should be taken with an enormous grain of salt.
But, off-hand, I'd guess not.  I don't see anything that immediately
makes me worried about Skipjack, or AES for that matter.

In its most basic form, Adi Shamir's cube attack applies when some bit of
the output of the stream cipher (or block cipher, etc.) can be written as
a polynomial of the key and input such that the degree of the polynomial
is not too large.  One major innovation is that the attack applies even
if the number of terms in the polynomial is enormous -- say, way too
many to explicitly write down the polynomial.  When the degree is not
too large, Adi showed some powerful techniques for recovering the key.

Adi pointed out that this might be especially relevant to many LFSR-based
stream ciphers.  The reason is that many LFSR-based stream ciphers use a
non-linear filter function of low degree.  Often, the key loading process
also has relatively low degree.  The LFSR itself is linear and hence does
not increase the degree.  The attack only seems directly relevant to a
subset of stream cipher architectures -- for instance, Adi mentioned that
he does not know how to apply it to some clock-controlled LFSR-based
stream ciphers, such as A5/1 -- but the class of stream ciphers it
applies to is an important and common class of stream ciphers.

In contrast, I don't expect this to threaten most modern block ciphers.
Most block ciphers contain enough rounds, and enough non-algebraic
structure in each round, to ensure that the degree of the resulting
polynomial will be large, so in those cases the attack does not seem
applicable.  But of course I could well be missing something, and it's
always possible there could be further advances.

It's a brilliant piece of research.  If you weren't at CRYPTO, you missed
an outstanding talk (and this wasn't the only one!).



Re: Looking through a modulo operation

2008-07-22 Thread David Wagner
Matt Ball writes:
Another attacking avenue is the 32-bit initial seed.  If the
implementation re-seeds frequently, or leaks to you the first outputs
after initialization, then you only have to brute-force the 32-bit
seed space, times the number of samples since reseeding.

Well, that's good and broken then, isn't it?  No ifs about it.
It doesn't matter whether the implementation re-seeds frequently or
rarely.  It doesn't matter whether it leaks to you the first outputs
after initialization or only later ones.  If it's using a 32-bit seed,
this sucker is just plain broken, period.

The attack is trivial -- it's just exhaustive keysearch.

Attack: Given ~ 3 known outputs from the RNG, try all possible 32-bit
seeds.  You'll be able to recognize when your guess at the seed is correct
because the output from your guessed seed will match the observed output.
This assumes you know the offsets of your outputs (how many times the
RNG has been cranked before producing your known outputs), but even
if you only have partial information about that you can still make a
variant of this attack work.  The exhaustive search attack has to try
only 2^32 possibilities, so I'm guessing this attack should only take
minutes (at worst, hours) on a fast computer.
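The attack can be sketched with a toy generator standing in for the real
one; the LCG constants and the deliberately tiny seed space below are
illustrative, not the generator under discussion:

```python
def toy_prng(seed, n):
    # stand-in generator with a deliberately tiny seed space; the LCG
    # constants are illustrative, not the generator under discussion
    s, out = seed, []
    for _ in range(n):
        s = (69069 * s + 12345) % 2**32
        out.append(s >> 16)
    return out

observed = toy_prng(99999, 3)     # the attacker's ~3 known outputs

# exhaustive search over the whole (here 2**17) seed space; a guess is
# recognized as correct when its outputs match the observed outputs
recovered = next(s for s in range(2**17) if toy_prng(s, 3) == observed)
```

The same loop over 2^32 seeds is the real attack; it is embarrassingly
parallel and well within reach of a single fast machine.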

My earlier email about fancy attacks on this scheme is irrelevant.
There's no need to bother with fancy attacks, when the PRNG only uses
a 32-bit seed -- that's just blatantly insecure.  This is a textbook
error, one of the oldest mistakes in the book.  (For example, look
here:  http://www.cs.berkeley.edu/~daw/papers/ddj-netscape.html)

Using this PRNG for security-critical purposes would not be wise.




I'll include your code snippet for seeding the PRNG below.  Note: I'm
assuming, per your comments, that unsigned long is a 32-bit type.

Here is the function that performs the initial seeding, based on a 32-bit seed:

static void __set_random32(struct rnd_state *state, unsigned long s)
{
        if (s == 0)
                s = 1;  /* default seed is 1 */
#define LCG(n) (69069 * n)
        state->s1 = LCG(s);
        state->s2 = LCG(state->s1);
        state->s3 = LCG(state->s2);
        /* warm it up */
        __random32(state);
        __random32(state);
        __random32(state);
        __random32(state);
        __random32(state);
        __random32(state);
}



Re: Looking through a modulo operation

2008-07-21 Thread David Wagner
Florian Weimer writes:
 I've got a function f : S -> X x S where S = (Z/2Z)**96 and
 X = (Z/2Z)**32.  Suppose that s_0 is fixed and (x_i, s_i) = f(s_{i-1}).
 (f implements a PRNG.  The s_i are subsequent internal states and the
 x_i are results.)

 Now f happens to be linear.  I know the values of x_i, x_{i+1}, ...,
 x_{i+k} modulo N, [where N = 28233].
 Is it possible to recover s_i with reasonable effort (better than brute
 force, and k should be in the hundreds, not thousands)?

Yes.  I will show two attacks.  The results: Attack #1 is a
straightforward improvement to exhaustive search, and takes 2^52 work (~
10-20 CPU-years?).  Attack #2 is a more sophisticated meet-in-the-middle
attack, and takes 2^37 work and 2^35 space; the best instantiation I've
found looks like it would involve 16GB of RAM and a few days or weeks
of computation.  Both attacks recover the entire state of the PRNG,
given only a handful of known outputs.  It's possible there may be
other, better attacks.

My conclusion is that this PRNG is not cryptographically strong and
should not be used anywhere that you may face an adversary who is
motivated to break the PRNG.  In short, this PRNG is broken.


Attack #1: Given x mod N, there are only about 2^32/28233 = 2^17.2
possibilities for x, and it's easy to enumerate them all.  Suppose we
know x_1 mod N, x_2 mod N, ..., x_7 mod N.  Enumerate all
(2^17.2)^3 = 2^51.6 possibilities for x_1, x_2, and x_3.  For each
such possibility, you know 96 bits of output from a linear system,
and there are 96 unknowns, so you can solve for s_0.  Given s_0,
you can compute the predicted value of x_4, .., x_7 and compare them
to the observed values of x_4 mod N, etc.  If everything matches,
you've found the original seed s_0 and broken the PRNG.
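The candidate enumeration at the heart of Attack #1 is one line of
Python (the residue 12345 is an arbitrary example):

```python
N = 28233

def candidates(residue):
    # every 32-bit value congruent to residue mod N:
    # about 2**32 / N, i.e. roughly 2**17.2, candidates
    return range(residue % N, 2**32, N)

num = len(candidates(12345))   # 152,126 candidates for this residue
```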

This requires trying 2^51.6 possibilities, so it's a bit less than
2^51.6 work.  That's already a small enough number that the system
is not cryptographically secure.

This can be sped up slightly by noting that x_4 is a linear function
of x_1,x_2,x_3.  We can precompute the form of that linear function
and express it as a 32x96 matrix.  The best approach I can see will take
something like 500 CPU cycles to compute x_4 from x_1,..,x_3, so I'd
expect the total number of CPU cycles needed at something like 2^60.6
or so.  On a 4GHz machine, that's like 13 CPU-years.  Of course it is
trivially parallelizable.


Attack #2: If we were given inferred values of x_1, x_2, x_3, and x_4,
we could check them for consistency, since they represent 128
equations in 96 unknowns.  In particular, we can find a linear function
g from 128 bits to 32 bits such that g(x_1,x_2,x_3,x_4) = 0 if
x_1,..,x_4 are consistent with having been produced by this PRNG.
In fact, this can be broken down further, so that if x_1,..,x_4
are consistent, they will satisfy this equation:
  h(x_1,x_2) = h'(x_3,x_4).
Call this the check equation.  Here h,h' are linear functions from 64
bits to 32 bits, so if x_1,..,x_4 are not consistent, they have only a
2^-32 chance of satisfying this check equation.

Suppose we are given x_1 mod N, .., x_8 mod N.  We'll enumerate all
2^34.4 possibilities for x_1,x_2, compute h(x_1,x_2) for each, and
store them in a hash table or sorted list keyed on the 32-bit value
of h(x_1,x_2).  Once that table is built, we'll enumerate all 2^34.4
possibilities for x_3,x_4, compute h'(x_3,x_4) for each, and find
all occurrences in the table where
  h(x_1,x_2) = h'(x_3,x_4).
On average, we expect to find about 2^34.4/2^32 = 2^2.4 matches.
For each such match, compute the predicted value of x_5 as a function
of x_1,..,x_4 and see whether it agrees with the observed value of
x_5 mod N, etc., to weed out false matches.  In total, you'll need
to explore 2^34.4 + 2^34.4*2^2.4 ~= 2^37 possibilities, and you'll
need space for 2^34.4 entries in the table.
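The table-building pattern behind Attack #2 can be sketched at toy
scale. The small modular functions below are illustrative stand-ins for
the linear check functions h, h'; the point is that tabulating one side
finds every match without scanning all |A| * |B| pairs:

```python
# toy meet-in-the-middle: find every (a, b) with h(a) == h2(b)
def h(a):
    return (7 * a) % 101        # illustrative stand-ins for the

def h2(b):
    return (3 * b + 5) % 101    # linear check functions h, h'

A, B = range(300), range(300)

table = {}
for a in A:                     # build phase: index the left side by h(a)
    table.setdefault(h(a), []).append(a)

# probe phase: one lookup per b, instead of one comparison per (a, b) pair
matches = [(a, b) for b in B for a in table.get(h2(b), [])]
```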

We can store the table as a sorted list.  This list will take up about
1600 GB, and you could buy enough hard disks for that at ~ $2000, say.
You can add an index in memory that tells you, for each value of
h(x_1,x_2), which block or sector on disk those entries of the list is
found at.  You'll need to make 2^34.4 random seeks into this hard disk
array.  With good disk head scheduling algorithms you might be able to
get the seek time down a bit, but even optimistically we're probably
still talking about a year of computation.

Fortunately, we can trade time for space.  With 16GB of RAM, we can
store 1/128th of the list: namely, all values where the low 7 bits
of h(x_1,x_2) are some fixed value V.  We cycle through all choices
of V, repeating the attack once for each V.  With this choice, I expect
the amount of time spent waiting for RAM to be comparable to the CPU
cost of the attack.  For each V, we have to evaluate a linear function
2^35.4 times.  It looks to me like this will take a few CPU-days.


Disclaimers: I have not checked any of the above analysis.  I have
not implemented it to test whether these attacks actually work.  Even
if the 

User interface, security, and simplicity

2008-05-06 Thread David Wagner

In article [EMAIL PROTECTED] you write:
On Sun, May 04, 2008 at 10:24:13PM -0400, Thor Lancelot Simon wrote:
 I believe that those who supply security products have a responsibility
 to consider the knowledge, experience, and tendencies of their likely
 users to the greatest extent to which they're able, and supply products
 which will function properly _as users are likely to apply them_.

The TLS support in Postfix tries to behave sensibly with easy settings.

- Cipher list selection is indirect, via grades: "export", "low",
"medium" and "high". The actual ciphers for each grade are buried
in parameters users are advised to not mess with.

- The cipher grade for opportunistic TLS is "export", but if you single
out a destination for mandatory TLS, the grade rises to "medium".

[..]

- With the upcoming EECDH support, users don't choose curves
directly, they again choose a security grade, and the corresponding
curves are configurable via parameters they are not expected to
ever look at or modify.

This struck me as poor design, not good design.  Asking the user to
make these kinds of choices seems like the kind of thing that only a
cryptographer could consider sensible.  In this day and age, software
should not be asking users to choose ciphers.  Rather, the software
should just pick a sensible high-grade security level (e.g., AES-128,
RSA-1024 or RSA-2048) and go with that, and avoid bothering the user.
Why even offer "low" as an option?  (And this "export" business sounds
like a throwback to a decade ago; why is that still there?)

Good crypto is cheap.  Asking a user is expensive and risky.

So I think there should be a broad design bias towards *implicit* correct
behaviour in all system features, with rope available for advanced users
to *explicitly* craft more complex use-cases. Once you have that, practical
security is not too difficult.

Amen.  I know of quite a few software packages that could use more of
that philosophy.

The same is true in the source code, unsafe practices are avoided
globally, (e.g. both strcpy() and strncpy() are absent together with fixed
size automatic buffers) rather than used with care locally. I won't bore
you with all the implementation safety habits, but there are many.

It's too bad that today such elementary practices are something to brag
about.  Perhaps one day we'll be lucky enough that the answer to these
questions becomes more like "of course we use safe programming practices;
what kind of incompetent amateurs do you take us for?".

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Toshiba shows 2Mbps hardware RNG

2008-02-14 Thread David Wagner
Crawford Nathan-HMGT87 writes:
One of the problems with the Linux random number generator
is that it happens to be quite slow, especially if you need a lot of
data.

/dev/urandom is blindingly fast.  For most applications, that's
all you need.

(Of course there are many Linux applications that use /dev/random
simply because they don't know any better, but that's a pretty weak
argument for a fast hardware RNG.)

A fast hardware RNG could be useful but I'm not convinced high
speed matters all that much for most applications.  Grab 128 bits
from a hardware RNG, feed it through AES-CTR to generate an unending
stream of pseudorandom bits -- that's good enough for most applications.
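A minimal sketch of that construction (SHA-256 stands in for AES here only because Python's standard library ships no block cipher; the counter-mode shape is the same):

```python
import hashlib
import os

def ctr_prg(seed: bytes):
    # Counter-mode PRG: "encrypt" an incrementing counter under a short
    # truly-random seed.  SHA-256(seed || counter) stands in for
    # AES-CTR, since the standard library has no AES.
    counter = 0
    while True:
        block = hashlib.sha256(seed + counter.to_bytes(16, "big")).digest()
        counter += 1
        yield from block

# 128 bits from the OS (standing in for a hardware RNG), then an
# unending pseudorandom stream:
prg = ctr_prg(os.urandom(16))
stream = bytes(next(prg) for _ in range(32))
assert len(stream) == 32
```

Same seed, same stream; a fresh 128-bit seed gives an independent-looking stream, which is all most applications need.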

(Yes, I know there are exceptions where pseudorandomness is not
enough.  But even the exceptions rarely need true random numbers at
a rate of several Mbps.)



Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-09 Thread David Wagner
Tim Dierks writes:
(there are totally different reasons that client certs aren't being
widely adopted, but that's beside the point).

I'd be interested in hearing your take on why SSL client certs aren't
widely adopted.  It seems like they could potentially help with the
phishing problem (at least, the problem of theft of web authenticators
-- it obviously won't help with theft of SSNs).  If users don't know
the authentication secret, they can't reveal it.  The nice thing about
using client certs instead of passwords is that users don't know the
private key -- only the browser knows the secret key.

The standard concerns I've heard are: (a) SSL client certs aren't
supported very well by some browsers; (b) this doesn't handle the
mobility problem, where the user wants to log in from multiple different
browsers.  So you'd need a different mechanism for initially registering
the user's browser.

By the way, it seems like one thing that might help with client certs
is if they were treated a bit like cookies.  Today, a website can set
a cookie in your browser, and that cookie will be returned every time
you later visit that website.  This all happens automatically.  Imagine
if a website could instruct your browser to transparently generate a
public/private keypair for use with that website only and send the
public key to that website.  Then, any time that the user returns to
that website, the browser would automatically use that private key to
authenticate itself.  For instance, boa.com might instruct my browser
to create one private key for use with *.boa.com; later,
citibank.com could instruct my browser to create a private key for
use with *.citibank.com.  By associating the private key with a specific
DNS domain (just as cookies are), this means that the privacy
implications of client authentication would be comparable to the
privacy implications of cookies.  Also, in this scheme, there wouldn't
be any need to have your public key signed by a CA; the site only needs
to know your public key (e.g., your browser could send self-signed
certs), which eliminates the dependence upon the third-party CAs.
Any thoughts on this?



More info in my AES128-CBC question

2007-04-22 Thread David Wagner
Hagai Bar-El writes:
What Aram wrote is "many of the attendees have very little security
experience", not: "there are no attendees with security experience".
There are people at the relevant OMA group who know enough about
security, but just like in the real world -- they are outnumbered by
plain feature-set people, and thus have to come up with very clear
arguments to get their way.

So the people who don't know anything about security are reluctant to
listen to those who do?  That's not a good sign.  It may be standard
operating procedure in groups like this, but that doesn't make it right.
It's still dysfunctional and dangerous.  If the committee doesn't have
a commitment to security and is reluctant to listen to the experts,
that's a risk factor.

If you're sick and you go to a doctor, do you tell the doctor "you'd
better come up with some very clear arguments if you want me to follow
your advice"?  Do you tell your doctor "you'd better build a strong case
before I will listen to you"?  I would hope not.  That would be silly.
Doctors are medical professionals with a great deal of training and
expertise in the subject.  They can speak with authority when it comes
to your health.  So why do people with no training in security think
that they can freely ignore the advice of security professionals without
any negative consequences?


I do not know the protocol in question, but in a nutshell: Generally,
CBC with a fixed IV (be it zero or any other value) is to be avoided for
the reason described in previous posts. In some circumstances this
restriction may be relaxed, such as:

(1) if the first unknown (to the attacker) block _always_ follows (not
necessarily immediately) a session-specific block (a block that is not
likely to repeat for the same key, such as a message-id). For example,
if every encrypted structure starts with an id that never repeats among
structures, and all security-wise meaningful blocks follow it, you are
_probably_ safe.

(2) if the key is never re-used among structures you encrypt.

Right.

AND (3) If you don't care about replacement attacks on the (1 to i)
blocks that will result only in a (possibly-undetected) corruption when
decrypting the i+1 block (rather than two blocks, with a varying and
non-attacker-changeable). [...]

Wait a minute.  This reference to replacement attacks has me concerned.  
Does the protocol use a message authentication code (MAC)?  I hope so.
If your protocol uses a MAC, and uses it properly, then replacement
attacks are not an issue, and the only issue with using a fixed IV is 
related to confidentiality.  If you don't use a MAC, you've got bigger
problems, and even random IVs won't be enough.
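The confidentiality problem with a fixed IV is easy to demonstrate concretely.  In this sketch a toy, non-invertible block function stands in for AES (only the encryption direction matters for the point):

```python
import hashlib

def toy_block(key: bytes, block: bytes) -> bytes:
    # 16-byte toy PRF standing in for AES (not invertible, so this
    # can't decrypt -- it only shows what the ciphertext reveals).
    return hashlib.sha256(key + block).digest()[:16]

def cbc_encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    assert len(pt) % 16 == 0
    ct, prev = b"", iv
    for i in range(0, len(pt), 16):
        block = bytes(a ^ b for a, b in zip(pt[i:i+16], prev))
        prev = toy_block(key, block)
        ct += prev
    return ct

key, fixed_iv = b"k" * 16, b"\x00" * 16
c1 = cbc_encrypt(key, fixed_iv, b"TRANSFER $100 TO" + b"ACCT 1111111111 ")
c2 = cbc_encrypt(key, fixed_iv, b"TRANSFER $100 TO" + b"ACCT 2222222222 ")
# Equal plaintext prefixes give equal ciphertext prefixes under a
# fixed IV, so an eavesdropper learns the messages agree on block 1:
assert c1[:16] == c2[:16] and c1[16:] != c2[16:]
```

With a fresh random IV per message, even identical plaintexts produce unrelated ciphertexts, which is exactly the property a fixed IV gives up.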



Re: analysis and implementation of LRW

2007-01-24 Thread David Wagner
Thanks to everyone who responded with more information about IEEE
P1619.  Here are some of the additional links, with my reactions:

Andrea Pasquinucci points to:
 http://en.wikipedia.org/wiki/IEEE_P1619#LRW_issue

Ben Laurie points to:
 http://grouper.ieee.org/groups/1619/email/msg00558.html

Wikipedia points to two concerns with LRW: (1) LRW isn't secure if you use
it to encrypt part of the key; (2) something having to do with collisions.
For these reasons, Wikipedia says that IEEE P1619 is moving to XEX-AES.

I think (1) is a valid concern and a legitimate reason for IEEE P1619
to move to another mode.  XEX-AES is a great mode and this seems like a
solid move for IEEE P1619.  XEX-AES rests on solid foundations, and there
are good grounds for confidence in its design.  I would add one caveat,
though.  I am not aware of any proof that XEX-AES -- or any other mode,
for that matter -- is secure when used to encrypt its own key.  This is
not a flaw in XEX-AES, but rather a generic property of standard models
of security for symmetric-key encryption.  So I wouldn't be inclined to
get too comfortable with the idea of encrypting the key under itself.

I'm not 100% certain I follow what (2) is trying to get at, but it
sounds to me like a non-issue.  One interpretation of (2) is that the
concern is that if part of the key is chosen in a non-uniform way (say,
as a password), then LRW is insecure.  Of course, you should not use any
mode in that way, and I don't know of anyone who suggests otherwise.
The remedy is straightforward: crypto keys should be truly uniform.
This is standard advice that applies to all modes of operation.

Another possible interpretation of (2) is that if you use LRW to encrypt
close to 2^64 blocks of plaintext, and if you are using a 128-bit block
cipher, then you have a significant chance of a birthday collision,
which may leak partial information about the plaintext or key.  That's
absolutely true, though it is pretty much a standard feature of any mode
of operation based on 128-bit block ciphers.  Standard advice is to change
keys long before that happens, and that advice doesn't seem terribly
hard to follow.  (See, e.g., my prior post on this subject for evidence
that this doesn't seem likely to be a serious problem for current disk
encryption applications.  That's fortunate for narrow-block cryptography,
because otherwise none of the solutions would be acceptable.)  So it
sounds like concern (2) is a bit of a red herring, and LRW is still ok
for applications that won't be used to encrypt the key or any material
derived from the key.
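For a sense of scale, the birthday bound in question can be computed directly (a standard approximation; the numbers are mine, not from the original post):

```python
# Standard birthday-bound approximation: among q random b-bit blocks,
# Pr[some collision] is roughly q*(q-1)/2^(b+1).
def collision_bound(q_log2: int, b: int = 128) -> float:
    q = 2 ** q_log2
    return q * (q - 1) / 2 ** (b + 1)

for q_log2 in (32, 48, 64):
    print(f"2^{q_log2} blocks: p ~ {collision_bound(q_log2):.2g}")
```

At 2^64 blocks the bound is about 1/2, but 2^64 sixteen-byte blocks is 256 exbibytes, far beyond any disk; hence the advice to re-key long before the bound bites.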

The good news out of IEEE P1619 is that a number of excellent modes of
operation are coming out of that effort, and other applications should
be able to take advantage of the good work that P1619 is doing.  This is
good stuff.

Disclaimer: Of course, LRW is of personal interest to me, so I'm sure
I'm biased.  Form your own opinions accordingly.



Re: analysis and implementation of LRW

2007-01-23 Thread David Wagner
Jim Hughes writes:
 The IEEE P1619 standard group has dropped LRW mode. It has a vulnerability
 in that there are collisions that will divulge the mixing key, which will
 reduce the mode to ECB.

Peter Gutmann asks:
 Is there any more information on this anywhere?  I haven't been able to find
 anything in the P1619 archives (or at least not under an obvious heading).

Alexander Klimov replies:
Probably http://grouper.ieee.org/groups/1619/email/msg00962.html

Huh.  Was that the reason?  I suspect there may have been more to it
than that.  The message at the cited URL basically says that if the
attacker somehow manages to learn half of the key material, then bad
things happen.  That alone isn't likely to lead to rejecting or accepting
a mode of operation, I would think.  You know the old joke.  Patient:
"Doctor, doctor, it hurts if I let the attacker learn part of my key."
Doctor: "Well, don't do that, then."

I should perhaps mention that there is some further background which
no one has brought up yet, and which may help to understand the IEEE
P1619 work.  I know there was a concern on the IEEE P1619 mailing list
that, if the encryption key is sitting in memory, when you suspend to
disk, if the suspend image is encrypted using that same key, then you
are encrypting the key under itself (C = E(K,K)).  Encrypting the key
under itself is in general a potentially unsafe thing to do.  For one
thing, it voids the security warranty; every provable security theorem
that I know of requires that the plaintext must not depend on the key.
For another thing, with some modes, there are known attacks where
encrypting the key under itself might reveal partial information about
the key.  LRW is one of those modes where there is such a known weakness.
I understand that the IEEE P1619 group came to the conclusion that it was
not reasonable to re-architect existing software to ensure that it would
never encrypt the key under itself.  This then creates a requirement that
the mode of operation must be safe to use even when encrypting a key
under itself.  That is indeed an interesting requirement, and one that
seems to legitimately rule out a number of existing modes of operation
for IEEE P1619.

With that background, I want to circle back to the message from Jim
Hughes.  I was aware of this encrypt-a-key-under-itself issue, and it's
an interesting one.  But Jim Hughes didn't cite that as the reason for
rejecting LRW; his mention of collisions made it sound to me like he
may have been thinking of something else.  Perhaps I misunderstood his
intent.  It might be helpful to have clarification on this: Jim, were
you suggesting that there is a second issue, or have I misunderstood you?

If there is an issue with collisions, I'd be interested to understand
it better.  Does anyone have any more information on this anywhere?
Does this refer to the birthday bound issue, that if you use a 128-bit
block cipher, then the security warranty is violated once you encrypt
close to 2^64 blocks of data?  Or is it something other than that?
My apologies if I've totally misunderstood the P1619 group's reasoning.

I suspect there may be a lot we can learn from IEEE P1619's effort.
Thanks to everyone for their comments.



Startup to launch new random number generator from space

2006-12-24 Thread David Wagner
Udhay Shankar reports:
http://news.zdnet.com/2100-1009_22-6142935.html

British start-up Yuzoz has announced that it will be launching its 
beta service in the next two weeks--an online random-number generator 
driven by astronomical events.

Heh heh.  Pretty amusing.  I guess the founders haven't really thought
this through.  One problem with such a service, of course, is total
reliance upon Yuzoz: Yuzoz learns all your secret keys -- and so does
any hacker who figures out how to break into Yuzoz's servers.  That doesn't
sound like such a great deal -- especially considering that high-quality
random-number sources are not that hard to come by.

I guess we can take ill-conceived startups like this as a sign of
increasing awareness about the security risks and the need for security
solutions, even if there is some, err, lack of sophistication about how
to distinguish good security technology from bad.  (Quantum crypto seems
like another one for that camp.  Oracle's Unbreakable marketing slogan
was another good one.)



Exponent 3 damage spreads...

2006-09-17 Thread David Wagner
James A. Donald [EMAIL PROTECTED] writes:
Parameters should not be expressed in the relevant part
of the signature.  The only data that should be
encrypted with the RSA private key and decrypted with
the public key is the hash result itself, and the
padding.  If the standard specifies that additional
material should be encrypted, the standard is in error
and no one should follow it.

I agree with this comment, and with many of the other sensible
comments you have made in this thread.

I would modify what you said slightly: it may be reasonable to include a
field identifying the hash algorithm alongside the hash digest.  But apart
from that, throwing in additional optional parameters strikes me as just
asking for trouble.

It seems to me that e=3 is a distraction.  I think that these security
holes have revealed some more fundamental issues here that are independent
of the value of e you use.

It seems to me that the problems can be attributed to two problems:
(a) implementation bugs (failures to implement the spec faithfully); and
(b) ad hoc signatures schemes that have never been adequately validated.
In more detail:

  (a) Any implementation that doesn't check whether there is extra
  junk left over after the hash digest isn't implementing the PKCS#1.5
  standard correctly.  That's a bug in the implementation.  Of course,
  as we know, if you use buggy implementations that fail to implement
  the specification faithfully, all bets are off

  (b) The discussion of parameter fields in PKCS#1.5 signatures
  illustrates a second, orthogonal problem.  If your implementation
  supports appending additional parameter fields of some general
  structure, then you have not implemented conventional PKCS#1.5
  signatures as they are usually understood; instead, you have implemented
  some extension.  That raises a natural question: Why should we think
  that the extended scheme is still secure?  I see no reason to think
  that throwing in additional parameters after the hash digest is a safe
  thing to do.  I suggest that part of the problem here is that people
  are using signature padding schemes that have not been validated and
  have not been proven secure.  These PKCS#1.5 variants that allow you to
  include various optional ASN.1 crud alongside the hash digest have never
  been proven secure.  These days, using an ad hoc padding scheme that
  has not been proven secure is asking for trouble.  Why are people still
  deploying cryptographic schemes that haven't been properly validated?
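The fix for the implementation issue in (a) is mechanical: instead of parsing the padding and tolerating whatever follows the digest, reconstruct the single legal encoding and compare byte-for-byte.  A hedged sketch (function name mine; SHA-256 chosen for concreteness):

```python
import hashlib

# Fixed DER DigestInfo prefix for SHA-256, from PKCS #1 / RFC 8017.
DIGEST_INFO_SHA256 = bytes.fromhex(
    "3031300d060960864801650304020105000420")

def check_pkcs1v15_encoding(em: bytes, message: bytes, k: int) -> bool:
    # Verify an EMSA-PKCS1-v1_5 encoded message of length k by
    # *reconstructing* the one legal encoding and comparing it whole,
    # so junk bytes after the hash digest can never slip through.
    t = DIGEST_INFO_SHA256 + hashlib.sha256(message).digest()
    ps = b"\xff" * (k - len(t) - 3)   # padding must fill the block
    expected = b"\x00\x01" + ps + b"\x00" + t
    return len(em) == k and em == expected
```

A verifier that instead scans past some 0xFF bytes, reads a DigestInfo, and never insists the digest end exactly at byte k is the kind of implementation bug behind the e=3 forgeries.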

I would suggest that there are two lessons we can learn from this
experience: (a) maybe more attention needs to be paid to verifying
that our implementations correctly implement the specification; and,
(b) maybe more attention needs to be paid to validating that the spec
defines a cryptographic mode of operation that is sensible and secure --
and provable security might be a good starting point for this.

Consequently, I think the focus on e=3 is misguided.  I think we should
be more concerned by the fact that our crypto implementations have
implementation bugs, and that our specs were never adequately validated.
This time, the impact of those failures may have been worse for signatures
using e=3, but it seems to me that this is more an accident than anything
particularly fundamental.  The latest problems with e=3 are just the symptom,
not the root cause.  I think it's worth putting some effort into treating
the root cause, not just the symptom.



Solving systems of multivariate polynomials modulo 2^32

2006-08-14 Thread David Wagner
Danilo Gligoroski writes:
[...] solve a system of 3 polynomials of order 3
with 3 variables x1, x2 and x3 in the set Z_{2^32} and
coefficients also in Z_{2^32} [...]

Here is a trick that should solve these kinds of equations extremely
quickly.  First, you solve the system of equations modulo 2.  Then, for
each mod-2 solution, you try to extend it to a solution modulo 4.  Then,
for each mod-4 solution, you extend it to a solution modulo 8.  And so on.
This is sometimes known under the name Hensel lifting.

Here's an example.  Suppose we have the equations:
x*y + z   = 1
x^3 + y^2 * z = 1
x + y + z = 0

Step 1: Find all solutions modulo 2.  This is easy: you just have to try
2^3 = 8 possible assignments and see which ones satisfy the equations.  In
this case, the solutions are (x,y,z) = (0,1,1), (1,0,1), and (1,1,0) (mod 2).

Step 2: Find all solutions modulo 4.  This is again easy: if we know
x=0 (mod 2), then the only two possibilities for x mod 4 are 0 or 2,
and likewise for y mod 4 and z mod 4.  Trying all 2^3 = 8 candidate
extensions of each mod-2 solution, we find that the only solution is
x=0, y=3, z=1 (mod 4).

Step 3. Find all solutions modulo 8.  Take the mod-4 solution
x=0, y=3, z=1 (mod 4) and try extending it all 2^3 possible ways, to
see which of them lead to mod-8 solutions.  The result is the set of
mod-8 solutions (here, the only one is x=0, y=7, z=1 (mod 8)).

Step 4. Find all solutions modulo 16.  etc.

We see that this requires performing 32 of these steps.  On average, we
expect about one feasible solution to remain possible after each step
(though this number may vary).  Each step requires testing 8 possible
assignments, times the number of assignments from the prior step.  Thus,
on the average case we can expect this to run very fast.

Note that this only works for Z_{2^32}.  It doesn't work for GF(2^32).
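A hedged sketch of the lifting procedure for this example system (function names are mine):

```python
def holds(x, y, z, m):
    # The three example equations from the post, checked modulo m.
    return ((x * y + z) % m == 1
            and (x**3 + y**2 * z) % m == 1
            and (x + y + z) % m == 0)

def lift_solutions(bits):
    # Step 1: brute-force the 2^3 assignments modulo 2 ...
    sols = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)
            if holds(x, y, z, 2)]
    # ... then lift one bit at a time: each surviving solution mod 2^k
    # has 2^3 candidate extensions mod 2^(k+1).
    for k in range(1, bits):
        m, half = 1 << (k + 1), 1 << k
        sols = [(x + dx, y + dy, z + dz)
                for (x, y, z) in sols
                for dx in (0, half) for dy in (0, half) for dz in (0, half)
                if holds(x + dx, y + dy, z + dz, m)]
    return sols

print(lift_solutions(8))   # solutions modulo 2^8
```

Each level keeps only the assignments consistent modulo the next power of two, so the total work tracks the (typically small) number of surviving solutions rather than anything like 2^96.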



Factorization polynomially reducible to discrete log - known fact or not?

2006-07-12 Thread David Wagner
Ondrej Mikle  wrote:
I believe I have the proof that factorization of N=p*q (p, q prime) is 
polynomially reducible to discrete logarithm problem. Is it a known fact 
or not?

Be careful: when most people talk about the assumption that the
discrete log problem being hard, they usually are referring to the
hardness of discrete logs modulo a large prime.  In contrast, you
seem to be talking about the hardness of discrete logs modulo an
RSA modulus.  Those two things are not the same.

It is well-known that if you can solve discrete logs modulo a RSA
modulus N in polytime, then you can factor N in polytime.  This is
a standard result that is well-known to anyone who studies this field.
If you've re-discovered this result, you haven't got anything new.

The algorithm is very simple:
1. Choose a big random value x from some very broad range
   (say, {1,2,..,N^2}).
2. Pick a random element g (mod N).
3. Compute y = g^x (mod N).
4. Ask for the discrete log of y to the base g, and get back some
   answer x' such that y = g^x' (mod N).
5. Compute x-x'.  Note that x-x' is a multiple of phi(N), and
   it is highly likely that x-x' is non-zero.  It is well-known
   that given a non-zero multiple of phi(N), you can factor N in
   polynomial time.
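The five steps can be sketched concretely on a toy modulus.  A brute-force discrete log stands in for the hypothetical polynomial-time oracle, and (as the follow-up in this thread notes) x-x' is really a multiple of the order of g rather than of phi(N), so the standard square-root-of-unity trick is used to pull out a factor:

```python
import random
from math import gcd

def dlog_oracle(g, y, N):
    # Brute-force stand-in for the hypothetical polytime oracle:
    # returns some x' with g^x' = y (mod N) (here, the smallest one).
    acc = 1
    for e in range(N):
        if acc == y:
            return e
        acc = (acc * g) % N
    raise ValueError("y not in the subgroup generated by g")

def factor_with_dlog_oracle(N, tries=100):
    for _ in range(tries):
        g = random.randrange(2, N - 1)
        d = gcd(g, N)
        if d != 1:
            return d                     # lucky: g shares a factor with N
        x = random.randrange(N, N * N)   # step 1: big random exponent
        y = pow(g, x, N)                 # step 3
        k = x - dlog_oracle(g, y, N)     # steps 4-5: k > 0 and g^k = 1 (mod N)
        m = k
        while m % 2 == 0:                # strip k to its odd part
            m //= 2
        b = pow(g, m, N)
        while b != 1:                    # square up the chain g^m, g^2m, ...
            prev, b = b, (b * b) % N
            if b == 1 and prev != N - 1:
                return gcd(prev - 1, N)  # nontrivial sqrt of 1 => factor
    raise RuntimeError("unlucky: no factor found; increase tries")

random.seed(7)
p = factor_with_dlog_oracle(11 * 13)
assert p in (11, 13)
```

Each random g succeeds with constant probability, so a handful of tries suffices; the brute-force oracle is the only non-polynomial piece, and it is exactly what the assumed oracle replaces.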

There is no known proof that if you can factor N in polytime, you
can solve discrete logs modulo N in polynomial time.  (In practice,
if N is a 2048-bit RSA modulus that is a product of two 1024-bit
primes, if you can factor N, you can solve discrete logs modulo N
more efficiently by solving two discrete log problems modulo 1024-bit
prime numbers and then applying the Chinese remainder theorem.  But
the latter is still asymptotically superpolynomial.)

There is no known proof that if you can solve discrete logs modulo
a prime p in polytime, then you can factor a RSA modulus N in polytime.

There is no known proof that if you can factor a RSA modulus N in
polytime, then you can solve discrete logs modulo a prime p in polytime.

If you can solve any of the latter three problems, then you've got
something new, and many cryptographers will be interested.



Re: Interesting bit of a quote

2006-07-12 Thread David Wagner
[EMAIL PROTECTED]
 Been with a reasonable number of General Counsels
 on this sort of thing.  Maybe you can blame them
 and not SB1386 for saying that if you cannot prove
 the data didn't spill then it is better corporate
 risk management to act as if it did spill.

Well, are you sure you haven't confused what they're saying about SOX, vs
what they're saying about SB1386?  It's easy for me to believe that they'd
say this about SOX, but the plain language of SB1386 seems pretty clear.

(It would also be easy for me to believe that a General Counsel would
say that if you have knowledge of a breach of security in one of your
systems and reason to believe that an unauthorized individual gained
access to personal information as a result, then you must assume that
you have to notify every person whose data was stored in the system and
who may have been affected by the breach, unless you can prove that those
persons weren't affected by that breach.  But that's very different from
how you characterized SB1386.)

If General Counsels are really saying that SB1386 requires you to act
as if data has spilled, even in absence of any reason whatsoever to
think there has been any kind of security breach or unauthorized access,
merely because you don't have proof that it hasn't spilled -- then yes,
that does sound strange to me.  That is not my understanding of the
intent of SB1386, and it is not what the language of SB1386 seems to say.

Then again, maybe your General Counsels know something that I don't;
it's always possible that the text of the law is misleading, or that
I'm missing something.  They're the legal experts, not me.

Personally, my suggestion is as follows: The next time that a General
Counsel claims to you that SB1386 requires you to assume data has spilled
(even in absence of any reason to believe there has been a security
breach) until you can prove to the contrary, I suggest you quote from
the text of SB1386, and let us know how they respond.



Re: Factorization polynomially reducible to discrete log - known

2006-07-12 Thread David Wagner
  The algorithm is very simple:
  1. Choose a big random value x from some very broad range
(say, {1,2,..,N^2}).
  2. Pick a random element g (mod N).
  3. Compute y = g^x (mod N).
  4. Ask for the discrete log of y to the base g, and get back some
answer x' such that y = g^x' (mod N).
  5. Compute x-x'.  Note that x-x' is a multiple of phi(N), and
it is highly likely that x-x' is non-zero.  It is well-known
that given a non-zero multiple of phi(N), you can factor N in
polynomial time.
 
 Not exactly. Consider N = 3*7 = 21, phi(N) = 12, g = 4, x = 2, x' = 5. 
 You'll only get a multiple of phi(N) if g was a generator of the 
 multiplicative group Z_N^*.

When N is a large RSA modulus, there is a non-trivial probability that g
will be a generator (or that g will be such that x-x' lets you factor N).
The above is good enough for a polytime reduction.



Irish eVoting Vetoed

2006-07-05 Thread David Wagner
The Irish government's commission's report on the NEDAP/Powervote system 
has been published. (PDFs on the site)

http://www.cev.ie/htm/report/download_second.htm

As a secure system, it leaves a lot to be desired and it seems to be an
example of how not to implement an eVoting system. Just reading the
report, I am beginning to wonder which has more holes - a lump of 
activated charcoal or this eVoting system.

Agreed.  I think the quality of the technical analysis in the report
is a disappointment.  The report states the Commission's opinion that
the voting machine can be trusted.  However, the technical content
of the report is, in my opinion, starkly at odds with that conclusion:
  1) The report discloses several vulnerabilities that ought to raise
  questions about the security of the system.
  2) Moreover, there are frank admissions of major gaps in the
  Commission's analysis.  These revelations suggest to me that
  there is, at present, no rational basis for confidence in the
  correct operation of these voting machines.
After reading the report, I suspect that no one knows whether the system
is trustworthy or not (and that includes the Commission and all the
Commission's experts).  To be clear, I am not claiming that the system
is known to be untrustworthy; but neither is it known to be trustworthy.

It seems to me that the technical material in the report would better
support the conclusion that there is no convincing evidence that the
voting machines are fit for purpose.  I find the report's defense of the
voting machines unpersuasive.  It is puzzling to me why the Commission
is willing to recommend the voting machines.  I feel bad for any Irish
citizens who may be forced to rely on these machines in their election.


Let me share some quotes from the report that stuck out for me, along
with some commentary discussing my reaction to those quotes:


``The Commission has not conducted a line-by-line review of the software
embedded in the voting machine.'' -- p.60

  My comments: This is a confession that the Commission has no idea
  whether the system is trustworthy or not.  For all we know, a Trojan
  horse or malicious logic could be hidden somewhere in the source
  code.  The only way to detect such malicious code is by inspecting
  the source code.  When some of the source code is left uninspected,
  we have no way of knowing whether that uninspected part of the code
  might contain malicious logic.  Because the software is written in
  C, a single piece of malicious logic hidden anywhere in the code can
  subvert the working of the entire system.  Consequently, there is no
  rational basis for confidence that the system is free of backdoors.
  The Commission has no idea whether these voting machines might contain
  hidden backdoors -- and neither do I.


``further analysis, investigation and testing, and possibly amendment
of the C code [embedded in the voting machine] will be required.'' -- p.96

``the carrying out of substantive testing or verification of the system
lies beyond the scope of the Commission's remit'' -- p.112

  My comments: The Commission recognizes that its own analysis of the
  source code is lacking.  Why are they willing to recommend the system
  before they have performed the analysis that would be needed to
  determine whether the system is trustworthy or not?


``The Commission has observed no mechanism within the system that would
enable operators, observers and voters to satisfy themselves independently
that the hardware and software of the voting machine are authentic and
that they are correct versions that have been tested and certified and
that have been approved for use by the electoral authorities.'' -- p.61

``it is not readily possible, nor is it required by prescribed procedures,
for operators or others to confirm independently that the version of the
C code installed on the voting machine or the programming/reading unit is
the correct version and to verify that it has not been altered.'' -- p.97

  My comments: This is a potentially significant vulnerability.  It is
  an admission that, if someone found a way of tampering with the code
  installed on the voting machines, election officials would have no way
  of detecting such tampering.


``data on ballot modules [e.g., electronic votes files] is not
cryptographically signed to prevent unauthorised alteration'' -- p.72

``The tests carried out by the Commission indicated that it would be
possible to access data, including votes, transmitted on CDs and to
alter the data without detection: [...] data, including votes, on CD is
not cryptographically signed to prevent unauthorized alteration [...]
There are thus significant hardware and data security vulnerabilities
associated with the use of CDs [...]'' -- p.88

  My comments: Another admission of a serious vulnerability in the system.
  The Commission has concluded that vote records can be tampered with
  while they are in transit, and that there is no way 

Chinese WAPI protocol?

2006-06-15 Thread David Wagner
Thank you to everyone who corrected the errors in my earlier post.  As has
been pointed out, the SMS4 block cipher was disclosed earlier this year.

Nonetheless, many of my concerns about the security of WAPI remain.
We already have a perfectly good solution out there; 802.11i is a good
scheme, and has been vetted by many folks.  In contrast, WAPI has
received very little analysis by security folks.  WAPI's underlying
block cipher is some special proprietary design that has never been
published in a peer-reviewed academic conference and does not seem to
have received much, if any, scrutiny from experts in block cipher design
-- and certainly nothing approaching the degree of scrutiny that AES
(the cipher used in 802.11i) has seen.  Similar comments apply to the
protocols in 802.11i vs the protocols in WAPI.

The 802.11 working group has put together a lengthy, 40+ page technical
analysis documenting defects, ambiguities, and security risks in WAPI.
Their technical analysis is compelling and pretty damning, in my view.
I think we should commend the IEEE 802.11 group for doing such an outstanding
job of technical analysis.

In comparison, when you read the documents from the Chinese national
body, you get a very different impression.  For instance, the Chinese
rebuttal tries to defend the use of a secret proprietary block cipher.
What the heck are they thinking?  Don't they know anything about how to
design secure systems?  It seems clear that the people who are writing the
Chinese advocacy documents are not technical experts in security; perhaps
they are politicians or lawyers, but they're not security engineers.

Of course, the elephant in the room is that China is a giant and growing
market.  China knows that, and seems to want to exploit that fact to
ensure kickbacks and profits for local Chinese companies.  Everyone who
is anyone wants to sell to that marketplace, and I'm sure they have
to be somewhat circumspect to avoid alienating potential customers.
I'm not trying to sell anything to China, so I guess I'm free to speak
my mind.  I am persuaded by the analysis I've seen that WAPI is poorly
thought out, not ready for standardization, and shouldn't be approved
at this time.  It tries to solve an already-solved problem, and does
it in an inferior way.  I'm concerned about the security risks of WAPI.
The Chinese national body gives no appearance of seeking the best solution
and gives every appearance of allowing profit and political considerations
to trump technical merit.  I think ISO has been put in a tough position,
and I think we should applaud them for (so far) resisting the pressure
to adopt WAPI despite the intense pressure that has been applied to them.

I remain concerned about the security risks of WAPI, even to those of us
who live outside China.  Anytime you ship a wireless card that supports
two different wireless standards, you run the risk of attacks that reduce
your strength to the weaker of the two standards.  For instance, one can
imagine "You are now in China" attacks that fool a wireless card into
(wrongly) thinking it is in China and entering the less-secure WAPI mode.
If WAPI has any security vulnerabilities, this could endanger everyone
whose wireless card supports WAPI, whether they think they are using
WAPI or not.  One can hope that such a risk won't come to pass, but why
take any chances?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Chinese WAPI protocol?

2006-06-12 Thread David Wagner
Richard Salz [EMAIL PROTECTED] wrote:
Today in slashdot (http://it.slashdot.org/it/06/06/12/0710232.shtml) there 
was an article about China wanting to get WAPI accepted as a new wireless 
security standard.  Has anyone looked at it?

Adam Perez wrote:
I have not looked at WAPI, but they have been trying to get it approved
for a number of years, check out http://en.wikipedia.org/wiki/WAPI
(has link to algorithm) and
http://www.foxnews.com/story/0,2933,199082,00.html.

As far as I can tell, WAPI (the Chinese proposal) uses proprietary
unpublished cryptographic algorithms.  The specification is secret
and confidential.  It uses the SMS4 block cipher, which is secret and
patented. [*]

I don't think that makes any sense, from a security point of view.
That's what got the 802.11 folks in trouble the last time.  If the
authors of WAPI won't make their spec and their algorithms public, there
is no basis for trust in their scheme.  This is no way to design a standard,
and from the outside, it looks like adopting WAPI would be unwise.  It
was a bad idea the last time it was proposed, and it's still a bad idea.

Frankly, it's disappointing that any proposal that involves use of secret
homebrew crypto would be taken even the slightest bit seriously, no matter
what country's government is pushing it.  From a technical point of view,
it sounds like something that should have been rejected with prejudice
long ago.


[*] Contrary to what Adam Perez's email might suggest, Wikipedia does
not have a link to a specification of SMS4 or of WAPI.  Wikipedia has
an entry for SMS4, but about all it says is that not much is publicly
known about SMS4.



GnuTLS (libgrypt really) and Postfix

2006-02-13 Thread David Wagner
John Denker [EMAIL PROTECTED] writes:
Werner Koch retorted:
 I disagree strongly here.  Any code which detects an impossible state
 or an error clearly due to a programming error by the caller should
 die as soon as possible.  

That is a remarkably unprofessional suggestion.  I hope the people
who write software for autopilots, pacemakers, antilock brakes,
etc. do not follow this suggestion.

This just shows the dangers of over-generalization.

Of course, we have to decide which is more important: integrity,
or availability.  I suspect that in the overwhelming majority (perhaps
all) of the cases where libgcrypt is used, integrity is more important
than availability.  If that is true, well, if in doubt, it's better to
fail closed than to fail open.

You rightly point out that there are important applications where
availability is more important than integrity.  However, I suspect
those cases are not too common when building Internet-connected desktop
applications.

I think the attitude that it's better to die than to risk letting an
attacker take control of the crypto library is defensible, in many cases.
Of course, it would be better for a crypto library to document this
assumption explicitly than to leave it up to users to discover it the
hard way, but I would not agree with the suggestion that this exit before
failing open stance is always inappropriate.
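
To make the trade-off concrete, here is a minimal sketch of a fail-closed
check (Python; the `open_message` helper and the HMAC-SHA256 framing are my
own illustrative assumptions, not libgcrypt's API): on an authentication
failure the caller gets an exception, never the unauthenticated data.

```python
import hashlib
import hmac

class IntegrityError(Exception):
    pass

def open_message(key: bytes, blob: bytes) -> bytes:
    """Split blob into payload || 32-byte HMAC-SHA256 tag; fail closed on mismatch."""
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # Fail closed: never hand unauthenticated data back to the caller.
        # A library preferring integrity over availability might even abort here.
        raise IntegrityError("MAC check failed")
    return payload

key = b"k" * 32
good = b"hello" + hmac.new(key, b"hello", hashlib.sha256).digest()
assert open_message(key, good) == b"hello"
```

Whether a failure raises an exception the caller can handle or kills the
process outright is exactly the integrity-versus-availability choice
discussed above; what matters is that the plaintext is never released.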



RNG quality verification

2005-12-22 Thread David Wagner
Philipp Gühring [EMAIL PROTECTED] writes:
I have been asked by to verify the quality of the random numbers which are 
used for certificate requests that are being sent to us, to make sure that 
they are good enough, and we don't issue certificates for weak keys.

Go tell whoever wrote your requirements that they (to be frank) don't
know what they're talking about.  What they're asking for doesn't make
any sense.  You should ask them what problem they're trying to solve.
Don't let them try to tell you how to solve it; you just need to know
the goal, not the mechanism.

The standard solution is to just not worry about this at all, and say
that it is the user's responsibility to choose good random numbers.
If the user fails to do so, they're the one who bears the costs of their
failure, so why should you care?

If the goal is to hold the hands of your users, then you might want to
think carefully about whether you want to be in that business, what are
the most likely failure modes, and what is the best way to deal with it.
(Trying to check whether their numbers are random probably isn't the best
answer.)  Most CA's have gravitated towards the opinion that that's not
something they can control, nor do they want to, nor should they -- and
that sounds reasonable to me.  But if you want to be in the hand-holding
business, you're going to have to do an awful lot more than just check
the random numbers.



RNG implementations and their problems

2005-12-04 Thread David Wagner
So far I haven't seen any userland tools for updating the entropy count.

This is unfortunate, because sometimes I generate entropy on one machine
and want to pipe it into the /dev/random pool.

However, I cannot update entropy counts [...]

This is a security feature.  If non-root programs could write to
/dev/random and update its entropy count, they could lie about the
entropy count and harm others on the system who rely on /dev/random.

By the way, rngd already does pretty much what you want.  Have you
looked at it?

It would be pretty easy to hack egd or prngd to periodically feed
the entropy they have gathered into /dev/random, using the appropriate
ioctl()s and root-level access.  Seems like that would be good enough.

It would also be trivial to write a 'cat'-like program that takes
data on stdin and uses the appropriate ioctl()s to write it to /dev/random.
Problem solved.
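
As a rough sketch of such a 'cat'-like tool, assuming Linux's RNDADDENTROPY
ioctl and struct rand_pool_info layout from <linux/random.h> (Linux-specific,
requires root; illustrative, not production code):

```python
import fcntl
import struct
import sys

RNDADDENTROPY = 0x40085203  # _IOW('R', 0x03, int[2]) on Linux

def pack_rand_pool_info(data: bytes, entropy_bits: int) -> bytes:
    """Build a struct rand_pool_info: int entropy_count; int buf_size; u32 buf[]."""
    # Pad to a multiple of 4 bytes, since the kernel buffer is u32 words.
    padded = data + b"\x00" * (-len(data) % 4)
    return struct.pack("ii", entropy_bits, len(padded)) + padded

def add_entropy(data: bytes, entropy_bits: int, device: str = "/dev/random") -> None:
    """Mix `data` into the pool and credit `entropy_bits` bits of entropy."""
    with open(device, "wb") as f:
        fcntl.ioctl(f, RNDADDENTROPY, pack_rand_pool_info(data, entropy_bits))

if __name__ == "__main__":
    # Read entropy from stdin; conservatively claim 1 bit per byte.
    blob = sys.stdin.buffer.read()
    add_entropy(blob, entropy_bits=len(blob))
```

Note that an ordinary write() to /dev/random mixes the data in but credits
no entropy; crediting entropy is exactly the root-only step the ioctl exists
to gate.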

But I am skeptical that this problem is very widespread.  I doubt there
are many people who have found this to be a barrier.

I would also question why you are using /dev/random.  For most purposes,
/dev/urandom is the right default.

Reading from /dev/urandom empties the entropy pool immediately; this is
unfriendly to people who need real random numbers.

Few applications truly need /dev/random.  Those applications that do need
/dev/random usually need random numbers only in very small quantities, and
for them, the "deplete the pool" behavior seems like the right semantics.

It is certainly true that there are many poorly-thought out applications
that use /dev/random even though /dev/urandom would have been a
better choice.  However, I just can't get too bent out of shape if
those applications suffer as a result of their questionable choice.
Those applications will just have to deal with the consequences of
their misdesign.  If it becomes a problem, maybe that will be sufficient
motivation for them to reconsider their use of /dev/random and switch
over to /dev/urandom, like they should have done in the first place.

Anyway, on the question of whether to use write() or ioctl() to update
the entropy pool from user land, my suspicion is that the current
semantics just hasn't been a very big problem for anyone, and so no
one has cared enough to bother writing code to change the behavior.
It's also not clear to me that the current interface is problematic or
that your suggestion is a better choice.  But in any event, if you
are motivated enough to try to write code and submit patches that
would implement your preferred solution, you should probably take this
discussion over to the linux-kernel mailing list.

Getting good randomness shouldn't be platform-specific,
and shouldn't fail in silent or unsafe ways at runtime.

I'm not sure what this is referring to.  As far as I know, /dev/{u,}random
doesn't fail in silent or unsafe ways at runtime.



timing attack countermeasures (nonrandom but unpredictable delays)

2005-11-16 Thread David Wagner
Travis writes:
The naive countermeasure to timing attacks is to add a random delay,
but of course that can be averaged out by repeating the computation. 
I have never heard anyone propose a delay that is based on the input,
and maybe some per-machine secret, so that it is unpredictable but
constant.  Of course this wouldn't solve averaging across different
inputs from some subset, but it would prevent averaging on the same
value.

Sadly, I don't think this works.

In many cases, the observed time depends both on the input and on some
other random noise.  In such cases, averaging attacks that use the same
input over and over again will continue to work, despite the use of
a pseudorandom input-dependent delay.  For instance, think of a timing
attack on AES, where the time to compute the map X |--> AES(K,X) depends only
on K and X, but where the measured time depends on the computation time
(which might be a deterministic function of K and X) plus the network
latency (which is random).  Indeed, in this example even the computation
time might not be a deterministic function of K and X: it might depend
on the state of the cache, which might have some random component.

And there are many settings where you can average across many slightly
different inputs.

So I don't think this kind of approach is likely to go anywhere, except
possibly in a few specialized settings.

In many cases, a better defense than random delays is to make the total
time constant and independent of the inputs.  Then averaging is pointless.
It's not always possible, but when it is, it's worth considering.
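
A small illustration of the constant-time idea, using a MAC comparison as
the example (Python; `leaky_equal` and `constant_time_equal` are illustrative
names): the safe version touches every byte regardless of where the first
mismatch occurs, so its running time carries no information to average over.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit: running time depends on the position of the first
    # differing byte -- exactly what a timing attacker averages out.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Examine every byte unconditionally; accumulate differences with XOR.
    # The Python standard library's hmac.compare_digest does the same in C.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

assert constant_time_equal(b"secret-mac", b"secret-mac")
assert not constant_time_equal(b"secret-mac", b"secret-mad")
```

In practice one would call hmac.compare_digest directly; the hand-rolled
loop is shown only to make the "no data-dependent branches" structure visible.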



Defending users of unprotected login pages with TrustBar 0.4.9.93

2005-09-20 Thread David Wagner
Amir Herzberg writes:
However, quite a few of these sites invoke SSL/TLS only _after_ user has
typed in her user name and pw, and clicked `submit`. This allows a MITM
adversary to send a modified login page to the user, which sends the pw
to the attacker (rather than encrypting it and sending to the site). See
below link to a `Hall of Shame (HoS)` listing such sites.

There are few things we can do about this. We can try to convince these
sites to use SSL/TLS _before_ asking for userid and pw; I tried, and few
fixed, but most did not.

But this isn't enough.  The only way for a user to be secure against such
attacks is to type in a https:-style URL into the address bar directly, or
to load a https:-style URL from a bookmark.  Users have to always remember
to type in https://www.bank.com; they must never use http://www.bank.com,
or they will be insecure.  Training users to follow this discipline is not
a trivial task.

I'm not sure it is fair to blame this solely on the web sites.
The problem is that the https: model for web security is broken, if
attackers are mounting active attacks, DNS spoofing, and other kinds of
man-in-the-middle attacks.  The problem is not with SSL; the problem is
with the model for how SSL is applied to solve the web security problem,
and with the user interaction model.  Fixing this probably requires
changes to web browsers and/or web servers.  So, a Hall of Shame seems
a little over the top to me, since there is no obvious way that the
web site could fix this on its own.

TrustBar's solution to this conundrum is a nice one.  I like it.
But it does require changing the web browser.

One thing that web sites could do to help is to always make
https://www.foo.com work just as well as http://www.foo.com, and
then browser plug-ins could simply translate http://www.foo.com -
https://www.foo.com for all sensitive sites.  Of course, web site
operators may be reluctant to take this step on performance grounds.
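
Such a plug-in's rewrite rule could be as simple as the following sketch
(Python; the host list is a hypothetical example, not a real deployment):

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical list of sites known to support HTTPS on their front page.
SENSITIVE_HOSTS = {"www.bank.com", "www.foo.com"}

def upgrade_url(url: str) -> str:
    """Rewrite http:// to https:// for known-sensitive hosts; pass others through."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in SENSITIVE_HOSTS:
        return urlunsplit(("https",) + tuple(parts[1:]))
    return url

assert upgrade_url("http://www.bank.com/login") == "https://www.bank.com/login"
assert upgrade_url("http://example.org/") == "http://example.org/"
```

This is essentially the approach later taken by HTTPS-upgrading browser
extensions; it only works if the https: version of each page actually exists,
which is why the server-side step above matters.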



MIT talk: Special-Purpose Hardware for Integer Factoring

2005-09-15 Thread David Wagner
Victor Duchovni  wrote:
 Joint works with [...]

Is it politically correct to not cite DJB in this context [...]

The phrase joint work with XXX means that this was a collaboration
between XXX and the speaker.  If DJB wasn't part of the collaboration,
then of course he wouldn't be on that list.

This is different from a related work section, where one would cite
prior research.  But joint work with is not a citation; it's like
the list of authors on a paper.



encrypted tapes (was Re: Papers about Algorithm hiding ?)

2005-06-08 Thread David Wagner
Ben Laurie writes:
Why is it bad for the page to be downloaded clear? What matters is the 
destination is encrypted, surely?

Because the page you downloaded in the clear contains the https: URL
in the post method.  How do you know that this is the right URL?  If
you got the page in the clear, you don't.  An attacker who can provide
a spoofed page (by DNS cache poisoning, pharming, MITM attacks, or
any other method) could substitute a post URL that sends your sensitive
data to hackers-r-us.com.

That said, I don't see how adding an extra login page to click on helps.
If the front page is unencrypted, then a spoofed version of that page
can send you to the wrong place.  Sure, if users were to check SSL
certificates extremely carefully, they might be able to detect the funny
business -- but we know that users don't do this in practice.

Dan Bernstein has been warning of this risk for many years.
http://cr.yp.to/djbdns/bugtraq/[EMAIL PROTECTED]
http://cr.yp.to/dnscache/bugtraq/[EMAIL PROTECTED]

As far as I can tell, if the front page is unencrypted, and if the
attacker can mount DNS cache poisoning, pharming, or other web spoofing
attacks -- then you're hosed.  Did I get something wrong?



Microsoft info-cards to use blind signatures?

2005-05-23 Thread David Wagner
http://www.idcorner.org/index.php?p=88
The Identity Corner
Stephan Brands

I am genuinely excited about this
development, if it can be taken as an indication that Microsoft is getting
serious about privacy by design for identity management. That is a big
if, however: indeed, the same Microsoft researcher who came up with the
patent (hello Dan!) was also responsible for Microsoft e-cash patent no.
5,768,385 that was granted in 1998 but was never pursued.

What a strange criticism of Microsoft!  Here is something to know about
patents: many companies file patents all the time.  That doesn't mean
they are committing to build a product around every patent they file.
The fact that Microsoft hasn't pursued patent 5,768,385 tells you
essentially nothing about what they are going to do with this patent.

I wouldn't take patent filings as an indicator of intent or of future
business strategy.



DTV Content Protection (fwd from [EMAIL PROTECTED])

2005-05-23 Thread David Wagner
Anonymous  wrote:
DTV Content Protection

[...] Similar concepts are presented in
http://apache.dataloss.nl/~fred/www.nunce.org/hdcp/hdcp111901.htm by
Scott Crosby, Ian Goldberg, Robert Johnson, Dawn Song and David Wagner.
This paper assumes (unlike Irwin) that attackers have access to the
private keys of chosen devices.  This is a questionable assumption [...]

The final version of that paper is at
http://www.cs.berkeley.edu/~daw/papers/hdcp-drm01.ps
Quoting from the paper's conclusion section:

  To recover the center's master secret, an attacker needs 40 key pairs,
  and we point out a variety of ways to get them.  An attacker can reverse
  engineer 40 different HDCP video software utilities, he can break open
  40 devices and extract the keys via reverse engineering, or he can
  simply license the keys from the trusted center.  According to the
  HDCP License Agreement, device manufacturers can buy 1 key pairs
  for $16000.  Given these 40 spanning keys, the master secret can be
  recovered in seconds.  So in essence, the trusted authority sells a
  large portion of its master secret to every HDCP licensee.

The $16,000 figure is taken from page 21 of
http://www.digital-cp.com/data/hdcp_license_agreement.pdf
Of course, you have to sign an NDA, too, but I'm not sure whether
that would deter a serious bad guy.

So, in effect, the trusted center has agreed to sell its master secret
for $16,000 and a promise.


Thank you for your post.  It was chock-full of interesting information --
particularly the bits about DTCP, which I had never seen before.



Propping up SHA-1 (or MD5)

2005-03-25 Thread David Wagner
Ben Laurie writes:
It was suggested at the SAAG meeting at the Minneapolis IETF that a way 
to deal with weakness in hash functions was to create a new hash 
function from the old like so:

H'(x)=Random || H(Random || x)

Yes.  Suppose we use this for signing.  The crucial part is to have
the *signer* choose the Random value when computing the signature.
This may be secure even if H fails to be collision-resistant, because
even if an attacker finds a collision for H, he doesn't know which
Random value the signer is going to use.

More generally, we could try to use any universal one-way hash function
(UOWHF).  This concept is also known as target collision resistant (TCR).
It is natural to conjecture that H' is a UOWHF, i.e., is TCR, and this
may be true even if H is not collision-resistant.  Of course, there is
no proof of this, and this conjecture is speculative, but it does weaken
the assumptions we are making about our hash.
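
As a concrete sketch of the construction (Python, with SHA-1 standing in
for H; the function names and the 16-byte Random are illustrative choices):

```python
import hashlib
import os

def randomized_hash(message: bytes, rand_len: int = 16) -> bytes:
    """Compute H'(x) = Random || H(Random || x), with Random chosen by the signer."""
    r = os.urandom(rand_len)                     # fresh per signature
    digest = hashlib.sha1(r + message).digest()  # H(Random || x)
    return r + digest                            # Random || H(Random || x)

def verify(message: bytes, hprime: bytes, rand_len: int = 16) -> bool:
    """Recompute H(Random || x) using the Random carried in H'(x)."""
    r, digest = hprime[:rand_len], hprime[rand_len:]
    return hashlib.sha1(r + message).digest() == digest

h = randomized_hash(b"contract text")
assert verify(b"contract text", h)
assert not verify(b"tampered text", h)
```

The key point is visible in the code: Random is drawn inside the signing
operation, after the messages are fixed, so an attacker must find a collision
that works for a Random value they cannot predict.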

I have been advocating this kind of construction ever since hearing about
the hash cryptanalysis results last August.  Not everyone agrees with me,
and there is a lengthy discussion going on about this on the IRTF CFRG
working group.
  http://www1.ietf.org/mail-archive/web/cfrg/current/threads.html
  http://www1.ietf.org/mail-archive/web/cfrg/current/thrd2.html
  http://www1.ietf.org/mail-archive/web/cfrg/current/thrd3.html

However, this allows an attacker to play with Random (the advice I've 
seen is that if one is going to use an IV with a hash function, then one 
should transfer the IV with integrity checks to deny attackers this 
freedom).

No, not if you use it right.  The way to use this is to have the signer
choose the value of Random, not anyone else.  A signer can play with Random
and maybe find collisions M,M', but in this case the signer will be viewed
as having signed both M and M', so this doesn't help the signer at all.

Another objection is that this construction changes the API at the 
sender end, which could lead to a great deal of complexity when the use 
of the hash API is deeply embedded.

Shouldn't be a big deal for signing.  A much bigger deal is that this
changes the on-the-wire format.



What is to be said about pre-image resistance?

2005-03-25 Thread David Wagner
Ian G writes:
Collision resistance of message digests is effected by the birthday
paradox, but that does not effect pre-image resistance.  (correct?)

So can we suggest that for pre-image resistance, the strength of
the SHA-1 algorithm may have been reduced from 160 to 149?

Well, I'm not sure that the difference between 2^160 and 2^149
would be very significant in practice, even if there were some
reduction like this, but--

As far as I can tell, the pre-image resistance of SHA1 has not been
significantly threatened by these attacks, or at least, the authors
do not claim any results on pre-image resistance of SHA1.
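
For reference, the generic attack costs behind these numbers: a birthday
collision search on an n-bit hash costs roughly 2^(n/2) hash evaluations,
while a brute-force preimage search costs roughly 2^n, which is why
collision attacks can advance without touching preimage resistance.

```python
# Generic (brute-force) attack costs for an n-bit hash.
n = 160                          # SHA-1 output size
collision_cost = 2 ** (n // 2)   # birthday bound: ~2^80 evaluations
preimage_cost = 2 ** n           # exhaustive preimage search: ~2^160

assert collision_cost == 2 ** 80
assert preimage_cost == collision_cost ** 2
```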

http://www1.ietf.org/mail-archive/web/cfrg/current/msg00790.html



Secure Science issues preview of their upcoming block cipher

2005-03-25 Thread David Wagner
Jerrold Leichter writes:
They don't claim that:

   This cipher is ... provably just as secure as AES-128.

I can come up with a cipher provably just as secure as AES-128 very quickly

Actually, I think Adam is totally right.

Have you looked at their scheme?
  http://www.securescience.net/ciphers/csc2/
The way to come up with a cipher provably as secure as AES-128 is to use
AES-128 as part of your cipher -- but their scheme does not do anything
like that.

I am very skeptical about claims that they have a mathematical proof that
CS2-128 is as secure as AES-128.  I want to see the proof.



Simson Garfinkel analyses Skype - Open Society Institute

2005-01-28 Thread David Wagner
Adam Shostack [EMAIL PROTECTED] writes:
On Mon, Jan 10, 2005 at 08:33:41PM -0800, David Wagner wrote:
| In article [EMAIL PROTECTED] you write:
| Voice Over Internet Protocol and Skype Security
| Is Skype secure?
| 
| The answer appears to be, no one knows.  The report accurately reports
| that because the security mechanisms in Skype are secret, it is impossible
| to analyze meaningfully its security.  Most of the discussion of the
| potential risks and questions seems quite good to me.
| 
| But in one or two places the report says things like A conversation on
| Skype is vastly more private than a traditional analog or ISDN telephone
| and Skype is more secure than today's VoIP systems.  I don't see any
| basis for statements like this.  Unfortunately, I guess these sorts of
| statements have to be viewed as blind guesswork.  Those claims probably
| should have been omitted from the report, in my opinion -- there is
| really no evidence either way.  Fortunately, these statements are the
| exception and only appear in one or two places in the report.

The basis for these statements is what the other systems don't do.  My
Vonage VOIP phone has exactly zero security.  It uses the SIP-TLS
port, without encryption.  It doesn't encrypt anything.  So, it's easy
to be more secure than that.  So, while it may be bad cryptography, it
is still better than the alternatives.  Unfortunately.

I don't buy it.  How do you know that Skype is more secure, let alone
vastly more private?  Maybe Skype is just as insecure as those other
systems.  For all we know, maybe Skype is doing the moral equivalent
of encrypting with the all-zeros key, or using a repeating xor with a
many-time pad, or somesuch.  Without more information, we just don't know.

I'm sorry to pick nits, but I have to stand by my statement.  No matter
how atrociously bad other systems may be, I don't see any basis for saying
that Skype is any better.  It might be better, or it might be just as bad.
We don't know.



Simson Garfinkel analyses Skype - Open Society Institute

2005-01-11 Thread David Wagner
In article [EMAIL PROTECTED] you write:
Voice Over Internet Protocol and Skype Security
Simson L. Garfinkel
http://www.soros.org/initiatives/information/articles_publications/articles/security_20050107/OSI_Skype5.pdf

Is Skype secure?

The answer appears to be, no one knows.  The report accurately reports
that because the security mechanisms in Skype are secret, it is impossible
to analyze meaningfully its security.  Most of the discussion of the
potential risks and questions seems quite good to me.

But in one or two places the report says things like A conversation on
Skype is vastly more private than a traditional analog or ISDN telephone
and Skype is more secure than today's VoIP systems.  I don't see any
basis for statements like this.  Unfortunately, I guess these sorts of
statements have to be viewed as blind guesswork.  Those claims probably
should have been omitted from the report, in my opinion -- there is
really no evidence either way.  Fortunately, these statements are the
exception and only appear in one or two places in the report.

All in all, a useful analysis.  Thanks for posting that.



Entropy and PRNGs

2005-01-10 Thread David Wagner
John Denker writes:
Well, of course indeed!  That notion of entropy -- the entropy
in the adversary's frame of reference -- is precisely the
notion that is appropriate to any adversarial situation, as I
have consistently and clearly stated in my writings;
[...]
There is only one entropy that matters in an adversarial
situation.  The so-called unconditional entropy H(X) is
merely a wild overestimate of the only thing that matters.

Ok.  I see that you were already well aware of the point Ben Laurie
was making, and indeed it was obvious to you.  Great.

But I have seen people for who this was definitely not obvious, and
who failed to recognize the distinction between the two concepts or
the need to use conditional entropy until it was pointed out to them.
I guess Ben's paper is going to be useful for them, but not for you.

I imagine a smart person such as DAW should be able to come
up with five schemes in five minutes whereby UUID generation
can be delegated to virtually any machine that wants it.
MAC(eth0) /concat/ local counter will do for scheme #1.
[...]
Horsefeathers.  For generating  UUIDs,  _zero_ entropy is
sufficient, and no positive amount of entropy (unconditional
or otherwise) can be called necessary.

You're right.  I take it back.  I accept your point about UUIDs.
There are schemes that avoid the need for randomness (entropy).
Thank you.



Entropy and PRNGs

2005-01-09 Thread David Wagner
John Denker writes:
Ben Laurie wrote:
 http://www.apache-ssl.org/randomness.pdf

I just took a look at the first couple of pages.
IMHO it has much room for improvement.

I guess I have to take exception.  I disagree.  I think Ben Laurie's
paper is quite good.  I thought your criticisms missed some of the points
he was trying to make (these points are subtle, so this is completely
understandable).  Presumably his paper could be criticized as not clear
enough, since it seems it did not convey those points adequately, but
I don't think his paper is inaccurate.  I'll respond point-by-point.

*) For instance, on page 2 it says

 I can have a great source of entropy, but if an attacker knows
 all about that source, then it gives me no unpredictability at
 all.

That's absurd.  If it has no unpredictability, it has no entropy,
according to any reasonable definition of entropy, including the
somewhat loose definition on page 1.

Actually, I think Ben got it right.  Entropy depends on context.
The attacker might have extra context that allows him to narrow down
the possible values of the randomness samples.

For instance, imagine if we use packet inter-arrival times (measured down
to the nanosecond) as our randomness source.  From the point of view of
an outsider, there might be a lot of entropy in these times, perhaps tens
of bits.  However, from the point of view of an attacker who can eavesdrop
on our local area network, there might be very little or no entropy.

This is the difference between unconditional and conditional entropy that
Ben was trying to introduce.  In information-theoretic notation, H(X)
vs H(X|Y).  Let X = packet inter-arrival time, and Y = everything seen by
a local eavesdropper, and you will see that H(X|Y) can be much smaller
than H(X).  Indeed, we can have H(X|Y) = 0 even if H(X) is very large.
This is Ben's point, and it is a good one.
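
A toy numerical illustration of the gap (Python; the distributions are made
up for illustration, and H(X|Y) is computed for a single observed Y rather
than averaged over all Y):

```python
import math

def shannon_entropy(probs):
    """H = -sum p*log2(p) over an explicit probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# To an outsider, the inter-arrival time X looks uniform over 1024 values.
H_X = shannon_entropy([1 / 1024] * 1024)    # 10 bits

# To a LAN eavesdropper with observation Y, only 2 values remain plausible.
H_X_given_Y = shannon_entropy([0.5, 0.5])   # 1 bit

assert round(H_X, 6) == 10.0
assert round(H_X_given_Y, 6) == 1.0
```

The pool looks rich to the outsider (10 bits per sample) but nearly empty
to the eavesdropper (1 bit), and it is the eavesdropper's view that governs
security.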

*) Later on page 2 it says:

 Cryptographers want conditional entropy, but for UUIDs, and
 other applications, what is needed is unconditional entropy.

First of all, you can't throw around the term conditional
entropy without saying what it's conditioned on.

Conditioned on everything known to the attacker, of course.

Also importanty, for UUIDs no entropy is required at all.
A globally-accessible counter has no entropy whatsoever, and
suffices to solve the UUID problem

A counter is fine as long as there is only one machine in the universe
that will ever assign UUIDs.  However, if you want to do distributed
generation of UUIDs, then counters are insufficient because there is no
way to prevent overlap of two machines' counter spaces.

Perhaps what Ben should have said is that:
* Unconditional entropy is sufficient for UUIDs;
  conditional entropy is not needed.
* For centrally-assigned UUIDs, even unconditional entropy is unnecessary;
  a centrally-managed counter is fine.
* For distributed, unsynchronized assignment of UUIDs, unconditional
  entropy appears to be necessary and sufficient.
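
The three options can be sketched as follows (Python; `node_uuid` is an
illustrative helper, and `uuid.uuid4()` draws 122 random bits per RFC 4122):

```python
import itertools
import uuid

# 1. Centrally-assigned UUIDs: a single counter, zero entropy needed.
central = itertools.count()
assert next(central) != next(central)

# 2. Distributed, no randomness: a unique node id (e.g. a MAC address)
#    plus a local counter keeps different machines' spaces disjoint.
def node_uuid(node_id, counter):
    return (node_id, next(counter))

c1, c2 = itertools.count(), itertools.count()
assert node_uuid(0xAABBCCDDEEFF, c1) != node_uuid(0x112233445566, c2)

# 3. Distributed and unsynchronized: unconditional entropy, i.e. random UUIDs.
assert uuid.uuid4() != uuid.uuid4()
```

Option 2 fails only if node ids collide or counters are lost across reboots;
option 3 relies on nothing but the quality of the random source, which is
precisely why it needs unconditional entropy and nothing more.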

*) At the bottom of page 2 it says:

 Well, for many purposes, the system time has plenty of
 unconditional entropy, because it is usually different when
 used to generate different UUIDs.

No, the system time has no entropy whatsoever, not by any
reasonable definition of entropy.

Ok, this seems like a fair criticism.

*) On page 4, I find the name trueprng to be an oxymoron.
The P in PRNG stands for Pseudo, which for thousands of
years has meant false, i.e. the opposite of true.

Another reasonable point.  Perhaps truerng would be a better name, then?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


SSL/TLS passive sniffing

2005-01-04 Thread David Wagner
Florian Weimer [EMAIL PROTECTED] writes:
I'm slightly troubled by claims such as this one:
  http://lists.debian.org/debian-devel/2004/12/msg01950.html
   [which says: If you're going to use /dev/urandom then you might
as well just not encrypt the session at all.]

That claim is totally bogus, and I doubt whether that poster has any
clue about this subject.  As far as we know, Linux's /dev/urandom is just
fine, once it has been seeded properly.  Pay no attention to those who
don't know what they are talking about.

(That poster wants you to believe that, since /dev/urandom uses a
cryptographic-strength pseudorandom number generator rather than a
true entropy source, it is useless.  Don't believe it.  The poster is
confused and his claims are wrong.)
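For what it's worth, drawing session-key material from the kernel CSPRNG is a one-liner; a minimal sketch (Python shown for illustration):

```python
import os

# Once the kernel pool has been seeded, /dev/urandom output is the output
# of a cryptographic-strength PRNG -- entirely suitable for session keys.
session_key = os.urandom(16)   # 128-bit key from the kernel CSPRNG
iv = os.urandom(16)            # fresh randomness for each draw
```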



The Pointlessness of the MD5 attacks

2004-12-22 Thread David Wagner
Ben Laurie writes:
Dan Kaminsky's recent posting seems to have caused some excitement, but 
I really can't see why. In particular, the idea of having two different 
executables with the same checksum has attracted attention.

But the only way I can see to exploit this would be to have code that 
did different things based on the contents of some bitmap. My contention 
is that if the code is open, then it will be obvious that it does 
something bad if a bit is tweaked, and so will be suspicious, even if 
the something bad is not triggered in the version seen.

So, to exploit this successfully, you need code that cannot or will not 
be inspected. My contention is that any such code is untrusted anyway, 
so being able to change its behaviour on the basis of embedded bitmap 
changes is a parlour trick. You may as well have it ping a website to 
find out whether to misbehave.

I guess I disagree.  Imagine that the code has some block cipher with
some S-boxes hardcoded into it.  The code uses this block cipher to
decrypt an associated ciphertext and outputs (or takes some action based
on) the resulting message.  This is an example of code that could be
used to fool an MD5 checksum.  Moreover, I don't have a great deal of
confidence that even a careful code inspection would cause the code to
be considered suspicious.  Consequently, I don't have great confidence
that such an attack would be detected.

I know it is tempting to think that, look, Wang et al only found a pair
of random-looking messages that collide; they didn't claim to find a pair
of meaningful messages that collide; and maybe we can hope that there is
no way to come up with a pair of meaningful-looking colliding messages.
But I think that kind of hope is unfounded, and acting on hope is
asking for trouble.  I believe the only safe course now is to assume
that MD5's collision resistance is totally broken.  If Wang et al can
find meaningless-looking collisions today, it seems all too likely that
someone else may be able to find meaningful-looking collisions tomorrow.
Hoping that the latter will be hard even though the former is known to
be easy seems too optimistic for my tastes.
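As a practical aside, treating MD5's collision resistance as broken means integrity checks should move to an unbroken hash; a minimal sketch (the file contents here are illustrative):

```python
import hashlib

# MD5 collisions are findable; SHA-256 has no known collision attacks.
# Migrating an integrity check is a one-line change of hash function.
data = b"contents of some distributed binary"

md5_digest = hashlib.md5(data).hexdigest()        # 128-bit digest, broken
sha256_digest = hashlib.sha256(data).hexdigest()  # 256-bit digest
```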



The Pointlessness of the MD5 attacks

2004-12-22 Thread David Wagner
Ben Laurie writes:
Indeed, but what's the point? If you control the binary, just distribute 
the malicious version in the first place.

Where this argument breaks down is that someone might have partial
but not total control over the binary.  This partial control might
not be enough for them to distribute a malicious version straightforwardly,
but just enough to exploit an MD5 collision.  It is hard to be confident
that such an attack scenario is impossible.

To give one contrived example, imagine that the Windows 2010 binary
comes with an image file that is displayed as part of the splash start
screen.  Imagine that the graphic designer is allowed to supply that
image, but the graphic designer has no other authorized access to the
source or binary of Windows.  Now a disgruntled graphic designer might
be able to arrange to find an MD5 collision MD5(img1) = MD5(img2) so that
img1 looks like an entirely reasonable Windows splash screen, but img2
contains some scrawled epithet ("Tired of Windows crashing all the time?
Try Linux!").  Or, even more contrived, imagine that img1.jpg looks
like a completely normal JPG file, but img2.jpg exploits some buffer
overrun in the startup screen's JPG decoder to overwrite the program's
image with some other malicious code.

Sure, these scenarios are contrived and unlikely.  But how do you
know that there is not some other (possibly more complex but less
contrived) scenario that you would consider more troubling?

People seem to be having a hard time grasping what I'm trying to say, so 
perhaps I should phrase it as a challenge: find me a scenario where you 
can use an MD5 collision to mount an attack in which I could not mount 
an equally effective attack without using an MD5 collision.

I've got a better challenge: show me a convincing argument that no such
scenario exists.

What I'm trying to get at is that you've got the burden of proof
backwards.  Implicit in your challenge is the idea that we should
keep trusting MD5 until someone finds a convincing argument that it is
insecure in practice.  My argument is that this is much too trusting.
I believe that, given the theoretical results on MD5, we should not have
any trust whatsoever in the security of MD5 as a collision-resistant
hash until someone is able to offer a convincing argument that MD5 is
secure enough in practice despite its known weaknesses.

I could try to answer your challenge.  I might even be able to devise
some solution to your challenge that would satisfy you.  For instance,
maybe the image file attack above qualifies as a solution.  Or maybe
the S-box table attack in my previous email is good enough.  But I don't
really want to argue about whether I have found a valid answer to your
challenge.  I shouldn't be required to meet that burden -- the burden
of proof should be on whoever wants to believe that MD5 is secure.

Why should the burden be on MD5 defenders?  Not just because I said so.
Part of the reason is that there are just too many complex scenarios
to consider.  Suppose I conceded that I couldn't find a scenario you'd
accept.  What would that prove?  Very little.  Even if I can't think of
a suitable scenario for you off the top of my head, that doesn't mean
that with more thought I wouldn't find one.  Even if I spent a month
trying and still couldn't find one, that doesn't mean that others can't.

My experience is that if it is possible to find a theoretical attack with
one day's work, it is often possible to extend this to a more practical
attack with, say, one week's work.  Bruce Schneier puts this concisely:
"Attacks always get better."  Trusting in MD5's collision-resistance
amounts to assuming that cryptanalysts of MD5 will get this far, but
no farther, and that seems like a pretty questionable assumption to me.



SSL/TLS passive sniffing

2004-11-30 Thread David Wagner
Ian Grigg writes:
I note that distinction well!  Certificate-based systems
are totally vulnerable to a passive sniffing attack if the
attacker can get the key.  Whereas Diffie Hellman is not,
on the face of it.  Very curious...

No, that is not accurate.  Diffie-Hellman is also insecure if the private
key is revealed to the adversary.  The private key for Diffie-Hellman
is the private exponent.  If you learn the private exponent that one
endpoint used for a given connection, and if you have intercepted that
connection, you can derive the session key and decrypt the intercepted
traffic.

Perhaps the distinction you had in mind is forward secrecy.  If you use
a different private key for every connection, then compromise of one
connection's private key won't affect other connections.  This is
true whether you use RSA or Diffie-Hellman.  The main difference is
that in Diffie-Hellman, key generation is cheap and easy (just an
exponentiation), while in RSA key generation is more expensive.
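The cheap per-connection key generation that makes forward secrecy practical with Diffie-Hellman can be sketched with a toy group (the prime here is demo-sized; real deployments use standardized 2048-bit groups):

```python
import secrets

# Ephemeral Diffie-Hellman: fresh keys per connection, which is what
# limits the damage when one connection's private exponent leaks.
p = 2_147_483_647   # 2^31 - 1, a Mersenne prime; far too small for real use
g = 5

def new_keypair():
    # Key generation is just one modular exponentiation -- cheap enough
    # to repeat for every connection.
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

a_priv, a_pub = new_keypair()   # one side of the connection
b_priv, b_pub = new_keypair()   # the other side

shared_a = pow(b_pub, a_priv, p)   # both sides derive g^(a*b) mod p
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
```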



Seth Schoen's Hard to Verify Signatures

2004-09-08 Thread David Wagner
Hal Finney wrote:
[...] hard to verify signature [...]
Choose the number of modular squarings, t, that you want the verifier
to have to perform.  Suppose you choose t = 1 billion.  Now you will
sign your value using an RSA key whose exponent e = 2^t + 1.
The way you sign, even using such a large e, is to compute
phi = (p-1)*(q-1) and to compute e' = e mod phi, which can be done using
about 30 squarings of 2 mod phi.  You then compute the secret exponent
d as the multiplicative inverse mod phi of e', in the standard way that
is done for RSA keys.  Using this method you can sign about as quickly
as for regular RSA keys.

However, the verifier has a much harder problem. [...] To verify [...]
will require t = 1 billion modular squaring operations.
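The signer's shortcut in the quoted scheme, sketched with hypothetical toy primes (real keys are far larger, and t would be on the order of 10^9):

```python
# e = 2^t + 1 is astronomically large, but the signer only needs
# e mod phi, which pow(2, t, phi) delivers in about log2(t) modular
# squarings (~30 squarings of 2 mod phi for t = 1 billion).
p, q = 1_000_003, 1_000_033   # hypothetical small primes, demo only
phi = (p - 1) * (q - 1)

t = 10_000                                # stand-in for t = 1 billion
e_mod_phi = (pow(2, t, phi) + 1) % phi    # ~14 squarings of 2 mod phi
assert e_mod_phi == (2**t + 1) % phi      # matches the slow direct computation
```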

This signature scheme has an interesting property: once you've done
the lengthy computation, you can quickly prove to someone else that
the signature is valid.  In particular, there is a short and easily
checked proof that the signature is valid, based on listing x, x^2, x^4, x^16,
x^256, x^65536, ..., x^e and giving a zero knowledge proof that your
list is correct.  The details can be found here (see esp. Section 3):
  D. Boneh, M. Naor, Timed Commitments, CRYPTO 2000.
  http://crypto.stanford.edu/~dabo/abstracts/timedcommit.html
The signer can also create a proof like this if he wants.  Note that
Boneh and Naor consider a somewhat similar but not equivalent notion of
timed signatures, which may also be of interest.



More problems with hash functions

2004-09-01 Thread David Wagner
Jerrold Leichter  wrote:
Joux's attack says:  Find single block messages M1 and M1' that collide on
the blank initial state.  Now find messages M2 and M2' that collide with
the (common) final state from M1 and M1'.  Then you have four 2-block
collisions for the cost of two:  M1|M2, M1'|M2, and so on.

But even a simple XOR breaks this.  M1 and M1' have the same hash H, but the
state being passed is now very different:  H,M1 in one case, H,M1' in the
other.  So M1|M2 and M1'|M2 are completely different:  Both started the second
step with the compression function in state H, but the first compressed
M1 XOR M2, and the second M1' XOR M2.

Doesn't work, alas.  A trivial adjustment to Joux's attack also defeats
your proposal.

Suppose M1 and M1' collide on the blank initial state.  Let M2 be
arbitrary.  Then M1|M2 and M1'|M2 collide, and the final state after
processing them is H,M2 in both cases.  Now find messages M3 and
M3' that collide if processed starting from the common state H,M2.
Then you have four 3-block collisions for the cost of two: M1|M2|M3,
M1'|M2|M3, etc.

With this small tweak, Joux's attack will go through.  So, your scheme
offers little or no additional resistance to Joux's attack.
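Joux's observation is easy to reproduce on a toy Merkle-Damgård hash; a sketch with a 16-bit truncated-SHA-256 compression function (demo-sized, so each birthday search takes only a few hundred tries):

```python
import hashlib

def comp(state: bytes, block: bytes) -> bytes:
    # Toy 16-bit compression function (truncated SHA-256); demo only.
    return hashlib.sha256(state + block).digest()[:2]

def find_collision(state: bytes):
    # Birthday-search two distinct blocks that collide under comp(state, .).
    seen, i = {}, 0
    while True:
        blk = i.to_bytes(4, "big")
        d = comp(state, blk)
        if d in seen:
            return seen[d], blk
        seen[d] = blk
        i += 1

IV = b"\x00\x00"
b1, b1p = find_collision(IV)   # collide on the initial state
v = comp(IV, b1)               # common chaining value after either block
b2, b2p = find_collision(v)    # collide on that common state

def md(blocks, state=IV):
    for b in blocks:
        state = comp(state, b)
    return state

# Four 2-block collisions for the price of two compression collisions.
digests = {md(m) for m in ([b1, b2], [b1p, b2], [b1, b2p], [b1p, b2p])}
assert len(digests) == 1
```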



?splints for broken hash functions

2004-09-01 Thread David Wagner
Hal Finney writes:
[John Denker proposes:] the Bi are the input blocks:
  (IV) -> B1 -> B2 -> B3 -> ... -> Bk -> H1
  (IV) -> B2 -> B3 -> ... -> Bk -> B1 -> H2
then we combine H1 and H2 nonlinearly.

This does not add any strength against Joux's attack.  One can find
collisions for this in 80*2^80 time with Joux's attack.

First, generate 2^80 collisions for the top line.  Find B1,B1* that
produce a collision, i.e., C(IV,B1)=C(IV,B1*)=V2.  Then, find B2,B2*
that produce a collision, i.e., C(V2,B2)=C(V2,B2*)=V3.  Continue to
find B3,B3*, ..., Bk,Bk*.  Note that we can combine this in any way
we like (e.g., B1, B2*, B3*, B4, .., Bk) to get 2^80 different messages
that all produce the same output in the top line (same H1).

Next, look at the bottom line.  For each of the 2^80 ways to combine
the above blocks, compute what output you get in the bottom line.
By the birthday paradox, you will find some pair that produce a
collision in the bottom line (same H2).  But that pair also produces
a collision in the top line (since all pairs collide in the top line),
so you have a collision for the whole hash (same H1,H2).

[...] how about this simpler construction?
  (IV1) -> B1 -> B2 -> B3 -> ... -> Bk -> H1
  (IV2) -> B1 -> B2 -> B3 -> ... -> Bk -> H2
and again combine H1 and H2.

The same attack applies.  This construction is not secure against
Joux's attack, either.
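The attack on the simpler two-IV construction can likewise be run end-to-end at toy scale: build a Joux multicollision in the top line, then birthday-search the bottom line among those messages (a sketch; truncated SHA-256 stands in for the compression function, and 16-bit state replaces the 2^80 work):

```python
import hashlib
from itertools import product

def comp(state: bytes, block: bytes) -> bytes:
    # Toy 16-bit compression function (truncated SHA-256); demo only.
    return hashlib.sha256(state + block).digest()[:2]

def md(blocks, state):
    for b in blocks:
        state = comp(state, b)
    return state

def find_collision(state: bytes):
    seen, i = {}, 0
    while True:
        blk = i.to_bytes(4, "big")
        d = comp(state, blk)
        if d in seen:
            return seen[d], blk
        seen[d] = blk
        i += 1

k = 16
IV1, IV2 = b"\x00\x00", b"\x01\x01"

# Step 1: a 2^k multicollision for the top line (Joux's trick).
pairs, state = [], IV1
for _ in range(k):
    b, bp = find_collision(state)
    pairs.append((b, bp))
    state = comp(state, b)   # comp(state, b) == comp(state, bp)

# Step 2: all 2^k messages share H1; birthday-search the bottom line.
seen = {}
for bits in product((0, 1), repeat=k):
    msg = tuple(pairs[i][bit] for i, bit in enumerate(bits))
    h2 = md(msg, IV2)
    if h2 in seen and seen[h2] != msg:
        m1, m2 = seen[h2], msg
        break
    seen[h2] = msg

assert m1 != m2
assert md(m1, IV1) == md(m2, IV1)   # collide in the top line
assert md(m1, IV2) == md(m2, IV2)   # and in the bottom line
```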



Re: DRM of the mirror universe

2004-04-14 Thread David Wagner
Jani Nurminen  wrote:
I had this idea about reversing the roles of the actors in a typical DRM
system, and thinking about where it might lead.
[...]
This kind of application of DRM would probably guarantee privacy of the
personal data of each individual person, while at the same time allow
companies access to that data with the consent of the user.

This kind of idea has been discussed before.  Personally, I'm not
convinced we're likely to see this take off any time soon.  There are
some technological barriers, but a bigger one may well be incentives.
I'd argue the true source of many privacy problems may stem more from the
imbalance in bargaining power between the consumer and the corporation
(the corporation can often more or less dictate terms -- or, at least,
there is not much room for personalized negotiation and bargaining).
Fixing the power imbalance may well be a precondition to deploying
technology-based privacy defenses (be it DRM, or anything else).



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-03 Thread David Wagner
Jerrold Leichter  wrote:
All of this is fine as long as there is a one-to-one association between
machines and owners of those machines.  Consider the example I gave
earlier:  A shared machine containing the standard distribution of the
trusted computing software.  All the members of the group that maintain the
software will want to have the machine attest, to them, that it is properly
configured and operating as intended.

I think you may be giving up too quickly.  It looks to me like
this situation can be handled by owner-directed attestation (e.g.,
Owner Override, or Owner Gets Key).  Do you agree?

To see why, let's go back to the beginning, and look at the threat
model.  If multiple people are doing shared development on a central
machine, that machine must have an owner -- let's call him Linus.  Now
ask yourself: Do those developers trust Linus?

If the developers don't trust Linus, they're screwed.  It doesn't matter how
much attestation you throw at the problem; Linus can always violate their
security model.  As always, you've got to trust root (the system
administrator); nothing new here.

Consequently, it seems to me we only need to consider a threat model
where the developers trust Linus.  (Linus need not be infallible, but the
developers should believe Linus won't intentionally try to violate their
security goals.)  In this case, owner-directed attestation suffices.
Do you see why?  Linus's machine will produce an attestation, signed
by Linus's key, of what software is running.  Since the developers trust
Linus, they can then verify this attestation.  Note that the developers
don't need to trust each other, but they do need to trust the owner/admin
of the shared box.  So, it seems to me we can get by without third-party
attestation.

This scenario sounds pretty typical to me.  Most machines have a single
owner.  Most machines have a system administrator (who must be trusted).
I don't think I'm making unrealistic assumptions.

You're trying to make the argument that feature X (here, remote attestation for
multiple mutually-suspicious parties) has no significant uses.  Historically,
arguments like this are losers.

Yes, this is a fair point.  I suppose I would say I'm arguing that
feature X (third-party attestation) seems to have few significant uses.
It has some uses, but it looks like they are in the minority; for the
most part, it seems that feature X is unnecessary.  At the same time,
many people are worried that feature X comes with significant costs.

At least, this is how it looks to me.  Maybe I've got something wrong.
If these two points are both accurate, this is an interesting observation.
If they're inaccurate, I'd be very interested to hear where they fail.



Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Jerrold Leichter wrote:
| *Any* secure computing kernel that can do
| the kinds of things we want out of secure computing kernels, can also
| do the kinds of things we *don't* want out of secure computing kernels.

David Wagner wrote:
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.

Jerrold Leichter wrote:
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

Good.  I'm glad we agree that one can build a secure kernel without
remote attestation; that's progress.  But I dispute your claim that remote
attestation is critical to securing our machines.  As far as I can see,
remote attestation seems (with some narrow exceptions) pretty close to
worthless for the most common security problems that we face today.

Your argument is premised on the assumption that it is critical to defend
against attacks where an adversary physically tampers with your machine.
But that premise is wrong.

Quick quiz: What's the dominant threat to the security of our computers?
It's not attacks on the hardware, that's for sure!  Hardware attacks
aren't even in the top ten.  Rather, our main problems are with insecure
software: buffer overruns, configuration errors, you name it.

When's the last time someone mounted a black bag operation against
your computer?  Now, when's the last time a worm attacked your computer?
You got it-- physical attacks are a pretty minimal threat for most users.

So, if software insecurity is the primary problem facing us, how does
remote attestation help with software insecurity?  Answer: It doesn't, not
that I can see, not one bit.  Sure, maybe you can check what software is
running on your computer, but that doesn't tell you whether the software
is any good.  You can check whether you're getting what you asked for,
but you have no way to tell whether what you asked for is any good.

Let me put it another way.  Take a buggy, insecure application, riddled
with buffer overrun vulnerabilities, and add remote attestation.  What do
you get?  Answer: A buggy, insecure application, riddled with buffer
overrun vulnerabilities.  In other words, remote attestation doesn't
help if your trusted software is untrustworthy -- and that's precisely
the situation we're in today.  Remote attestation just doesn't help with
the dominant threat facing us right now.

For the typical computer user, the problems that remote attestation solves
are in the noise compared to the real problems of computer security
(e.g., remotely exploitable buffer overruns in applications).  Now,
sure, remote attestation is extremely valuable for a few applications,
such as digital rights management.  But for typical users?  For most
computer users, rather than providing an order of magnitude improvement
in security, it seems to me that remote attestation will be an epsilon
improvement, at best.



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-29 Thread David Wagner
Rick Wash  wrote:
There are many legitimate uses of remote attestation that I would like to
see.  For example, as a sysadmin, I'd love to be able to verify that my
servers are running the appropriate software before I trust them to access
my files for me.  Remote attestation is a good technical way of doing that.

This is a good example, because it brings out that there are really
two different variants of remote attestation.  Up to now, I've been
lumping them together, but I shouldn't have been.  In particular, I'm
thinking of owner-directed remote attestation vs. third-party-directed
remote attestation.  The difference is who wants to receive assurance of
what software is running on a computer; the former mechanism allows one to
convince the owner of that computer, while the latter mechanism allows one
to convince third parties.

If I understand correctly, TCPA and Palladium provide third-party-directed
remote attestation.  Intel, or Dell, or someone like that will generate
a keypair, embed it inside the trusted hardware that comes with your
computer, and you (the owner) are never allowed to learn the corresponding
private key.  This allows your computer to prove to Intel, or Dell, or
whoever, what software is running on your machine.  You can't lie to them.

In owner-directed remote attestation, you (the owner) would generate the
keypair and you (the owner) would learn the private key -- not Intel, or
Dell, or whoever.  This allows your computer to prove to you what software
is running on your machine.  However, you can't use this to convince Intel,
or Dell, or anyone else, of what software is running on your machine, unless they
know you and trust you.

I -- and others -- have been arguing that it is remote attestation that
is the key, from a policy point of view; it is remote attestation that
enables applications like DRM, software lock-in, and the like.  But this
is not quite right.  Rather, it is the presence of third-party-directed remote
attestation that enables these alleged harms.

Owner-directed remote attestation does not enable these harms.  If I know
the private key used for attestation on my own machine, then remote attestation
is not very useful to (say) Virgin Records for DRM purposes, because I could
always lie to Virgin about what software is running on my machine.  Likewise,
owner-directed remote attestation doesn't come with the risk of software
lock-in that third-party-directed remote attestation creates.

So it seems that third-party-directed remote attestation is really where
the controversy is.  Owner-directed remote attestation doesn't have these
policy tradeoffs.

Finally, I'll come back to the topic you raised by noting that your
example application is one that could be supported with owner-directed
remote attestation.  You don't need third-party-directed remote
attestation to support your desired use of remote attestation.  So, TCPA
or Palladium could easily fall back to only owner-directed attestation
(not third-party-attestation), and you'd still be able to verify the
software running on your own servers without incurring new risks of DRM,
software lock-in, or whatever.

I should mention that Seth Schoen's paper on Trusted Computing anticipates
many of these points and is well worth reading.  His notion of owner
override basically converts third-party-directed attestation into
owner-directed attestation, and thereby avoids the policy risks that so
many have brought up.  If you haven't already read his paper, I highly
recommend it.  http://www.eff.org/Infra/trusted_computing/20031001_tc.php



Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Ed Reed wrote:
There are many business uses for such things, like checking to see
if locked down kiosk computers have been modified (either hardware
or software),

I'm a bit puzzled why you'd settle for detecting changes when you
can prevent them.  Any change you can detect, you can also prevent
before it even happens.  So the problem statement sounds a little
contrived to me -- but I don't really know anything about kiosks,
so maybe I'm missing something.

In any case, this is an example of an application where owner-directed
remote attestation suffices, so one could support this application
without enabling any of the alleged harms.  (See my previous email.)
In other words, this application is consistent with an Owner Override.

verifying that users have not exercised their god-given
right to install spy-ware and viruses (since they're running with
administrative privileges, aren't they?),

It sounds like the threat model is that the sysadmins don't trust the
users of the machine.  So why are the sysadmins giving users administrator
or root access to the machine?  It sounds to me like the real problem
here is a broken security architecture that doesn't match up to the
security threat, and remote attestation is a hacked-up patch that's not
going to solve the underlying problems.  But that's just my reaction,
without knowing more.

In any case, this application is also consistent with owner-directed
remote attestation or an Owner Override.

and satisfying a consumer
that the server they're connected to is (or isn't) running software
that has adequate security domain protections to protect the user's
data (perhaps backup files) the user entrusts to the server.

If I don't trust the administrators of that machine to protect sensitive
data appropriately, why would I send sensitive data to them?  I'm not
sure I understand the threat model or the problem statement.

But again, this seems to be another example application that's compatible
with owner-directed remote attestation or an Owner Override.


Summary: None of these applications require full-strength
(third-party-directed) remote attestation.  It seems that an Owner
Override would not disturb these applications.



Re: example: secure computing kernel needed

2003-12-23 Thread David Wagner
William Arbaugh  wrote:
David Wagner writes:
 As for remote attestation, it's true that it does not directly let a remote
 party control your computer.  I never claimed that.  Rather, it enables
 remote parties to exert control over your computer in a way that is
 not possible without remote attestation.  The mechanism is different,
 but the end result is similar.

If that is the case, then strong authentication provides the same 
degree of control over your computer. With remote attestation, the 
distant end determines if they wish to communicate with you based on 
the fingerprint of your configuration. With strong authentication, the 
distant end determines if they wish to communicate with you based on 
your identity.

I must confess I'm puzzled why you consider strong authentication
the same as remote attestation for the purposes of this analysis.

It seems to me that your note already identifies one key difference:
remote attestation allows the remote computer to determine if they wish
to speak with my machine based on the software running on my machine,
while strong authentication does not allow this.

As a result, remote attestation enables some applications that strong
authentication does not.  For instance, remote attestation enables DRM,
software lock-in, and so on; strong authentication does not.  If you
believe that DRM, software lock-in, and similar effects are undesirable,
then the differences between remote attestation and strong authentication
are probably going to be important to you.

So it seems to me that the difference between authenticating software
configurations vs. authenticating identity is substantial; it affects the
potential impact of the technology.  Do you agree?  Did I miss something?
Did I mis-interpret your remarks?



P.S. As a second-order effect, there seems to be an additional difference
between remote attestation (authentication of configurations) and
strong authentication (authentication of identity).  Remote attestation
provides the ability for negative attestation of a configuration:
for instance, imagine a server which verifies not only that I do have
RealAudio software installed, but also that I do not have any Microsoft
Audio software installed.  In contrast, strong authentication does
not allow negative attestation of identity: nothing prevents me from
sharing my crypto keys with my best friend, for instance.



Re: example: secure computing kernel needed

2003-12-22 Thread David Wagner
William Arbaugh  wrote:
On Dec 16, 2003, at 5:14 PM, David Wagner wrote:
 Jerrold Leichter  wrote:
 We've met the enemy, and he is us.  *Any* secure computing kernel 
 that can do
 the kinds of things we want out of secure computing kernels, can also 
 do the
 kinds of things we *don't* want out of secure computing kernels.

 I don't understand why you say that.  You can build perfectly good
 secure computing kernels that don't contain any support for remote
 attestation.  It's all about who has control, isn't it?

There is no control of your system with remote attestation. Remote 
attestation simply allows the distant end of a communication to 
determine if your configuration is acceptable for them to communicate 
with you.

But you missed my main point.  Leichter claims that any secure kernel is
inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
My main point is that this is simply not so.

There are two very different pieces here: that of a secure kernel, and
that of remote attestation.  They are separable.  TCPA and Palladium
contain both pieces, but that's just an accident; one can easily imagine
a Palladium-- that doesn't contain any support for remote attestation
whatsoever.  Whatever you think of remote attestation, it is separable
from the goal of a secure kernel.

This means that we can have a secure kernel without all the harms.
It's not hard to build a secure kernel that doesn't provide any form of
remote attestation, and almost all of the alleged harms would go away if
you remove remote attestation.  In short, you *can* have a secure kernel
without having all the kinds of things we don't want.  Leichter's claim
is wrong.

This is an important point.  It seems that some TCPA and Palladium
advocates would like to tie together security with remote attestation; it
appears they would like you to believe you can't have a secure computer
without also enabling DRM, lock-in, and the other harms.  But that's
simply wrong.  We can have a secure computer without enabling all the
alleged harms.  If we don't like the effects of TCPA and Palladium,
there's no reason we need to accept them.  We can have perfectly good
security without TCPA or Palladium.

As for remote attestation, it's true that it does not directly let a remote
party control your computer.  I never claimed that.  Rather, it enables
remote parties to exert control over your computer in a way that is
not possible without remote attestation.  The mechanism is different,
but the end result is similar.



Re: example: secure computing kernel needed

2003-12-18 Thread David Wagner
Jerrold Leichter  wrote:
We've met the enemy, and he is us.  *Any* secure computing kernel that can do
the kinds of things we want out of secure computing kernels, can also do the
kinds of things we *don't* want out of secure computing kernels.

I don't understand why you say that.  You can build perfectly good
secure computing kernels that don't contain any support for remote
attestation.  It's all about who has control, isn't it?



Re: safety of Pohlig-Hellman with a common modulus?

2003-12-07 Thread David Wagner
Peter Fairbrother  wrote:
Nope. In P-H there is no g. A ciphertext is M^k mod p. An attacker won't
know k, and usually won't know M, but see below. I don't know what the
problem is called, but it isn't DLP. Anyone?

Ok, I was being slightly loose.  To be more precise, the security of
Pohlig-Hellman is based on the Decision Diffie-Hellman (DDH) problem,
I believe.  But the best known attack on DDH (when you're working in
a prime-order subgroup) is to compute discrete logs.

Not usually.  In general index calculus attacks don't work on P-H, [...]

Sure they do.  If I have a known plaintext pair (M,C), where
C = M^k (mod p), then with two discrete log computations I can
compute k, since k = dlog_g(C)/dlog_g(M) (mod p-1).  This works for
any generator g, so I can do the precomputation for any g I like.
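To make the two-discrete-log computation concrete, here is a toy Python
sketch.  It uses a tiny prime and brute-force logs in place of index
calculus, and the known plaintext is chosen so that dlog_g(M) is
invertible mod p-1 (in general one handles the gcd case separately);
the numbers are illustrative only.

```python
# Known-plaintext attack on Pohlig-Hellman: given (M, C) with
# C = M^k mod p, recover k = dlog_g(C) * dlog_g(M)^-1 mod (p-1).
p, g = 23, 5          # 5 generates the full multiplicative group mod 23
k = 7                 # secret Pohlig-Hellman exponent, gcd(k, p-1) = 1
M = 10                # known plaintext (dlog_5(10) = 3, coprime to 22)
C = pow(M, k, p)      # known ciphertext, C = M^k mod p

def dlog(y, g, p):
    """Brute-force discrete log: smallest x > 0 with g^x = y (mod p)."""
    x, acc = 0, 1
    while acc != y:
        acc = acc * g % p
        x += 1
    return x

a = dlog(M, g, p)                           # M = g^a
b = dlog(C, g, p)                           # C = g^b = g^(a*k)
recovered_k = b * pow(a, -1, p - 1) % (p - 1)
print(recovered_k)                          # 7
```

Both logs are taken to the same arbitrary base g, which is why the
precomputation can be done for any generator one likes.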

When using P-H I usually pre-encrypt data in any old symmetric cipher with a
random IV and any old key, to avoid known plaintext attacks.

Ok, that sounds like it should work.  To break the composed scheme,
one would need to break both P-H and the other symmetric cipher.



Re: Are there...one-way encryption algorithms

2003-11-21 Thread David Wagner
Anton Stiglic wrote:
David Wagner [EMAIL PROTECTED] wrote:
 martin f krafft  wrote:
   - Bob encrypts A(M) with key B and sends it to Alice
   - Alice decrypts B(A(M)) with key A, leaving B(M), sends it to Bob
   - Bob decrypts B(M) with key B leaving him with M.
 
 Are there algorithms for this already? What's the scheme called?

 It's called Pohlig-Hellman.

If I'm not mistaken you are wrong.

You're right.  The above protocol is essentially Shamir's 3-pass
protocol, not Pohlig-Hellman.

Pohlig-Hellman is the encryption scheme A(M) = M^A mod p.  If you
instantiate Krafft's proposal with the Pohlig-Hellman encryption scheme,
you get a working (and secure) instance of Shamir's 3-pass protocol.

Thank you for correcting my error!



Re: A-B-a-b encryption

2003-11-17 Thread David Wagner
martin f krafft  wrote:
it came up lately in a discussion, and I couldn't put a name to it:
a means to use symmetric crypto without exchanging keys:

  - Alice encrypts M with key A and sends it to Bob
  - Bob encrypts A(M) with key B and sends it to Alice
  - Alice decrypts B(A(M)) with key A, leaving B(M), sends it to Bob
  - Bob decrypts B(M) with key B leaving him with M.

Are there algorithms for this already? What's the scheme called?

It's called Pohlig-Hellman.  It only works if your encryption scheme
is commutative.  Most symmetric-key encryption schemes aren't commutative,
but one scheme that does work is A(M) = M^A mod p.  One scheme that doesn't
work is A(M) = M xor A; XOR is indeed commutative, but it becomes insecure
when used in the above protocol.
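As a hedged illustration, here is a minimal Python sketch of the
four-message protocol above, instantiated with the commutative cipher
A(M) = M^a mod p; decryption is exponentiation by a^-1 mod (p-1).  The
prime and the key generation are illustrative, not a vetted parameter
choice.

```python
import math
import secrets

# Shamir-style A-B-a-b exchange with the commutative cipher M^e mod p.
p = 2**127 - 1                    # a Mersenne prime, for illustration

def keygen():
    """Pick an exponent invertible mod p-1, plus its inverse."""
    while True:
        e = secrets.randbelow(p - 1)
        if math.gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)

a, a_inv = keygen()   # Alice's key pair (encrypt, decrypt exponents)
b, b_inv = keygen()   # Bob's key pair

M = 123456789
c1 = pow(M,  a, p)        # Alice -> Bob:   A(M)
c2 = pow(c1, b, p)        # Bob -> Alice:   B(A(M))
c3 = pow(c2, a_inv, p)    # Alice -> Bob:   B(M), Alice's layer removed
out = pow(c3, b_inv, p)   # Bob recovers M
print(out == M)           # True
```

The protocol works precisely because exponents commute mod p-1, which
is the commutativity property the text demands.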

Anyway, the Pohlig-Hellman protocol is no better (and probably no worse)
than a straight Diffie-Hellman, so there seems to be little reason to adopt
it.  Just stick to standard Diffie-Hellman.



Re: Are there...

2003-11-16 Thread David Wagner
Enzo Michelangeli wrote:
...one-way encryption algorithms guaranteed to be injective (i.e.,
deterministically collision-free)?

Every encryption algorithm is injective, otherwise decryption
would be ambiguous.  In other words, if x and x' are two different
plaintexts, then E_k(x) != E_k(x').

I'm looking for algorithms where every piece of code and data is public,
thus excluding conventional enciphering with a secret key.

Ok, in that case, use a public-key encryption algorithm.  Same deal.

And, if you want to ensure that E_k(x) != E_k'(x') whenever
(k,x) != (k',x'), define E_k(x) = (k, EE_k(x)) where EE is some
public-key encryption algorithm; EE_k(x) denotes the result of encrypting
plaintext x under public key k.  It can't hurt security to include the
public key in the ciphertext.



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-23 Thread David Wagner
Thor Lancelot Simon  wrote:
Can you please posit an *exact* situation in which a man-in-the-middle
could steal the client's credit card number even in the presence of a
valid server certificate?

Sure.  If I can assume you're talking about SSL/https as it is
typically used in ecommerce today, that's easy.  Subvert DNS to
redirect the user to a site under controller of the attacker.
Then it doesn't matter whether the legitimate site has a valid server
cert or not.  Is this the kind of scenario you were looking for?

http://lists.insecure.org/lists/bugtraq/1999/Nov/0202.html

Can you please explain *exactly* how using a
client-side certificate rather than some other form of client authentication
would prevent this?

Gonna make me work harder on this one, eh?  Well, ok, I'll give it a try.
Here's one possible way that you might be able to use client certs to
help (assuming client certs were usable and well-supported by browsers).
Beware: I'm making this one up as I go, so it's entirely possible there
are security flaws with my proposal; I'd welcome feedback.

When I establish a credit card with Visa, I generate a new client
certificate for this purpose and register it with www.visa.com.  When I
want to buy a fancy hat from www.amazon.com, Amazon re-directs me to
  https://ssl.visa.com/buy.cgi?payto=amazon&amount=$29.99&item=hat
My web browser opens a SSL channel to Visa's web server, authenticating my
presence using my client cert.  Visa presents me a description of the item
Amazon claims I want to buy, and asks me to confirm the request over that
authenticated channel.  If I confirm it, Visa forwards payment to Amazon
and debits my account.  Visa can tell whose account to debit by looking
at the mapping between my client certs and account numbers.  If Amazon
wants to coordinate, it can establish a separate secure channel with Visa.
(Key management for vendors is probably easier than for customers.)

I can't see any MITM attacks against this protocol.  The crucial point is
that Visa will only initiate payment if it receives confirmation from me,
over a channel where Visa has authenticated that I'm on the other end,
to do so.  A masquerading server doesn't learn any secrets that it can
use to authorize bogus transactions.

Does this work?



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread David Wagner
Tom Otvos wrote:
As far as I can glean, the general consensus in WYTM is that MITM
attacks are very low (read:
inconsequential) probability.  Is this *really* true?

I'm not aware of any such consensus.
I suspect you'd get plenty of debate on this point.
But in any case, widespread exploitation of a vulnerability
shouldn't be a prerequisite to deploying countermeasures.

If we see a plausible future threat and the stakes are high enough,
it is often prudent to deploy defenses in advance against the
possibility that attackers will exploit it.  If we wait until the
attacks are widespread, it may be
too late to stop them.  It often takes years (or possibly a decade or more:
witness IPSec) to design and widely deploy effective countermeasures.

It's hard to predict with confidence which of the many vulnerabilities
will be popular among attackers five years from now, and I've been very wrong,
in both directions, many times.  In recognition of our own fallibility at
predicting the future, the conclusion I draw is that it is a good idea
to be conservative.



Re: Quantum cryptography finally commercialized?

2003-09-17 Thread David Wagner
R. A. Hettinga wrote:
http://www.net-security.org/news.php?id=3583
 
Quantum cryptography finally commercialized?
Posted by Mirko Zorz - LogError
Tuesday, 16 September 2003, 1:23 PM CET

For the onlookers, this article is misinformed and should
not be relied upon for evaluating quantum cryptography.

The rest of the article contains statements like the following:

MagiQ's Navajo creates encryption keys that change up to 1,000 times a
second to prevent eavesdroppers from deciphering the transmitted data
packets.  [...]  While AES is very secure, the combination of AES and
Navajo is theoretically absolutely secure: unbreakable.

The unbreakable claim is unfounded.



Re: quantum hype

2003-09-14 Thread David Wagner
Arnold G. Reinhold wrote:
I think there is another problem with quantum cryptography. Putting 
aside the question of the physical channel, there is the black box at 
either end that does all this magical quantum stuff. One has to trust 
that black box.

- Its design has to thoroughly audited  and the integrity of each unit verified
- It has to be shipped securely from some factory or depot to each end point
- It has to be continuously protected from tampering.

Yes.  Several years ago, Adi Shamir presented some fascinating
attacks on the implementation of such black boxes at Cryptrec, so
it is not something that should be taken for granted.

It seems to me one could just as well ship a 160 GB hard drive filled 
with random keying material to each endpoint.

Well, I agree.  If we get to use complexity-based crypto that is
not proven secure, like AES, RSA, or the like, then we can do much
better than quantum crypto.  The only real attraction of quantum crypto
that I can see is that its security does not rely on unproven
complexity-theoretic conjectures.



Re: quantum hype

2003-09-13 Thread David Wagner
martin f krafft  wrote:
So MagiQ and others claim that the technology is theoretically
unbreakable. How so? If I have 20 bytes of data to send, and someone
reads the photon stream before the recipient, that someone will have
access to the 20 bytes before the recipient can look at the 20
bytes, decide they have been tampered with, and alert the sender.

You're absolutely right.  Quantum cryptography *assumes* that you
have an authentic, untamperable channel between sender and receiver.
The standard quantum key-exchange protocols are only applicable when
there is some other mechanism guaranteeing that the guy at the other end
of the fibre optic cable is the guy you wanted to talk to, and that noone
else can splice into the middle of the cable and mount a MITM attack.

One corollary of this is that, if we want end-to-end security, one can't
stick classical routers or other such equipment in the middle of the
connection between you and I.  If we want to support quantum crypto,
the conventional network architectures just won't work, because any two
endpoints who want to communicate have to have a direct piece of glass.
Quantum crypto might work fine for dedicated point-to-point links,
but it seems to be lousy for large networks.

For these reasons, and other reasons, quantum crypto looks pretty
impractical to me, for most practical purposes.  There is some very
pretty theory behind it, but I predict quantum crypto will never replace
general-purpose network encryption schemes like SSH, SSL, and IPSec.

As you say, there is a lot of hype out there, but as you're discovering,
it has to be read very carefully.



Re: quantum hype

2003-09-13 Thread David Wagner
 On 09/13/2003 05:06 PM, David Wagner wrote:
   Quantum cryptography *assumes* that you
   have an authentic, untamperable channel between sender and receiver.
 
 Not true.  The signal is continually checked for
 tampering;  no assumption need be made.

Quantum crypto only helps me exchange a key with whoever
is on the other end of the fibre optic link.  How do I know
that the person I exchanged a key with is the person I wanted
to exchange a key with?  I don't ... unless I can make extra
assumptions (such as that I have a guaranteed-authentic channel
to the party I want to communicate with).

If I can't make any physical assumptions about the authenticity
properties of the underlying channel, I can end up with a scenario
like this: I wanted to exchange a key securely with Bob, but instead,
unbeknownst to me, I ended up securely exchanging a key with Mallet.

I believe the following is an accurate characterization:
 Quantum provides confidentiality (protection against eavesdropping),
 but only if you've already established authenticity (protection
 against man-in-the-middle attacks) some other way.
Tell me if I got anything wrong.



Re: Code breakers crack GSM cellphone encryption

2003-09-09 Thread David Wagner
Vin McLellan  wrote:
A5/2 was the equivalent of 40-bit DES, presumed to be relatively weak and 
developed as an export standard.

Yeah.  Except it would be more accurate to place A5/2's strength as
roughly equivalent to 17-bit DES.  A5/1's strength is roughly equivalent
to that of 40-bit DES.

Of course, the GSM folks didn't exactly do a great job of disclosing
these facts.  They did disclose that A5/2 was the exportable version.
However, when A5/2 was first designed, SAGE put out a report that claimed
years of security analysis on A5/2 had been done and no mathematical
weaknesses had been found.  Now that we've seen A5/2, that report suffers
from a certain credibility gap, to put it mildly...



Re: Code breakers crack GSM cellphone encryption

2003-09-09 Thread David Wagner
One point your analysis misses is that there are public policy
implications to deploying a phone system that enemy countries can
routinely intercept.  Not all attacks are financially motivated.

Is it a good thing for our infrastructure to be so insecure?
Do we want other countries listening to our GSM calls?  Do other
countries want us listening to their GSM calls?  Is it a good thing
if such interception is made easier?  Sure, it may be in the SIGINT
agencies' interests for GSM to be crackable, but is it in the
public interest?  It's not clear.



Re: Code breakers crack GSM cellphone encryption

2003-09-08 Thread David Wagner
John Doe Number Two  wrote:
It's nice to see someone 'discovering' what Lucky Green already figured-out
years ago.  I wonder if they'll cut him a check.

No, no, no!  This is new work, novel and different from what was
previously known.  In my opinion, it is an outstanding piece of research.

Barkan, Biham, and Keller establish two major results:

1. A5/2 can be cracked in real-time using a passive ciphertext only
attack, due to the use of error-correcting coding before encryption.

2. All other GSM calls (including those encoded using A5/1 and A5/3) can
be cracked using an active attack.  This attack exploits a protocol flaw:
the session key derivation process does not depend on which encryption
algorithm was selected, hence one can mount an attack on A5/2, learn
the A5/2 key, and this will be the same key used for A5/1 or A5/3 calls.

(they also make other relevant observations, but the above two are
probably the most significant discoveries)
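A toy sketch of the protocol flaw behind result 2: because the session
key depends only on the subscriber secret and the challenge, not on
which cipher was negotiated, a key recovered by attacking the weak
cipher also decrypts traffic protected by the strong one.  The "kdf"
below is a stand-in, not the real GSM A8 algorithm.

```python
import hashlib

# Flawed derivation: the negotiated cipher plays no role.
def session_key(Ki, RAND):
    return hashlib.sha256(Ki + RAND).digest()[:8]

Ki, RAND = b"subscriber-secret", b"challenge-123"
k_weak   = session_key(Ki, RAND)   # key for an A5/2-protected call
k_strong = session_key(Ki, RAND)   # key for an A5/1 or A5/3 call
print(k_weak == k_strong)          # True: cracking A5/2 yields both keys

# One fix: bind the negotiated cipher into the derivation,
# so a key exposed via the weak cipher is useless elsewhere.
def session_key_fixed(Ki, RAND, cipher_id):
    return hashlib.sha256(Ki + RAND + cipher_id).digest()[:8]

print(session_key_fixed(Ki, RAND, b"A5/2") ==
      session_key_fixed(Ki, RAND, b"A5/1"))   # False
```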

Their attacks permit eavesdropping as well as billing fraud.

See their paper at CRYPTO 2003 for more details.  I am disappointed that
you seem to be criticizing their work before even reading their paper.
I encourage you to read the paper -- it really is interesting.



Re: cryptographic ergodic sequence generators?

2003-09-07 Thread David Wagner
Perry E. Metzger wrote:
I've noted to others on this before that for an application like
the IP fragmentation id, it might be even better if no repeats
occurred in any block of 2^31 (n being 32) but the sequence did not
repeat itself (or at least could be harmlessly reseeded at very very
long intervals).

Let E_k(.) be a secure block cipher on 31 bits with key k.
(For instance, E might be 16 rounds of Luby-Rackoff using
f(x) = AES_{AES_{k}(i)}(x) as the Feistel function in the ith round.)
Pick an unending sequence of keys k0, k1, k2, ... for E.

Then your desired sequence can be constructed by
  E_k0(0), E_k0(1), E_k0(2), ..., E_k0(2^31 - 1),
  2^31 + E_k1(0), 2^31 + E_k1(1), 2^31 + E_k1(2), ..., 2^31 + E_k1(2^31 - 1),
  E_k2(0), E_k2(1), E_k2(2), ..., E_k2(2^31 - 1),
  2^31 + E_k3(0), 2^31 + E_k3(1), 2^31 + E_k3(2), ..., 2^31 + E_k3(2^31 - 1),
  ...,
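A toy Python sketch of this construction, shrunk from 31-bit to 8-bit
blocks and using a hash-based Feistel round function in place of the
AES-based one suggested above (illustrative only, not a vetted PRP):

```python
import hashlib

BITS = 8                 # toy block size; the text uses 31 bits
HALF = BITS // 2
MASK = (1 << HALF) - 1

def feistel(key, x, rounds=4):
    """Balanced Feistel network: a keyed permutation of [0, 2^BITS)."""
    L, R = x >> HALF, x & MASK
    for r in range(rounds):
        f = hashlib.sha256(key + bytes([r, R])).digest()[0] & MASK
        L, R = R, L ^ f
    return (L << HALF) | R

# Fresh key per block; even blocks cover the low half of the output
# space, odd blocks the high half, so any window of one block length
# touches at most two adjacent, disjoint ranges.
keys = [b"k0", b"k1", b"k2", b"k3", b"k4", b"k5"]
seq = []
for j, k in enumerate(keys):
    offset = (j % 2) << BITS
    seq.extend(offset + feistel(k, i) for i in range(1 << BITS))

# No value repeats within any window of 2^BITS consecutive outputs,
# yet the overall sequence never cycles (keys keep changing).
n = 1 << BITS
ok = all(len(set(seq[i:i + n])) == n for i in range(len(seq) - n + 1))
print(ok)   # True
```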



Re: traffic analysis

2003-08-29 Thread David Wagner
John S. Denker wrote:
More specifically, anybody who thinks the scheme
I described is vulnerable to a timing attack isn't
paying attention.  I addressed this point several
times in my original note.  All transmissions
adhere to a schedule -- independent of the amount,
timing, meaning, and other characteristics of the
payload.

Are you sure you understood the attack?  The attack assumes that
communications links are insecure.  The *transmission* from Alice may
adhere to a fixed schedule, but that doesn't prevent the attacker from
introducing delays into the packets after transmission.

For instance, suppose I want to find out who is viewing my web site.
I have a hunch that Alice is visiting my web site right this instant,
and I want to test that hunch.  I delay Alice's outgoing packets, and I
check whether the incoming traffic to my web site contains matching delays.
If so, it's a good bet that Alice has a connection open to my site.
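A minimal simulation of that delay-watermarking idea (all parameters
are illustrative): Alice transmits on a perfectly fixed schedule, yet
an attacker on the path can stamp her stream with small delays and
recognize the same pattern arriving at the target site.

```python
import random

random.seed(1)
PERIOD = 100                                   # Alice sends every 100 ms
send_times = [i * PERIOD for i in range(32)]   # fixed schedule, no payload info

# Attacker on the path injects a per-packet delay of 0 or 30 ms.
watermark = [random.choice([0, 30]) for _ in send_times]
arrival_times = [t + d for t, d in zip(send_times, watermark)]

# Observer at the web site recovers the pattern from arrival jitter
# (30 < PERIOD, so packets are never reordered).
recovered = [(t - i * PERIOD >= 15) for i, t in enumerate(arrival_times)]
print(recovered == [d == 30 for d in watermark])   # True
```

The fixed transmission schedule is preserved end to end; only the
attacker-controlled jitter carries the watermark, which is exactly why
scheduling alone does not defeat this attack.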



Re: Nullsoft's WASTE communication system

2003-06-02 Thread David Wagner
Eric Rescorla  wrote:
E(M) || H(M)- This is still quite dangerous.  If the attacker 
   can somehow reset the IV, then they can mount
   an attack on the first cipher block.

Also, it can violate confidentiality.  If M is guessable,
the guess can be confirmed using H(M).
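To make that guess-confirmation point concrete, a toy Python sketch
(the keystream "cipher" below is a stand-in, not a real encryption
scheme; only the unkeyed hash matters for the attack):

```python
import hashlib
import secrets

def encrypt_then_append_hash(key, M):
    """The flawed E(M) || H(M) construction: H is over the plaintext."""
    stream = hashlib.sha256(key).digest()        # toy keystream, not real E
    ct = bytes(m ^ s for m, s in zip(M, stream))
    return ct, hashlib.sha256(M).digest()        # (E(M), H(M))

key = secrets.token_bytes(16)
ct, tag = encrypt_then_append_hash(key, b"PIN=4921")

# Attacker never touches the encryption: enumerate the small message
# space and confirm the guess against the public hash.
found = next(g for g in (b"PIN=%04d" % i for i in range(10000))
             if hashlib.sha256(g).digest() == tag)
print(found)   # b'PIN=4921'
```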

E(M || H(M))- This is hard to attack with block ciphers, but
   easy with stream ciphers.

Even for block ciphers, it's vulnerable against chosen-message
attack, although I agree this weakness may be more or less theoretical.


I certainly agree with all your comments.  I can't imagine why
they invented their own crypto, rather than just using SSL.
