Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread Kevin W. Wall

On 9/6/2013 1:05 PM, Perry E. Metzger wrote:
I have re-read the NY Times article. It appears to only indicate that 
this was *a* standard that was sabotaged, not that it was the only 
one. In particular, the Times merely indicates that they can now 
confirm that this particular standard was sabotaged, but presumably it 
was far from the only target. 


WEP was so bad it's hard to think anyone could have done that intentionally.
OTOH, stupidity usually wins out over malice. Besides, I don't believe that
WEP fits the other attributes of the story.

But seriously, sabotage can manifest itself in a lot of different ways.
Perhaps their HUMINT promoted attitudes of jealousy and backstabbing. Those
would likely be more efficient means to get something you want. Eventually
everyone gets weary and will agree on practically anything, even if it isn't
near optimal, especially if it had been suggested early on and then discarded
because the committee decided they could do better. There's also politics,
bribes, and other gratuities they might offer.

There's more than one way to dumb down standards besides just suggesting
the wording of some crypto details, which is what everyone seems to be
assuming they did. Maybe all they did was encourage a dumb idea that
someone else offered.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-10-03 Thread Kevin W. Wall
Peter Gutmann wrote:
 Jerry Leichter leich...@lrw.com writes:
 
 By the way, the "don't acknowledge whether it was the login ID or the
 password that was wrong" example is one of those things everyone knows -
 along with "change your password frequently" - that have long passed their
 use-by date.
 
 You got there before I did - real-world studies of users have shown that a
 common failure mode for this is that when users get their user name wrong they
 then try every password they can think of under the assumption that they've
 remembered the wrong password for the site.  So not only does not
 distinguishing between incorrect username and incorrect password not help [0],
 it actually makes things much, much worse by training users to enter every
 password for every site they know.
 
 Peter.
 
 [0] Well, it helps the attackers I guess...

There are other reasons that this is still done that relate to regulatory
issues. E.g., if user names are considered by the regulatory body to be
sensitive PII, it sometimes happens that those regulatory bodies mandate that
one should not distinguish between an invalid user name and an invalid
password. You can argue that those regulatory bodies are misguided and/or
behind the times, but you can't always blame the application developers. At
other times, it is just some ill-advised corporate policy that developers are
forced to adhere to. I'm sure that you all know well that those who
understand the risks best are not always those setting policy.

-kevin
--
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-09-29 Thread Kevin W. Wall
Thai Duong wrote:
 On Tue, Sep 28, 2010 at 12:49 PM, Peter Gutmann
 pgut...@cs.auckland.ac.nz wrote:
 
 Ye gods, how can you screw something that simple up that much?  They use the
 appropriate, and secure, HMAC-SHA1 and AES, but manage to apply it backwards!
 
 I guess they just follow SSL.
 
 BTW, they screw up more badly in other places. Download .NET
 Reflector, decompile .NET source, and do a grep 'DecryptString',
 you'll see at least three places where they don't even use a MAC at
 all.

So, I think I brought this up once before with Thai, but isn't the
pre-shared key version of W3C's XML Encrypt also going to be vulnerable
to a padding oracle attack? IIRC, W3C doesn't specify a MAC at all, so unless
you use XML Digital Signature after using XML Encrypt w/ a PSK, it seems to
me you are screwed in that case as well. And there are some cases where using
a random session key that's encrypted with a recipient's public key is just
not scalable (e.g., when sending out over something like Java Message
Service, or the Tibco Bus, or almost anything that uses multicast). And even
if a new XML Encrypt spec for use with a PSK were adopted tomorrow, adoption
would take quite a long time. Sure hope I'm wrong about that. Maybe one of
you real cryptographers can set me straight on this.

-kevin
--
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME



Re: 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-09-27 Thread Kevin W. Wall
Peter Gutmann wrote:
 Tom Ritter t...@ritter.vg writes:
 
 What's weird is I find confusing literature about what *is* the default for
 protecting the viewstate.
 
 I still haven't seen the paper/slides from the talk so it's a bit hard to
 comment on the specifics, but if you're using .NET's FormsAuthenticationTicket
 (for cookie-based auth, not viewstate protection) then you get MAC protection
 built-in, along with other nice features like sliding cookie expiration (the
 cookie expires relative to the last active use of the site rather than an
 absolute time after it was set).  I've used it in the past as an example of
 how to do cookie-based auth right

FYI...I just received confirmation from my company's on-site consultant from
Microsoft that .NET's FormsAuthenticationTicket is also vulnerable to
this padding oracle attack. So apparently Microsoft didn't apply the MAC
protection quite right in their implementation.

-kevin
-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME



Proper way to check for JCE Unlimited Strength Jurisdiction Policy files

2009-11-25 Thread Kevin W. Wall
Hi list...hope there are some Java developers out there and that this is not
too off topic for this list's charter.

Does anyone know the *proper* (and portable) way to check if a Java VM is
using the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction
Policy files (e.g., for JDK 6, see
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/viewproductdetail-start?productref=jce_policy-6-oth-...@cds-cds_developer.)

I would like something that works with at least Java 5 and later and that does
not have any false positives or negatives. I also would _prefer_ some test
that does not require locating and parsing the policy files within the JVM's
installed JCE local_policy.jar and US_export_policy.jar files as that seems
kludgey and might not work with future JDKs.

My first thought was just to try a naive dummy encryption of a test string
using a 256-bit AES key. However, I realized that this might still succeed
without having the JCE Unlimited Strength Jurisdiction Policy files installed
if the JCE provider being used implemented some exemption mechanism (i.e., see
javax.crypto.ExemptionMechanism) such as key recovery, key weakening, or
key escrow.

Then I saw that javax.crypto.Cipher class has a getExemptionMechanism() method
that returns either an ExemptionMechanism object associated with the Cipher
object OR null if there is no exemption mechanism being used.  So I figured
I could then do the naive encryption of some dummy string using 256-bit
AES/CBC/NoPadding and if that succeeded AND cipher.getExemptionMechanism()
returned null, THEN I could assume that the JCE Unlimited Strength Jurisdiction
Policy files were installed. (When the default strong JCE jurisdiction
policy files are installed, the max allowed AES key size is 128-bits.)
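
For concreteness, here is roughly the check I have in mind (a minimal
sketch only; the class and method names below are mine, purely
illustrative, and not from any real API):

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class UnlimitedStrengthCheck {

    // Sketch of the check described above: attempt a 256-bit AES encryption
    // and also confirm that no exemption mechanism is in play.
    public static boolean isUnlimitedStrengthInstalled() {
        try {
            byte[] rawKey = new byte[32];      // 256-bit dummy key; fine for a capability probe
            SecretKeySpec key = new SecretKeySpec(rawKey, "AES");
            Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            cipher.doFinal(new byte[16]);      // encrypt one AES block of zeros
            // If an exemption mechanism (key escrow, weakening, recovery)
            // is what allowed the 256-bit init, this would be non-null.
            return cipher.getExemptionMechanism() == null;
        } catch (java.security.GeneralSecurityException e) {
            // InvalidKeyException here is the typical symptom of the default
            // (128-bit-limited) jurisdiction policy files.
            return false;
        }
    }
}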

Does that seem like a sound plan or is there more that I need to check? If
not, please explain what else I will need to do.

Thanks in advance,

-kevin wall
-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME



Re: Proper way to check for JCE Unlimited Strength Jurisdiction Policy files

2009-11-25 Thread Kevin W. Wall
FWIW, my implementation of this for OWASP ESAPI is at:
http://code.google.com/p/owasp-esapi-java/source/browse/trunk/src/test/java/org/owasp/esapi/reference/CryptoPolicy.java

The main() is there just for stand-alone testing. From the ESAPI JUnit tests,
I call:
  if ( keySize > 128 && !CryptoPolicy.isUnlimitedStrengthCryptoAvailable() )
  {
      System.out.println("Skipping test for " + cipherXform + " where key size " +
                         "is " + keySize + "; install JCE Unlimited Strength " +
                         "Jurisdiction Policy files to run this test.");
      return;
  }

Would appreciate it if someone could take 5 min to look at this CryptoPolicy
source to see if it looks correct. It's only 90 lines including comments and
white space. I tried to check the exemption mechanism but I am not sure I
am understanding it correctly.

Thanks,
-kevin

-Original Message-
Kevin W. Wall wrote:
 Hi list...hope there are some Java developers out there and that this is not
 too off topic for this list's charter.
 
 Does anyone know the *proper* (and portable) way to check if a Java VM is
 using the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction
 Policy files (e.g., for JDK 6, see
 https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/viewproductdetail-start?productref=jce_policy-6-oth-...@cds-cds_developer.)
 
 I would like something that works with at least Java 5 and later and that does
 not have any false positives or negatives. I also would _prefer_ some test
 that does not require locating and parsing the policy files within the JVM's
 installed JCE local_policy.jar and US_export_policy.jar files as that seems
 kludgey and might not work with future JDKs.
 
 My first thought was just to try a naive dummy encryption of a test string
 using a 256-bit AES key. However, I realized that this might still succeed
 without having the JCE Unlimited Strength Jurisdiction Policy files installed
 if the JCE provider being used implemented some exemption mechanism (i.e., see
 javax.crypto.ExemptionMechanism) such as key recovery, key weakening, or
 key escrow.
 
 Then I saw that javax.crypto.Cipher class has a getExemptionMechanism() method
 that returns either an ExemptionMechanism object associated with the Cipher
 object OR null if there is no exemption mechanism being used.  So I figured
 I could then do the naive encryption of some dummy string using 256-bit
 AES/CBC/NoPadding and if that succeeded AND cipher.getExemptionMechanism()
 returned null, THEN I could assume that the JCE Unlimited Strength 
 Jurisdiction
 Policy files were installed. (When the default strong JCE jurisdiction
 policy files are installed, the max allowed AES key size is 128-bits.)
 
 Does that seem like a sound plan or is there more that I need to check? If
 not, please explain what else I will need to do.
 
 Thanks in advance,
 
 -kevin wall


-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME



Question about Shamir secret sharing scheme

2009-10-03 Thread Kevin W. Wall
Hi list...I have a question about Shamir's secret sharing.

According to the _Handbook of Applied Cryptography_, Shamir's secret
sharing (t,n) threshold scheme works as follows:

SUMMARY: a trusted party distributes shares of a secret S to n users.
RESULT: any group of t users which pool their shares can recover S.

The trusted party T begins with a secret integer S ≥ 0 it wishes
to distribute among n users.
(a) T chooses a prime p > max(S, n), and defines a0 = S.
(b) T selects t−1 random, independent coefficients a1, ..., a(t−1), defining
the random polynomial over Zp, f(x) = a0 + a1*x + ... + a(t−1)*x^(t−1).
(c) T computes Si = f(i) mod p, 1 ≤ i ≤ n (or for any n distinct
points i, 1 ≤ i ≤ p − 1), and securely transfers the share Si
to user Pi, along with the public index i.

The secret S can then be computed by finding f(0), more or less by using
Lagrange interpolation on t of the shares, i.e., the points (i, Si).
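
For reference, here is a bare-bones sketch of both steps over Zp using
Java's BigInteger (purely illustrative; the names are mine and there is no
parameter validation):

import java.math.BigInteger;
import java.security.SecureRandom;

public class ShamirSketch {
    static final SecureRandom rng = new SecureRandom();

    // Split secret S into n shares (i, f(i) mod p); any t of them recover S.
    static BigInteger[][] split(BigInteger S, int t, int n, BigInteger p) {
        BigInteger[] a = new BigInteger[t];
        a[0] = S;                                         // a0 = S
        for (int j = 1; j < t; j++) {                     // a1..a(t-1) random in Zp
            a[j] = new BigInteger(p.bitLength(), rng).mod(p);   // slight bias; fine for a sketch
        }
        BigInteger[][] shares = new BigInteger[n][];
        for (int i = 1; i <= n; i++) {
            BigInteger x = BigInteger.valueOf(i), y = BigInteger.ZERO;
            for (int j = t - 1; j >= 0; j--) {            // Horner evaluation of f(x) mod p
                y = y.multiply(x).add(a[j]).mod(p);
            }
            shares[i - 1] = new BigInteger[] { x, y };
        }
        return shares;
    }

    // Recover S = f(0) from any t distinct shares via Lagrange interpolation mod p.
    static BigInteger recover(BigInteger[][] shares, BigInteger p) {
        BigInteger S = BigInteger.ZERO;
        for (int i = 0; i < shares.length; i++) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (int j = 0; j < shares.length; j++) {
                if (i == j) continue;
                num = num.multiply(shares[j][0].negate()).mod(p);               // (0 - xj)
                den = den.multiply(shares[i][0].subtract(shares[j][0])).mod(p); // (xi - xj)
            }
            S = S.add(shares[i][1].multiply(num).multiply(den.modInverse(p))).mod(p);
        }
        return S;
    }
}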

The question that a colleague and I have is: is there any cryptographic
purpose to computing the independent coefficients over the finite
field Zp? The only reason that we can see for doing this is to
keep the sizes of the shares Si bounded within some reasonable range,
and it seems as though one could just allow T to choose random
coefficients of a sufficient # of bytes and do all the calculations
without the 'mod p' stuff. We thought perhaps Shamir did the calculations
over Zp because things like Java's BigInteger or BigDecimal weren't widely
available when he came up with this scheme back in 1979.

So, other than perhaps compatibility with other implementations (which
we are not really too concerned about), is there any reason to continue
to do the calculations over Zp???

Thanks,
-kevin
-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME




Re: Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-17 Thread Kevin W. Wall
Peter Gutmann wrote:
 David Wagner d...@cs.berkeley.edu writes:
 
 (You could replace AES-CMAC with SHA1-HMAC, but why would you want to?)
 
 The answer to that depends on whether you need to support an existing base of
 crypto software and hardware.  Even though (in this case) it's a new standard,
 it still requires support from the underlying crypto libraries.  If little or
 none of those do AES-CMAC yet (I don't think Windows CryptoAPI does, only very
 recent versions of OpenSSL do... it's not looking good) then you'd want to
 stick with HMAC-SHA1.
 
 (Forestalling the inevitable "but developers can implement AES-CMAC
 themselves from raw AES" that I'm sure someone will follow up with: the
 target audience for this is web application developers, not cryptographers,
 so you need to give them something that works as required out of the box).

Peter has hit the proverbial nail right on the head. I apologize if I did
not make this clear in my original post, but one goal of OWASP ESAPI is not
to require a whole lot of dependencies. In the context of Java crypto, this
means that we *ideally* would like to have no dependency other than SunJCE,
that is, the Sun reference implementation of the JCE. Recently we decided to
require Java 6, but even with that, our choices for cipher algorithms,
cipher modes, and padding schemes are limited. For SunJCE, this is what we
have available to choose from:

Supported symmetric cipher algorithms:
AES, DES, DESede, Blowfish, RC2, ARCFOUR

Supported cipher modes:
CBC, CFB, CTR, CTS, ECB, OFB, PCBC

Supported padding schemes:
NoPadding, PKCS5Padding, ISO10126Padding
OAEPPadding, OAEPWith<digest>And<mgf>Padding
PKCS1Padding, SSL3Padding

(Obviously some of these padding schemes such as OAEP are not suitable
with symmetric ciphers. Or at least I don't think they are.)

So given these limited choices, what are the best options to the
questions I posed in my original post yesterday? As Peter mentioned, we
want to give web app developers something that will work out-of-the-box.
For that reason we don't even want to require that developers use some other
JCE provider like Bouncy Castle, Cryptix, IAIK, etc. even though they may
have more suitable cipher modes or padding schemes.
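
FWIW, using only the SunJCE primitives listed above, an encrypt-then-MAC
composition is straightforward. A minimal sketch follows (the key handling,
names, and the IV || ciphertext || MAC output layout are just my assumptions
for illustration, not the actual ESAPI design):

import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class EtmSketch {
    static byte[] encryptThenMac(SecretKey encKey, SecretKey macKey, byte[] plaintext)
            throws java.security.GeneralSecurityException {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, encKey);        // SunJCE generates a random IV
        byte[] iv = c.getIV();
        byte[] ct = c.doFinal(plaintext);

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(macKey);                           // separate key from the encryption key
        mac.update(iv);                             // MAC covers the IV as well as the ciphertext
        byte[] tag = mac.doFinal(ct);

        byte[] out = new byte[iv.length + ct.length + tag.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        System.arraycopy(tag, 0, out, iv.length + ct.length, tag.length);
        return out;                                 // IV || ciphertext || HMAC tag
    }
}

The decrypter would recompute the HMAC over the received IV and ciphertext
and compare it (ideally in constant time) before attempting any decryption.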

Lastly, I wanted to respond to one other point that David Wagner brought
up in an earlier reply:

 Advice: Provide one mode, and make it secure.  Try to avoid
 configurability, because then someone will choose a poor configuration.

There are a few reasons for supporting different configurations here. One
is backward compatibility with previous ESAPI versions, and the second
is to support legacy cases. My experience at my day job is that no one
really changes the crypto defaults anyway if you make it easy enough for
them to use. The main exception is if they have to be compatible with
something else, such as some 3rd party vendor software that uses a different
mode, etc. What we can try to do is provide adequate warning in
documentation or in logged warnings if one tries to use anything other than
the default.

BTW, thanks to all who replied. I've learned quite a bit from all your
responses, but it looks like I have a lot of research to do before I
understand everything that all of you said.

Regards,
-kevin
-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME



Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-16 Thread Kevin W. Wall
 that the encryption and
   decryption operations are sufficiently fast not to cause a
   bottleneck even when millions of encryptions are done.
2) Approximately 90+% of the encryption that occurs deals with the
   plaintext being very short (usually ASCII, occasionally EBCDIC)
   string data such as SSNs, CC#s, bank account information, etc. (This
   is driven by regulatory and compliance issues.)

Because of these two observations I am concerned that the digital signature
operation and its corresponding validation will add significant processing
overhead relative to the actual encryption / decryption operations.

Thus I'd prefer something lighter weight than digital signatures to
accomplish more or less the same thing.

I have considered using an HMAC-SHA1 as a keyless MIC to do this,
using something like:

MIC = HMAC-SHA1( nonce, IV + secretKey )
or
MIC = HMAC-SHA1( nonce, IV + plaintext )

and then also include the random nonce and the MIC itself in the CipherText
class so it can be validated later by the decrypter, with the understanding
that the plaintext resulting from the decryption operation should only be
used if the MIC can be properly validated. (Probably an exception would
be thrown if the MIC was not valid, so there would be no choice.)
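
For concreteness, a rough sketch of the first variant, with the nonce used
as the HMAC key and (IV || raw secret key bytes) as the message (the class
and method names here are illustrative only, not the actual ESAPI code):

import java.security.SecureRandom;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class MicSketch {
    static final SecureRandom rng = new SecureRandom();

    // MIC = HMAC-SHA1( nonce, IV + secretKey ): the caller supplies a buffer
    // for the freshly generated nonce (e.g., 20 bytes) so it can be stored in
    // the CipherText object alongside the MIC for later validation.
    static byte[] computeMic(byte[] iv, byte[] rawSecretKey, byte[] nonceOut)
            throws java.security.GeneralSecurityException {
        rng.nextBytes(nonceOut);
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(nonceOut, "HmacSHA1"));
        hmac.update(iv);                      // IV first, then the raw key bytes
        return hmac.doFinal(rawSecretKey);
    }
}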

However, I am not a cryptanalyst so I am not sure how secure this is (if at
all). My intuition tells me that it's not as good as using a DSig, but it
should be significantly faster (or if not, rather than using an HMAC,
alternatively just using something like SHA-256 and prepending the nonce
to the rest).

Note that I am now writing this ESAPI Java crypto code so that one has the
choice of not doing these MIC calculations at all simply by setting a
property in ESAPI.properties, but I have made the default to compute the MIC
during encryption and validate it during decryption.

On why I included the IV in the MIC (or DSig) calculations...
==============================================================
The ESAPI default will be to use a random IV appropriate for the cipher
whenever that cipher mode requires an IV. Thus in the minimalist case, someone
who needs to persist the ciphertext (say in a DB) will need to store the
IV+ciphertext. There is a method on CipherText to return the base64-encoded
IV+ciphertext byte array, like it is done in W3C's XML Encrypt specification.

I added the random IV into the MIC (or DSig) calculation in part to add
to the entropy and in part to be able to detect attempts of an adversary
to change the IV to something of their liking.

In the DSig case, it serves more or less as a nonce (with the assumption
that because it's a ciphertext block size of random bytes it is unlikely
to be repeated). It probably isn't as useful for a MIC calculation, but I
figured it couldn't hurt since byte concatenation is cheap.

I had considered using something without the nonce like

MIC = HMAC-SHA1( IV, plaintext )

but since I was already uneasy about using a MIC in the first place, I decided
to do it the way shown above with an additional random nonce thinking I can
ensure that the nonce is randomly chosen and sufficiently large (e.g.,
say 160-bits or longer, independent of the cipher algorithm's block size).

The second reason I added the IV into the mix is because I figured that this
would make it possible to detect an adversary tampering with the IV.
(Well, it would for the DSig for sure; perhaps less so for the plaintext version
of the MIC if it is possible for the adversary to do any type of chosen
plaintext attack.)

The reason that I want to be able to detect this is because I read somewhere
in a paper by some cryptographer (maybe David Wagner, but am not sure)
that there were some esoteric cryptographic attacks that could leak a few
bits of the secret key or something if an adversary could get someone to attempt
to decrypt some ciphertext using IVs that the adversary could manipulate.
(I think maybe this had to do with IPSec but don't recall exactly as it's
been several years ago.) But in a nutshell, I was hoping that including the
IV would have the secondary benefit of preventing these types of attacks.
[Note: Any references to papers referencing something like this would be
appreciated.]

So, having provided all of that background, in summary, here are my
three questions:
1) Is using either of these MIC calculations cryptographically secure?
2) If the answer to #1 is 'yes', which one is safer / more secure?
3) If the answer to #1 is 'no', do you have any suggestions, less
   computationally expensive than digital signatures, that would
   allow us to detect attempts to decrypt with the incorrect secret
   key and/or an adversary attempting to alter the IV prior to
   decryption?

Thanks in advance to all who respond,
-kevin
--
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come